I finally had the chance to read the preprint mentioned in this thread:
And wouldn’t you know it, they introduce a metric they call “high-order entropy,” defined as the difference between the Shannon entropy and the Kolmogorov complexity (estimated via compression and normalized by string length). They demonstrate that this measure is useful for detecting phase transitions between populations of many unique, mostly random programs and populations dominated by copies and variants of a self-replicating program.
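For intuition, here’s a rough sketch of how a metric like that could be computed (this is my own toy reconstruction, not the paper’s implementation — I’m assuming zlib as the compressor and per-byte entropy, which the authors may not use):

```python
import math
import os
import zlib
from collections import Counter

def shannon_entropy(s: bytes) -> float:
    """Per-symbol Shannon entropy of the string, in bits per byte."""
    n = len(s)
    counts = Counter(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def kolmogorov_estimate(s: bytes) -> float:
    """Compression-based Kolmogorov complexity estimate,
    normalized by string length (bits per input byte)."""
    return 8 * len(zlib.compress(s, 9)) / len(s)

def high_order_entropy(s: bytes) -> float:
    """Shannon entropy minus the normalized complexity estimate.
    Near zero for random data (incompressible beyond its symbol
    statistics); large when the data is full of repeated copies,
    i.e. when there is structure the symbol histogram can't see."""
    return shannon_entropy(s) - kolmogorov_estimate(s)

# A "population" of random bytes vs. one dominated by copies of a
# single motif: the latter compresses far below its symbol entropy,
# so its high-order entropy is much larger.
random_pop = os.urandom(4096)
replicator_pop = b"replicator-" * 372  # many copies of one "program"
print(high_order_entropy(random_pop) < high_order_entropy(replicator_pop))  # prints True
```

The point of the normalization is that both terms end up in the same units (bits per symbol), so the difference isolates repetition-driven compressibility from plain symbol-frequency bias.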
(And while we’re at it, we can tie in another recent thread and note that this is exactly the scenario Assembly Theory is intended to detect: when the copy number of an entity far exceeds what combinatorics and input abundance alone would predict, suggesting that reproduction and/or selection is involved.)