First of all, CSI is not a quantity, but a Yes/No assessment of whether SI exceeds a certain threshold. In bringing up linkage disequilibrium, you are correctly concerned with the effect of the null distribution. So far, versions of SI that use a uniform distribution over all possible sequences, such as Szostak and Hazen's Functional Information, thereby ignore LD. But they also have no way to show that normal evolutionary processes cannot achieve a high level of FI (Szostak and Hazen were not even trying to do that). Any distribution you propose should be related to the values of the scale, not just an arbitrary distribution. I will await with interest what you choose.
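For context, Hazen and Szostak define Functional Information as I(E) = -log2 F(E), where F(E) is the fraction of all possible sequences whose activity meets or exceeds a threshold E; the uniform weighting over sequences is exactly what ignores LD. A minimal sketch (the activity scores here are made-up toy data, not real measurements):

```python
import math

def functional_information(activities, threshold):
    """Szostak/Hazen FI: -log2 of the fraction of sequences whose
    activity meets or exceeds the threshold, weighting every
    sequence uniformly."""
    n_meeting = sum(a >= threshold for a in activities)
    if n_meeting == 0:
        return float("inf")  # no sequence reaches the threshold
    return -math.log2(n_meeting / len(activities))

# Toy example: 4 of 16 hypothetical sequences reach activity 0.9
toy_activities = [0.1] * 12 + [0.9] * 4
print(functional_information(toy_activities, 0.9))  # → 2.0
```

Note the uniform assumption lives in `len(activities)`: every sequence counts equally in the denominator, regardless of how reachable it is by evolution.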
Dembski (2005) refers to CSI as both a quantity and a property, but my meaning is that the estimated probability will decrease toward the CSI threshold.
Don’t hold your breath; I’m pretty sure I won’t come up with a probability distribution for evolution.
I should be able to find how much deviation from the uniform distribution is needed to remove the possibility of detecting CSI at all. Actually, that’s simple, but let me finish thinking it through.
As you noted in our last discussion, there is no good reason for evolution to favor the sort of highly compressible code that would trigger detection of CSI. My intuition is that high compressibility is not compatible with biological function, but I don’t know how to show it.
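One crude way to probe that intuition: use a general-purpose compressor as a stand-in for the (uncomputable) Kolmogorov complexity. This sketch, with arbitrary toy data, only shows that a repetitive sequence compresses far better than a random-looking one; it does not settle the biological question:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size. Lower means more
    compressible; zlib here is a rough proxy for Kolmogorov
    complexity, not a measurement of it."""
    return len(zlib.compress(data, 9)) / len(data)

repetitive = b"ACGT" * 250      # highly compressible toy "sequence"
random_like = os.urandom(1000)  # essentially incompressible
print(compression_ratio(repetitive) < compression_ratio(random_like))  # → True
```

A real test of the intuition would need compressibility measured on sequences of known biological function, which this toy comparison does not provide.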
I think the best I will be able to do is start with Dembski’s uniform distribution and add assumptions to make it more like evolution. I also think the specified target sequence T ought to be reframed as the posterior distribution of sequences T that meet the specification. But then Dembski is allergic to Bayes, so I’m sure he would not approve.
I’m still chasing some ideas, not necessarily good ones.
Any sufficiently strong convergence in distribution will appear to be CSI relative to a uniform hypothesis. We could also keep the uniform hypothesis but reduce the number of possibilities, making those that remain more likely (reducing variance).
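To illustrate with entirely made-up numbers: concentrating the null on a smaller support that still contains the target inflates P(T) and can pull the apparent information down to Dembski's roughly 500-bit universal probability bound, even though the target set itself is unchanged. The sequence length and set sizes below are hypothetical:

```python
import math

def surprisal_bits(p_target: float) -> float:
    """Bits of 'specified information' for a target of probability p."""
    return -math.log2(p_target)

L = 300                   # toy sequence length (nucleotides)
n_total = 4 ** L          # all possible sequences: the uniform null
n_target = 4 ** 10        # hypothetical size of the target set T
n_reachable = 4 ** 260    # hypothetical support of a concentrated null

uniform_bits = surprisal_bits(n_target / n_total)
concentrated_bits = surprisal_bits(n_target / n_reachable)
print(uniform_bits)       # 580.0 bits: exceeds the ~500-bit bound
print(concentrated_bits)  # 500.0 bits: right at the bound
```

The point is only arithmetic: shrinking the support from 4^300 to 4^260 sequences hands back 80 bits, which is the kind of "deviation from uniform" calculation mentioned above.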