I think @Joe_Felsenstein will agree with your interpretation of what Dembski is trying to do. I think it’s still wrong, and it takes a while to explain why.
Which is a pretty good description of Specified Complexity as defined in Dembski (2005), and the subject of my blog post.
In that version P[E] is multiplied by \phi(T), which is a function of KI.
I think this new version of SC is probably the same as the old up to a constant multiplier, but I haven’t actually tried to work it out yet. From Dembski’s 2005 paper …
\chi = -log( M \times N \times \phi(T) \times P[T|H])
taking the log through the product …
\chi = -log( M \times N) - log( \phi(T) ) - log(P[T|H])
and substituting the new terminology (I’m not sure about this step) …
\chi = -log( M \times N) - K(D) + I(E)
and rearranging gives …
\chi = -log( M \times N) + I(E) - K(D)
where M \times N is the “multiplicative resources”: a constant multiplier, or an additive constant after taking the log.
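As a sanity check on the algebra, here is a quick numeric test in Python. The values of M, N, \phi(T) and P[T|H] are made up purely for illustration, and I’m assuming base-2 logs throughout, since the result is supposed to be in bits:

```python
import math

# Illustrative values only -- not Dembski's actual numbers.
M, N = 10**6, 10**9   # the M x N "multiplicative resources"
phi_T = 2**20         # phi(T)
P = 2**-120           # P[T|H]

# Dembski (2005): chi = -log2(M x N x phi(T) x P[T|H])
chi_direct = -math.log2(M * N * phi_T * P)

# Taking the log through the product:
chi_split = -math.log2(M * N) - math.log2(phi_T) - math.log2(P)

# Substituting I(E) = -log2(P[T|H]) and K(D) = log2(phi(T)):
I_E = -math.log2(P)
K_D = math.log2(phi_T)
chi_new = -math.log2(M * N) + I_E - K_D

print(chi_direct, chi_split, chi_new)  # all three agree, ~50.17 bits
```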
I would be curious to know whether anyone in the ID crew agrees with my math and substitutions here.
Maybe @colewd can ask someone?
…
My objection in the blog post was that
M \times N \times \phi(T) \times P[T|H]
is not a probability, but rather the expected value of a binomial random variable, which can be greater than one. At this point Dembski no longer has a probability, and his “information measure” is undefined for any event that does not exhibit specified complexity, defeating the whole purpose of using CSI to detect complexity.
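To make that concrete, here is the same setup in Python with a less improbable event (again, my numbers, purely illustrative). The product comes out far greater than one, and its negative log is negative, which is nonsense as a measure of information:

```python
import math

# Illustrative values only.
M, N = 10**6, 10**9
phi_T = 2**20
P = 2**-60            # a not-so-improbable event

product = M * N * phi_T * P
print(product)              # ~909.5 -- an expected count, not a probability

# -log2 of a number greater than one is negative:
print(-math.log2(product))  # ~ -9.8 "bits"
```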
I later modified my interpretation in a discussion at Panda’s Thumb: \phi(T) cannot be used as a multiplier of a probability because it does not represent binomial trials. Dembski doesn’t even have an expected value, only an arbitrary number with no meaning.
[Edit: added link to PT]
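To spell out my reading in symbols (my formalization, not Dembski’s): for a binomial random variable,

X \sim \mathrm{Binomial}(n, p) \implies E[X] = n \times p

If the M \times N opportunities are the trials and p = P[T|H], the expected number of hits is M \times N \times P[T|H]. But \phi(T) counts descriptions, not additional independent trials, so multiplying it in yields neither a probability nor an expected value.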
This number with no meaning is the same place we end up when trying to take the difference of SI and KI in Dembski’s new scheme.
This is a lot of material written off the top of my head, and so a good time for me to pause and think about it.