Durston: Functional Information

Good plan. Looking these over, I do not think I will be able to sign off.

This needs to be more precise. Do you mean the conditional entropy? Or mutual information? Or conditional mutual information? Perhaps write out the equations, and refer to the specific equations in Shannon's paper. http://math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf
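For reference, the three quantities are not interchangeable. Their standard definitions (my transcription, following Shannon's framework rather than his equation numbering) are:

$$H(X \mid Y) = -\sum_{x,y} p(x,y)\,\log_2 p(x \mid y)$$

$$I(X;Y) = H(X) - H(X \mid Y)$$

$$I(X;Y \mid Z) = H(X \mid Z) - H(X \mid Y, Z)$$

Any claim about "Shannon information" has to say which of these it means.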

That might be true if you had defined Shannon information correctly. You seem to be using an idiosyncratic/non-technical definition of Shannon information, so we can't be sure.

I agree. Also his equation assumes we have perfect knowledge of all functional and non-functional sequences.
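To be concrete about why: the equation at issue has the form (in Hazen-style notation, which matches the $M(E_x)$ used below)

$$\mathrm{FI}(E_x) = -\log_2 \frac{M(E_x)}{N},$$

where $M(E_x)$ is the number of sequences achieving function $E_x$ and $N$ is the total number of sequences. Evaluating this exactly requires knowing $M(E_x)$, which means perfect knowledge of which sequences are functional and which are not.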

I am not happy to work with it, as this is one major way your method fails. You even recognize this in the following points. So no, I do not accept that this is an acceptable simplification.

Not clear. There is no way to assess this as true or false.

I dispute this on several levels.

  1. Duplicating a sequence can add the function of redundancy and error control; that is a new function.
  2. This directly contradicts point #4, demonstrating that sequences are not equiprobable.
  3. The method you use cannot correct for this effect (see the sketch after this list).
  4. FI here is not well defined, to the point of creating errors in analysis.
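To illustrate points 2 and 3 concretely, here is a minimal sketch in Python. The duplication probability is an illustrative assumption, not a measured value. Under the equiprobable assumption, the duplicated half of a sequence looks like hundreds of bits of new FI; modeled as the outcome of a duplication event, the new information is only the cost of the event itself.

```python
import math

ALPHABET_SIZE = 4   # DNA alphabet
L = 100             # length of the original functional sequence
P_DUP = 1e-8        # assumed per-replication duplication probability (illustrative)

# Equiprobable-sequences estimate: treats the duplicated half as if it
# were an independent draw from the space of all length-L sequences.
naive_new_bits = L * math.log2(ALPHABET_SIZE)   # 200 bits of "new" FI

# Process-aware estimate: the duplicated half is fully determined by the
# original, so the only new information is the duplication event itself.
process_new_bits = -math.log2(P_DUP)            # about 26.6 bits

print(f"equiprobable estimate: {naive_new_bits:.1f} bits")
print(f"process-aware estimate: {process_new_bits:.1f} bits")
```

A method that assumes equiprobability overstates the FI of the duplicated region by nearly an order of magnitude here, and it has no way to detect that it is doing so.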

This last point is important. We can understand FI in several ways:

A. The bits of information required to modify a system that lacks a function into one with a new function.
B. The total number of bits in a system that performs a function.
C. The amount of “function” we see (measured somehow).
mA: The measured version of A by some specific method.
mB: The measured version of B by some specific method.

Several unjustified claims are often made by lumping these five concepts together. All of the following are false:

  1. High B does not mean A is high.
  2. Low A does not mean C is low, because a small amount of A can produce a large amount of C.
  3. mA and mB are not necessarily good estimates of A and B, and can be wildly off if the process that generates sequences is not modeled (the duplication sketch above is one such case).

I could go on, but the key point is that FI has to be carefully qualified every time the term is used. That is not done here. So I cannot agree with most of what you have written until it is clarified.

I disagree with this. We have very good estimates of M(Ex) for many functions. Regardless, this is all beside the point, because it tells us nothing about how difficult it is to evolve a new function. The type of FI we need to compute is the conditional information from the sequences we already have to any sequence that works for any function, even if we have never seen that sequence before or never expected that function. @Kirk’s method does not compute this quantity.
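One way to write the contrast (my notation, not @Kirk's): his method estimates a quantity of the form $-\log_2(M(E_x)/N)$ over the whole sequence space, while the evolutionarily relevant quantity is conditioned on what already exists,

$$\mathrm{FI}_{\text{evolve}} = -\log_2 \Pr\big(\text{reach any functional sequence} \mid S_0\big),$$

where $S_0$ is the set of sequences already present in the population. The first quantity can be enormous even when the second is small, which is exactly why the first tells us nothing about evolvability.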

I, however, did compute FI in cancer, as the conditional mutual information: Computing the Functional Information in Cancer. This is an important negative control going forward. Any method proposed needs to be able to show that cancer is possible without design. That does not appear possible without demonstrating that Durston’s argument is wrong.
