The Law of Information Non-Growth

Continuing the discussion from Explaining the Cancer Information Calculation:

Information Non-Growth

Just to back up a bit, it seemed worth explaining this. It is actually fairly straightforward. Let us say we have a deterministic program F that works on arbitrary data X to produce output Y, and let H be a function that returns the information content of these things. Therefore, by definition:

F(X) = Y

What is the information content of Y? Well, we know that if we have F and X we can compute Y. So whatever the information content of Y is, we know it is less than or equal to:

H(X) + H(F)

Another way of putting it is that F(X) is an equivalent way of representing Y, so we know that it can never take more than H(F) + H(X) bits to represent Y. Anything more than this and we know for a fact that we are using an inefficient compression. That is all the law of information non-growth is. We can go a bit further if we know that F is a reversible function: then we know that information is conserved as well, and the information content never goes up or down.
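
In the more standard Kolmogorov-complexity notation, the same two statements read roughly as follows (a sketch, writing K for the information content H and suppressing the usual additive constants):

```latex
% A program for F plus a description of X is itself a description of Y,
% so Y can never require more bits than the two together:
K(Y) \le K(F) + K(X) + O(1)

% If F is also reversible (a total computable bijection), the bound runs
% in both directions, so information is conserved up to K(F):
\lvert K(Y) - K(X) \rvert \le K(F) + O(1)
```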

An Example

Let us say that X and Y are random bit strings and F is a pseudorandom number generator that can be seeded by a bit string. Let us say we choose:

X to be the binary representation of the integer “12934824840”
F to be a function that returns the first billion bits of the random number generator seeded by X
Y to be F(X)

So, what is the information content of Y? Well, knowing how it was generated, this is really easy. It is at most:

H(F) + H(X)

This is surely much, much less than one billion bits. That is what the law of information non-growth tells us. What if we do not know what F and X are, and can only observe Y? Well, now we have no idea how much information is there. We won’t see any patterns in the data, so we will incorrectly (but empirically) conclude that Y has a billion bits of information. This isn’t correct, but we have no way of knowing it.

In fact, we can never really know whether we have the best compression. We can therefore never know for sure, examining Y alone, whether we are looking at a billion bits of information or at a compressible object that reduces down to the seed and the algorithm.
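
To make this concrete, here is a minimal Python sketch of the scenario above. Python’s built-in Mersenne Twister stands in for F, and a million bits stand in for the billion just to keep the demo fast; the seed is the integer from the example:

```python
import random
import zlib

SEED = 12934824840  # the integer X from the example above

def F(seed: int, n_bits: int = 1_000_000) -> bytes:
    """Return the first n_bits of a PRNG seeded with `seed`, packed as bytes."""
    rng = random.Random(seed)
    return rng.getrandbits(n_bits).to_bytes(n_bits // 8, "big")

Y = F(SEED)

# An observer who sees only Y tries to compress it. A general-purpose
# compressor finds no patterns, so the output is no smaller than the input.
compressed = zlib.compress(Y, level=9)
print(f"len(Y)          = {len(Y):>7} bytes")
print(f"len(compressed) = {len(compressed):>7} bytes")  # roughly the same

# Yet the full description of Y -- this function plus the seed, i.e.
# H(F) + H(X) -- is only a few hundred bytes.
```

Empirically, Y looks like a million bits of information; in fact it reduces to the seed and the algorithm, and no examination of Y alone can tell us that.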

How this Maps Back

The information non-growth law applies when we:

  1. are in a deterministic world,
  2. take into account the whole system, and
  3. have perfect knowledge of every detail of the system.

The conservation of information law applies if, in addition:

  1. The system is reversible.

In a deterministic world of perfect knowledge, taking into account the whole system, the information non-growth theorem applies. However, in the real world:

  1. we do not have perfect knowledge
  2. we only look at parts of the system, and
  3. we do not even know if physics is reversible or deterministic (Predictability Problems in Physics)

Basically, in practice, in empirical work, not one of the assumptions required for the information non-growth law holds.

We can see information grow in DNA because DNA is not the whole system, because we are not modeling the whole system, and because we do not have full and perfect knowledge of the whole system. This is just a context where the law of information non-growth does not apply. In fact, in just about no empirical scenario does the law apply. It is a theoretical construct that requires omniscience and determinism to apply, and it would be difficult to imagine an empirical scenario where it would.

That is what I told @EricMH in the beginning.

3 Likes

Yes, quite right.

I’ve stated this many times, formally and informally, but your argument doesn’t work. FI is a lower bound on MI, so FI > 0 implies MI > 0. Since MI cannot come from natural processes, FI > 0 indicates intelligent design.

As a side note, Li and Vitanyi prove your exact statement that MI is not calculable:

We show that the NID is neither upper semicomputable nor lower semicomputable.

However, this does not stop them from practically applying MI to derive a very useful feature-free clustering metric based on compression.

Plus, they wrote an entire book on the practical application of algorithmic information theory, An Introduction to Kolmogorov Complexity and Its Applications, despite the fact that algorithmic information is not calculable.
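
For anyone curious, that clustering metric is the normalized compression distance (NCD), which approximates the incomputable NID by substituting a real compressor’s output length for Kolmogorov complexity. A minimal sketch, with zlib chosen here purely for illustration:

```python
import zlib

def C(data: bytes) -> int:
    """Compressed length: a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, level=9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, approximating the incomputable NID."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

related = b"the quick brown fox jumps over the lazy dog" * 20
similar = b"the quick brown fox jumped over the lazy dogs" * 20
unrelated = bytes(range(256)) * 4

print(ncd(related, similar))    # near 0: the strings share structure
print(ncd(related, unrelated))  # near 1: almost nothing shared
```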

So, both your claims are shown to be false:

  1. FI is limited by the law of information non-growth, since it is a lower bound on MI.
  2. MI is quite useful in a practical setting despite not being calculable.

If the above is the extent of your argument against ID, then your argument against ID fails.

Chipping in: we are discussing the expectation of information growth, which is zero or negative. In practical settings random variation might increase information, but that’s not the question here.
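
For the record, the usual statements of those results look roughly like this (a sketch, with additive constants suppressed):

```latex
% Deterministic processing f cannot increase algorithmic mutual information
% with a fixed target y by more than the complexity of f itself:
I(f(x) : y) \le I(x : y) + K(f) + O(1)

% Independent random noise r does not help in expectation: the probability
% that it raises the mutual information by k bits falls off like 2^{-k}:
\Pr_r \big[\, I(\langle x, r \rangle : y) > I(x : y) + k \,\big] \le 2^{-k + O(1)}
```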

2 Likes

So then you believe that cancer is intelligently designed? FI increases, even as MI decreases.

You realize the error, right @EricMH? MI is an upper bound on FI, but is such a poor upper bound as to be irrelevant. It can move freely up or down without putting any practical constraints on FI. Cancer demonstrates this directly.
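
A toy illustration with purely made-up numbers shows why a loose upper bound constrains nothing:

```latex
\text{before: } FI = 10^{2} \text{ bits} \le MI = 10^{6} \text{ bits}
\qquad
\text{after: }  FI = 10^{3} \text{ bits} \le MI = 10^{5} \text{ bits}
```

Here FI rises tenfold while MI falls tenfold, and the bound FI ≤ MI is never violated. That is the cancer situation in miniature.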

Yes, and I make use of information theory every day to do practically useful things. Information theory is very useful. It is just not useful for what you are trying to do with it.

We’ve demonstrated now in several long exchanges that you don’t know how it works in practice. Your simulations have not borne out what you predicted. This is exactly what I explained from the beginning. In your own simulations, we see empirical MI increasing.

1 Like

It appears to be a byproduct of the design of the eukaryotic cell in multicellular organisms. It can occur when an embryonic developmental pathway gets initiated in a mature animal.

The information required to correctly turn on the right patterns at the right time for cancer is not encoded in our genomes. That information is acquired by a natural process that creates a cancer genome, demonstrating that new functional information can be created by natural processes.

1 Like

The cell is losing regulation in its mature state; the functional information to keep it regulated is lost. The cancers that I have studied are, in my opinion, due to a loss of functional information, as the cell is no longer functioning properly.

The WNT pathway is a case study: a loss of proper gene expression and of transcription/translation in genes like APC or DKK will cause this loss of regulation.

This all gets down to the definition of function. Is cancer function or dysfunction?

@colewd that is not how information works. Sorry. You are using an intuitive understanding that ends up being incorrect in the math.

Math depends on precise definitions and I think you are not defining function properly.

I think @swamidass is using ‘function’ appropriately. As I noted elsewhere, one needs to precisely define a metric of ‘information’ and stick to it, lest one fall into the problems Lee Spetner did with his varied assessments. And even with a precisely defined metric, one needs to recognize that it may not capture what was intended.

1 Like

So you both say I’m right and wrong in the same sentence. Strange indeed.

And to make a clarification: while I’ve come up with a way that one can get MI through randomness and determinism, that does not apply to your cancer argument. You claim the germline is not equivalent to a fair coin flip, so my scenario is no longer relevant to your argument, since the fair-coin-flip requirement is what makes the LoING apply only minimally in my scenario. Once that requirement is removed, the LoING applies in full.