We Are Mystified by Eric Holloway

I have mathematically proven to you over and over again this does not contradict my argument.

This is the crucial issue. How can AMI be useful if the calculable approximations are not measuring AMI? That appears to be what you are claiming, and it makes no sense to me. I submit that even my vastly limited, yet successful, experiment indicates AMI when it gives a positive measurement. Li and Vitanyi’s clustering metric would not work if its approximation of AMI had no relation to the actual AMI between objects. Thus, in order for AMI to be useful in the real world, our approximations must bear some correlation to the real AMI. The effectiveness of the approximations therefore indicates that the real AMI > 0, and the LoING applies. So, I am mystified by @swamidass and his arguments.
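To make the appeal to Li and Vitanyi concrete, here is a minimal sketch (my own illustration, not anyone's published code) of their normalized compression distance, which uses a real compressor (zlib here) as a computable stand-in for Kolmogorov complexity:

```python
import os
import zlib

def K(x: bytes) -> int:
    # Compressed size as a computable upper bound on Kolmogorov complexity
    return len(zlib.compress(x, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: near 0 = shared information, near 1 = unrelated
    kx, ky, kxy = K(x), K(y), K(x + y)
    return (kxy - min(kx, ky)) / max(kx, ky)

english1 = b"the quick brown fox jumps over the lazy dog " * 20
english2 = b"the lazy dog sleeps while the quick fox runs " * 20
noise = os.urandom(len(english1))

d_related = ncd(english1, english2)  # smaller: the compressor exploits shared vocabulary
d_noise = ncd(english1, noise)       # near 1: no shared information to exploit
print(d_related, d_noise)
```

The point of the sketch: if the compression-based approximation bore no relation to the real AMI, the two distances would be indistinguishable, and the clustering method could not work.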

Here is Eric’s view. What is not exactly connected is the relationship between MI and FI.

But that misses the point.

Yes, we have two copies. Whether copying counts as new information is not what I was arguing. The fact that we have two copies rather than one is itself something new.

For me, the entire use of AIT is a mistake.

A biological organism uses information as a pragmatic construct to deal with uncertainty. Generally speaking, it is information about the environment. Using AIT, in effect you hermetically seal the system inside an AIT container with a very limited abstract formal environment, and where there is no uncertainty and therefore no need for information. So if you show that there is no information about the environment, that’s because the AIT model has removed the uncertainty from the environment and removed the value of information.

It amounts to a kind of circular reasoning. AIT can be useful in modeling computation, but it does not seem at all relevant to modeling biological life.

That’s not true. It is very useful, just not for what Eric is using it for.

Don’t you think this is partially true? My heart requires gobs of FI in all conditions to keep me alive. Its operation needs to be pretty certain.

As I understand it, you can cut most of the neurons to the heart, and it will continue to beat.

One of the problems here, is that we don’t have a clear meaning for “information”. From my perspective, information is that which informs. And whether something informs me depends on what I already know. So “information” refers to something that is unavoidably subjective.

This is pretty handwavy. How is AIT useful if approximations do not measure the real thing? Every practical application I’ve seen depends on a correlation between approximation and reality. Perhaps you have something else in mind?

If you get rid of the muscle motor proteins, you’re in trouble. :slight_smile:

To your next point: I agree we need to improve our understanding of information, especially biological information.

Except when we ask for the correlation between your calculations and the physical reality of biological life. Then you are stumped.

That being said, @dga471 has a point. We can easily use this replication example to generate a counter argument.

We have a randomness oracle. From this oracle we can generate an arbitrarily large amount of entropy A. Since MI(A:A) = H(A), and a deterministic replication function F preserves mutual information, MI(F(A):A) = MI(A:A), so we can get unlimited amounts of MI from this combination of randomness and deterministic replication.
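This replication claim can be checked numerically with a compression-based approximation of MI (a sketch of my own; zlib stands in for the ideal compressor and os.urandom for the randomness oracle):

```python
import os
import zlib

def K(x: bytes) -> int:
    # Compressed length as a computable upper bound on Kolmogorov complexity
    return len(zlib.compress(x, 9))

def mi(x: bytes, y: bytes) -> int:
    # Approximate algorithmic mutual information: MI(x:y) ~ K(x) + K(y) - K(x,y)
    return K(x) + K(y) - K(x + y)

A = os.urandom(10_000)  # entropy A from the "oracle"
B = os.urandom(10_000)  # an independent draw, for comparison

copy_mi = mi(A, A)   # a copy shares nearly all of H(A) with the original
indep_mi = mi(A, B)  # independent randomness shares essentially nothing
print(copy_mi, indep_mi)
```

The copy registers thousands of bits of approximate MI (zlib's 32 KB window sees the repeat), while the independent draw registers approximately none, which is the behavior the argument relies on.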

So, does this contradict LoING? No, because in the law’s formulation our target X must be fixed. However, in the above example, the target X changes to be whatever A is.

Thus, as long as X can change arbitrarily, LoING does not apply. And because a necessary premise is violated, the example does not contradict LoING; the law simply does not cover it.

That’s because I’m not a biologist. But insofar as DNA is a digital code, my argument applies. And the very existence of a digital code is itself evidence of design.

HOW do your calculations apply? All you keep doing is repeating the same empty assertion.

Another empty assertion. Do you ever do anything besides recite these tired old DI talking points?

Great. Do you notice that the necessary premises of LoING are violated in empirical reality? So why are you so convinced it applies everywhere?

Two examples:

Actually, DNA is not the whole system. It is just part of the system. So LoING does not apply.

And again:

So why are you so insistent that the law of information non-growth applies when all its foundational conditions are violated?

The conditions you list either don’t apply to LoING, or it’s unclear how they affect the ability to apply the LoING.

Regarding point 1, determinism is not a necessary requirement. The law takes into account both determinism and randomness.

Points 2 and 3 are possibly an issue, but possibly not. We’d have to look at it on a case-by-case basis to know whether the law applies. You’d have to argue that imperfect knowledge necessarily, or in most cases, means we cannot know anything about the true AMI of the system, but such an argument doesn’t sound plausible to me. I’d certainly like to see such an argument.

Point 4 does not apply; the system does not need to be reversible.

It occurs to me that we may be talking about different conservation laws. For reference, I am referring to Leonid Levin’s law in “Randomness conservation inequalities; information and independence in mathematical theories”. You’ll see points 1 and 4 don’t apply in his paper.

At any rate, I’ve thought of one way to connect the dots between the different points you made that seems consistent.

Basically, you are saying everything originates from a fair coin flip, which is deterministically operated on by a replicator, a mutation function, and a selection function, where the selection function is essentially a Hamming distance from another fair coin flip. This minimizes the MI that LoING applies to, yet generates many individuals that share significant amounts of MI. Is this what you are saying? I can expand on this during the weekend, possibly with an empirical simulation, if the description doesn’t make sense.
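In case the description isn't clear, here is a minimal sketch of the kind of simulation I have in mind; all parameters (100-bit individuals, population 50, 300 generations, one bit-flip mutation per child) are my own illustrative choices:

```python
import random

random.seed(0)  # reproducible run
N, POP, GENS = 100, 50, 300

# F: the "fitness" target, itself just a fair-coin bit string
target = [random.randint(0, 1) for _ in range(N)]

def fitness(ind):
    # Selection = negative Hamming distance to the random target
    return -sum(a != b for a, b in zip(ind, target))

# I: random initial population
pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]

for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]       # keep the fitter half
    children = []
    for p in survivors:
        c = p[:]                     # deterministic replication
        c[random.randrange(N)] ^= 1  # M: one random bit-flip mutation
        children.append(c)
    pop = survivors + children

best = max(pop, key=fitness)
matches = sum(a == b for a, b in zip(best, target))
print(matches, "of", N, "bits now shared with the random target")
```

Everything in the run is either a fair coin flip (I, M, F) or deterministic bookkeeping, yet the population ends up sharing nearly all of the target’s bits, i.e. substantial MI with the target.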

If that is close to what you are saying, then yes, the LoING would not apply (very much) in this case, because none of the targets are independent: random initial individual (I), random mutation (M), random fitness (F).

And, if evolution does reduce to the above, where I, M and F are purely randomly generated, you do have a case for saying the LoING does not apply.

Now, this does not disprove ID, nor the LoING. What it does mean is that ID would always measure 0 or negative CSI when taking into account all relevant factors. So both ID and the LoING are still empirically valid, but would produce a null result. In which case, this would actually demonstrate the validity of these approaches, justifying the claim that ID is a practical science.

@EricMH your conclusions are always the same, but they do not follow from your reasoning, which often seems self-contradictory.

Mind pointing out the contradiction in my last post? I can’t have made too many that time, so I can address them all :slight_smile:, although perhaps not until the weekend.

OK, it doesn’t contradict LoING, but then how does LoING prove ID? You agree now that replication (which is just randomness and determinism) can result in infinite amounts of MI. Where is the “mind” in DNA replication then?

Yes, in this case there is no mind necessary to get infinite amounts of MI.

The main problem here is me. I mistakenly over-applied LoING.

It is true that when X is independent, you cannot naturalistically produce mutual information with X. However, if X is not independent, then it is trivial to generate mutual information.

And again I note this does not contradict ID, which requires the specification to be independent of the event.
