We Are Mystified by Eric Holloway

Quite. Natural selection has long been viewed as a mechanism for transferring information from the environment into an organism.

4 Likes

I like this phrasing a great deal.

Mutable feedback loops.

1 Like

I still don’t understand your rebuttal to the issue of DNA replication, in which you claim that a copying function F has mutual information between itself and what it is copying, A:

Again, this doesn’t make sense to me because the function itself is a very simple set of instructions which could be applied to copy anything.

1 Like

I agree @dga471. It looks like an example of goal-post moving. Clearly a copy is a deterministic function. However, a deterministic function is not supposed to be able to increase MI. Yet it can. This gets back to some old questions about the simulation that I asked @EricMH and that he never answered: what are valid choices for X, Y, and E?
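To make this concrete, here is a minimal sketch (my own illustration, not code from the actual simulation) that uses zlib's compressed length as a crude stand-in for Kolmogorov complexity. The estimated MI between a pseudorandom string and its deterministic copy comes out well above zero:

```python
import os
import zlib

def c(s: bytes) -> int:
    """Crude stand-in for Kolmogorov complexity: compressed length under zlib."""
    return len(zlib.compress(s, 9))

def mi_estimate(x: bytes, y: bytes) -> int:
    """Compression-based MI estimate: C(x) + C(y) - C(x concatenated with y)."""
    return c(x) + c(y) - c(x + y)

x = os.urandom(1000)   # a pseudorandom string standing in for "anything"
y = bytes(x)           # Y produced from X by a trivial deterministic copy

print("MI estimate, X vs. its copy:  ", mi_estimate(x, y))                 # roughly len(x), not zero
print("MI estimate, X vs. unrelated: ", mi_estimate(x, os.urandom(1000)))  # near zero
```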

Another very puzzling set of statements:

Compression is not possible except in ergodic sequences, period. Ergodic, however, is a strange term. Clearly all these big strings are compressible down to the random seed, so in that sense they are ergodic. They are just not ergodic in the way that LZ understands ergodicity (key point!). We had to introduce some order into the sequences, which my first implementation did (by encoding them as strings of 0s and 1s), to get any compression from any algorithm. This leads to an interesting catch-22 that @EricMH falls into:

  1. He wants compression to reduce the size of the random bit string.
  2. He doesn’t want there to be any “computer perceivable” order in these bit strings.

These two requirements are logically contradictory; we have to pick one or the other, but we can’t have both. I actually produced an implementation that satisfied #1, and he complained that #2 was not satisfied. Then I produced an implementation that satisfied #2, and he complained that #1 was not satisfied. So which one is it, @EricMH? Which of these two mutually contradictory requirements should govern the simulation? You can’t have both; you have to choose. And no, I did not make an error. I am simply trying to follow self-contradictory requirements.
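For what it is worth, the tension is easy to see with off-the-shelf tools. A minimal sketch (mine, with zlib standing in for any LZ-style compressor):

```python
import random
import zlib

# A long "random" byte string that is in fact fully determined by a tiny seed.
random.seed(42)
data = bytes(random.getrandbits(8) for _ in range(10_000))

compressed = zlib.compress(data, 9)
print(len(data), "->", len(compressed))  # slightly larger: no LZ-perceivable order (requirement 2)

# The same data is trivially "compressible" to its generator plus the seed (requirement 1),
# but an LZ compressor cannot see that structure. Hence the catch-22.
```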

This is in error. This hackish “fix” does not produce a compression algorithm by your own definition. It instead creates a function that will almost always increase the size of its input by one bit. It is therefore not a compression algorithm, violating one of your requirements.
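For context, the reason any genuinely lossless scheme behaves this way on random input is a standard counting bound (a general fact about lossless codes, not specific to this implementation):

```latex
% For any lossless (injective) code C and a uniformly random n-bit string X,
% there are fewer than 2^{n-k+1} bit strings shorter than n-k+1 bits, so
\Pr\bigl[\,\ell(C(X)) \le n - k\,\bigr] \;<\; 2^{-k+1},
% i.e. almost all random inputs come out no shorter, and a prefix-free code
% typically pays about one extra bit of overhead on exactly such inputs.
```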

Moreover, his last experiment demonstrates another error in interpretation:

Once again the claim is wrong. Actually, it will never cross for copies of pseudorandom bit strings, and this is a problem because MI is supposed to equal len(Y), not zero. H(Y+Y) is supposed to approach len(Y), not len(Y+Y). Crossing will actually become less likely as the length increases. So it is a false claim, and even if it were true, it would still demonstrate the error in his implementation.
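A rough sketch of the quantities in question (zlib as a stand-in; I am assuming this captures the spirit of the implementation, not its exact code). Note how the estimate degrades once the string outgrows the compressor’s 32 KB window, which is one way the problem worsens with length:

```python
import os
import zlib

def c(s: bytes) -> int:
    """Compressed length under zlib, as a rough proxy for H/K."""
    return len(zlib.compress(s, 9))

for n in (1_000, 10_000, 100_000):   # the last case exceeds zlib's 32 KB window
    y = os.urandom(n)
    print(f"len(Y)={n:>7}  C(Y)={c(y):>7}  C(Y+Y)={c(y + y):>7}")

# For short Y, C(Y+Y) stays close to C(Y), as it should for a perfect copy.
# Once Y outgrows the window, C(Y+Y) jumps toward 2*C(Y): it is the estimator,
# not the underlying information, that degrades as the length increases.
```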

I do believe that @EricMH is an honest character here. However, it is hard to make progress when basic and demonstrable errors are being made over and over.

1 Like

What is somewhat stunning to me, too, is that I already explained how to fix it for some narrow cases.

This actually would produce a reasonable MI in @EricMH’s simulation, though, as I said, it is easy to break with a different E function (a rotation, among many other sorts of shuffling). What is hard for me to get my head around is @EricMH’s reasoning here. It really does look like epistemic closure, but he exudes such confidence that I’m unclear exactly what is going on in your head, @EricMH. I hope you can enlighten us.

There is a point here too. It is not possible to fix this for all cases, only some. You have to actually model the process to know the compression-size/probability function.

1 Like

Quite a lot here, but it might help if you weren’t so intent on claiming that everything I state is a contradiction. Perhaps try thinking of a way the statements don’t contradict, or consider that perhaps I didn’t say what you think I said. That is, apply the principle of charity:

In philosophy and rhetoric, the principle of charity or charitable interpretation requires interpreting a speaker’s statements in the most rational way possible and, in the case of any argument, considering its best, strongest possible interpretation.

I attempt this with your posts, so I’d appreciate the favor being returned here.

Otherwise, you’ll just “win” by throwing up too much flak, because I don’t have enough time to address everything, and that would be a hollow victory indeed.

I’m honestly confused by you, and it seems I am not alone. It is not flak but honest confusion about how to make sense of your claims.

If I were in your shoes, I would have recognized my error a long time ago. Yet you continue on. I have no idea what to make of it.

The basic point is that the law of information non-growth (LoING) means you cannot get a net increase of algorithmic MI from deterministic and random processes (DRP). That point is proven. The point of controversy is how this translates over to the calculable, empirical realm.
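Roughly, and up to additive constants, one common way to state the relevant definitions and conservation inequalities is as follows (treat the exact forms as a sketch):

```latex
% Algorithmic mutual information (up to O(1) terms):
I(x:y) \;=\; K(x) + K(y) - K(x,y)

% Deterministic processing: for any computable function f,
I(f(x):y) \;\le\; I(x:y) + K(f) + O(1)

% Randomized processing: for uniformly random z,
\Pr_{z}\bigl[\, I(\langle x, z\rangle : y) > I(x:y) + k \,\bigr] \;\le\; 2^{-k+O(1)}
```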

Your first argument is that we only see a portion of everything, so we cannot differentiate between MI and conditional MI, which you call FI. However, FI is a lower bound on MI, so any measurement of FI > 0 indicates MI > 0, and per LoING, FI > 0 indicates some cause that is not reducible to DRP. So this argument does not work.

Your second argument is that we can never truly measure algorithmic MI, therefore empirical measurements do not tell us anything about the algorithmic MI of X. This argument is more interesting, but so far it has only revolved around demonstrating that I have difficulty constructing ways to empirically measure algorithmic MI, which other commentators have pointed out is not conclusive of anything. If you are correct on this premise, then you certainly have a good argument, but you need to do more than point out flaws in my experiments. If you cannot, then you have no grounds for saying ID is fundamentally flawed other than an argument from personal incredulity, and that argument is like Bill Gates saying we only need 640K of memory in our computers.

And personally, since LoING is proven, this whole discussion seems pretty moot to me. I find it inconceivable that algorithmic MI has no bearing on the empirical realm, especially since it has resulted in useful technology and applications (e.g., Li and Vitanyi’s work).

I am happy to be convinced otherwise. But, if you don’t have any positive arguments why algorithmic MI has no empirical validity, then I don’t see a great deal of progress here.

The problem here, at least from my perspective, is that @EricMH appears to be making an argument about an abstract mathematical model, and most of us are concerned with empirical questions related to biology and evolution. So there appears to be a huge communication gulf.

For myself, I do not see how the abstract model connects to the empirical questions. I do not see how AIT (algorithmic information theory) could possibly be relevant.

4 Likes

Yes, it has been proven; however, you are misunderstanding us. Algorithmic MI is a very useful concept, just not for what you are doing. That is the difference: it cannot be used the way you are using it. So the fact that it is useful for some things (which it is) does not help you demonstrate that it is useful in precisely the place where it cannot be.

4 Likes

Your conclusion does not follow. MI > 0 does not imply a change to MI, only that it is greater than zero, so LoING is not violated. (Unless by MI > 0 you actually mean ΔMI > 0, in which case your conclusion doesn’t follow for a different reason.)
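A small worked example of that distinction, using the usual algorithmic definitions: a deterministic copy yields positive MI without any growth that LoING forbids.

```latex
% Let f be the copy function, so y = f(x) = x. Then, up to O(1) terms,
I(x:y) \;=\; K(x) + K(x) - K(x,x) \;=\; K(x) \;>\; 0,
% yet the non-growth bound is respected, since
I(f(x):x) \;=\; K(x) \;\le\; I(x:x) + K(f) \;=\; K(x) + K(f).
% Observing MI > 0 therefore says nothing, by itself, about whether MI grew.
```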

1 Like

I don’t see any empirical (biological) data that contradicts Eric’s model. Do you?

1 Like

We have pointed you over and over to cancer. This definitively contradicts his model. We see FI increasing by natural processes.

1 Like

I have pointed out that I think you are wrong, and Kirk’s analysis agrees with me. You have failed to make an argument that cancer represents a gain of function rather than a loss of function.

@kirk just discovered a math error. He mistook ΔH for KL. We will see now if he is honest or not.

2 Likes

We are missing a clear specification of how to connect empirical data to the model.

2 Likes

The basic theory is that if FI > 0, it is the result of conscious intelligence and not of chance and determinism. There is no empirical data I have seen that violates this. If you could build a chance-and-determinism model that generates known functional information, then you would falsify Eric’s model.

I think he is probably right, since information of any appreciable length, like the argument being made here, lives in an almost infinite mathematical space of possible sequences. It seems empirically very unlikely that it can be generated without a mind. Although the design argument is limited, it is in itself pretty powerful.

1 Like

Have you read and understood this thread? The Law of Information Non Growth

1 Like

The behavior of a bacterium cannot be fully explained by chance and determinism. Are you implying that we should consider the bacterium to have conscious intelligence?

1 Like