The EricMH Information Argument and Simulation

Nothing he has written is clear enough to draw that conclusion. I also do not know that it is “obvious” that information is “lost” to the biological system. That does not make a lot of sense to me. My questions still stand.

Here is my example. A single mutation changes a single gene. Information is lost, not gained.

Umm. I’m not sure what definition you are using. In information theory, as it would usually be understood (at least by me), information is gained by definition.

To some degree it depends on how you define the function. If you mutate your best friend’s phone number and the function is defined as calling your best friend Harry, then a mutation will always be a loss of information. If you define the function as calling any friend, then the mutation may be neutral, as Harry’s number gets substituted for Fred’s. That outcome is improbable, but possible. If you are looking for a girlfriend and your function is to find the most compatible relationship possible, then a beneficial mutation to one of the girls’ numbers in your cell phone is possible, but indeed improbable.

Genes can work on this same principle, depending on their defined function.
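A minimal sketch of this idea in Python (the contact names, phone numbers, and both “function” definitions below are made-up placeholders for illustration):

```python
import random

# Made-up contact list for illustration only.
contacts = {"555-0101": "Harry", "555-0102": "Fred", "555-0103": "Sally"}

def mutate(number):
    """Replace one randomly chosen digit with a random digit (possibly the same one)."""
    chars = list(number)
    digit_positions = [i for i, c in enumerate(chars) if c.isdigit()]
    i = random.choice(digit_positions)
    chars[i] = random.choice("0123456789")
    return "".join(chars)

def calls_harry(number):
    """Narrow function: the number must still reach Harry."""
    return contacts.get(number) == "Harry"

def calls_any_friend(number):
    """Broad function: the number must reach any friend at all."""
    return number in contacts

original = "555-0101"  # Harry's number
mutant = mutate(original)

print("mutant:            ", mutant)
print("still calls Harry? ", calls_harry(mutant))       # almost always False
print("calls some friend? ", calls_any_friend(mutant))  # rarely True (e.g. hits Fred's number)
```

Under the narrow function a mutation is essentially always a loss; under the broad function it is occasionally neutral, which matches the point above that whether information is “lost” depends on how the function is defined.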

Entropy is not information!

What we call information is technically known as mutual information. Entropy is necessary for mutual information to exist, but high entropy does not entail mutual information actually exists.

Evolution can only cause mutual information to decrease. Likewise with the second law of thermodynamics.

That is why I don’t get the controversy over ID. Dembski is just repeating these well-established information laws. The only discrepancy is that he uses non-standard terminology.

What you could prove is that there is some probability of generating mutual information from a random source. But this probability drops off exponentially with each bit added. Beyond that, nothing more can be proven.
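One simple way to see that exponential drop-off is to estimate, by sampling, how often a random bitstring happens to match a fixed n-bit target. This is only a toy sketch (the bit lengths and trial count are arbitrary) illustrating the 2^-n scaling:

```python
import random

def hit_rate(n_bits, trials=100_000):
    """Estimate the probability that a random n-bit string equals a fixed target."""
    target = tuple(random.getrandbits(1) for _ in range(n_bits))
    hits = sum(
        tuple(random.getrandbits(1) for _ in range(n_bits)) == target
        for _ in range(trials)
    )
    return hits / trials

for n in (2, 4, 8, 12):
    # Each added bit halves the chance of a match: expected rate is 2**-n.
    print(f"{n:2d} bits: observed {hit_rate(n):.5f}, expected {2 ** -n:.5f}")
```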

In my argument I’m referring to what you call theoretical algorithmic information. My formulation may be novel due to its reliance on non-novel definitions of information, but in essence it is no different from the standard ID argument.

Regarding the definition of complexity, I define it as the Kolmogorov minimal sufficient statistic (KMSS). Only compressible bitstrings have a non-trivial KMSS, but most bitstrings are incompressible, so their KMSS is trivial. Due to the law of information non-growth, I(X:Y) ≥ I(f(X):Y), combining randomness and determinism doesn’t help generate a non-trivial KMSS any more than flipping a coin does.
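The KMSS itself is uncomputable, but the compressible/incompressible distinction can be illustrated with an ordinary compressor. The sketch below uses zlib, treating compressed length as only a crude upper bound on Kolmogorov complexity; the point is just that random bytes do not compress while highly regular bytes do:

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length after zlib compression: a rough upper bound on Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

n = 10_000
random_bytes = os.urandom(n)      # incompressible with overwhelming probability
regular_bytes = b"AB" * (n // 2)  # highly regular, hence highly compressible

print("random bytes: ", compressed_size(random_bytes), "bytes compressed")   # close to n
print("regular bytes:", compressed_size(regular_bytes), "bytes compressed")  # far below n
```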

Exactly.

Not according to Shannon. :smile: Why not test it with a simulation? It will demonstrate that I am correct here. Right?
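One way such a simulation might look, purely as a sketch: start from a population of identical sequences, apply random point mutations, and compute the per-site Shannon entropy across the population before and after (the sequence length, population size, and mutation rate here are arbitrary):

```python
import math
import random

ALPHABET = "ACGT"

def site_entropy(population, site):
    """Shannon entropy (in bits) of one site across the population."""
    counts = {}
    for seq in population:
        counts[seq[site]] = counts.get(seq[site], 0) + 1
    total = len(population)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def total_entropy(population):
    return sum(site_entropy(population, i) for i in range(len(population[0])))

def mutate(seq, rate=0.05):
    """Independently replace each base with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else base
                   for base in seq)

# Arbitrary starting point: 200 identical copies of a random 100-base sequence.
ancestor = "".join(random.choice(ALPHABET) for _ in range(100))
population = [ancestor] * 200

print("entropy before mutation:", total_entropy(population))   # 0.0 bits
population = [mutate(seq) for seq in population]
print("entropy after mutation: ", total_entropy(population))   # > 0 bits
```

In Shannon’s sense, the mutated population has higher entropy than the uniform one it started from, which is the sense in which mutation gains information by definition.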

Um, no. That is not the counterexample. You have a lot of questions to work through. Looking forward to seeing your answers. Until then you haven’t even made a clear argument.

Entropy is a measure of uncertainty, the opposite of information.

What is the counterexample?

Still waiting on answers to the questions above.

Working on it. You gave me many more questions than I gave you. We should establish some sort of question-time reciprocity: if I spend T time answering your questions, you spend the same time answering mine. Otherwise, you can just throw out an arbitrary number of questions as a prerequisite to answering any of mine, and so progress is never made.

If we did that, it would be far too long before I’d have to answer any of your questions. I’ve already engaged with you quite a bit answering questions. You are trying to make a case. It is not clear. I’m telling you what I need to know just to understand what you are getting at. You don’t have to answer if you don’t care about making your case.

We are both making cases, and my questions to you are basic. I’m not expecting drawn-out proofs. You say that something other than chance and necessity can generate information, and that it is also not a halting oracle. What is it?

What do you mean by “information”? Entropy is not information. I’ve provided a clear definition of information as “mutual information”.

Well, as I teach it at a leading science university, entropy is precisely information, just as Shannon explained in his seminal work, and as has been demonstrated in simulation after simulation. Mutual information is merely shared entropy. This is demonstrable with simulation and math, and it is included in every standard textbook (though entropy goes by many names). The formulas are literally identical.

So whatever you mean by information theory, it is something different from what we teach students at Washington University. You’ll have to write out the formulas and show your derivations.
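For reference, the standard textbook quantities being pointed to here are the Shannon entropy, the joint entropy, and mutual information written as shared entropy:

$$
\begin{aligned}
H(X)   &= -\sum_{x} p(x)\,\log_2 p(x) \\
H(X,Y) &= -\sum_{x,y} p(x,y)\,\log_2 p(x,y) \\
I(X;Y) &= H(X) + H(Y) - H(X,Y) \;=\; H(X) - H(X \mid Y)
\end{aligned}
$$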

One more thing. I’m not making a case. I am just stating a cold, hard fact:

I am an expert in information theory whose scientific work applies information theory to solve problems in biology, chemistry, and medicine, and who trains PhD students in how to do this too. From that vantage point, none of the information-theory arguments against evolution make any sense to me. Most seem to be in clear mathematical error.

Maybe I am wrong, but that is not an argument. That is just a brute fact. If you want to change it, you have to make your case. I’ll look at it, but I have no expectation or need to convince you that you are wrong. So I am not making a “case”.

When people talk about information in DNA, what they are referring to is DNA that is functional in some environment. There is a correlation between the DNA and environment, and that is mutual information. Entropy must be high for there to be mutual information, but high entropy alone does not entail mutual information between DNA and environment.

I think people are talking past each other. Just for clarification about ‘mutual information’, from Scholarpedia:

Mutual information is one of many quantities that measures how much one random variable tells us about another. It is a dimensionless quantity with (generally) units of bits, and can be thought of as the reduction in uncertainty about one random variable given knowledge of another. High mutual information indicates a large reduction in uncertainty; low mutual information indicates a small reduction; and zero mutual information between two random variables means the variables are independent.

I’m not sure how to connect this with ‘information non-growth’ to tell us about changes in the information (functional) content of an organism, but I guess that’s part of a proof.
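A small numerical sketch of that definition, using a made-up joint distribution over two binary variables (the probabilities are arbitrary):

```python
import math

# Toy joint distribution p(x, y); the numbers are made up for illustration.
joint = {
    (0, 0): 0.40, (0, 1): 0.10,
    (1, 0): 0.10, (1, 1): 0.40,
}

def entropy(dist):
    """Shannon entropy (bits) of a probability table."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Marginal distributions of X and Y.
px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0.0) + p
    py[y] = py.get(y, 0.0) + p

# Mutual information as "shared entropy": I(X;Y) = H(X) + H(Y) - H(X,Y).
mi = entropy(px) + entropy(py) - entropy(joint)
print(f"H(X) = {entropy(px):.3f} bits, H(Y) = {entropy(py):.3f} bits, I(X;Y) = {mi:.3f} bits")
```

With these numbers, knowing Y reduces the uncertainty about X by roughly 0.28 bits; an independent pair would give exactly zero mutual information.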

That turns out not to be true. Yes, that is what you mean, and what most ID proponents mean. There is, however, no settled information-theory concept of “function.” Usually when we discuss information in DNA, all we mean is entropy.

Yup, that’s right. I want to know what he means by all his terms, because he uses them in very different ways than I do. Your definition too is consistent with information = entropy and mutual information = shared entropy. Notice that, in that definition, “randomness”, “entropy”, and “information” are semantically equivalent.

Well, not my definition. I’d rephrase ‘mutual information’, probably imprecisely, as similar to the correlation between numerous variables.

For sure, ‘functional’ information is a difficult thing to describe.

Regardless of whether there is a precise definition of function, it is clear that randomly generating DNA by flipping a coin is not going to create organisms. So entropy is an insufficient definition of the information in DNA. Otherwise, a randomly generated string of DNA would have just as much information as a functional string of DNA, which is clearly false.

But let’s leave aside the question of function. Let’s just talk about complex structure. This can be precisely defined as the Kolmogorov minimal sufficient statistic, and it cannot be generated by random and stochastic processes.
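As a toy illustration of the combinatorics behind the coin-flipping point, the sketch below estimates how often a short, specified motif appears in randomly generated DNA. The motif, string length, and trial count are arbitrary; the relevant feature is that each base added to the motif cuts the expected rate by a factor of four:

```python
import random

ALPHABET = "ACGT"
MOTIF = "TATAAT"  # an arbitrary 6-base "target" used only for illustration

def random_dna(length):
    return "".join(random.choice(ALPHABET) for _ in range(length))

trials, length = 20_000, 50
hits = sum(MOTIF in random_dna(length) for _ in range(trials))

# Rough expectation (ignoring overlaps): (length - len(MOTIF) + 1) / 4**len(MOTIF).
expected = (length - len(MOTIF) + 1) / 4 ** len(MOTIF)
print(f"observed rate {hits / trials:.5f}, rough expectation {expected:.5f}")
```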