Define "information"? Creationists aren't even willing to define it

So I was listening the other day to a live stream on the @dsterncardinale channel wherein he talked to Paul Price. The topic of discussion was Sanford’s concept of “genetic entropy”. The discussion is about 1 hour and 20 minutes long, but there was (IMO) one critical moment. After the 38:00 mark, Dan raises an important point that Sanford completely ignores, which is the non-constant rate of beneficial mutations.

To explain this: We can describe a genome with a particular sequence as being situated at one point within sequence space, which is enormous even for modest genome sizes. A genome of N base pairs can be in 1 of 4^N possible configurations. That assumes a constant genome size; genome size is also able to change, making the size of sequence space a variable as well. A mutation can then be taken as a ‘step’ within the sequence space, moving the genome from one place to another. Furthermore, each position within this sequence space has a fitness associated with it, which is also not constant (it depends on the context).
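To get a feel for how enormous, here is a quick back-of-the-envelope check (my own illustration; the 1,000 bp figure is just an arbitrary stand-in for a “modest” genome):

```python
# Sequence space grows as 4^N for a genome of N base pairs.
# Even a tiny, made-up 1,000 bp genome gives an astronomically large space.
N = 1000
configurations = 4 ** N          # Python integers are arbitrary precision
print(len(str(configurations)))  # number of decimal digits in 4^1000
```

That prints 603: a 603-digit number of possible sequences, compared with roughly 10^80 atoms in the observable universe.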

But let’s simplify for the sake of the argument. Let’s say that the specific region within the sequence space where fitness is optimal is fixed. In other words, the shape of the fitness landscape doesn’t change. The subset of mutations that cause genomes to “walk” across the sequence space while remaining within this optimal region are neutral. Mutations that cause the genome to step outside and move further away from the optimal region are deleterious. Note that a genome within the optimal region can only experience neutral and deleterious mutations. However, once a genome steps outside the optimal region, two things follow: (1) The number of possible deleterious mutations (and thus the rate of deleterious mutations) goes down, since you can’t take the same step twice. More significantly, (2) beneficial mutations are now possible, such as a mutation that reverses the initial deleterious mutation, moving the genome back to its original spot in the sequence space. Alternatively (and more likely), there can be one of many compensatory beneficial mutations: the genome steps back into the optimal region, but not necessarily to the exact original starting point.

The key thing to bear in mind is that the number of potential beneficial mutations, and thus the rate at which they occur, increases immensely with every deleterious mutation. In other words, for every step away from the optimal region in sequence space, there is an increasingly greater number of possible steps back towards the optimal region; and the further away you are from the optimum, the greater the effects of beneficial mutations. Eventually, the rate and effects of beneficial mutations reach a point where they balance the rate and effects of deleterious mutations. The two rates don’t even have to be equal, since selection puts a hand on the scale. Thus, even if beneficial mutations are very rare relative to deleterious ones, a population can still sit at an equilibrium point: deleterious mutations fixed by drift are continuously corrected and/or compensated for by beneficial mutations fixed by positive selection and drift, such that fitness remains constant.
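The dynamic can be sketched with a deliberately crude toy model (my own construction, not Dan’s or Sanford’s; all the numbers are arbitrary): track only the count of “broken” sites, let the supply of deleterious steps shrink and the supply of beneficial back-steps grow as damage accumulates, and watch the trajectory flatten onto an equilibrium instead of declining forever:

```python
# Toy model: a genome with N sites, each either 'optimal' or 'mutated'.
# k = number of mutated sites. Deleterious changes can only hit optimal
# sites; back/compensatory changes can only hit mutated sites, and
# selection boosts their chance of fixing.
N = 10_000        # sites in the toy genome (arbitrary)
u = 1e-4          # per-site fixation rate of a deleterious change (arbitrary)
boost = 10.0      # selection multiplies the fixation rate of back-mutations

k = 0.0           # start at the optimum: no broken sites
history = []
for generation in range(200_000):
    forward = u * (N - k)       # fewer intact sites -> fewer ways to break
    backward = u * boost * k    # more broken sites -> more ways to repair
    k += forward - backward
    history.append(k)

# Analytic equilibrium: forward == backward  =>  k* = N / (1 + boost)
print(round(history[-1], 1), round(N / (1 + boost), 1))
```

The curve climbs and then plateaus: fitness stops declining not because mutations stop happening, but because every step away from the optimum creates new ways back.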

Sanford doesn’t take this into account. His model assumes that the rates of both beneficial and deleterious mutations remain more or less constant, such that deleterious mutations accumulate at a constant rate. However, when considering that the rate and effects of beneficial mutations would increase (and the rate of deleterious ones decrease) correspondingly, the accumulation of deleterious mutations wouldn’t be linear. The line would curve downward and reach an asymptote at the equilibrium state. When this is pointed out, Sanford says it doesn’t matter, since he assumes that any population will go extinct long before it reaches this equilibrium state… but he doesn’t test this assumption. It also ignores the fact that this equilibrium state (called mutation-selection balance) is empirically observed in the lab… unlike genetic entropy.

Dan pointed this out in his own way (and much more concisely). Paul responded with the following at 42:10:

PAUL So, that’s one aspect of it, but the other aspect… and I think this is the more important aspect, is that you’re thinking in an overly reductionistic way about the genome itself. You’re thinking of the genome like it’s a big ocean of switches that can be flipped. In a sense that is true, but in another sense that is not true. Because what we’re really talking about again is information content. So, when you bring information into the picture, it becomes clear that if the surrounding context… the surrounding informational context has been lost… so in your example, you’re talking about reaching an equilibrium because you’re talking about a genome that is so saturated with deleterious mutations that now it is reaching a point where your probability distribution is starting to do like this [holds hands up at equal heights] and you’re starting to get this equilibrium that you’re talking about. By the time you get to that point, you’ve done an unbelievable amount of damage to the information content of the genome. We are not talking about switches, we are talking about words and sentences in effect.

So now genetic entropy is framed as a ‘decline’ or ‘degradation’ of ‘information content’ in the genome. At this point I was just screaming in the chat: please define INFORMATION for me, and tell me in what units you are measuring it. This is one of the sticking points I have with ID-creationists who use the word “information”. They never define it in a way that can be measured objectively. And no, they don’t use Shannon information theory. Thank goodness Dan asked Paul for the definition at 47:24:

DAN: What are you… How are you defining information? Because we need a quantitative definition of information in order to make this work.

Dan did an excellent job of clarifying that he is asking for a definition that can be used to quantify whatever Paul is referring to. Paul’s immediate response to this question is astonishing:

PAUL: Yeah… so… I don’t think that is correct. Yeah, that’s not correct.

Not only does he not define what he means by “information”; he argues for why he doesn’t need to provide a definition. To me, that killed the conversation right there and then. If you can’t define the terms of your claim, then your claims hold (literally) no meaning. Such claims can neither be confirmed, nor disputed, nor even discussed in any rigorous way. An instance of not even wrong.

I find such an admission astonishing, since Paul is essentially stating: “I claim that information content in the genome is declining due to genetic entropy, but there is no way to measure it.”

Again, it kills their own argument. Paul also mentions an article he wrote with Robert Carter on the Creation Ministries website. Paul says that in this article they argue that they don’t need to define what they mean when they use the word “information”. I was curious, so I started reading it. The article starts by dismissing examples of the evolution of new functions (such as yeast evolving a new ability to digest a type of sugar) because these don’t represent anything… quote… “genuinely new”… whatever that means. Next, the article states the following:

Information is impossible to quantify!

Skeptics often challenge creationists, “If information is decreasing, what is the rate of its decrease?” Another similar objection is, “Can you quantify the changes in the information content of the cell?” This line of questioning successfully cuts to the heart of the matter. They claim our inability to define information robustly means information does not exist.

I cannot speak for all skeptics of course, but that shouldn’t be the claim. The claim is not “if you cannot define it, it doesn’t exist”; the claim should be “if you cannot define it in a quantifiable manner, you cannot determine objectively whether the thing in question is increasing, decreasing, or remaining constant”.

For example, suppose we observe the genomes of two generations: person A claims the information content went down, person B claims it increased, and person C says it remained constant. How can we tell who is right? How can we test these hypotheses without defining “information” in a manner that is at least somewhat quantifiable? We just can’t. There is no way to objectively resolve such disagreements. One might as well argue over which color is “cooler”, blue or red.

Next, they discuss why we can’t use Shannon information as the definition. They bring up an example from language: the German (Eichhörnchen) and English (Squirrel) words have the same semantic ‘meaning’… or as they put it… they “both ‘code for’ the same information content” (referring to the same animal). But these words would carry different amounts of information if we measured them in terms of Shannon information.
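Their Shannon point is easy to verify. Here is a minimal sketch (one of several possible Shannon-style measures; I’m scoring each word against its own empirical letter frequencies, which is my choice, not theirs):

```python
import math
from collections import Counter

def shannon_bits(word: str) -> float:
    """Total Shannon information of a string, scored against the
    empirical letter frequencies of the string itself."""
    n = len(word)
    counts = Counter(word)
    per_char = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return n * per_char

print(round(shannon_bits("Squirrel"), 2))      # → 22.0
print(round(shannon_bits("Eichhörnchen"), 2))  # → 34.26
```

The two numbers differ, which is exactly their point. The problem is that they then reject this, the only measure actually on the table, without ever proposing a replacement.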

However, there are already a few issues here. These words aren’t necessarily referring to the same animals (or identical taxa). Germans use the word to refer to one specific genus, Sciurus, whereas ‘squirrel’ in English is used to refer to members of the whole family Sciuridae. At least, that is the taxonomic usage. The family also includes “ground squirrels”, which are often not referred to as ‘squirrels’ in common vernacular; instead they are more often called ‘chipmunks’, ‘prairie dogs’, or ‘marmots’. The taxonomic word for the family Sciuridae in German is Hörnchen, which literally translates to ‘little horn’, and this word is coincidentally also used for various horn-shaped pastries in German. If you click on the previous link to the German wiki site and use a browser with a translator, the translator sometimes fails to recognize the context and “Hörnchen” is mistakenly translated as “croissant”.

It’s also fun to look a bit into the etymology. English ‘squirrel’ and Latin ‘Sciurus’ have the same origin in Greek ‘skiouros’, which literally translates from ‘skia-oura’ to ‘shadow-tail’, so basically “shadow-tailed”. Why the ancient Greeks referred to (presumably) squirrels this way isn’t clear; it’s difficult to ask them to clarify, of course. One proposed explanation is that they believed squirrels used their large bushy tails to shade themselves. Whatever the reason, when English speakers say “squirrel” they don’t think “shadow-tailed”. The original metaphorical meaning has been lost. It’s rather like how we have come to refer to computer mice as ‘mice’, though there it is still easy to recognize the metaphor.

The point I am making is that words don’t actually possess “information content”, regardless of what is meant by “information content”. Such a statement implies that words “contain” something that gives them meaning. No. Meaning is determined by how words are used, which changes drastically depending on time, place, and context (as shown earlier). Meaning is determined by the individual or, more predominantly, at the sociological level. I can make up a new word on the spot, or use a word in a novel way. However, this matters little unless it catches on like a meme, or I have a friend group with its own private slang. That’s how meaning is established in language. This makes “meaning” subjective, and unquantifiable in any objective manner. So it literally makes no sense to say that a word has an “amount” of meaning.

But how does this apply to genomes? It doesn’t. This is another common mistake that most people make, not just creationists. We often describe nucleotide and amino acid sequences as ‘words’, and say that DNA sequences ‘code’ for proteins. However, these are metaphors (much like the aforementioned ‘computer mice’). A genome isn’t a “book”, or a “blueprint”, or a “recipe”. And this is where creationists run their own argument into the ground, by equivocating between the semantic meaning of words and sentences and the sequences of genomes and proteins. That’s exactly what Price does when he says “We are not talking about switches, we are talking about words and sentences in effect.” It’s one big confusion resulting from a category error, from mistaking the map for the territory.

Moving on. In one sentence, Carter and Price admit that they cannot quantify their concept of “information”. Yet in the next, they insist it can be quantified using simple examples.

So, on the one hand, the answer is no. When considering the decay of biological information over time, we cannot quantify the rate of decrease, because information, at its base, is an immaterial concept which does not lend itself to that kind of mathematical treatment. On the other hand, the answer is yes, we can sometimes quantify information when we have something simple to measure.

Alright. We may not get a method of measurement, but at least we get an example of a measurement… right? No, sorry. We don’t. The examples they provide don’t actually contain any measurements. They just ask you which of two things has more information, and simply assert that one has more. Why exactly? Because it’s just “clearly obvious”… that’s why. No explanation. No measurements. Just use your “intuition”.

Let’s illustrate that information can increase and decrease

Example 1:

A man in a coma, existing in a dreamless unconscious state, compared to a man who is conscious

During a 24-hour period, which of these two men will have had more information, or ideas, go through their minds? The answer is clearly the second man. The first man will not have had any information in his mind during that period of time.

Example 2:

A 30-page children’s book compared to a 1000-page encyclopedia

Which of these two books contains more information? Clearly the second.

These examples don’t illustrate anything. The “measurement” here is nothing but asking one to make a guess. But suppose someone gives the opposite answers. Is their intuition wrong? How would they demonstrate that? How can you demonstrate who is right? Once again, still no answer.

Next up, they do something that is so completely backwards, it’s infuriating.

Information is carried in so many complex ways (syntax, grammar, contextual clues, etc.) that it staggers the mind to contemplate actually trying to quantify it in an objective way. Yet this is what the skeptic asks us to do. This is an attempt at obfuscation to avoid grappling with the obvious fact that life is built upon the foundation of information. In fact, life is information.

Yup, that’s right. They say that asking them to define their terms is “obfuscation”. No… that’s not obfuscation. It’s literally the opposite. Obfuscation is making claims based on terms that remain ambiguous or undefined, obscuring what the claims even entail… which is exactly what Carter and Price are doing. ZERO self-awareness.

I find this very irritating, since it feels like gaslighting. If you ask them to define “information”, they are basically saying that you’re being dishonest. They act as if you already know full well what “information” means, that you’re just playing dumb, and that your request for a definition of “information” is simply made in bad faith.

Next, they respond to skeptics who doubt that DNA contains any information. Once again, I can’t speak for all “skeptics”, but until we know what they mean by “information”, we just can’t tell either way. If the question is whether DNA has semantic meaning like words and sentences, then I submit that the question commits a category error. However, the argument they make to claim that DNA contains “information” (in their undefined sense) is also fallacious in another way:

Some skeptics will resort to simply denying that the DNA truly carries any information, claiming this is just a creationist mental construct. The fact that DNA data storage technology is now being implemented on a massive scale is sufficient to prove that DNA stores data (information).4 In fact, information can be stored more densely in a drop of DNA-containing water than it can on any computer hard drive. To allow that humans may use DNA to store our own digital information, yet to disallow that our genomes contain ‘information’, would be a blatant instance of special pleading.

The link they provide goes to a Scientific American article mentioning the potential to use DNA as a medium for information storage. The “information” in this context is measured in bits, i.e. Shannon information. So hold up… previously they stated that Shannon information is “not truly a measure of information”, but in this section they use Shannon information to argue that DNA does have “information content”, presumably in the sense of “information” that they are arguing for. So which is it? Are they the same or are they different? Pick one. Stop the equivocation.

Next up is a section in which they attempt to explain what a “real, genuine” increase of information would look like:

What would a real, genuine increase look like?

To get back to the skeptics’ main question: what would real increases in information look like? I submit that to answer this, just sit at a computer and watch yourself type out a paragraph in a word processor. Mutations are incremental; they are small changes that happen in a stepwise fashion as cells divide and generations multiply. The genetic code consists of letters (A,T,C,G), just like our own English language has an alphabet [NOTE: though the A, T, C and G in the DNA molecule aren’t actually letters. The letters are our way of abbreviating the sequences of bases. Don’t confuse the map for the territory]. But here is the central problem—it takes hindsight to recognize whether function or meaning is really present. Watch this transformation:

  1. H
  2. HO
  3. HOU
  4. HOUS
  5. HOUSE
At what point in that series did you understand the meaning? Perhaps you guessed it at step 4, but you would have been lucky, for you did not know if a word like *housing* or *household* was about to appear. It didn’t become totally clear until step 5, when a full word was spelled and the program ended. There’s no real way to say, before you’ve already reached step 5, that ‘genuine information’ is being added.

This is completely asinine. At what point did I understand the meaning? My answer: at every point AND at no point, depending on the context. Each and every step can be (and is) used to convey meaning. “H” is the eighth letter of the Latin alphabet, and is used as the symbol for the element hydrogen. “HO” may refer to many things: a village in Denmark, Santa’s laugh, or a derogatory term for a sexually promiscuous woman. “HOU” can refer to a common Chinese surname, or the IATA airport code for William P. Hobby Airport in Houston. Even “HOUSE” may refer to a structure for habitation, or to the medical drama series with Hugh Laurie in the main role. The answer can be YES at every step if the context provides the meaning, or NO if no context is provided.

So how can they possibly assert that “real, genuine information” has only appeared at step 5? They even point out that “household” was a possibility. So there is no way to infer what the intended word is, let alone the intended meaning of the word. Not even reaching step 5 would be enough to know. And that’s the key point. The only way to know is if Price or Carter tell us what they intend to write down. However, this would end up in a circular argument. Step 5 is where “genuine information” is being added, because they have preemptively defined this as the step where “genuine information” is being added.

How would they apply this thinking to a genome… or a smaller segment like a gene? Even if they did, how would this not be circular as well? Well, here they try to apply this:

Mutations suffer from this same problem. But there’s an even bigger problem: in order to achieve a meaningful word in a stepwise fashion (let alone sentences or paragraphs), it requires foresight. I have to already know I want to say “house” before I begin typing the word. But in Neo-Darwinism, that is disallowed. Mutations must be random and unguided. Due to the sheer number of possible nonsense words, you cannot expect to achieve any meaningful results from a series of random mutations. What if you were told that each letter in the above example were being added at random? Would you believe it? Probably not, for this is, statistically and by all appearances, an entirely non random set of letters. This illustrates yet another issue: any series of mutations that produced a meaningful and functional outcome would then be rightly suspected, due to the issue of foresight, of not being random. Any instance of such a series of mutations producing something that is both genetically coherent as well as functional in the context of already existing code, would count as evidence of design, and against the idea that mutations are random.

We end up at the crux of the problem. Their concept of the “information” of a genome is defined in reference to some unspecified goal that requires foresight: a specific spot or region within the vast sequence space that is presumably the intention of a designer, and which can only be reached by a designer with foresight. However, this relies on the assumption that the sequences we see, or the sequences of our ancestors (going back to creation), were the goal. This is the Texas sharpshooter fallacy… named after someone who sees random bullet holes and draws a bullseye around them after the fact. That’s what they were doing with step 5 of the HOUSE example (they drew a bullseye around step 5). They are likewise drawing a figurative bullseye around the original genomic sequence(s) which they believe Adam and Eve possessed, such that any deviation from this genome is missing the “goal”, and thereby a decline of “information”.

This is my best attempt at inferring what they mean by “information”: it is basically the distance of a sequence from the original spot in sequence space that was intended by their creator. The closer the sequence of a genome is to that spot, the more information the genome has; the further away, the less. But then again, this relies on drawing the bullseye after the fact. It’s useless in a scientific discussion.
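For what it’s worth, the measure I’m attributing to them here would be trivially easy to implement if a reference were ever named. A minimal sketch (the ‘intended’ sequence below is of course a made-up stand-in, which is exactly the problem):

```python
def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length sequences differ."""
    if len(a) != len(b):
        raise ValueError("sequences must be the same length")
    return sum(x != y for x, y in zip(a, b))

reference = "ACGTACGTAC"   # hypothetical 'intended' genome: the bullseye
observed  = "ACGTACCTAC"   # a genome one substitution away from it

print(hamming(reference, observed))  # → 1
```

The math is the easy part. The unanswerable part is where the reference comes from, and that is the bullseye drawn after the fact.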


I’m pretty sure we’ve been through a similar discussion with Paul Price on this forum. I’ll look for it tomorrow.


That’s a good summary, and the conclusion is spot on. That’s exactly how information is treated by creationists, though they rarely acknowledge it, especially when they’re pretending to be scientific. Sanford’s genetic entropy works that way, and so do various other creationist writings on information, including those in Biological Information: New Perspectives, where one paper explicitly says that the original configuration is optimal and any change away from that configuration is considered deleterious, regardless of the result[1].

This is one reason why creationists won’t give a quantifiable definition of information; it also explains their frequent (false) claim that mutations cannot increase information content. The other main reason they won’t give a quantifiable definition of ‘information’ is that they know from experience that as soon as they do, they will be deluged with real-life examples of ‘information’ increases.

  1. We also introduced a notion of optimization, so that the overlapping letter had a probability, p(optimal), of already being the ‘optimal’ letter at that position, meaning that for all the possible words that could occur by varying that letter, the current one was already the best. If a word was already optimal, then any mutation at the shared letter counted as deleterious, regardless of whether or not it resulted in another English word. ↩︎


This is something I’ve noticed. I don’t believe the characterisation of skeptics’ replies is accurate, either. Mine is much like yours: without some form of measurement it is impossible to tell whether a change is an increase or a decrease. Which also makes the claim unfalsifiable (and I suspect that is a deliberate tactic, too).

I’d also add two simple counter-arguments which do not rely on measurement.

  1. If a mutation causes a decrease in information, then the reverse must cause an increase. Point mutations are always reversible, so unless the creationist proposes a mechanism to prevent such reverse mutations from occurring, it is impossible to say that mutations cannot increase information.

  2. Duplication and diversification can create new functional genes. Even if duplication is argued to create no new information because the duplicate is an exact copy, the diversification defeats that objection, even if the subsequent mutations would have counted as a decrease in information had the duplication not occurred.

With regard to measurements of information, Dembski’s use of improbability is obviously useless in this context. Lee Spetner’s use of specificity, while only one aspect of the information in the genome, was potentially workable, but his attempt to support the idea that specificity couldn’t increase was woeful.

I think that the obvious failure of the latter is why creationists don’t even try now. The argument obviously can’t be supported, but they won’t give up on it.


“Relative distance from a reference point” could describe the Kullback-Leibler divergence. I don’t see any reason why they couldn’t use K-L divergence, except that it would pin the measure to Shannon information.

… to the original spot in sequence space as intended by the creator.

Given this interpretation, it becomes crucial that there actually be such a point. I think it would have to be in the genome of a “Created Kind”, which is conveniently unavailable for reference. Since no one can define such a point, “information loss” is meaningless, and I’m pretty sure Sanford and Carter like it that way.

I should be writing “information change” rather than loss: insertions or duplications will add information, deletions will lose it, and everything else is change relative to the starting point.


I put some comments responding to Paul up on YouTube. Maybe he will reply?

Oh my. “Not even wrong” is exactly what it is.

Herb Morrison nailed it: “Oh, the humanity.”

The article ends with a false-dichotomy rant which includes this gem:

“Ultimately, it is not science that determines what people believe about the past; it is their heart.” [What would count as 'new information' in genetics?]

If willful defiance of the evidence comes from the heart, maybe. But I consider it more a matter of being stuck on particular hermeneutical traditions concerning the Book of Genesis. I know a lot more about Biblical exegesis (and hermeneutics as well as information theory) than I do about genomics—but nothing in my “heart” or brain has any problems with the massive evidence for the amazing evolutionary processes which brought about the biosphere we observe.


I really think that the chosen measure of information is the issue, rather than definitions. Different measures will work differently. An increase in one can be a decrease in another. If he weasels on that he really doesn’t have anything.


This is very right. Currently the Dembski-Marks-Ewert group uses specified information. If the specification is fitness, this is a relevant definition of information. (They have since moved away from fitness as a specification to an unworkable definition involving algorithmically specified information, which has no known relation to fitness.)


I think a belief in a creator comes with a worldview that does not fathom information increasing without a source of that information. Frankly, I don’t comprehend how anyone can think information increases out of nowhere or nothing or the universe begins to exist without a cause.

I would define an increase in information as an increase in the value of a function (or functions). But value may be subjective and not objectively defined by scientific observation. It is decided between two beings, just as the value of a group of letters is decided between people who have given those letters a function as a word and decided that word has value for communication in their language and culture. (I think it’s amusing when my children ask me what a group of magnetic letters on my fridge means when it’s not an English word.)

A creationist decides the value of a function based on their belief about God and the world around them. They decide based on what God has communicated to them. If you’re a creationist and you disagree, I’d like to know how you think of it.

If you would like to explain how information can increase without a value judgment or communication or God, please feel free. Because I do not understand your way of thinking.


That is a good topic to raise. So I will try to address it.

Using the tools of science, the evidence of information increasing in biological systems is overwhelming. We see it everywhere. No amount of handwaving will make it go away. But the question of whether the biosphere and the evolutionary processes within it require “a value judgment or communication or God” is a philosophical question, not a scientific one. If God is truly transcendent as we Christians claim, that means he is not a matter-energy component of the universe he created. So the tools of science and the Scientific Method have no means to make empirical assessments of his intervention (no matter how hard IDers try.)

Can gravitational forces function “without a value judgment or communication or God”? How about chemical processes in general or biochemistry processes like photosynthesis in particular? Do they require God? Depending upon one’s philosophical analysis and stance, some will say yes and some will say no. But I don’t see how the tools of science can be applied to resolve these types of philosophical questions. So I think many here are as perplexed by your view of “how information can increase without a value judgment or communication or God” as you are perplexed at their position.

The Intelligent Design movement and creationists in general have had many decades to publish a scientific explanation for why information increases in biological systems require an external/transcendent agent of some sort (and present serious quantifications and proposals for empirical falsification testing of their claims.) Yet, they’ve made zero progress. And in the article they’ve instead fallen back on a lame excuse that basically says [I paraphrase here rather than quote]: “Who really knows what information is, let alone knows how to quantify it?” That’s a really embarrassing position for writers who claim to grasp the basic tools of science, which always involves the pursuit of quantification.

And as I’ve explained countless times on PS over the years, which is nothing special because this should be an obvious truism: Philosophical arguments (including theological philosophical arguments) which happen to discuss scientific phenomena with scientific language do NOT represent scientific arguments. (The Price article is a prime example.) And that is why I can hold a theological/philosophical position about God’s role as a designer of the universe who I assert has designed its processes to operate “intelligently” while at the same time absolutely lambasting the pseudoscience of the Intelligent Design movement. I don’t claim that my philosophical position is demanded by the tools and methodologies of the scientific method because that would be pretending that science is something it is not. (And I’ve always pointed my students to the historical fact that the Christian philosophers who developed the subfield of Natural Philosophy, which eventually developed into modern science, stated from the beginning that it would always be restricted to certain types of questions about the natural world. For the most part, they didn’t confuse science with the much larger academic field of philosophy. Natural Philosophy is just a subset. But that’s a tangent which would require its own OP and thread.)

There are many good reasons why the vast majority of working scientists who identify as Christian and who assert their belief in a God who created everything nevertheless find ID arguments—and very similar arguments by creationists who go to great pains to make clear that they are NOT IDers—quite lame. And the Price article provides even more of them. Very embarrassing reasons.


A quick question: is “specified information [where] the specification is fitness” even potentially calculable (even if “relevant”)? If you cannot, even potentially, calculate a value for something, it is hard to see how its definition would help answer questions as to whether it is increasing, remaining constant, or decreasing, under certain circumstances.

Data. That’s how. For example.


I don’t know what “out of nowhere” is supposed to mean, but I haven’t found such a statement in any biology textbook I have come across.

If, in your view, a designer can “create” information by setting up a particular DNA sequence (ACGTCCTTAA or whatever), and this is information to you because this DNA sequence does something useful or interesting that contributes to the life of the organism in some way, then it’s pretty easy to see how evolution can do the same thing.
Members of a population have a genome (a DNA sequence); it accumulates mutations over generations, and sometimes these mutations cause the DNA sequence to do something useful or interesting that contributes to the life of the organism in some way. When this happens, natural selection means those sequences win over other sequences in the population that are not as beneficial.

That’s it. That’s all there is to it. Mutations, which are basically just chemical reactions happening to the DNA molecule, change the sequence; the changes accumulate; sometimes they have functions; and natural selection can then make them rise in frequency and stick around in the population.
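That loop of mutation plus selection can be sketched in a few lines of code. This is a toy illustration only, not a real population-genetics model: the “function” here is a hypothetical stand-in (counting ‘G’ bases), the mutation rate and population size are arbitrary, and selection is crude truncation of the less-fit half.

```python
import random

random.seed(1)  # fixed seed so the toy run is reproducible

BASES = "ACGT"

def mutate(seq, rate=0.05):
    """Copy a sequence, randomly substituting bases at the given rate."""
    return "".join(random.choice(BASES) if random.random() < rate else b
                   for b in seq)

def fitness(seq):
    # Hypothetical stand-in for "does something useful": count of G bases.
    return seq.count("G")

pop = ["ACGTACGTAC"] * 50            # a population with identical genomes
for _ in range(100):
    pop = [mutate(s) for s in pop]   # chemistry changes the DNA
    pop.sort(key=fitness, reverse=True)
    pop = pop[:25] * 2               # selection: the fitter half reproduces

best = fitness(pop[0])  # far above the starting fitness of 2
```

Nothing in the loop “knows” which sequence is coming; variants simply arise and the useful ones rise in frequency, which is the whole point of the paragraph above.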

Nothing “out of nowhere” is going on. It’s chemistry happening to DNA, and the result has a functional effect. Whether it’s God making the DNA or chemistry happening to it, regardless of the source, the fact is that the DNA has some measurable physical effect, and we call that information.

I don’t know what you mean by “value” of a function.

The DNA that constitutes our genomes has the physical effect it does on the organism regardless of how much you and I might or might not “value” it, or what we think about it. The molecule isn’t controlled by our thoughts, and it won’t stop having its biochemical and physical effects because someone might “value” it more or less in the future.

There is a way to connect the magnitude of a function to the quantity of information. Hazen, Griffin, Carothers, and Szostak basically defined that as functional information back in 2007.

With this in hand, they show examples of how changes to a system can increase or decrease information as those changes produce a change in the magnitude of the function. It can be applied to DNA to show how evolution can increase information.
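Hazen et al. (2007) define functional information as I(Ex) = −log2 F(Ex), where F(Ex) is the fraction of all possible sequences whose degree of function meets or exceeds a threshold x. Here is a minimal sketch of that definition on a toy system; the alphabet, length, and “function” (counting ‘A’ bases) are made up for illustration, not taken from the paper.

```python
import math
from itertools import product

def functional_information(function, alphabet, length, threshold):
    """Exhaustively enumerate all sequences of the given length and
    return -log2 of the fraction whose function meets the threshold."""
    total = 0
    meeting = 0
    for seq in product(alphabet, repeat=length):
        total += 1
        if function(seq) >= threshold:
            meeting += 1
    if meeting == 0:
        return float("inf")  # no sequence achieves the function
    return -math.log2(meeting / total)

# Hypothetical "function": number of 'A' bases in a short DNA string.
f = lambda seq: seq.count("A")

# Demanding a higher degree of function raises the information content:
low  = functional_information(f, "ACGT", 4, threshold=1)  # ~0.55 bits
high = functional_information(f, "ACGT", 4, threshold=4)  # 8.0 bits (1 of 256)
```

Note that the quantity changes whenever the fraction of sequences meeting the threshold changes, which is what lets the framework register increases (or decreases) in information under evolution.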

Information in the genome was never intended to mean “has value to human beings”.

Okay. But that’s not really what we speak about when the topic is molecular biology, genomes and the biology of inheritance. Here we are speaking strictly about what the DNA does at a physical and chemical level, how this affects all the physical attributes of the organism (including its behavior), and how those attributes are connected to the physical sequence of nucleobases that make up the genome of the organism.

Evolution is not a theory that says there should be an increase in the subjective value a human being might assign to something over time in the genomes of organisms.


Define “systems.” Obviously sexual reproduction producing a new organism increases information but that’s not what we’re talking about.

Then define “information” and how it can be scientifically observed and give an example where an observed mutation increased information.

That’s why I posted. I also agree the definition of information is likely a philosophical question and not a purely scientific one. So I asked people to explain how it is scientific, from their point of view.

Good point. And I admit to not being entirely confident in this area of terminology, because it is my understanding—and I will leave it to the professionals here to correct me if I’m wrong—that biologists distinguish between biological systems and living systems. I will dare to quote Wikipedia on these two terms because it seems to summarize well what I have noticed in various scientific literature:

A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and are determined based different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism.

And then:

Living systems are open self-organizing life forms that interact with their environment. These systems are maintained by flows of information, energy and matter. Multiple theories of living systems have been proposed. Such theories attempt to map general principles for how all living systems work.

Yes, I have the reckless gall to consult and quote Wikipedia because I generally find it quite useful and accurate in the fields within which my opinion is sufficiently professionally informed. But, again, I will defer to others on its accuracy with this topic of terminology.

Well, it’s a creationist argument so shouldn’t it be the creationists who explain why their argument is scientific? Just as it is the creationist’s job to explain what they mean by information and explain why it can’t increase naturally.

Without that there is simply no basis for any response other than to point out that the argument is so ill-defined as to be meaningless.


What @AllenWitmerMiller wrote. And …

The common use of “information” usually describes a meaningful message, while the mathematical definition (Shannon Information) is a measure of randomness from a source. SI doesn’t measure a single message, but the variability in all the messages from a given source. Used in biology, information describes something like the variability within the genome of a single species.
In SI, more information means more randomness (less predictability), so it’s pretty easy for ID/Creationists and those using the math to be talking past one another. It’s easy to create new randomness - new information - but this has nothing to do with creating meaning.
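The Shannon sense of “more random means more information” is easy to see with the standard entropy formula, H = −Σ p·log2(p). This toy sketch (my own illustration, not from anyone’s post) treats a column of bases across a population as the source: a perfectly conserved site carries 0 bits, while a maximally variable site carries the full 2 bits available to a 4-letter alphabet.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (bits per symbol) of the empirical
    distribution of a string of symbols from a source."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# The same base in every genome: perfectly predictable, 0 bits.
conserved = "AAAAAAAA"
# All four bases equally common: maximally unpredictable, 2 bits.
variable  = "ACGTACGT"

low  = shannon_entropy(conserved)   # 0.0 bits
high = shannon_entropy(variable)    # 2.0 bits
```

Random mutation pushes a conserved site toward the variable case, so in the strict Shannon sense mutation adds information, regardless of whether the result means anything.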

You are correct about the subjective aspect of information in common use. In mathematical use we have to be more careful. More information literally means more digits (zeros and ones, if you like). If there are two sources of information, A and B, we can measure the difference as the additional information needed to make the information from B look like the information from A.

Suppose you could measure the information in each of your kids (bear with me!). Assuming we could figure out how to do it, their information would be different, and we could measure that difference. Any value we assign to that difference is going to be entirely subjective. Is one cuter? Is one better at math? Does one have more freckles? :wink:
Fortunately, we can’t do that with Shannon Information, but it makes a point about subjectivity and relative differences.

Sanford writes about information being lost, but he isn’t using any formal definition to describe exactly what he means. In @dsterncardinale’s video, he describes an experiment where a virus was bombarded with radiation, and then tested to measure the fitness of the virus afterwards. The radiation is causing random mutations, and a lot of them. In the sense of Shannon Information, we can say the radiation is adding information to the genome of the virus by making it more random. MOST of the virus samples lost fitness as a result, BUT SOME gained in fitness. Specifically, we can say that the information in some virus changed in a way to allow increased fitness.

The criticism of Sanford is that his argument fails when a mathematical definition of information is used.


There is nothing in this definition which would apply to, and therefore preclude, biological evolution or complexity.

In the statistical sense (as with Shannon), the information is everything you need to know in order to evaluate fitness. Just knowing what information is needed wouldn’t tell you whether the value is increasing, decreasing, or constant.

For example, if fitness has a normal distribution N(\mu, \sigma), and the parameter \mu is the average fitness, then we need some data and the sample mean \bar{x} to estimate \mu. The data could tell you if fitness is changing.
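To make that concrete, here is a minimal sketch with made-up numbers (the fitness values below are hypothetical, purely for illustration): knowing that \mu is the parameter of interest tells us nothing by itself, but the sample means of actual measurements do.

```python
import statistics

# Hypothetical fitness measurements from two generations of a population.
gen1_fitness = [0.92, 1.01, 0.97, 1.05, 0.95]
gen2_fitness = [1.04, 1.10, 0.99, 1.12, 1.05]

mean1 = statistics.mean(gen1_fitness)  # sample estimate of mu, generation 1
mean2 = statistics.mean(gen2_fitness)  # sample estimate of mu, generation 2

# The data, not the definition of "information", answers the question:
increasing = mean2 > mean1
```

The definition tells you which statistic to compute; only the data can tell you its value and direction of change.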

Shannon Information doesn’t deal with estimating values. Specified Information doesn’t estimate anything either. The version of Specified Information I am familiar with isn’t even a measure as defined in measure theory, because it doesn’t define a probability distribution.