# The Semiotic Argument Against Naturalism

A while ago @swamidass commented on a blog post I wrote back in 2013 (https://standard-deviations.com/2013/10/07/the-semiotic-argument-against-naturalism/) to help me make sense of John Lennox’s argument in his book God’s Undertaker. He asked me if we could continue the conversation here.

I’ll give a short summary of the blog post below. Lennox’s book is a much better version of it, which I would recommend. It’s interesting to me because it uses mathematics and algorithmic theory to critique evolutionary theory (I’m an engineer, not a biologist). Although I have no vested interest in it being valid, I think it is fun and have yet to see a strong refutation.

I’ve called it the semiotic argument against naturalism, since it uses information theory (semiotics) to argue against evolution as a plausible mechanism for abiogenesis (the origin of life), but probably also for increasingly complex organisms. The short version goes like this:

1. Genetic systems are information systems
This is widely acknowledged by most biologists, so I’ll leave it at that.

2. The information contained in a DNA molecule is algorithmically incompressible.
Some information can be compressed by algorithms. For example, the string ‘ILOVEYOUILOVEYOU’ can be compressed since there is repetition. There may still be information in there, but the more you can compress it, the less information it contains (see https://pudding.cool/2017/05/song-repetition/ for a fun illustration of this). Then there are other strings that can’t be compressed. For our purposes, there are two kinds of incompressible strings: ones with random letters, and ones with information, for example this post. The random string is just as complex, in the sense that it is one of many possible combinations of letters and getting that specific combination is unlikely/difficult to repeat with an algorithm. But it doesn’t convey meaning the way this English does. There are many ways to spill ink on a page: few of them turn out to be letters in meaningful sentences. Also, there is nothing in the physics and chemistry of ink and paper that makes ink molecules self-assemble into English sentences, so the underlying building blocks are independent of the meaning their combination conveys.

There are 10^320 sequence alternatives for the genome to code the simplest biologically significant amino acids, and only a few of them work.
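The compressibility contrast in premise 2 is easy to demonstrate directly. A minimal sketch using Python's zlib (the specific strings and lengths are arbitrary choices for illustration):

```python
import random
import string
import zlib

def compressed_size(s: str) -> int:
    """Bytes needed by zlib (level 9) to store s."""
    return len(zlib.compress(s.encode("utf-8"), 9))

repetitive = "ILOVEYOU" * 1000  # 8000 characters of pure repetition
random.seed(0)
noise = "".join(random.choices(string.ascii_uppercase, k=8000))

# The repetitive string shrinks to a handful of bytes; the random one
# stays near the information-theoretic floor of ~4.7 bits per letter.
print(compressed_size(repetitive))
print(compressed_size(noise))
```

Any general-purpose compressor shows the same pattern: repetition compresses, randomness does not.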

3. Such information-producing algorithms aren’t present in nature
Again, no-one has been able to come up with a counter-example as far as I know, so I’ll leave it there.

4. Algorithms that produce incompressible pieces of information have to themselves be more complex, or receive a more complex input of information, than that which they produce, and therefore do not produce new information.
There is some serious mathematical firepower behind much of this premise, but from personal experience writing such algorithms, it also seems self-evident.

5. Therefore the algorithm of evolution by natural selection (or any other unguided process) cannot produce any new information, including that contained in the DNA molecule.
This implies that the information contained in DNA had to be present before evolution took over, and that it had to come from somewhere outside such natural processes.

@swamidass thinks that some of these premises are problematic, but I’ll leave it to him to critique them.


Great to see you here @herman.

To be clear, I have no reason to defend naturalism, because I am a Christian, and not a materialist. However, I see no value in bad arguments against naturalism.

Relevant to this argument is a thread from a while back on information theory: Information = Entropy - Scientific Evidence - The BioLogos Forum

Now, just briefly, I’ll point out two things.

A counterexample is easy. “Random” noise is incompressible. This does not require ontological randomness, just randomness from our limited human view. It could be quantum noise, readings from a chaotic system, or pseudo-random data for which we do not know the generating function. The cosmic microwave background, for example, is incompressible.

And yes, you may have guessed, mutations to DNA are essentially incompressible too. Not entirely, but nearly so.

This turns out to be false.

It turns out that DNA is compressible, but you need a special algorithm (zip and bzip will not do) to compress it efficiently. There are a lot of them out there, and some achieve nearly 1000x compression (from 2GB to 2MB).

Ironically, the best compression algorithms are based on evolutionary theory. Not knowing this, one would assume that DNA has much higher functional/semantic information content than it actually has. It is compressible if you store the differences between related genomes. We could call this the “semiotic argument for common descent” if we wanted to.

Also, knowing the distribution of mutations, we can use that to make it more compressible too. So no, DNA is not incompressible. Random mutations are incompressible, but shared mutations from shared history are, however, very compressible.
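A toy sketch of the reference-based idea described above, with simulated sequences (the sequence length and mutation count are illustrative assumptions, not real genomic figures):

```python
import random
import zlib

random.seed(1)
reference = "".join(random.choices("ACGT", k=100_000))

# Simulate a related genome: copy the reference, then apply a few hundred
# point mutations (a crude stand-in for shared ancestry).
mutated = list(reference)
for _ in range(300):
    pos = random.randrange(len(mutated))
    mutated[pos] = random.choice("ACGT")
genome = "".join(mutated)

# Compressing the genome alone: random ACGT stays near 2 bits per base.
alone = len(zlib.compress(genome.encode(), 9))

# Reference-based compression: store only the (position, base) differences.
diffs = ";".join(f"{i}:{b}" for i, (a, b) in enumerate(zip(reference, genome)) if a != b)
with_ref = len(zlib.compress(diffs.encode(), 9))

print(alone, with_ref)  # the diff encoding is a tiny fraction of the size
```

The reference itself still has to be stored once; the savings come from reusing it across many related sequences.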

Of course, I think Naturalism is wrong. We do not know this from this argument.


I see from your previous posts that you tend to equate noise with information by defining both as incompressible. You have thought about information theory more than I have, but this seems to me to be the fallacy of affirming the consequent. Its practical implications bear this out. Randomly typed letter strings (noise) then have the same information as English text. Maybe in the strict sense of the definitions you give. Both may be equally incompressible. English might be slightly more compressible, so according to your definition English then contains less information than random noise. But not in the colloquial sense we all mean (excuse the pun) when we say “information”. So I disagree that random generators are information generators (it seems weird to have to disagree with that). If this were the case, I could write very long, information-rich posts in no time at all by generating random sequences.

Sure, there are compression algorithms that work – with reference genomes, as I understand it:

But that’s exactly the point: the compression algorithm is importing more information (through the reference genome). Not only does this just move the explanation back a step (how was the reference genome produced, and how do you compress that?), but the algorithm is still more complex than the information it produces, confirming (4).


@herman it does not work to mix and match definitions of “information” this way. Information Theory has very clear definitions, and that is what you are relying on when you point to things like incompressibility.

Randomly typed letters have MORE information than English text. In the standard definitions of Information Theory you are relying upon to make your argument. That is why English text is much MORE compressible than random text. The more semantic information, usually, the more compressible text is.
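This claim is easy to check empirically. A rough sketch (the “English-like” sample is crudely modeled as words drawn from a small common-word vocabulary, since natural language reuses a limited word stock; it is not real prose):

```python
import random
import string
import zlib

random.seed(0)

# Crude model of English: words drawn from a small common-word vocabulary.
vocab = ("the of and to in that is was he for it with as his on be at by "
         "i this had not are but from or have an they which one you were").split()
english_like = " ".join(random.choices(vocab, k=2000))

# Random letters and spaces of the same length.
noise = "".join(random.choices(string.ascii_lowercase + " ", k=len(english_like)))

def ratio(s: str) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(s.encode(), 9)) / len(s)

# The word-based text compresses far better than the random stream.
print(ratio(english_like), ratio(noise))
```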

This is where an error lies. The colloquial meaning of “information” is most similar to something called “semantic information.” Here is the really interesting thing: “information” and “semantic information,” from a mathematical point of view, work very differently.

This turns out to be false too. Semantic information is about “meaning,” not about bits. At the start of a football game, when a referee flips a coin, he is randomly generating a heads or tails. That heads or tails has meaning as to what will be done next, so much so that some will be excited or sad based on the results. This is even more true when dice are rolled at a craps table. So in these cases, a random process is generating semantic information.

It is critical to stay consistent with the definitions, or errors start to accrue.

You missed the point.

First off, to agree with you but also give context: the algorithm size does need to include the size of the reference genome. However, we amortize (divide) it over the number of genomes we are compressing. Because we use the same reference genome, for example, to compress the genomes of the US population, the reference injects only a negligible amount into the process. That is why we call it a “compression.”
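The amortization is simple arithmetic. A sketch with illustrative (assumed) sizes:

```python
# Illustrative numbers only: a ~3 GB reference, each new genome stored as
# ~3 MB of differences, amortized over a million genomes.
reference_mb = 3000        # reference genome, in MB (assumed)
diff_mb = 3                # one genome stored as diffs, in MB (assumed)
num_genomes = 1_000_000

per_genome = (reference_mb + num_genomes * diff_mb) / num_genomes
print(per_genome)  # 3.003 MB each: the reference contributes 0.003 MB per genome
```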

Second is how it is compressible, as this gives deep information about the data. The best way to compress DNA is by looking at related genomes. That is because, if common descent is correct, we actually share a common history. To be clear, this is not an argument for naturalism (with which I disagree), but an argument for common descent, which is clearly a signal in the data.

Third, the fact that DNA is hard to compress without a reference genome is very strong evidence that DNA sequences are not composed primarily of semantic information, or “information” as you mean it in the colloquial sense. It is, instead, much more like a random string of letters than English text.

Because of the third point, it seems to dislodge the entire premise of your argument.

Several months back, I had a chance to engage Lennox on this. He even went and read your article. We both have a common connection in Veritas Forums. It appears he would agree with the definitions as I am using them, and did not think your argument represented his position. Lennox and I still have our disagreements, but it is critical to keep in mind that “information” is a technical term that does not map to common use. Thinking otherwise will lead to some grand logical errors.

By your definition of information as noise, the whole argument is flawed, as you say. But I don’t think that Lennox means information in the Shannon sense. ‘Information’ in the Shannon sense is a bit of a misnomer (as is ‘entropy’ in the Shannon sense). Such ‘information’ is uninteresting for design inferences. We’re interested in information with semantic content.

How would you define the difference between information and noise?
Do you think that DNA has semantic content?

This is true, since in the context of the games, the outcome of the random process is agreed by the participants to be an instruction, and therefore has some content ascribed to it, outside of the natural processes of cellular life etc. However, in the context of instructions (DNA) read by a biological or a computer system, this analogy breaks down. You cannot add more “information” by corrupting the instructions with random noise, even if this would increase its (Shannon) information by your reckoning. At least, the likelihood that it will do so is negligible.

This seems like a problem to me, because repeatedly applying the algorithm to different cases does not decrease the amount of information contained in it. The algorithm is unaltered by its repeated application. The amount of information in the file containing the Oxford English Dictionary is not amortized by the number of copies that is printed. Neither does the amount of information imported into my algorithm change each time I apply it to a new dataset.

That’s very interesting! I’ll have to go back and read that part of his book again to see where I misunderstood him, if at all. As I understand it, he refers to semantic information:

So I’m wondering about this “strong evidence” thing. Semantic information is compressible because the rules of grammar in English necessitate the repetition of words for it to be intelligible to humans. However, the string: “This sentence contains semantic information” is pretty much incompressible, but does have semantic content and is even written in English. So it seems to be a counter example.

Different grammatical structures such as computer programs (or DNA) have different rules, and they may be less compressible, since this repetition may not be necessary. But they may still contain semantic information, regardless of their compressibility. As far as I can tell, semantic information arises from the relative positions of the letters (nucleotide bases in DNA, or alphabetic letters in English), which, due to their particular arrangement, signify something when read by a program (a person knowing English, for example). It seems to me that DNA may have less compressible semantic information than English, but I can’t see that it follows necessarily that it is not semantic in nature simply because it is incompressible. On the contrary, it seems that if DNA codes for proteins and other cellular machinery, it really does contain semantic information.

Does “noise is information” even work when one considers Shannon information alone, whose original conception was to do with information loss in communications channels?

So, suppose we transmit a message of completely random, incompressible, information across a very poor digital telephone line, and at the other end find it completely altered by noise, which is also incompressible and random.

By that measure, if noise is Shannon information, we’ve got ourselves an extremely efficient channel without the addition of noise or the loss of a single byte of information, which is absurd.

Noise, though, is not absolute, but measured against the standard of what is input into the system. If the original message carries meaning (even though Shannon’s theory may be blind to that in principle), it matters not that the degeneration of the signal makes it less compressible and so containing more Kolmogorov complexity: it’s still noise, or the increase of uncertainty.

Typically a message which is organised, rather than random or ordered, will have a compressibility somewhere between those two, but tending to be closer to the random end in complex systems. The last need not necessarily be true, though: if repetitive elements in DNA are functional, they will be highly compressible strings mathematically, but necessarily long functionally.


You are correct that “information loss in communication channels” relates to loss of signal integrity through a system. Basically, it is about the ability to reproduce the original signal.

Random noise tends to be difficult to compress and thus requires longer data streams to precisely reproduce than, say, compressible data. In information theory à la Shannon, randomly generated streams contain more information: they require greater transmission lengths than compressible streams. See this Wikipedia section here.

The nice thing about Shannon information theory is that it provides a fairly robust mathematical basis for quantitation of information channel performance. As it also considers the distribution of possible states, it can also reach into thermodynamics, physics and quantum theory. This is in contrast to quantitation of ‘semantic information’, which has proven much more difficult to assess. These different definitions are not exactly the same beasts. I suspect it’s a category error to conflate the two, as Josh mentioned earlier. However, it’s possible that measures of compressibility can yield insight into the origin and content length of a signal with semantic content.
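One quick way to see the “distribution of possible states” point is an order-0 (letter-frequency) entropy estimate. A rough sketch, with arbitrary sample strings:

```python
import random
import string
from collections import Counter
from math import log2

def entropy_per_char(s: str) -> float:
    """Empirical order-0 Shannon entropy, in bits per character."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * log2(c / n) for c in counts.values())

english = ("we know god only by jesus christ without this mediator "
           "all communion with god is taken away through jesus christ "
           "we know god")
random.seed(0)
noise = "".join(random.choices(string.ascii_lowercase + " ", k=5000))

# English letter frequencies are skewed, so its entropy sits well below
# the ~4.75 bits/char of a uniform 27-symbol source.
print(entropy_per_char(english))
print(entropy_per_char(noise))
```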


Conflation is what is being done here, in nearly every argument. There is just so much to deal with, I’m not sure I can respond to it all at once. I’ll respond in pieces when I can. It is not merely…

Rather, they are totally different. An analogy from one cannot be drawn to the other. For example, semantic information is compressible. Information (saying “Shannon information” is redundant) is not compressible. If you care about “semantic information,” and not “Shannon information” (by which we mean simply “information”), the argument falls apart.

I will respond in more detail later, but perhaps @herman can restate the argument specifying at each stage the information to which he is referring. Preemptively, I will tell you that if information = “semantic information” in the original post, then…

1. Point one is false. Genetic systems are not semantic information systems.

2. Point two is false. There is no semantic information in DNA to compress. Incompressibility would not matter anyway, because semantic information is very compressible. Compressibility is only a way of measuring the entropy/information of a sequence, not its meaning (semantics).

3. Point three may or may not be true, but it does not really matter because #1 and #2 are false.

4. Point four is false, by demonstration; I’ve already presented a counterexample. Do you remember it? Moreover, information here cannot possibly be construed as “semantic information”.

5. Point five may or may not be true, but this is irrelevant because #1 and #2 are false.

So, if you still think this argument holds water, please try making it while specifying at each claim which type of “information” is being referenced. You cannot conflate the two because they just mean different things.


Noise is information, but information is not always noise, by definition. INFORMATION = ENTROPY by definition, and the highest-entropy entity is noise. In contrast, semantic information is low entropy. Because noise is high in information, the whole argument is flawed.

Information = Entropy. Noise is the highest entropy. Other things have entropy too, just lower amounts of it than noise. Whenever we observe information in nature, the most likely source of it, therefore, is noise.

DNA only has semantic content in the way that a flipped coin at a football game has semantic content. It only has “meaning” in that we bring meaning to it. Other than that, it does not convey ideas or meaning. It has only the meaning that we, as intelligent beings with minds, bring to it; it does not have meaning on its own, in the same way that a random bit from a coin toss has no meaning on its own.

You are now confusing an “English sentence” with “semantic information.” These are different things too. English conveys meaning, but it is not a particularly efficient representation of meaning.

This is false. English is very compressible, down to about 1 bit per character with the right algorithm. The sentence you wrote is compressible. However, the semantic information in it is even lower than its compression.

A large amount of our disagreement here is that you are trying to apply a mathematical framework (compression theory) to study your personal intuition surrounding DNA and (semantic) information. You are intuiting your way through this, but your intuition seems off.

Here is a quiz which might clarify things. Which of these three strings (of the same length) has the most information? Which has the most semantic information? Which is most compressible? Which is least compressible?

STRING 1:

Blaise Pascal, Pensees, Section 7, 547. We know God only by Jesus Christ. Without this mediator, all communion with God is taken away; through Jesus Christ we know God. All those who have claimed to know God, and to prove Him without Jesus Christ, have had only weak proofs. But in proof of Jesus Christ we have the prophecies, which are solid and palpable proofs. And these prophecies, being accomplished and proved true by the event, mark the certainty of these truths and, therefore, the divinity of Christ. In Him, then, and through Him, we know God. Apart from Him, and without the Scripture, without original sin, without a necessary mediator promised and come, we cannot absolutely prove God, nor teach right doctrine and right morality. But through Jesus Christ, and in Jesus Christ, we prove God, and teach morality and doctrine. Jesus Christ is, then, the true God of men.

STRING 2:

B ,isenPaecal, Pensees,lSection 7, 547. Wy khowyG7d only by Jesus4Chsyst. Without this medi;tor, all commcnson with God is tsken away; through Jesus Christ wr knok God. All thos. who havesAlaimed to knJwaGod, and to prope Him withoutnJCsul Cwrist, h5ve had only weak proo7s. But inhproof of wesus Christ we have the propPecies,rahich are soCid andapalpable prooBs. And tiGse prophemies, beivg accompkirhed and proved true by thm event. maCk the certainty nf tIese truehs a d, therefhre, tme diviyity of ChrIst. In Him, then, and throughPiim, we know God. Apart from Hiw, anp ri,hout the Scripture, wuthoAt originrl sin,ewithyuP a nycessaryhmWdiator promised and come, we cannot absolutely Gro5e Grd, n,r tshch r,ght doetr,ne and ri7ht moralityn Gut mhrough Jeyus Christ, and indJesus Chrhsa, we pkoveIGod, and teach moralgty and docyrioe. Jesus Christ is, thenl thg tdue God ofcmen.

STRING 3:

AW,5AenPfecal, venseHs5lSJGtion k,bv47.AWo khobyl5d onlB my JeJSk4Chsyst,aoiikoui this me y;tgrc a5a cplmcnsod Pith G biis tbs5.kawa4u thIuugh cJsye ghr st S47kgwk Cld. All thSs. whokhSvesAraired to knJ.a7od,nanm to prope 5im witCouSnJCsWl Cwristk h5veyhad onJk wean 4rooy;. But inhproof .f ;elus ChrGst webhnve the wa.pPecies7rBhCcr ara ,oyAdBandyvalSabJeiproo;s.wA5ddWiGse Prophemie;, beivglaccomp5ivpaw ond pr.ved gr5egmy thm .;ent; maCP vhe Pe7tainty nf eIese ruehs apPu there hre, tme HivWyitH of C,rIkt. I4 Jik, thenG4andAthr4ugHP5imfewb kbow God7 kpaBtlfrom;Hiw, unpPii,couWsthe ShriiPuoe,skuthoAt orIglnryokin,ewithbuc4. nycessaSyhmfdiatorWproaised and comeo wW c5nnPA absolstelyAGsobe Grdt Hhr tshchwrn htwJoe7r,.J and ri7ht 4orWmiAlk Gut cJuodAyideyuwkCh5ist, ag4 in Jesus C rhsa, br pwovCIGhd,pand teachmm.ralBty CndpdocH4tob. 4esus4yBwAsh is, thenl7tBg Hpuh God ofcmen.


Argon

My own feeling is that semantic information is hard to define (though it undoubtedly exists in something like this post) largely because science does not deal in final causation.

Without that, all one can do with, say, a passage in English is, as far as the information goes, the kind of statistical analysis that might tell you it’s neither random nor definable by a simple algorithm, but somewhere between: you would be able to pin it down as English from the letter count and so on, and given a long enough passage even, perhaps, tie it to an individual style.

You could, I suppose, in a simplistic way tie “message” to “function” in certain cases (correlating “Quick March” to the fact that soldiers always walk immediately afterwards in real life), and yet have no idea how that message causes the walking, though perhaps in theory those words will be held to stimulate particular neural networks connected to the motor cortex and so on, thus leading to some kind of “causal chain”.

But as to defining the kind of information represented, it is all to do with intention, final causation, meaning. If that isn’t the explanation, some similar limitation of the nature of science must be involved, because it’s absurd that the thing that governs most human activity, including the thoughts about science - speech - and our present phase of culture - information a-go-go - should be indefinable scientifically.


I don’t think that this is entirely true. As I explained in my previous post, we ascribe subjective meaning to those events. They don’t have any causal power independent of our game. Not so with DNA. DNA sequences produce proteins even if no one is looking. And if you corrupt the sequence, it no longer does that, no matter how much (subjective) ‘meaning’ you ascribe to it. The whole point of the specified-complexity/information argument is that the sequences contained in DNA are highly, highly unlikely. And not in the way random noise is unlikely, because those complex sequences do very specific things. So the semantic content of DNA didn’t become meaningful only when we started understanding it. The ‘meaning’ in it isn’t due to human understanding (subjective), but due to how those four nucleotide bases are arranged to form specific instructions. It is objective.

Ok, so I played your game and compressed the progressively more corrupted paragraphs. I used zlib in Python. The first paragraph became 49% of its original length, the second 58.8%, and the last 69%. No surprises there; I even predicted it in a previous post. For fun, I also compressed my example sentence “This sentence contains semantic information” using the same algorithm. It became longer. Yes, longer. Just an artifact of compressing the incompressible. Can you compress it to become shorter? I would be very interested to see it, also simply out of curiosity. (I guess you could by doing a “reference sentence” trick like your reference genome trick. But we all know that that doesn’t really count.)
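For anyone wanting to reproduce the short-sentence result, a minimal sketch (zlib's fixed overhead dominates on inputs this small):

```python
import zlib

sentence = "This sentence contains semantic information"
compressed = zlib.compress(sentence.encode("utf-8"), 9)

# zlib adds a 2-byte header and a 4-byte checksum, and a 43-character
# sentence offers almost no redundancy for DEFLATE to exploit, so the
# "compressed" output is longer than the input.
print(len(sentence), len(compressed))
```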

I am not confusing them. English sentences do contain semantic information, even if not efficiently. However, that is exactly my point: compressibility is a measure of the efficiency of the language. Very efficient languages are less compressible. But compressibility is an unreliable indicator of language-ness, because random noise and my example sentence are both incompressible. DNA happens to be an example of an incompressible language. But the fact that it has semantic content and is not very compressible does say something.

If only it were my personal intuition! But I read parts of Lennox’s book again. It isn’t just me. And this guy has three doctorates. Here’s just one quote regarding DNA:

Then there’s Bernd-Olaf Kuppers (PhD Biophysics):

And Manfred Eigen (Nobel laureate, biophysical chemist):

Also our old friend Richard Dawkins:

There is also the geneticist and evolutionary biologist John Maynard Smith, a former engineer:

Then there is Francis Crick, who co-discovered the significance of DNA with Watson. He was a physicist before he went into biology, and had the following to say in his ‘central dogma’:

I could go on, but I think that makes the point.

Once again, I just have to mention that the Mathematical Theory of Communication (MTC) (Shannon) definition of information/entropy is irrelevant for the present discussion, because ‘information’ is a misnomer. Some quotes from the Stanford Encyclopedia of Philosophy’s entry on semantic information:

Hi herman,

I’d just like to make a very quick point. If you look at Dembski’s The Design of Life, you’ll see that there’s nothing in the definition of specified information that requires it to have semantic content. The essential features are that the event is highly improbable (high probabilistic complexity) and that the pattern is easily described (low descriptive complexity). There’s nothing in the definition about the pattern having a semantic meaning as such. DNA has letters, and parts of it might be likened to instructions (for producing proteins) - or in other words, “sentences” of a sort. But where are the words?


Great! To be clear, you did not actually report the compressibility. You reported the number of bits from running zip, which is different from compressibility. Also, how do you know the text is being “corrupted”? What evidence do you have that it is being corrupted?

Try and answer the questions I gave you. This is not to be condescending, but to make a carefully thought out point.

1. Which of these three strings (of the same length) has more semantic information?
2. Which one is most compressible? Which one is least compressible? (you did not answer this yet)

To answer #1, you have to determine whether I added another “semantic” message in what you currently think is “noise.” Did I do this or not? What process can you apply to determine that I did not add another “semantic” message in the noise?

That is their argument, but I reject the premise at the starting point.

No convincing evidence has been presented by Dembski (or others) that there is high specified-complexity in DNA. There is a lot of evidence against it. Compression by a zip program does not quantify how much “complex specified information” or “functional information” is in DNA.

Perhaps there is, but has not been demonstrated, nor has it been quantified. How many “bits” of “semantic information” do you compute is in a genome?

That is right! That sentence, compressed by zlib, ends up being longer. Why? Because you used the wrong algorithm. That sentence is not incompressible. You just used an algorithm incapable of compressing it. Interesting, right?

Which of them tells us that there is semantic information in DNA?

Most of these quotes are claiming that DNA is an information (i.e. entropy) bearing molecule. It does not, however, bear semantic information. Because it is high entropy, it is much more difficult to compress than English text. And we have still yet to determine how to compute how much of the information (i.e. entropy) is not noise.

When we measure entropy with compression, we are computing an upper bound (an overestimate) of all the entropy sources in DNA, including both functional information and noise. So yes, there is quite a bit of entropy and function in biology, but we have no way of quantifying the amount of functional information.
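The upper-bound point can be illustrated on a source whose true entropy is known. A sketch with an assumed biased binary source (the bias and length are arbitrary):

```python
import random
import zlib
from math import log2

random.seed(0)
p = 0.9  # biased binary source; parameter chosen only for illustration
true_entropy = -(p * log2(p) + (1 - p) * log2(1 - p))  # about 0.469 bits/symbol

n = 50_000
data = bytes(int(random.random() < p) for _ in range(n))  # one symbol per byte

# Compressed size converted to bits per symbol: a valid upper bound on the
# source entropy, but an overestimate, since zlib is not an ideal coder.
estimate = 8 * len(zlib.compress(data, 9)) / n
print(true_entropy, estimate)
```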

All the quotes you offer I agree with, because they are all talking about information as entropy, or possibly at times information that has function. However, there is no known way to quantify functional information vs. noise information.

And I like Lennox a great deal. None of this is personal against him (or you). He has three doctorates and had an academic career doing mathematics. I do not think he worked with DNA even once in his professional life. I have two doctorates, and have focused my entire academic life on studying chemical and biological information, including publishing papers on information theory, compression, and DNA.

If you mean to define information separately from information theory as laid out in MTC, that is fine. However, then you are disconnecting yourself entirely from that body of work. Central to your claim is that “information” as you define it (not entropy but semantic) is not compressible. There is no proof of this anywhere, because it is false.

Compressibility is determined by the entropy of the “true” structure of the data, and has nothing to do with your definition of information.

If you want to maintain the original argument, you will have to produce the papers or some sort of proof demonstrating that semantic information is incompressible, without relying on information = entropy and MTC. Please show me the formulas you use to compute the amount of semantic information (not the entropy) in a sequence. Please show me the proof that compression correlates with semantic information. You will also have to produce a definition of semantic information.

In other words, you would have to start from scratch. The claims of the original argument (e.g. that information can be measured by compression) use the definition that information = entropy and rely on MTC. That is where the claim that compression can put an upper bound on entropy derives.


I agree. However…

To understand why, it is wrapped up closely with this question:

The request here, to put it in the terms of Complex Specified Information (CSI), is this…

Given the three strings I’ve given you, can you tell me how much CSI is in them? Just from the strings, and not knowing how I generated them, can you tell me how much CSI they contain? If the answer is “yes”, then you should be able to answer whether I added any CSI to the messages with all the tweaks or not. Rather, there should be a formula that can be applied to these strings to determine this answer.

As it is, there is none, which is why you cannot actually determine the CSI. If you knew my precise intent with those strings, and how exactly I constructed them, you might be able to estimate the CSI. However, I have not told you. Without that knowledge, there is actually a proof in information theory that demonstrates there is no algorithm capable of computing CSI in the scenario I just laid out. Positing the existence of an algorithm capable of computing this leads to a logical contradiction (like positing a square circle).


“Meaning” is hard to quantify. But it clearly exists and is real. You are right this does have to do with final causation, but it also has to do with the inscrutability of our creaturely “ultimate” purposes too.

In the same way, it is hard to quantify moral claims and define injustice in scientific terms too. It is hard to even define the “mind,” and to measure it.

There is a limitation of the nature of science involved.

Science has a very difficult time with logical and moral facts, because they are essentially unobservable. For example, how do we scientifically prove the Holocaust was wrong? How do we scientifically demonstrate that murder is evil? How do we make a scientific case that segregation is wrong?

Science can establish facts. For example, the Holocaust did actually happen, and perhaps we can estimate how many people were murdered. We can quantify the impact segregation has had on African Americans in St. Louis too. How does science tell us these things are right or wrong? It cannot.

Meaning is very much the same. It is a real thing, just as moral facts are real. However, it is inaccessible to science in the most salient sense.

Thinking ahead to how this lines up with CSI.

It turns out that measuring the “information content” or “complexity” of a sequence is impossible in some critically important ways. To compute the CSI in DNA, one would have to have nearly omniscient knowledge. God knows the answer, but we do not. There are some indirect ways of getting at it (see your critique of Doug Axe’s work), but ultimately we just do not know the answer.

This “semiotic argument against naturalism” looks like a classic case of “a little knowledge is a dangerous thing”. I have a Masters degree in information science, and I can tell when people don’t understand Shannon.

I know how much (actually how little) an MS says about competence. Even a PhD (yes, I also have one, as well as three other engineering degrees in different disciplines). So let’s check our degrees and our egos at the door and focus on the arguments.

@Jonathan_Burke, I still have a lot to learn about Shannon information, but if you would be so kind as to explain my blind spots, where you disagree with the people I quoted, or how I am misapplying what they are saying, I would really appreciate it.

I’m just trying to go with what Lennox said. He was actually Dembski’s external examiner for his PhD, so I’m betting that his understanding of the topic is reasonable.

This is a bit of a red herring. To get back on topic, I reread Lennox’s argument. He argues for two things: (syntactic) complexity, as measured by incompressibility; and specification, as indicated by semantic content: DNA coding for amino acids and proteins. Notice he doesn’t argue for information based on incompressibility. It is possible that I did so above, but in that case I was wrong. The argument as a whole still stands in my mind, though. As I understand it, Lennox invokes Gregory Chaitin’s work to measure semantic information as the size of the algorithm required to generate a string that has semantic content. To quote Lennox (emphasis his):

That seems to me to get to the heart of the current debate. I’d be very curious to read any of the contributors’ thoughts on this.

No you don’t. Someone with a Masters degree in information science is sufficiently informed and competent to know what Shannon meant and what he didn’t mean, and to tell when people are misunderstanding or misapplying Shannon. This is not about egos. Qualifications actually matter in the real world. They do actually provide you with more information about a subject than you had before.

It is ironic that you write this now, when previously you made a specific point of appealing to qualifications, highlighting Lennox’s three doctorates, and Küppers’ doctorate, and Eigen’s Nobel. You were the one who started making the appeal to qualifications.

In this case people are using Shannon’s information theory in order to try and pursue a personal agenda. When non-information professionals use information theory (!), to try and argue against the scientific consensus in biology (!), red flags fly. It’s the same when philosophers try to use philosophy to argue against evolution (!).

Yes you do. I suggest doing that first, before trying to use information theory to overturn the consensus on evolution. I strongly suggest that if you want to actually overturn the consensus on evolution, you learn biology. You’re not going to overturn evolution with information theory, or philosophy, or interpretive dance. The original post in this thread is a classic case of “a little knowledge is a dangerous thing”.

I don’t think you would appreciate it. You’ve already decided I am incompetent, on the basis that I have a Masters degree in the subject (!). Joshua already went through your post and showed you exactly how you were misunderstanding information theory, and how you were misreading the people you quoted, but I’ll pick out the key points which were most prominent to me at first sight.

1. You didn’t understand how the term “information” is defined (very strictly) in information theory; you didn’t understand, for example, that noise in information theory is information (though information is not always noise). This is one of the most common mistakes non-specialists make about information theory, and it is due to them using the colloquial understanding of information rather than the technical definition (which they never learned, because they never did a degree in the subject).
2. Consequently, you confused semantic information with the concept of information as it is used in information theory.
3. This led to your argument that random processes cannot produce information (a key argument in the ID case against evolution), when in fact random processes can produce “information” in the sense used by information theory.
4. Due to your confusion of these two definitions of information, you also misunderstood the compressibility issue when Joshua tested your understanding of it.
5. You demonstrated no knowledge or understanding of entropy in information theory.
6. With regard to the people you quoted in your attempt to argue that DNA has “semantic information”, I didn’t see anywhere that any of them said DNA has semantic information, so you appear to be completely misreading them. If they do say it elsewhere, I would be interested in seeing that.
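The point about random processes producing information in the technical sense (point 3 above) takes only a few lines to demonstrate. A small sketch of my own: in Shannon's framework, information is measured by entropy, and a source is more informative the less predictable it is, regardless of meaning.

```python
import math
from collections import Counter

def entropy_bits(s: str) -> float:
    """Empirical Shannon entropy of s, in bits per symbol."""
    n = len(s)
    return sum((c / n) * math.log2(n / c) for c in Counter(s).values())

# Shannon entropy is blind to meaning: a constant source carries zero
# information, and the more uniform (noisier) the symbol distribution,
# the more information per symbol in Shannon's sense.
print(entropy_bits("AAAAAAAA"))   # 0.0
print(entropy_bits("ABABABAB"))   # 1.0
print(entropy_bits("ABCDABCD"))   # 2.0
```

This is precisely why "random processes cannot produce information" fails as a claim about Shannon information, whatever one thinks of it as a claim about meaning.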

Where? I haven’t seen a single instance of this.