The Semiotic Argument Against Naturalism

Neither have I seen an instance like this.

Let’s try and hold off on the appeals to authority (i.e. degrees) and the ad hominems. Who is making the argument is really irrelevant. The key question, rather, is the content.

This was in response to @jongarvey and is actually central to my point. Science has limits. One such limit is that we do not know how to distinguish and measure types of information (semantic, functional, and noise) when they are mixed together.

No actually, I really do. I’ve spent enough time in academia reading Master’s dissertations and PhD theses to know that not all are equal. If I have to appoint someone in my company, I won’t be doing it on the basis of their MS, but based on their capabilities. I’m not saying yours is invalid, I’m just saying that it is not an argument.

Yeah I thought that too. But then, I’m quoting their arguments, not relying on their degrees.

My personal agenda is that I never use ID arguments. I thought that this was fun, because as Lennox puts it, if it works it is a positive proof against evolutionary theory (he tends to jump between that and abiogenesis).

I’m a bit more sympathetic in that case. Biologists use philosophy without realising it.

Ah, you don’t know me very well :slight_smile:

This is a bit uncharitable. I said your MS isn’t an argument. Focus on the arguments. Which you did (save the odd jab), so thank you for that.

Lennox does state this explicitly, actually.

Ok, I’m going to leave it at that.

Please do leave that rabbit trail alone.

The invitation still stands though…

And about your quotes, remember none of them are claiming “semantic information”, near as I can tell.

I’m going to do that as soon as I can.


Yes you are right. He jumps between evolution, abiogenesis and naturalism. It’s not terribly clear in his text, but he does not claim this is a good argument against evolution. He thinks (it seems) that this is more about abiogenesis. Though I am loath to appeal to a private conversation like this. His text is not really clear on this point, and you would not be the only one confused by his intent in that chapter.

To some extent, if you think it is a solid argument, you are going to have to lay it out even more clearly than Lennox did. Once again, the emphasis on “compressibility” here is really the heart of my critique of your blog. So if you are backing away from that, then perhaps I’ve made my point.

Remember, ultimately, we agree. I do not think naturalism is true. I just think there are so many good arguments against it, that holding to bad arguments just weakens the case.

Having read some number of proofs for or against some number of metaphysical propositions, I’m a bit skeptical about things titled along the lines of ‘The “X” argument against (Naturalism/God)’. OK perhaps as apologetics, but as definitive proofs, they’ve had a history of falling short. [Perhaps it’s another point in favor of my ‘Ironic Designer’ hypothesis :grin:]


First penned by Pascal of course.

Ok, so let’s see if I can piece this together:

1. Biological systems are information systems
This would only be interesting if the kind of information biological systems contain is not just random noise, but meaningful in some way. You seem to say that this meaningfulness has to be subjective, i.e. depend on humans. However, as I’ve come across it, it has to do with the specific arrangement of the letters in the words. They call it semantic information, which you take to mean subjective. But as far as I can tell, this need not be the case. To quote the SEP on semantic conceptions of information:

How I understand this is that semantic content does not need to be human readable to contain information, just like computer programs don’t need to be human-readable to execute instructions. The readability is for us to interact with them. But assembly or code-golfing languages still contain this kind of semantic content.

You mention that DNA does not contain semantic information. How would you describe the specificity of the orderliness of the arrangements of ‘letters’ in DNA strings?

How would you define the difference between information and noise? Granted, in MTC there seems to be no difference, as noise is maximally entropic (even if only aleatorically, as you say). However, consider the string ~.@q: in the language J. This is a command (it calculates the unique prime factors of a number). Does it contain more or less information (in any sense) than, say, :q@.~ (its reversal, which probably produces nothing)? I’m guessing you would say ‘no’, while Lennox and others would say ‘yes’?
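
To make the comparison concrete, here is a small sketch (in Python, for illustration): a character-level Shannon entropy measure cannot distinguish the working J expression from its reversal, since both contain exactly the same symbols with the same frequencies.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Per-character Shannon entropy of a string, in bits."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

program = "~.@q:"             # valid J: unique prime factors of a number
reversed_program = program[::-1]  # ":q@.~" -- almost certainly not valid J

# Identical symbol frequencies, therefore identical entropy, even though
# only one of the two strings "means" anything to a J interpreter.
print(shannon_entropy(program), shannon_entropy(reversed_program))
```

This is the crux of the disagreement: any measure built purely on symbol statistics assigns the two strings the same value.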

2. Algorithms that produce incompressible pieces of information have to themselves be more complex, or receive a more complex input of information, than that which they produce, and therefore do not produce new information.

(I swapped the premises around; hopefully it isn’t cheating, but it makes more sense to me this way)

The information here refers to semantic content, I guess. Otherwise a whole bunch of experts missed the obvious noise-algorithm thing.

Peter Medawar:

Or Leonard Brillouin:

And then Bernd-Olaf Kuppers:

This is also where Gregory Chaitin’s work on algorithms compressing semantic information is invoked.

3. The information in the DNA molecule is algorithmically incompressible
One could use a reference genome, but then you have an information-rich algorithm, which contradicts 2.

Here I asked you what algorithm you would use to compress “This sentence contains semantic information”. I am really curious. I know that zip won’t work, but what will? What would it look like?
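
For what it’s worth, this is easy to try with an off-the-shelf compressor. A sketch using Python’s zlib (one DEFLATE implementation; other compressors will give different numbers): the short English sentence actually grows under compression, because it has little internal repetition and the format adds header overhead, while a repetitive string compresses dramatically whether or not it “means” anything.

```python
import zlib

sentence = b"This sentence contains semantic information"

compressed = zlib.compress(sentence, 9)
# A 43-byte English sentence has almost no internal repetition, so the
# DEFLATE header/checksum overhead makes the output *longer* than the input.
print(len(sentence), len(compressed))

# Repetition, by contrast, compresses dramatically -- regardless of
# whether the repeated text carries any meaning.
repeated = sentence * 100
print(len(repeated), len(zlib.compress(repeated, 9)))
```

So “zip won’t work” is true for a string this short, but not because of its semantics.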

4. Such information-producing algorithms are not present in nature

I think this premise is redundant.

5. Therefore the algorithm of evolution by natural selection (or any other unguided process) cannot produce any new information, including that contained in the DNA molecule.

@Jonathan_Burke, I really appreciate you taking the time with your criticism, thank you. If you think I misunderstood any of it in the summary above, please point it out. I tried to straighten out my confusion of semantic and syntactic information, and how you use ‘entropy’ and ‘information’ in your field. Are there any bio-informatics textbooks that you (or anyone else) would recommend? I’m reasonably comfortable with statistical mathematics from my academic work, but more in the line of Bayes than MTC. I have some other spare-time academic reading to do for the next while, but will put it on my list.

This was my written-in-haste version of it. I wish I had more time to think about it, but I’m going away for 10 days with family; I will read it again when I get back.

The first question I’m going to ask you before anything else is exactly what you think this has to do with evolution. What’s the conclusion we’re supposed to arrive at as a result of all this?

Thanks for giving this another shot, but this is not clear. You need to clarify which type of information and which type of compressibility you are referring to.

In reference to information, which of the following do you mean: semantic, functional, syntactic, or entropic information? Or something else?

It seems you mean “semantic information”. Is that correct?

Please specify here… Do you mean empirical compressibility (which is measured) or theoretical compressibility (which is not knowable)?

Your argument is not fully clarified, but we already know that semantic information is compressible. We also already know that neither empirical nor theoretical compressibility measures semantic information content.

I demonstrated this to you with my 3 strings. It is unknown (and unknowable without my revealing it) how much semantic information is in them, even though the empirical compression increases. Moreover, compression only gives an upper bound on information content. If you use the wrong algorithm, it will overestimate the compressed size.

To understand my critique, focus on the 3 strings. If you were right, it should be easy to compute, using compression, how much semantic information is in them. This turns out to be provably impossible.
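
I don’t know what the original three strings were, but the underlying point is easy to reproduce. A hedged sketch (the strings here are my own, for illustration): XOR English text with a random keystream (a toy one-time pad) and the result is just as incompressible as pure noise, so no compressor can tell “hidden message” from “no message at all”.

```python
import os
import zlib

message = b"Compression measures statistical redundancy, not meaning. " * 20
keystream = os.urandom(len(message))

# Toy one-time pad: the ciphertext carries the full message (given the
# key), yet is statistically indistinguishable from random bytes.
ciphertext = bytes(m ^ k for m, k in zip(message, keystream))
noise = os.urandom(len(message))

for label, data in [("plaintext", message),
                    ("ciphertext", ciphertext),
                    ("noise", noise)]:
    print(label, len(data), len(zlib.compress(data, 9)))
```

The plaintext compresses well; the ciphertext and the noise do not, and nothing in their compressed sizes distinguishes the string that encodes a message from the one that encodes nothing.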


Some assorted notes. There is some confusion with terminology and questions about system definitions.

A) ‘Information’ is a relative, not absolute, measure. Further, it is always in reference to some context. For example, I can isolate and put in a bottle 1-kilobase stretches of DNA from E. coli, human and random sequences. In the context of a bottle, they have the same ‘information’. We can measure their thermodynamic (e.g. entropic) equivalence, perhaps in a bomb calorimeter. Context matters, and a full description of any system must include the organism plus its environment.

B) No ‘information’ said to be contained in a particular entity can be assessed in the absence of the environment or context in which it exists or was transferred. For species, each generation retains some record of the influence of preceding environments. This is information flow from the environment to the organisms.

C) We speak of DNA programs and such as ‘algorithms’. But what are the actual algorithms being considered with regard to abiogenesis or evolution? Is it the DNA sequence? Is it metabolism or the regulatory networks? Is it variation and selection in the context of a replicating system? What do we know about the capabilities of a particular algorithm? It seems to me this is one part that needs to be specified before we can assess anything.

D) From an evolutionary viewpoint and in the context of living organisms, I see how variation and selection can support the transfer of information from the environment into the genome and basic biology of organisms. When people ask “where does life get the information to adapt and even increase in complexity”, many biologists think, “the environment”. Bear in mind, the environment at the surface of a planet is not a wildly random, unordered, amorphous thing. It’s not a gaseous plasma. It is not the same as a random number generator. It’s actually quite structured, thanks to the basic laws of physics and chemistry. It has niches, gradients of many types (e.g. temporal, spatial) and transitions between local environments. For living organisms, the environment(s) provides the baseline against which variants are tested, leading to retention of information about the environment(s). This is a tremendous source of complexity. Additionally, the presence of other organisms must be considered part of the environment. Comparatively, I suspect the amount of ‘information’ or ‘complexity’ in the environment dwarfs the amount transferred and retained in organisms. This is certainly true thermodynamically, so I would hesitate to assume it doesn’t apply in an information-theoretic sense as well. Some years ago I wrote: “If you’ve got enough spare energy to play Nintendo, you’ve got enough energy to evolve”.


@Argon I’m in agreement with you, I think. On several points.

We cannot really deal with information in any system (let alone biological systems) without getting into all these details: providing context, discussing actual algorithms, specifying exactly what we are trying to measure and how, in addition to bounding our estimates. That is exactly how we are supposed to apply information theory.

However, keep in mind that we are responding to an argument that attempts to ignore all this, reduce a poorly defined version of “information” down to just the number of bits a compression program outputs on DNA, and claim that nature cannot produce this magical type of information. I say “magical” because it is simultaneously not found in nature, and yet can be accurately measured by generic compression programs, without any knowledge of the system itself. The argument is, essentially, that we can ignore all the details of the system and make confident statements about it.

That detail free approach does not work. If the goal was to understand if the proposed mechanisms of evolution or abiogenesis could generate the information we do measure in DNA, we would do things differently. We would start to model those proposed mechanisms, finding ways to instrument these models with predictions of what we could observe. And we would then test them.

However, the focus in this thread is this specific compression argument. Not the larger, and more interesting question, you are getting at here. Perhaps we need a new thread.


But I wasn’t using it as an argument. I wasn’t saying “I have a Masters degree, therefore you are wrong”.

You were very obviously relying on their degrees, which is why you kept citing their degrees and awards.

But you started this entire thread by using an ID argument.

No amount of philosophy can overturn scientific facts. As soon as people try to use philosophy to overturn something like evolution, it’s clear they are not doing science, and they are avoiding the science because they can’t disprove it. This is a form of intellectual dishonesty.

I didn’t see that in anything you quoted from him, but perhaps he has said it elsewhere.

Yes. That’s what I see as a ‘magic bullet’ approach, the idea that there is a critical weakness in a theory that can nullify it entirely. Perhaps such a weakness exists but it hasn’t been demonstrated yet.

There are other, critical issues with some of the stated propositions about information transfer that don’t seem addressed. However if you’d like to stick with the “compression” argument in this thread, that’s no problem. Just one last comment: Some of what I’m seeing seems related to the ‘no free lunch’ brouhaha of Dembski and Marks several years ago.


Can you clarify what you are specifically pointing to, perhaps one at a time, on this thread or another as you see fit?

I would defer the taxonomy of the information to you. I am quite comfortable with ‘semantic’ as I understand it from the SEP (as well as from Lennox and Dr Miller on the parallel thread), but a text may contain more than one kind of information simultaneously. To refine the argument, I would say the incompressibility indicates syntactic information. The sensitivity to the ordering of the letters indicates semanticity. The fact that it works as specific instructions indicates functionality, maybe? The combination of these, as in the Lennox quote provided above, indicates a non-algorithmic origin for their combination.

Here is an imperfect analogy: it is like receiving directions, in Russian, to a specific, obscure address of a friend. You try to compress the text and perhaps find that it doesn’t compress (if the text is long enough it probably would compress somewhat, but possibly not); not that knowing its compressibility would be very helpful to you. But then you get a Russian guide, and he takes you to the right place, verifying that the text contains some kind of information other than entropy. What are the chances of arriving at the correct address using an incompressible text? You conclude, based on the result, that there is more to the instructions than purely syntactic information. The incompressibility then becomes all the more impressive. Maybe a closer analogy would be a package with a QR code containing instructions for how the package is to move through a warehouse, but be that as it may, I would say it is fair to conclude that the instructions weren’t created by a random process, given the mathematics of how algorithms work with syntactic-information-rich semantic instructions.

I’m guessing this is a rhetorical question?

Ok, so why not do the three-string trick with a computer program? It contains semantic information, right? You can even intersperse parts of a second computer program into the first. What would the output be?

I apologise, but I’d rather just ignore these comments - I see that debating them with you would not be productive.


@herman somehow it missed my attention that you responded here. Sorry for the late reply.

That is the challenge. Because we cannot use the same word to refer to different things and expect a coherent argument to come out the other end.

That is a major error. Incompressibility does not indicate syntactic information. As we have shown, random noise is incompressible, and it does not have syntactic information. Ironically, the opposite is true of syntax: the more syntax in a sequence, the more compressible it is.
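
That inverse relationship is easy to demonstrate. A sketch (my own toy strings, for illustration): rigidly syntactic text compresses to a small fraction of its size precisely because a compressor can exploit its regularities, while rule-free random bytes do not compress at all.

```python
import os
import zlib

# Highly syntactic: rigid, repetitive structure (toy XML records).
syntactic = b"<record><id>1</id><value>42</value></record>\n" * 50

# No syntax at all: uniform random bytes of the same length.
noise = os.urandom(len(syntactic))

def ratio(data: bytes) -> float:
    """Compressed size as a fraction of original size."""
    return len(zlib.compress(data, 9)) / len(data)

# The rule-bound text shrinks dramatically; the rule-free noise does not.
print(f"syntactic: {ratio(syntactic):.3f}")
print(f"noise:     {ratio(noise):.3f}")
```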

That is an unanswerable question, because the incompressibility of the text has nothing to do with how likely it is to arrive at the right address. It is like asking for the probability that I am typing at a computer given that a dog is barking.

I’d take this further, but the key point is that your starting premise is just false. Incompressibility does not indicate syntactic information. Noise is incompressible.

Good call there @herman.

It is not. Measured (e.g. empirical) compressibility is not the same as theoretical compressibility. One of the key points of compressibility theory is that theoretical compressibility is uncomputable and not empirically determinable.
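
One way to see the empirical/theoretical gap: every real compressor yields only an upper bound on the true (Kolmogorov) complexity of a string, and different compressors yield different bounds for the same string. A sketch with three standard-library compressors:

```python
import bz2
import lzma
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 40

# Each compressor gives a *different* size for the same input; each is
# merely an upper bound on the (uncomputable) shortest description.
sizes = {
    "zlib": len(zlib.compress(data, 9)),
    "bz2": len(bz2.compress(data, 9)),
    "lzma": len(lzma.compress(data)),
}
print(len(data), sizes)
```

None of these numbers is “the” compressibility of the string; a better algorithm could always do better, and no algorithm can certify that it has reached the minimum.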

I’m not sure I follow what you are getting at.

I gave you three strings, and asked which one had more semantic information. You replied that it seemed like I had added noise to them.

I am not sure what you mean in your response. But my question still remains: how do you know the strings are being progressively corrupted (vs. me adding an encoded message in what looks to you like noise)? How do you know you used the right compression algorithm? How do you quantify the relative amount of semantic information between the three strings?


Let’s suppose for a moment that this was true …

Aren’t we discussing how God might use Evolution to accomplish his goals?

Do we have atheists on this list?