The Semiotic Argument Against Naturalism

I would defer the taxonomy of the information to you. I am quite comfortable with ‘semantic’ as I understand it from the SEP (as well as from Lennox and Dr Miller on the parallel thread), but a text may contain more than one kind of information simultaneously. To refine the argument, I would say the incompressibility indicates syntactic information. The sensitivity to the ordering of the letters indicates semanticity. The fact that it works as specific instructions indicates functionality, maybe? The combination of these, as in the Lennox quote provided above, indicates a non-algorithmic origin for their combination.

Here is an imperfect analogy: suppose you receive directions to a friend’s specific, obscure address, written in Russian. You try to compress the text and perhaps find that it doesn’t compress (not that knowing its compressibility, or that kind of information content, would be of much help to you); if the text is long enough it probably would compress somewhat, but possibly not. But then you find a Russian-speaking guide, and he takes you to the right place, verifying that the text contains some kind of information beyond mere entropy. What are the chances of arriving at the correct address using an incompressible text? Based on that result, you conclude that there is more to the instructions than purely syntactic information, and the incompressibility then becomes all the more impressive. Maybe a closer analogy would be a package with a QR code containing instructions for how the package is to move through a warehouse; but be that as it may, I would say it is fair to conclude that the instructions weren’t created by a random process, given the mathematics of how algorithms work with semantic instructions that are also rich in syntactic information.

I’m guessing this is a rhetorical question?

Ok, so why not do the three-string trick with a computer program? It contains semantic information, right? You can even intersperse parts of a second computer program into the first. What would the output be?
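If one actually wanted to run that experiment, a minimal sketch of it might look like the following (Python, with two toy programs invented for the purpose): interleave the lines of one program into another, then check both the compressed size and whether the result still parses as a program.

```python
import zlib

# Two toy programs (invented for this sketch).
prog_a = "\n".join([
    "def greet(name):",
    "    return 'hello ' + name",
    "print(greet('world'))",
])
prog_b = "\n".join([
    "total = 0",
    "for i in range(10):",
    "    total += i",
    "print(total)",
])

# Naively interleave the lines of program B into program A.
mixed = "\n".join(
    line for pair in zip(prog_a.splitlines(), prog_b.splitlines()) for line in pair
)

for label, src in [("A", prog_a), ("B", prog_b), ("mixed", mixed)]:
    size = len(zlib.compress(src.encode()))
    try:
        compile(src, "<string>", "exec")  # does it still parse as a program?
        status = "compiles"
    except SyntaxError:                   # IndentationError is a SyntaxError
        status = "does not compile"
    print(f"{label:5s} compressed size = {size:3d} bytes, {status}")
```

Compression alone does not flag the interleaved version as broken; only trying to compile (or run) it reveals that the functional information has been destroyed, which is the distinction the “three-string trick” is probing.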


I apologise, but I’d rather just ignore these comments - I see that debating them with you would not be productive.


@herman, somehow it escaped my attention that you responded here. Sorry for the late reply.


That is the challenge: we cannot use the same word to refer to different things and expect a coherent argument to come out the other end.

That is a major error. Incompressibility does not indicate syntactic information. As we have shown, random noise is incompressible, and it does not have syntactic information. Ironically, the opposite is true of syntax: the more syntax in a sequence, the more compressible it is.
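To make the point concrete, here is a minimal sketch (Python standard library only; the sample strings are invented) comparing how well a generic compressor does on random bytes versus structured text:

```python
import os
import zlib

def ratio(data: bytes) -> float:
    """Compressed size divided by original size, using zlib at level 9."""
    return len(zlib.compress(data, 9)) / len(data)

n = 10_000
random_bytes = os.urandom(n)                                     # pure noise
english_like = b"the quick brown fox jumps over the lazy dog " * 200
repetitive   = b"ab" * (n // 2)                                  # maximal regularity

print("random    :", round(ratio(random_bytes), 3))   # ~1.0 or slightly above
print("structured:", round(ratio(english_like), 3))   # well below 1.0
print("repetitive:", round(ratio(repetitive), 3))     # close to 0
```

On a typical run the random bytes come out at (or slightly above) their original size, while the structured strings shrink substantially; that is the sense in which noise is incompressible and regular, syntax-rich sequences are not.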

That is an unanswerable question, because the incompressibility of the text has nothing to do with how likely it is to arrive at the right address. That is like asking for the probability that I am typing at a computer given that a dog is barking.

I’d take this further, but the key point is that your starting premise is just false. Incompressibility does not indicate syntactic information. Noise is incompressible.

Good call there @herman.

It is not. Measured (i.e. empirical) compressibility is not the same as theoretical compressibility. One of the key points of compressibility theory is that theoretical compressibility is uncomputable and not empirically determinable (Kolmogorov complexity - Wikipedia).
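For reference, the standard statement behind that point (sketched here in the usual notation of the cited Wikipedia article) is roughly:

$$
K(x) \;=\; \min_{p\,:\,U(p)=x} |p|,
\qquad
K(x) \;\le\; |C(x)| + c_C ,
$$

where $U$ is a fixed universal machine, $C$ is any concrete compressor (zlib, bz2, and so on), and $c_C$ is a constant depending only on $C$. Since $K$ is uncomputable, the measured length $|C(x)|$ only ever bounds the theoretical value from above; it can never certify that a string is truly incompressible.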

I’m not sure I follow what you are getting at.

I gave you three strings, and asked which one had more semantic information. You replied that it seemed like I had added noise to the strings.

I am not sure what you mean in your response. But my question still remains: how do you know the strings are being progressively corrupted (vs. me adding an encoded message in what looks to you like noise)? How do you know you used the right compression algorithm? How do you quantify the relative amount of semantic information between the three strings?
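To illustrate the “which compressor?” problem concretely, here is a small sketch (Python standard library only; the sample string is made up) showing that the very same string gets different measured sizes under different algorithms:

```python
import bz2
import lzma
import zlib

# An arbitrary test string, invented for this sketch.
sample = b"colorless green ideas sleep furiously " * 50

# The same string, three different compressors, three different "measured" sizes.
for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
    print(f"{name:5s}: {len(compress(sample)):5d} bytes (original: {len(sample)})")
```

Each algorithm models structure differently, so any empirical compressibility figure is relative to the compressor chosen; there is no compressor-independent answer to “did you use the right compression algorithm?”.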

@herman

Let’s suppose for a moment that this was true …

Aren’t we discussing how God might use Evolution to accomplish his goals?

Do we have atheists on this list?