Yeah… however, when these situations arise in real life, real life will conform to the math as a rule…

The area of a rectangle will not be different in math and the real world.

So there are some truths that math can arrive at that are necessarily true.

It’s just a clarification. You are probably right about this particular equation.

Actually, the area of a rectangle in the real world *will* generally be different, because the real world generally only has approximate rectangles. How well your mathematical model matches the real world is a separate issue from the mathematical proof of statements within the model.

The issue is with CSI, not Levin’s proof. The consistent mistake you make is equivocation. I’ve explained this to you before. More than once I’ve shown how the math does not map to reality the way you assume. Even the same terms can mean very different things in information theory, but you are not sensitive to these distinctions. I don’t think you are equivocating dishonestly, but merely because you lack training and experience in applying information theory to real-world problems.

For you, IT seems to be pushing symbols around and philosophizing. You seem to be promoting a pre-Copernican view of science at times. Can you give me some evidence that I’m misreading this, please? Can you show us a paper or a project where you used IT to solve a real-world problem? I can produce several published papers where I have done just that. If the answer is no, we should start with the working assumption that you don’t practically know how to apply IT to the real world, until proven otherwise. Right?

In real life, if I stake out a rectangular part of my garden, the actual surface area of that garden is greater than what the math says, because of all of the ups and downs between the grains of soil. However, what the math tells us is more useful than the actual surface area.

That’s because there is a rectangle in the garden.

The ups and downs are just not part of it, that’s all.

The area of the rectangle does not change.

Totally false… as a rule, math is frustrated by the real world. Didn’t you know?

Nope… didn’t know that… works totally fine for me.

Last time I checked, 2+2 was 4 no matter where I went.

It’s not about a lack of “natural causes” as such, but about the inadequacy of particular proposed natural causes (e.g., blind searches by variation aided by natural selection). And the movement from the inadequacy of efficient-cause explanations to final-cause explanations is not at all uncommon. If someone wins a game of chance, say a game of dice, a hundred times in a row, we infer, from the massive improbability of that result if only chance were involved, that the game has somehow been fixed, i.e., that the dice are loaded. Our analysis of what would have happened if the dice had not been tampered with is a purely efficient-cause analysis (e.g., what would happen to tumbling dice under Newton’s Laws), but that analysis (compared with the actual result of 100 straight wins) leads to a conclusion of teleological behavior, i.e., that someone has rigged the game with an end in mind. And of course, the final-cause conclusion is not incompatible with an efficient-cause analysis of its own: one can figure out what it would take, in terms of purely efficient causes, to make the dice keep coming up with winning combinations, i.e., how they would have to be weighted, and so on. The cheater is still making use of natural laws with his weighted dice. So the assertion of teleological behavior doesn’t imply the breaking of “natural laws” as such, but is an inference based on them, i.e., non-weighted dice would behave in one way, weighted dice in another.
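As a rough illustration of the arithmetic behind that inference, here is a minimal sketch. The setup is hypothetical (not from the post): assume each round is won by rolling a six on a fair die, so each win has probability 1/6 under pure chance.

```python
from fractions import Fraction

# Hypothetical assumption: each round is won by rolling a six,
# so a fair die gives a win probability of 1/6 per round.
p_win = Fraction(1, 6)

# Probability of 100 straight wins under pure chance:
p_hundred = p_win ** 100

print(float(p_hundred))  # on the order of 10^-78
```

A probability that small is the quantitative heart of the “loaded dice” inference: chance alone is an explanation we can compute with, and the computation rules it out for practical purposes.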

The question, of course, is: Is it sometimes helpful not merely for heuristic reasons, but because there really is something in nature that is teleological? That is what physiologist Scott Turner’s new book, *Purpose and Desire*, is about. (And he’s not an ID proponent.)

All other ID claims reduce to that claim. For example, Dembski points out irreducible complexity is a form of CSI.

All ID arguments are of the form that some combination of chance + determinism cannot generate feature X, where feature X is always some form of mutual information. That is equivalent to saying that the equation chance + determinism = mutual information is false.
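For readers unfamiliar with the term, Shannon mutual information has a standard, computable definition. This is a minimal sketch using empirical plug-in probabilities from paired samples; the function name and toy data are illustrative, not taken from any ID literature.

```python
from math import log2
from collections import Counter

def mutual_information(pairs):
    """Shannon mutual information I(X;Y) in bits, estimated from a
    list of (x, y) samples using empirical (plug-in) probabilities."""
    n = len(pairs)
    pxy = Counter(pairs)                # joint counts
    px = Counter(x for x, _ in pairs)   # marginal counts for X
    py = Counter(y for _, y in pairs)   # marginal counts for Y
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Perfectly correlated bits carry 1 bit of mutual information:
mi_correlated = mutual_information([(0, 0), (1, 1)] * 50)

# Uniformly mixed (independent) bits carry none:
mi_independent = mutual_information([(0, 0), (0, 1), (1, 0), (1, 1)] * 25)

print(mi_correlated, mi_independent)  # 1.0 0.0
```

Whether this Shannon quantity is the same thing the ID argument needs is, of course, exactly the equivocation question raised elsewhere in this thread.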

This core ID claim is proven to be mathematically correct.

It’s not about me. IT is a well-established academic discipline, entirely independent of ID. Li and Vitanyi published a whole book on applications of IT.

It is interesting that we never think that God was influencing those rolls.

I think that question is a bit too vague. The consensus view is that feedback loops, like those found in the process of evolution, will produce systems that amplify the feedback. Once you have imperfect replicators competing for limited resources the inevitable outcome is an increase in fitness. That is the only thing that can happen, and it will happen spontaneously.

So does this mean that teleology is built into nature because nature is capable of producing environments where imperfect replicators can compete for limited resources? That seems to be a difficult question to get around.

It does not appear Egnor’s arguments against bad design reduce to CSI. @Agauger’s arguments about population bottlenecks do not reduce to CSI.

In your view, if everything connects to CSI, does disproving one CSI-connected argument disprove them all? If not, which are the CSI-connected arguments that are incorrectly applied?

Algorithmic complexity is also a mathematical construct that provably does not map to observable reality, because compressibility is uncomputable. The only way this is relevant to DNA in the way you mean is if the abstract concept of compressibility is equivocated with observable compression. The two are probably not equivalent.

ID proper is the claim that intelligent design can be empirically differentiated from artifacts of non-intelligent sources. This all reduces to CSI. There are many arguments made by ID people, but not all of them necessarily fall under the core claim that makes ID a science.

So yes, if you can prove that chance + determinism = mutual information, then you’ve disproven ID in its entirety. However, that is impossible since the contrary is proven mathematically.

The point is you haven’t proved anything. I have followed all your derivations, and I claim to be qualified in the subject area: I have a PhD in Electrical Engineering, took several graduate courses in Information Theory, attended lectures by Claude Shannon himself, made technical contributions to Forward Error Correcting Codes, and contributed to fiber-optic systems that approached the Shannon limits on information transfer. My conclusion here, as a retired Electrical Engineer, is that your effort to tie ASC to ID is not even useless, it is POINTLESS.

The compressed length is an upper bound on Kolmogorov complexity. If your complaint is that we cannot estimate it precisely, then that is a complaint that applies to all of science, not just Kolmogorov complexity.
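The upper-bound relationship is easy to demonstrate concretely: the length of any lossless compression of a string bounds its Kolmogorov complexity from above, up to the constant size of the decompressor. Here is a minimal sketch using `zlib`; the helper name is my own.

```python
import os
import zlib

def kc_upper_bound(data: bytes) -> int:
    """The compressed length is a (loose) upper bound on the Kolmogorov
    complexity of `data`, up to the constant size of the decompressor."""
    return len(zlib.compress(data, 9))

repetitive = b"AB" * 500        # highly structured, 1000 bytes
random_ish = os.urandom(1000)   # incompressible with high probability

print(kc_upper_bound(repetitive))   # far below 1000
print(kc_upper_bound(random_ish))   # near (or slightly above) 1000
```

The bound is one-sided: a short compressed output proves low complexity, but a long one proves nothing, since a smarter compressor might still exist. That asymmetry is the practical face of uncomputability.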

I’m very interested if you can demonstrate this in some way besides appealing to your credentials and opinion.

That is easy to do, @EricMH. Try testing your claims with simulations. The last time we did this, you know what happened. Do you want to build an experiment to test this?

Compression shows that the measured entropy is only an upper bound on the true entropy; a better compressor lowers the bound. So work on a better compression algorithm for DNA sequences. Over 2 million genomes have been sequenced, and we need 1000 petabytes of storage to keep them. If the sequences were designed, you should clearly be able to reverse engineer the design and come up with the lowest-entropy compression algorithm.
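To make the entropy bookkeeping concrete, here is a hedged sketch (my own toy code, not a proposed genome compressor) comparing the trivial 2-bits-per-base packing, which any DNA compressor must beat, against a general-purpose compressor on a uniformly random sequence. Real genomes have structure that specialized tools exploit; this toy does not capture that.

```python
import random
import zlib

def two_bit_pack(seq: str) -> bytes:
    """Naive baseline: each of A/C/G/T fits in 2 bits, so 4 bases per byte."""
    code = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = bytearray()
    buf, bits = 0, 0
    for base in seq:
        buf = (buf << 2) | code[base]
        bits += 2
        if bits == 8:           # a full byte accumulated
            out.append(buf)
            buf, bits = 0, 0
    if bits:                    # pad any trailing partial byte
        out.append(buf << (8 - bits))
    return bytes(out)

random.seed(0)  # deterministic toy data
seq = "".join(random.choice("ACGT") for _ in range(4000))

packed = two_bit_pack(seq)                # exactly 1000 bytes: the baseline
gzipped = zlib.compress(seq.encode(), 9)  # general-purpose compressor

print(len(packed), len(gzipped))
```

On random bases the general-purpose compressor hovers near the same 2-bit baseline, which is the point: beating the baseline requires finding real structure in the sequence, not just running a generic tool.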