What is the ID Definition of Information?

They are giving answers to this…that is not the problem. Are you following the thread: Durston: Functional Information?

No, but I’ll try to get caught up today.

NB: I did not create this thread and I should not be held responsible for the title of this thread.

Just out of curiosity, do you think anyone has a problem with this title?

I think it’s a great question. I have a problem with the title because it’s not a question that I asked. So yes, I have a problem with the title.

It is a question that is answered by the thread. And you are free to change it as the owner of the original post.

I’ve stated that I did not create this thread, so I do not think I am the owner of the original post. If I can change the title of threads that I did not create that is welcome news to me!

This arose as a separate thread because it was off topic of the one where it started. If you don’t want this to happen again, stop going off topic.

Speak to whoever I responded to. Their post was off topic.

You did ask about various definitions of information at one point. From the SEP article on information:

See the linked article at SEP for citations.
Edit to add:
I believe the ID community uses either Shannon or Kolmogorov information. There are mathematical relationships between these two which allow them to be inter-substituted under certain mathematical constraints.
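To make the Shannon side concrete, here is a minimal illustrative sketch (my own toy code, not anyone’s published method) that measures a sequence’s Shannon information from its symbol frequencies. Note that two strings with the same base composition get the same value, which is exactly where a compression-based (Kolmogorov-style) measure would disagree:

```python
from collections import Counter
from math import log2

def shannon_bits(sequence: str) -> float:
    """Total Shannon information of a sequence, using the empirical
    symbol frequencies as the probability model (an illustrative choice)."""
    counts = Counter(sequence)
    n = len(sequence)
    # H = -sum(p * log2(p)) per symbol; multiply by length for total bits
    h_per_symbol = -sum((c / n) * log2(c / n) for c in counts.values())
    return n * h_per_symbol

# Both strings have identical base frequencies, so they receive the same
# Shannon value; frequency counting is blind to arrangement, which is where
# Kolmogorov-style (compression-based) measures part ways with Shannon.
print(shannon_bits("ACGTACGTACGT"))
print(shannon_bits("ACGGTTACAGTC"))
```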

You will also find the term “functional information”, which was introduced by Szostak. The Shannon and Kolmogorov approaches can remove the redundancies of a genetic sequence treated as a string of symbols, but alone they are insufficient to deal with biological redundancies resulting from the fact that “different sequences can encode RNA and protein molecules with different structures but similar functions.”

To deal with such redundancies, Szostak proposes to use the -log2 of “the probability that a random sequence will encode a molecule with greater than any given degree of function.” To make the approach apply to a specific situation, one has to specify the function of interest and then empirically estimate the number of sequences which have that function (the example function “bind ATP” is used in his paper).
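A minimal sketch of that calculation, with a made-up scoring function standing in for an empirical activity assay (the names and threshold here are my own illustrative assumptions, not from the paper):

```python
import random
from math import log2

def functional_information(sequences, degree_of_function, threshold):
    """-log2 of the fraction of sampled sequences whose degree of function
    meets or exceeds the threshold (a Szostak/Hazen-style estimate)."""
    hits = sum(1 for s in sequences if degree_of_function(s) >= threshold)
    if hits == 0:
        raise ValueError("no sequence met the threshold; cannot estimate")
    return -log2(hits / len(sequences))

# Toy stand-in for a "degree of function": real work would measure something
# like ATP-binding affinity in the lab; here we just score G/C content.
def toy_score(seq):
    return sum(base in "GC" for base in seq) / len(seq)

random.seed(0)
sample = ["".join(random.choice("ACGT") for _ in range(50)) for _ in range(10_000)]
print(functional_information(sample, toy_score, threshold=0.6))
```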

Hazen et al. extend Szostak’s work to propose it as a candidate for measuring complexity in general. Rather than limiting it to sequences of nucleotides, they apply functional information to configurations of systems with many interacting components. Now the degree of function is a “measure of a configuration’s ability to perform function x”. Again, one has to pre-specify the particular function of interest for the system of interest, then somehow determine what proportion of all possible configurations have that function, then take -log2 of that proportion to get bits.
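If I am reading Hazen et al. correctly, the general definition can be written as:

```latex
I(E_x) = -\log_2\!\big[\,F(E_x)\,\big]
```

where F(E_x) is the fraction of all possible configurations of the system whose degree of function is at least E_x. The hard part in practice is estimating F(E_x), not taking the logarithm.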

As far as I know, only Durston directly refers to functional information.

1 Like

I’m not sure who created the thread or its title. I did find the “self-proclaimed” phrase somewhat out-of-tune with what I understand to be the goals of the site. That’s just my personal, emotional reaction.

Has anyone ever summarized in one place the various approaches of the ID community, how they inter-relate, and where they fail according to the consensus of biologists and information scientists?

1 Like

Much ado about nothing, as far as I can tell.

I am supposedly the originator of several threads, which I didn’t actually originate – they were split off from other threads. And the title of one thread that I did originate was changed.

I am not finding anything that concerns me about those splits or title changes. I’m not sure why it bothers @Mung .

1 Like

In Dembski’s 2005 paper, he defines CSI in a way that is the opposite of Kolmogorov Complexity. It’s a direct function of KC/KI, but what Dembski defines as MORE CSI implies LESS KC. What Dembski calls information is lack-of-information to the rest of the mathematical world.
The first part of this blog post is my summary of this version of CSI. Dembski has several definitions for it; this is the one I have studied in the most detail.
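For reference, and going from memory of the 2005 paper (so please check it against the original), the quantity in question is roughly:

```latex
\chi = -\log_2\!\left[\,10^{120}\cdot\varphi_S(T)\cdot P(T \mid H)\,\right]
```

where \varphi_S(T) counts the patterns that are at least as simple to describe as T. A pattern with a shorter description (lower Kolmogorov-style complexity) makes \varphi_S(T) smaller and \chi larger, which is the inversion described above.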

Edit: fixed the link to point to the correct post.

1 Like

Thanks Dan. Do you know of an answer to the question I posed earlier in this thread to Dr. S, asking whether anyone has ever done a summary of all the information arguments put forward by the ID community, starting with the math and then moving on to the models and words used, the supposed biological consequences, and the replies from the consensus scientific community?

I think your point about the word ‘information’ being used in subtly different ways is an important one. I believe that many arguments one sees in these forums, e.g. about information or randomness, get nowhere because many of the posters ignore the math.

The math has to be understood first. Then how the math is used in a scientific model. Only then can one discuss in detail how words might correctly be used to describe the model and the resulting science.

2 Likes

I provided at least one reason why it bothered me. If my post was off-topic, so was the post that I was responding to.

The original thread was about “the oldest clue yet of animal life, dating back at least 100 million years before the famous Cambrian explosion of animal fossils.” It had nothing to do with Information or ID.

Another reason it bothers me is that it makes it look like I raised the question and wanted a thread created to discuss it. If I wander off topic, the moderators should simply say so or flag it. I don’t need anyone to create a thread for me, and yet many have been created “in my name”, as it were.

I admit that I should be raising this with the moderators rather than taking the approach that I did, so I stopped doing what I was doing.

Why does this matter to you? I entirely acknowledge that I split the thread. You are the first one to have complained about this.

But we are all familiar enough with the frequent topic splitting here, that we should be able to take it in stride.

That is exactly right. It is now set up as a catch-22. If it is independent entropy, then it is design. If it is mutual entropy, then it is design too.
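For anyone following along, the standard textbook relationship between these quantities (not specific to any ID paper) is:

```latex
I(X;Y) = H(X) + H(Y) - H(X,Y)
```

where I(X;Y) = 0 exactly when X and Y are independent. The catch-22 is that whichever way the numbers come out, the result gets read as design.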

The closest we have now is here: Information = Entropy and Chance = Choice. Be sure to follow the link in the OP to BioLogos.

@BruceS, you seem to have some knowledge in this area. Perhaps we should start building that resource here on the forum. Is this something you and @Dan_Eastwood would be willing to work on with me?

1 Like

I don’t care to make this a public conversation unless you just insist. :slight_smile: