Gpuccio: Functional Information Methodology

You are quite correct. My mistake was in treating the 60 bits as representing the probability of finding a particular antibody per infection rather than per B cell. In the former case, my calculation would be correct. (In the 100 safes scenario, the correct analogy would be the probability of unlocking all 100 safes by flipping a coin once as the thief encounters each safe. That probability is indeed the same as that for guessing the 100-bit combination by flipping 100 coins.) But since the 60 bits is per B cell, the probability per infection is much higher.

So let’s ballpark some numbers for the real case. We’re assuming the probability of hitting on the correct antibody is ~1e-18, which is 60 bits worth. How many tries do the B cells get at mutating to hit the right antibody? Good question. There seem to be about 1e11 naive B cells in an adult human. Only a fraction of these are going to proliferate in most infections. Let’s say 10% of naive B cells each proliferate 100-fold. That gives 1e12 tries at a 1e-18 target, for a probability of randomly hitting the target of 1 in a million per infection. That corresponds to ~20 bits. So each week in this scenario only contributes 20 bits of probability, not 60, and the time to reach 500 bits is 25 weeks, not 8. (Note: this 500 bits represents the same probability as hitting a 500 bit target in a single try.) If my guess of the proliferation is off by an order of magnitude, knock off a few more bits. It still takes less than a year to get to 500 bits, and a lot less than 1e38.
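The ballpark above can be sanity-checked in a few lines of Python (a sketch; the B-cell count, proliferation fraction, and 1e-18 target are the post's assumptions, not measurements):

```python
import math

# Ballpark assumptions from the post (not measured values):
# ~1e11 naive B cells, 10% proliferate 100-fold, target probability 1e-18.
target_prob = 1e-18                      # ~60 bits (2**-60 ≈ 8.7e-19)
tries = 1e11 * 0.10 * 100                # 1e12 tries per infection

# With tries * target_prob << 1, P(at least one hit) ≈ tries * target_prob.
p_per_infection = tries * target_prob    # 1e-6

bits_per_infection = -math.log2(p_per_infection)   # ~20 bits
weeks_to_500_bits = 500 / bits_per_infection       # ~25 weeks

print(bits_per_infection, weeks_to_500_bits)
```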


glipsnort:

I have no time now, and will answer later.

For the moment, two things:

1. Up to now, I had not considered the problem of what the initial 60 bits meant. I have only addressed your wrong statement about probabilities in general. You are still wrong, and I will clarify further later.

2. In the meantime, I realize that your example is referring to the working of the immune system. So, now is a good time to deal with that, and I will do that later too. But I have real difficulties in understanding your ideas about the immune system, and how you apply probabilities to that example.

So, could you please give me a short layout of your thoughts? You have probably done that before, but I have not read it.

In particular, are you referring to the primary immune response, to antibody affinity maturation, or to the secondary response? Please, be as clear as possible.

Until later.

This is a new one. Probabilities are multiplicative, so information is additive. Information is the log of a probability. So yes, 10 objects with 50 bits of FI each are exactly 500 bits of FI.

There is a caveat here if the objects are dependent. In that case the total FI is less than 500. But that also means the per-object estimate of FI is deceptively high.

Why would you think differently?

This, it seems, would explain quite a bit of the disagreement between us. If FI were not additive (but it is!!) there would be no reason to consider cumulative scenarios, which is why you have not. Except FI is additive.
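Since FI is just the log of a probability, the additivity claim is elementary log arithmetic; a minimal sketch:

```python
import math

def fi_bits(p):
    """Functional information (in bits) of a target hit with probability p."""
    return -math.log2(p)

p_one = 2 ** -50      # one object: 50 bits of FI
p_ten = p_one ** 10   # ten independent objects must all be hit

# Probabilities multiply, so bits add: 10 x 50 = 500.
assert math.isclose(fi_bits(p_one), 50.0)
assert math.isclose(fi_bits(p_ten), 500.0)
```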

Whatever error @glipsnort made, he is entirely correct here.


I think the confusion is that @gpuccio is not really interested in bits of information, but in probability of hitting targets with multiple attempts, and that isn’t multiplicative. Compare two cases, one where there are two independent events, each with probability p, and the other where there is a single event with probability p^2. The two cases have the same number of bits, and in the event of a single trial they have the same probability of success.

If there are multiple (n) trials, however, the probabilities are no longer equal. In case 1, Prob(success) = [1 - (1-p)^n] * [1 - (1-p)^n]. If np << 1, Prob(success) ~ (np)^2. In case 2, Prob(success) = 1 - (1-p^2)^n, which is ~ np^2.
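A quick numerical illustration of the two cases (the values p = 0.01 and n = 100 are arbitrary):

```python
def prob_case1(p, n):
    """Two independent sub-targets, each searched with n trials; success = both found."""
    return (1 - (1 - p) ** n) ** 2

def prob_case2(p, n):
    """One combined target of probability p**2, searched with n trials."""
    return 1 - (1 - p ** 2) ** n

p, n = 0.01, 100

# A single trial gives the same success probability, p**2, in both cases.
assert abs(prob_case1(p, 1) - prob_case2(p, 1)) < 1e-12

# With many trials the decomposable case pulls far ahead:
# case 1 ≈ (n*p)**2, case 2 ≈ n*p**2.
print(prob_case1(p, n), prob_case2(p, n))
```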


@glipsnort that is not a well defined example. What do you think of this?

Case 1. Two independent events, each with probability p; success requires both at the same time, and there is no benefit to one alone.

Case 2. Two independent events, each with probability p, and each event is independently useful, so it can be retained by negative selection when found.

Case 3. One event with probability p^2.

All else being equal, perhaps with some caveats to be clarified:

1. The FI is the same for all three cases (success at all events).

2. Single-trial success is identical in all cases: p^2, with FI of 2 log2(1/p).

3. Evolutionary wait time in Cases 1 and 3 is the same, on the order of 1/p^2 trials, with FI of 2 log2(1/p).

4. Evolutionary wait time in Case 2 is much less, approximately 2/p trials, with the same FI of 2 log2(1/p). Note, the wait time is actually LESS than this, and it scales very well as we increase decomposability.

5. Case 1 is equivalent to the strictest (and known to be false) version of irreducible complexity (IC1). Even Behe acknowledges that this is not how biology works.

6. For very good reason, modern evolutionary theory works like Case 2, which has far lower wait times than Cases 1 and 3.

7. FI does not correlate with wait time! The decomposability of the system breaks this relationship.

8. This result does not depend on fitness landscapes at all, just random sampling (tornado in a junkyard) plus NEGATIVE selection, not Darwinistic positive selection.

I bulleted out the points here so they can be disputed or affirmed more clearly. Everything I wrote here is directly verifiable with simulations and experiments. We are not even including positive selection here, just negative selection.
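Points 3 and 4 can be checked directly with a small simulation (a sketch; p = 0.05 and the run counts are arbitrary, and "wait time" is measured in trials):

```python
import random

def wait_no_retention(p, rng):
    """Cases 1 and 3: success only when both sub-events hit on the same trial."""
    t = 0
    while True:
        t += 1
        if rng.random() < p and rng.random() < p:
            return t

def wait_with_retention(p, rng):
    """Case 2: each sub-event, once hit, is retained (negative selection)."""
    t, found = 0, [False, False]
    while not all(found):
        t += 1
        found = [f or rng.random() < p for f in found]
    return t

rng = random.Random(0)
p, runs = 0.05, 2000
mean_joint = sum(wait_no_retention(p, rng) for _ in range(runs)) / runs
mean_retained = sum(wait_with_retention(p, rng) for _ in range(runs)) / runs

# Theory: mean_joint ≈ 1/p**2 = 400 trials; mean_retained < 2/p = 40 trials.
print(mean_joint, mean_retained)
```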

This seems to be the heart of @gpuccio’s conceptual misunderstanding of Behe and of evolutionary science. He is working off a strawman theory of evolution and a long-abandoned formulation of IC. Recall also he explains that his work depends on Behe’s work. He seems unaware that Behe acknowledges that IC1 does not reflect biology.


A few final observations. @gpuccio states that his purpose here is not scientific inquiry, but to confront neo-Darwinism.

This seems to clear up a lot. In conversation with us, he has not modified his analysis or worked to include any controls in his analysis.

He has been working toward his goal of “confronting” neo-Darwinism. He has been successful, to a degree, because confrontation has nothing to do with being right or wrong. However, he failed in one major way. None of us are advocating neo-Darwinism!!! Neo-Darwinism as defined by the ID movement (and by @gpuccio) is not modern evolutionary theory. We moved on from Darwinism, meaning positive-selection-dominated change, in 1968. Behe was in high school, and ID had nothing to do with it.

So what we have here is a performance of confrontation, not scientific inquiry: theatre. I am glad he is upfront about this. I respect his honesty.

It seems that there is agreement here:

So let’s end on some common ground. @gpuccio affirms common descent, and that might be a more useful conversation to continue on in the near future:

However, at this point, I think we can let this thread end.

@gpuccio is arguing against a version of evolution that is different from contemporary understanding of biology and evolutionary science. It may well be correct that his version of evolution is impossible (and likely so!), but this has nothing to do with modern evolutionary science.

This topic will now auto-close tomorrow at 6pm. There are several interesting subtopics we can continue discussing in other threads. Let us keep these new threads more focused in scope, focused on the subtopics that arose.

@gpuccio thanks for engaging here, and I look forward to continuing the conversation!


Final thoughts @gpuccio, @sfmatheson, @art, @glipsnort?

My final thought: ping me when you find an ID advocate who is actually interested in design and who understands evolutionary biology, at least at the level of an undergrad.


OK, let’s clarify this.

10 objects with 50 bits of FI each are not, in any way, 500 bits of FI.

10 objects with 50 bits of FI each are 500 bits of FI only if those 10 exact objects are needed to give some new defined function.

Let’s see the difference.

Let’s say that there is a number of possible functions in a genome that have, each of them, 50 bits of FI.

Let’s call the acquisition of the necessary information to get one of those functions “a success”.

These functions are the small safes in my example.

The probability of getting a success in one attempt is, of course, 1:2^50.

How many attempts are necessary to get at least one success? This can be computed using the binomial distribution.

The result is that with 2^49 attempts we have a more than decent probability (0.3934693) of getting at least one success.

How many attempts are necessary to have a decent probability of getting at least 10 successes, each of them with that probability of success, each of them with 50 bits of FI?

Again, we use the binomial distribution.

The result is that with 2^53 attempts (about 16 times the number used before, i.e. 4 bits more) we get more or less the same probability: 0.2833757

That means that the probability of getting 10 successes is about 4 bits lower than the probability of getting one success. The FI of the combined events is therefore about 54 bits.

Why is that? Why do probabilities not multiply, as you expect?

It’s because the 10 events, while having 50 bits of FI each, are not generating a more complex function. They are individual successes, and there is no relationship between them.

That’s why the statement:

10 objects with 50 bits of FI each are not, in any way, 500 bits of FI.

is perfectly correct. Those ten objects have 500 bits of FI only if, together, they, and only they, can generate a new function.

In terms of the safes, finding the keys to the 100 small safes generates 100 objects, each with 1 bit of FI. But finding those 100 objects does not in any way generate 100 bits of FI, because the 100 functional values found by the thief have no relationship at all with the 100-bit sequence that is the solution for the big safe.

I hope that is clear. We can rather easily find a number of functions with lower FI, but their FI cannot be summed, unless those functions are the only components that can generate a new function, a function that needs all of them exactly as they are.

Please, give me feedback on this point, before I start examining the example of affinity maturation in the immune system.

This is not only to Swamidass, but to all those who have commented on this point.

By the way, I was forgetting: using the binomial distribution, we can easily compute that the number of attempts needed to get at least one success when the probability of success is 1:2^500 (500 bits of FI) is 2^499, with a global probability of 0.3934693.
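For what it's worth, the probabilities quoted in this post do follow from the stated binomial model. Here is a check using the Poisson limit (valid here because n is astronomically large and p tiny), which verifies the arithmetic without endorsing the modeling assumptions:

```python
import math

def p_at_least(k, lam):
    """P(X >= k) for X ~ Poisson(lam); the binomial limit with lam = n * p."""
    return 1 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))

# 2**49 attempts at p = 2**-50 (lam = 0.5): P(at least 1 success) ≈ 0.3935
print(p_at_least(1, 0.5))
# 2**53 attempts at p = 2**-50 (lam = 8): P(at least 10 successes) ≈ 0.2834
print(p_at_least(10, 8.0))
# 2**499 attempts at p = 2**-500 gives lam = 0.5 again, hence the same 0.3935.
```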

@gpuccio, we will continue this for a bit, will likely split it into a new thread.

False.

Exactly, such as the combination of their 10 independent functions. Therefore, by your own definitions, 10 objects with 50 bits are exactly 500 bits of FI (defining function as the sum total of their function).

You go on to explain something entirely unrelated.

Your math is all wrong here. In your example above, that of the 100 safes, the success probability for each trial is 50%, and if you have, say, 20 trials, you expect to get about ten successes.

This also is all wrong. You need to use a modified version of an extreme value distribution.

Because your math is all wrong. Very easy. These are basic homework problems in an intermediate probability course. In your education to become a physician, you probably never had a chance to learn this. Just turns out you are doing the math wrong.

The conceptual disconnect is so large here that I am tempted to call it confabulation.

Of course they can, in all cases, if we define the new function as the sum total of all the functions.

This will be handled, also, in a new thread.

Nope. As I just explained here: Information is Additive but Evolutionary Wait Time is Not. It appears you had some basic misunderstandings here of FI. I encourage you to catch up. This is an important area of science, and it is good to learn about.

@gpuccio, I overstated this one thing. I was expecting you were going to the 100 safe example. This one part of your explanation is correct, IF AND ONLY IF the specific 50-bit components are 1) required with no substitutes allowed (which we know is not true of evolution) and 2) the system is not decomposable. You are still incorrect, because you did not state these material caveats.

2^FI does not equal wait time, except in one very artificial situation.

Moreover you need to be using an extreme value distribution, not a binomial distribution. Your math is all wrong.

Are you kidding?

I have clearly said that I was computing for a probability of 50 bits of FI. The functions are analogous to the small safes, because the big safe is the object with 500 bits of FI.

Can you just explain that? I have made all the computations using the binomial distribution. They are correct.

If you define the function as getting 10 objects with 50 bits of FI, which is what had been declared, that is not a 500-bit function.

If you define the function as getting exactly the following 10 functions:

etc,

each of them with 50 bits of FI, then that result has 500 bits of FI.

There was nothing in the original definition that stated that 10 specific functions had to be found. The simple statement was that 10 objects of 50 bits of FI each, in general, are 500 bits of FI. That is wrong.

You cannot just say that my math is wrong. Do it yourself, explain how you have made the computation, and let’s see.

Your trick of partially quoting my reasoning is unacceptable and unfair:

I have said:

"Let’s say that there is a number of possible functions in a genome that have, each of them, 50 bits of FI.

Let’s call the acquisition of the necessary information to get one of those functions “a success”.

These functions are the small safes in my example.

The probability of getting a success in one attempt is, of course, 1:2^50."

Have you forgotten that the reasoning started with:

“Let’s say that there is a number of possible functions in a genome that have, each of them, 50 bits of FI.”

But you equivocate because I say:

“These functions are the small safes in my example.”

meaning of course that they have the role of the small safes while the 500 bit function has the role of the big safe.

And then you say that my math is wrong because I should have computed for a probability of 0.5!

My math is perfectly right.

What is wrong here?

Why?

I see that you have realized that the discussion about the safe you made previously is wrong.

I have also clearly stated that all these reasonings, for the moment, are only about a random system. In a random system, wait time depends on probabilities and probabilistic resources, as in my model. There is no point in continuously stating that NS can change some of these things. I know, and I have always analyzed those aspects too. Not yet here.

The role of NS, be it negative or positive, and of possible decompositions, is another discussion altogether. If we do not analyze the probabilistic context, how can we model the effect of selection?

I would like to discuss the immune system now, but I don’t know if there will be the time. Let me know how to proceed.

By definition it is. If each of those 50-bit functions is independent, then the function that by definition includes all of them is exactly 500 bits.

You are right, I overstated it, and explained in the next post:

On that last bit of computation (really all of that post), you need to use an extreme value distribution, not binomial, even if we grant you the unrealistic assumptions here. Remember, probability of success DOES NOT equal 2^-FI. Wait time does not equal 2^FI.

This is a basic misunderstanding you have that may take a while to sink in. That’s okay.

I never mentioned NS. I talked about other things. The system we are discussing is a decomposable system, and it is also entirely random.

Do you know what an extreme value distribution is? If you did, it would probably make more sense. When you start discussing the probability of at least X successes out of many simultaneous trials, you need an EVD. There are ways to approximate it of course.

I think that if you explained it better, it could be useful. Remember, I am not considering the effects of selection here. I have computed by the binomial distribution the number of attempts that is expected to give some probability of the defined result. That can be converted into waiting time knowing the mean time necessary for one attempt to take place.

What’s wrong with this?

We have worked it out in more detail here: Information is Additive but Evolutionary Wait Time is Not.

Note, the example there does not include positive selection. It is a clearly explained math problem. You just get the answer wrong. It is worth understanding what you got wrong.

It will be hard for you to see this without being willing to see that wait time does not equal 2^FI. That is just false. The example you gave of the 100 safes is a beautiful example of why this is not the case, and it is your example. I’d focus on that new thread if I were you.


Excuse me, I think that you should express more clearly why an Extreme Value Distribution should be used here. The binomial distribution is a discrete distribution, and I believe that we have discrete values here. I am ready to consider your statement, but you need to motivate it.

We have explained here: Information is Additive but Evolutionary Wait Time is Not. A local mathematician (@nwrickert) is correctly noting that your error comes from not taking parallelism into account.


Briefly, imagine 10 identical runners on a team.

A binomial distribution (drawing an analogy) tells you how long it will take them to finish a ten-leg relay race. The central limit theorem helps you here, and you can get increasingly accurate estimates as the number of legs increases.

The corresponding EVD, however, would tell you how long it takes for all ten of them to complete different races that all start at the same time. The maximum completion time is given by the EVD, and that is what you want.

Paradoxically, in some cases, the estimates get less certain as the number of runners increases. This is because as you increase the number of runners, outliers become more likely, and these outliers are derived from the part of the distribution we cannot usually model from data. This is the “black swan” effect.

Regardless, evolution doesn’t have to open all 100 safes anyway. It doesn’t need all the runners to get across the finish line. It just needs many runners to get across, not even most, and we would observe FI increases.
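The relay-versus-parallel picture can be sketched with a quick simulation (exponential finish times with mean 1 are an arbitrary stand-in for a runner's time; the point is sum versus max):

```python
import random

rng = random.Random(1)
runners, races = 10, 5000

relay_totals, parallel_finishes = [], []
for _ in range(races):
    times = [rng.expovariate(1.0) for _ in range(runners)]
    relay_totals.append(sum(times))       # ten legs run back to back
    parallel_finishes.append(max(times))  # all start together; last finisher decides

mean_relay = sum(relay_totals) / races
mean_parallel = sum(parallel_finishes) / races

# The sum concentrates near 10 (central limit theorem); the max is an
# extreme-value quantity, with mean H_10 = 1 + 1/2 + ... + 1/10 ≈ 2.93.
print(mean_relay, mean_parallel)
```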
