# Information is Additive but Evolutionary Wait Time is Not

Continuing the discussion from Gpuccio: Functional Information Methodology:

This is a simplified model, which is a good starting point for refinement. To do: 1. extend to show the math for more decomposability, 2. show my work, and 3. show how dependencies change the computation.

To be clear,

This is not true. If these objects are independent, then they are exactly 100 bits of FI.

I'd like to know in what context you think it is correct. Can you tell me @glipsnort?

Look at the example offered:

What is going on here is that @gpuccio has (wrongly) equated wait time with exp(FI). Wait time and FI are not the same thing, though, and are not related this way. If the system is decomposable, the wait time is much lower than this equation indicates. This is not precise (and I can derive the exact result if you are curious):

1. In a fully decomposable system (100 safes) the wait time is about log 100. Note that you can try all the safes at once. So 100 thieves on 100 1-bit safes.

2. In a non-decomposable system (1 safe, with a 100-bit combination), the wait time is 2^100.

The computation above (the log 100) is not precise, and it has a distribution. I can work it out exactly if people want. Though perhaps @Dan_Eastwood or @nwrickert wants to jump in.
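For the curious, here is a rough simulation (my own sketch, not exact math) of the fully decomposable case: 100 thieves, each guessing a random bit every round on their own 1-bit safe. The wait until all safes are open is the maximum of 100 geometric waits, on the order of log2(100) rounds, nothing like 2^100.

```python
import random

def parallel_wait(n_safes=100, trials=2000, seed=0):
    """Average rounds until ALL of n one-bit safes are open,
    one thief per safe, each guessing a random bit every round."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        rounds = 0
        unopened = n_safes
        while unopened:
            rounds += 1
            # each remaining safe stays closed with probability 1/2
            unopened = sum(1 for _ in range(unopened) if rng.random() < 0.5)
        total += rounds
    return total / trials

avg = parallel_wait()  # a handful of rounds, vs 2**100 for the single big safe
```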

What is the FI? In both cases the FI to achieve full function is 100 bits. In both cases there is one configuration in a space of 2^100, so the FI is 100 bits. The point is that FI is not wait time. You cannot conflate the two.
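To make the additivity claim concrete, here is a quick check (a sketch; the function name is mine) using the definition FI = -log2(target/search):

```python
from math import log2

def functional_info_bits(target_states, search_states):
    """FI = -log2(|target| / |search|), the definition used in the thread."""
    return -log2(target_states / search_states)

# One 100-bit safe: one good combination in a space of 2**100.
one_big_safe = functional_info_bits(1, 2**100)

# One hundred independent 1-bit safes: 1 bit each, and FI is additive
# over independent components.
hundred_small_safes = sum(functional_info_bits(1, 2) for _ in range(100))
```

Both come out to exactly 100 bits, even though the wait times differ astronomically.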

As the OP states, for an independent multi-component system, FI is additive but wait time is not.


@gpuccio, do you see your error in reasoning? You thought wait time (evolutionary difficulty) is about 2^FI. This is not true, though, when partial payouts are available, as in the case with 100 safes.

I can look at this differently.

@gpuccio appears to be looking at FI as something to be built up sequentially, one bit at a time.

When you describe the system as decomposable, you are thinking of something more like parallel processing, several bits at the same time.

What's important here is that evolution is massively parallel.


Actually, the opposite: he thinks it must happen all at once. He needs that, because it MUST happen all at once for his wait time = 2^FI claim to be correct.

Decomposable and parallel are distinct concepts. A decomposable, but non-parallel, problem would change this:

To this:

• In a fully decomposable system (100 safes) the wait time is about 100 * 2 = 200. Here one thief is working, solving each safe one at a time.
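The 100 * 2 figure comes from summing 100 geometric waits with p = 1/2; a quick simulation (my own sketch) of the single sequential thief confirms it:

```python
import random

def sequential_wait(n_safes=100, trials=2000, seed=1):
    """Average guesses for ONE thief to open n one-bit safes in order;
    each safe takes a Geometric(1/2) number of guesses, i.e. 2 on average."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        guesses = 0
        for _ in range(n_safes):
            guesses += 1                # the guess that finally succeeds
            while rng.random() >= 0.5:  # failed guesses before it
                guesses += 1
        total += guesses
    return total / trials

avg = sequential_wait()  # close to 100 * 2 = 200
```

Linear in the number of safes, rather than exponential in the number of bits.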

Exactly, so this last case is just not relevant. There are at least two other simplifications in this case study which, if fixed, would decrease the wait time even further, to sublinear.


I already played with the safes example and broke it. There I assumed a certain dependence among the safes, where the small safes each hold part of the function, and opening 100 small safes is equivalent to opening the single large safe.
I also noted the waiting time will be considerably shorter. Opening one safe at a time, guessing one more bit with each safe, is a Negative Binomial distribution with an expected value of 200 (at p=0.5).
The thief can do a lot better, approaching O[log(100)], by trying all the safes at once and guessing the bits he has not already learned. As I interpret the problem, the thief has a reasonable chance of guessing the next several bits; 50% chance to guess 1 bit and open 1 safe, 25% chance to open 2 safes, 12.5% to open 3, etc. The number of safes opened on each cycle this way follows a Geometric Distribution until the thief starts running out of safes.
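The per-cycle claim can be checked with a quick simulation (my sketch of the scheme as I read it, in the regime before the thief starts running out of safes):

```python
import random

def safes_opened_one_cycle(trials=10_000, seed=2):
    """One cycle of the scheme: guess the next bits in order; a run of
    k correct coin-flip guesses opens k more safes (Geometric counts)."""
    rng = random.Random(seed)
    counts = []
    for _ in range(trials):
        k = 0
        while rng.random() < 0.5:  # keep opening while guesses are right
            k += 1
        counts.append(k)
    return counts

counts = safes_opened_one_cycle()
share_1_or_more = sum(k >= 1 for k in counts) / len(counts)  # ~0.50
share_2_or_more = sum(k >= 2 for k in counts) / len(counts)  # ~0.25
```

The shares match the 50% / 25% / 12.5% pattern described above.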

Do you see the setup where he can get all the combinations in exactly 2 guesses, no matter how many safes there are?

This would match biology better than most versions.

I'm assuming the thief is flipping a coin to choose the next bit(s) and applying it to all the remaining unopened safes. If the thief is systematic he can do even better, toggling the next bit past the last one he knows to be correct, guaranteeing at least one success on each cycle after the first.

But no. I don't see the 2-guesses setup, nor where being systematic matches biology.

I may have a double post here - delete if necessary.


What I meant is that he is correct that the probability of success is higher if the problem is decomposable. This was in the context of generating successive 60-bit antibodies: the probability of finding 10 such antibodies by random search is much higher than the probability of finding one 600-bit protein. (That's because this is equivalent to the case you offered in which selection fixes successful draws in one partition. My statement about the probabilities is also equivalent to your statement about the waiting times being shorter for the decomposed case, since waiting time and probability of success with a fixed number of trials are inversely related.) My point is that even if you look at probability of success via random search rather than FI, selection operating in the immune system is capable of producing results that @gpuccio says can't occur; that is, finding a fairly small series of antibodies by chance is as improbable as finding a single 500-bit protein by chance.

As far as FI is concerned, you're right. It doesn't matter whether the problem can be partitioned or not: the FI is the same. This can be seen clearly from @gpuccio's own definition of FI, the negative log of (target space / search space). In the case of the safes, the target space and the search space are the same whether you're looking at one 100-bit safe or one hundred 1-bit safes. So the claim that FI for the 100 safes is lower is simply wrong. I really don't see the point of introducing FI to begin with.
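A back-of-the-envelope version of the antibody comparison (my sketch, using the numbers in this post): with selection fixing each 60-bit success, the expected number of random trials for ten antibodies is about 10 * 2^60, versus about 2^600 for the single protein in one draw.

```python
from math import log2

# Decomposed: ten 60-bit antibodies found one at a time, selection
# fixing each success -> expected random trials ~ 10 * 2**60.
decomposed_trials = 10 * 2**60

# Non-decomposed: one 600-bit protein in a single draw -> ~ 2**600 trials.
combined_trials = 2**600

# Gap between the two, measured in bits (doublings).
gap_in_bits = log2(combined_trials) - log2(decomposed_trials)  # ~536.7
```

Hundreds of bits of difference in difficulty, even though the FI of the two targets is the same.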


Adding: Gpuccio was clearly NOT making this assumption of dependence, where the 100 small safes each hold a piece of the same function as the one large safe.

Gpuccio seems to think the function cannot be unlocked at all until the large safe is unlocked. Maybe not quite that extreme, but I think he would say some minimum number of safes need to be opened before any function ($) is gained. I do not completely disagree. We could play the game of Open N Safes to Get K Function Dollars ($). That makes it more complicated, but the thief can still use his same tricks. It will take the thief a bit longer to make initial progress, and progress may be sporadic, but it is still much faster than guessing the 100-bit combination at random. It changes the setup of the problem, but not the form of the answer.

For example, every 5th small safe contains $5, with the others having $0. This is the same problem, with the thief needing to guess 5 bits instead of one.

If every guess is applied to every safe (like a game of bingo), he just needs to guess 1 then 0.

If he is randomized, we expect him to open all safes with, on average, three guesses, no matter how many safes there are.
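The "three guesses on average" figure is a coupon-collector wait over the two possible bits; a quick simulation (my sketch) of the randomized bingo-style thief:

```python
import random

def guesses_to_open_all(trials=20_000, seed=3):
    """Random bits applied to ALL remaining 1-bit safes at once: every
    safe is open as soon as both 0 and 1 have been tried, so this is a
    coupon-collector wait over {0, 1}, regardless of the safe count."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        seen = set()
        guesses = 0
        while seen != {0, 1}:
            seen.add(rng.randrange(2))
            guesses += 1
        total += guesses
    return total / trials

avg = guesses_to_open_all()  # ~3 on average
```

A systematic thief needs exactly 2 guesses (0, then 1), matching the earlier "exactly 2 guesses" setup.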

The reason this fits biology: if each safe is some functional unit, there usually is not a strong genome-position dependence. The function could arise from any part of the genome. Functions do not have to appear on the genome in some predetermined order.

Of course, it takes more than 1 bit to open a safe. But the parallelism and position independence completely change the game.

It is as if there are:

1. a large (unobservably large) number of safes, each one a potential function that could increase fitness.

2. Safe codes are longer, many bits each.

3. There is a whole genome full of thieves checking combinations.

4. If any thief at any time generates the code for any function, the safe opens and there is an increase in FI.

The parallelism, and the position independence of the thieves with respect to the safes, allows us to open many large-code safes in a relatively short amount of time, but certainly not all of them.

The high FI of proteins (the long codes of the safes) helps us immensely here, explaining why we are not seeing several new functions pop up every generation.
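A toy version of this many-safes, many-thieves model (my own sketch; the parameter values are arbitrary stand-ins, not biological estimates):

```python
import random

def simulate_openings(n_safes=50, bits=16, n_thieves=200,
                      generations=500, seed=4):
    """Each generation, every thief draws a random `bits`-bit code;
    if it matches any still-closed safe's code, that safe opens.
    Returns the generation at which each opening happened."""
    rng = random.Random(seed)
    codes = rng.sample(range(2**bits), n_safes)  # distinct safe codes
    closed = set(codes)
    opened_at = []
    for gen in range(1, generations + 1):
        for _ in range(n_thieves):
            guess = rng.randrange(2**bits)
            if guess in closed:
                closed.remove(guess)
                opened_at.append(gen)
    return opened_at
```

Typical runs open many, but not all, of the safes: openings accumulate steadily because any thief can open any safe, yet the long codes keep the per-generation rate low.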


So @Dan_Eastwood, for this last variant, if you know the number of safes, thieves, and bits per code, can you compute how long before 1 safe is opened?

How long before 10 are opened?

How long before x are opened, if x is much less than the total number of safes and the key size is non-trivially long?

How does it scale with the number of thieves (genome size)?


Clearly I'm not thinking big enough!

If I understand you correctly, parts of the function could arise on different parts of the genome? Also at different times, or the same time, would seem to follow. I think we might even see multiple functions from some units, but I'll leave that alone.

I didn't know there was going to be homework!

how long before 1 safe is opened?

For one specific safe this looks like a Geometric distribution; clearly, longer codes will be less likely (unless there is repetition? Ignoring that).
Wild guess: probability $\propto 1 - \frac{\text{bits}}{\text{thieves}}$

No … that can't be right …

For multiple safes, much faster, as I describe in Multiplying Probabilities.

How long before 10 are opened?

Given probability p of finding some function, this is a Negative Binomial probability distribution.

How long before x are opened, if x is much less than the total number of safes and the key size is non-trivially long?

Also negative binomial, or approximately so. With each discovered function the probability of the next will be slightly less, assuming a finite number of functions.
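The "slightly less probable each time" point can be sketched in a toy formula (my own, not exact): treat each cycle's chance of a new opening as proportional to the number of safes still closed, and sum the resulting geometric waits.

```python
def expected_wait_for_x(n_safes, per_safe_p, x):
    """Expected cycles until x safes are open, treating each cycle's
    chance of a NEW opening as (safes still closed) * per_safe_p and
    summing the geometric waits -- a negative-binomial-like total."""
    total = 0.0
    for opened in range(x):
        p_next = min(1.0, (n_safes - opened) * per_safe_p)
        total += 1.0 / p_next  # mean of a Geometric(p_next) wait
    return total
```

Early openings come quickly when many safes are still closed, and each successive opening takes a little longer as the pool is used up.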

How does it scale with the number of thieves (genome size)?

Depends on which "it" you mean. But bigger genomes mean many more opportunities to discover function, similar to my Multiplying Probabilities situation.

I think I'm failing this homework, but I haven't Googled.
