OK, I will try to simplify this point. FI, if correctly understood and applied, is related to the wait time: more or less as 2^FI.

The point is, FI is the number of bits necessary to implement one well-defined function. Without those bits, the function simply does not exist. That means that the function is treated as non-decomposable. Therefore, the wait time is approximately 2^FI. Therefore, FI, used correctly, expresses the probability of finding the function in a purely random system, if no necessity mechanism, such as NS, intervenes.

That is the purpose of FI. That is the reason it is useful.

Now, if the function can be demonstrated to be decomposable, FI must be analyzed taking into account the decomposition. Which, in a biological context, means the effects of NS.

It is not true that decomposition of a function has nothing to do with selection. In the case of the small safes, the wait time is very short because the simpler functions are recognized as such (the safe opens, and the thief gets the money). In a biological system, that means that the simpler function must work, so that it can be recognized and in some way selected. Otherwise, those simpler functions would not change at all the probability of the final result, or the wait time. If the thief had to try all possible combinations of 0 and 1 for the 100 safes, and became aware that something had happened only when all 100 safes were open, then the problem would be exactly the same as with the big safe.
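The arithmetic behind the safes argument can be sketched numerically. This is my own illustration, not a computation from the original discussion; the figures (10 small safes of 10 bits each, 100 bits in total) are assumed for concreteness:

```python
# Hypothetical illustration of the safes argument: size of the random
# search with and without feedback ("selection").
# Assumed parameters: 10 small safes, 10 bits per combination.

N_SAFES = 10        # number of small safes (assumed)
BITS_EACH = 10      # bits per small-safe combination (assumed)

# Without feedback: the thief only knows something has happened when the
# full 100-bit combination is right, so the search space is 2^100.
attempts_no_feedback = 2 ** (N_SAFES * BITS_EACH)

# With feedback: each safe opens (and is "selected") as soon as its own
# 10-bit combination is found, so the searches are sequential and short.
attempts_with_feedback = N_SAFES * 2 ** BITS_EACH

print(attempts_no_feedback)    # 2^100 = 1267650600228229401496703205376
print(attempts_with_feedback)  # 10 * 1024 = 10240
```

With feedback the total effort is the *sum* of the per-safe search spaces; without it, their *product*. That is the sense in which decomposition lowers the wait time only when each intermediate function can be recognized and selected.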

So, intermediate function is always a form of selection, and as such it should be treated. Any intermediate function that has any influence on the wait time has also the effect of lowering the FI, if correctly taken into consideration.

Moreover, a function must be a function: some definite task that we can accomplish with the object. The simple existence of 10, or 100, simpler functions is not a new function. Not from the point of view of FI as it must be correctly conceived and applied.

The correct application of FI is the computation of the bits necessary to implement a function: a function that does not exist without all those bits, and which is not the simple co-existence of simpler functions. IOWs, there must be no evidence that the function can be decomposed into simpler functions.

That said, 10 objects having 50 bits of FI each do not amount to 500 bits of FI.

And the wait time for a complex function, if FI is correctly applied, is more or less 2^FI.

If you want to conceive of and apply FI differently, applying it to co-existing and unrelated simpler functions, or to functions that can be proved to be decomposable, you are free to do as you like. But that application of the concept, of course, will not work, and it will be impossible to use it for a design inference.

Which is, probably, your purpose. But not mine.

So, if you insist that FI is everywhere in tons: in the starry sky, in the clouds, maybe even in the grains of sand on a beach, you are free to think that way. Of course, that FI is useless. But it is your FI, not mine.

And if you insist that the 100 safes and the big safe have the same FI, and that therefore FI is not a measure of the probability and of the wait time, you are free to think that way. Of course, that type of FI will be completely useless. But again, it is your FI, not mine.

I believe that FI, correctly understood and used, is a precious tool. That’s why I try to use it well.

Regarding the EVD, I am not convinced. However, if you think that such an analysis is better than the one performed with the binomial distribution, which seems to me the natural model for binary outcomes of success and failure, why don’t you try to make some analysis of that type, and let’s see the results? I am ready to consider them.
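As a toy check of the binomial framing (my own sketch; the values N = 20 and p = 0.01 are arbitrary, chosen only to keep the sums exact), the distribution over binary success/failure outcomes gives P(at least one success) = 1 − (1 − p)^N:

```python
from math import comb

# Toy binomial model: N independent attempts, per-attempt success
# probability p, each outcome a binary success/failure.
# N and p are arbitrary illustration values, not figures from the text.
N, p = 20, 0.01

# P(exactly k successes) under the binomial distribution.
pmf = [comb(N, k) * p**k * (1 - p)**(N - k) for k in range(N + 1)]

# Sanity checks: the pmf sums to 1, and P(at least one success)
# equals 1 - P(zero successes) = 1 - (1 - p)^N.
total = sum(pmf)
p_at_least_one = 1 - (1 - p) ** N
print(total)
print(p_at_least_one)
```

For very small p and large N, P(at least one success) ≈ N·p, which is the approximation implicitly used when comparing total attempts against 2^FI.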

The objection about parallelism I understand, in some measure. But you must remember that I have computed the available attempts of the biological system as the total number of different genomes that can be reached in the whole life of our planet. And it is about 140 bits, after a very generous gross estimate of the upper threshold.

So, the simple fact here is: we are dealing (always for a purely random system) with at most, at the very most, 140 bits of possible attempts everywhere, in problems that have, in most cases, values of FI much higher than 500 bits, for proteins for which no decomposition has ever been shown.

Why should parallelism be a problem? Considering all the possible parallel attempts in all existing organisms of all time, we are still at about 140 bits.
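The arithmetic of that bound can be sketched as follows. This is my own illustration using the two figures quoted above (about 2^140 total attempts, and an FI target of 500 bits taken as a round lower value):

```python
# Figures quoted in the text: at most ~2^140 total attempts over the
# planet's history (all organisms and all parallelism included), against
# a function of more than 500 bits of FI (500 used here as a round value).
LOG2_ATTEMPTS = 140
FI = 500

# Per-attempt success probability p = 2^-FI. With N = 2^140 attempts,
# the expected number of successes is N * p = 2^(140 - 500) = 2^-360.
log2_expected_successes = LOG2_ATTEMPTS - FI
print(log2_expected_successes)   # -360

# Since N*p << 1, the binomial P(at least one success) is approximately
# N*p itself: about 2^-360 ~ 4e-109.
p_at_least_one = 2.0 ** log2_expected_successes
print(p_at_least_one)
```

The point of working in log2 is that parallelism only ever adds to the 140-bit exponent, so any realistic amount of it leaves the gap to 500 bits essentially untouched.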

OK, I am tired now. Again, excuse me, I will probably have to slow down my interventions. I will do what I can. I would like to deal, if possible, with the immune system model, because it is very interesting. Indeed, I dedicated a whole OP to it some time ago:

**Antibody Affinity Maturation As An Engineering Process (And Other Things)**

And I think that this too is pertinent:

**Natural Selection Vs Artificial Selection**

And, of course, tornadoes, tornadoes…

Ah, and excuse me if I have called you, and your friends, neo-Darwinists. I tend to use the expression in a very broad sense. I apologize if you don’t recognize yourself in those words.

From now on, at least here, I will use the clearer term: “believer in a non-designed origin of all biological objects”. Which, while a little bit long, should designate more unequivocally the persons I have come here to engage with. Including, I suppose, you.