But not enough to provide better explanations, obviously:
1 in 2 amino acids is substitutable, so you conclude that fewer than 1 in 174224571863520493293247799005070000000000 amino-acid sequences is functional.
No. Both the reasoning and the conclusion are wrong. Most of protein sequence space is functional in some context, and that isn’t even the relevant issue. So, no, Bill. That’s wrong.
That isn’t the relevant issue when it comes to divergence of functional proteins, because here the distance between functions is what matters. Most of the time, the distance is 0 or 1 mutations. Many functions overlap, so most proteins are already multifunctional. They intrinsically cannot avoid being capable of multiple functions at the same time. A sticky surface, for example, is usually sticky to a lot of different materials and surfaces.
Your beliefs about the fraction of sequences that are functional are demonstrably wrong. The true fraction is many orders of magnitude higher than you imagine, and many orders of magnitude higher than the BS numbers you get from reading DI-funded liars.
Folding proteins make up somewhere in the range of 1-20% of protein sequences, and most of those have functions. Even proteins that can’t fold into well-defined tertiary structures still have functions.
The Texas sharpshooter fallacy permeates all creationist thinking on the matter of de novo discovery. The focus on specific, pre-specified (target) structure-function relationships isn’t what matters in evolution; what matters is the discovery of any adaptive function by any population in the biosphere.
And of course one could add that the fraction of sequence space that is nonfunctional isn’t directly relevant in evolution either, as even de novo discovery of novel functional proteins is usually aided by large rearrangements and fusions of pieces of existing genes, which means the “search” is intrinsically biased towards parts of sequence space already saturated in functional elements.
It’s worth noting that Bill’s estimates are many, many orders of magnitude worse than the BS numbers from DI-funded liars.
Douglas Axe’s misestimate of 10^-77 is nowhere near Bill’s misestimate of 10^-111. Bill’s estimates are so wrong that it’s not possible to illustrate how wrong they are. Comparing the sun to an orange, or the age of the Earth to a single year, doesn’t come close. Thinking the Higgs boson is larger than the universe, or the Earth younger than a Planck time, doesn’t come close. Even the idea that the universe contains just one electron that doesn’t oscillate through time is closer to reality than Bill’s results.
If you had the universe’s age in Planck times for every Planck volume in the universe, you’d be approximating the order of magnitude by which Bill is wrong about basically everything.
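For scale, that Planck-unit comparison is easy to sketch out. A back-of-the-envelope check in Python, using approximate round-number physical values of my own choosing (none of these constants come from the thread itself):

```python
import math

# Approximate physical quantities (rounded ballpark values, not from the thread)
AGE_OF_UNIVERSE_S = 4.35e17        # ~13.8 billion years, in seconds
PLANCK_TIME_S = 5.39e-44           # Planck time, in seconds
OBS_UNIVERSE_VOLUME_M3 = 4e80      # volume of the observable universe
PLANCK_VOLUME_M3 = 4.22e-105       # Planck length cubed

age_in_planck_times = AGE_OF_UNIVERSE_S / PLANCK_TIME_S      # roughly 8e60
planck_volumes = OBS_UNIVERSE_VOLUME_M3 / PLANCK_VOLUME_M3   # roughly 1e185

# "The universe's age in Planck times for every Planck volume":
combined = age_in_planck_times * planck_volumes
print(f"~10^{math.log10(combined):.0f}")  # roughly 10^246 with these rounded inputs
```

Even that astronomically generous allowance only reaches the ~10^246 range, which gives a sense of how fast these exponents grow.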
What “details of the mathematics”? I suggest that if you want anybody to respond to a calculation, you’d do well to begin by actually presenting one. Even that may not suffice until it is demonstrated that the inputs are in some way a fair approximation of the corresponding quantities observed in nature, granted, but pretending that a mathematical argument has been made, or stands, when none has even been articulated will most certainly fail to move things forward.
Use the equation N^L. Use 2 for N and compare skeletal alpha actin with what Axe was doing mutagenesis experiments with… L = the number of residues. Axe’s experiment was making a change to a 150-residue protein folding domain. Skeletal alpha actin has 371 residues.
Assuming 50% substitution for residues therefor N=2.
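Plugging those stated inputs into N^L is trivial to do exactly, and it reproduces the order of magnitude disputed elsewhere in the thread; a minimal Python sketch (the variable names are mine):

```python
# Stated inputs from the post above: N = 2 (assumed 50% substitutability),
# L = 371 (residue count of skeletal alpha actin)
N, L = 2, 371
total = N ** L

# Order of magnitude = number of decimal digits minus one
print(len(str(total)) - 1)  # 111, i.e. 2^371 is on the order of 10^111
```

Which is where the 10^-111 figure criticized above comes from: it is simply 1/2^371.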
I could start by saying:
You claim superior mathematical knowledge because you caught my error, yet you seem to have no ability to understand why the functional sequence space of skeletal alpha actin does not equal the functional sequence space of Axe’s beta-lactamase domain. You also made a post that shows a very big number but no conceptual understanding of the problem, and it looked like an attempt to deceive people.
Why don’t we drop the silly ad hominem attacks and have a productive discussion?
… intrinsically flawed by him deliberately manipulating the protein in question to be synthetically intolerant to mutation (he deliberately engineered his enzyme to be extra sensitive to temperature). That is bad enough by itself, but much more importantly, the experiment was physically incapable of answering the question of how many protein sequences can perform the function that TEM-1 beta-lactamase performs (we know metallo-beta-lactamases can do it too). Most crucially, it is completely incapable of elucidating the probability that some protein in some organism will have some adaptive function.
Axe’s number is for those reasons completely worthless in any calculation you fantasize about being able to perform.
Because it’s impossible to have a productive discussion with someone genuinely too stupid, ignorant, and/or psychologically compromised to participate.
I don’t get the “therefor” (sic) part. For one, 50\%=\frac12\neq2. But let’s, for no reason at all, just pretend there is some sort of reasoning here. Now, what on earth is the inverse of the substitution rate raised to the power of the residue count supposed to reflect? What is the “detail of the mathematics” here? What is this N^L supposed to be? Where is your alleged “equation”?
It is the number of ways to arrange a sequence that is L elements long with N possibilities at each position. E.g., a US telephone number has 10 possible digits in each position and is 10 digits long. The total number of possible numbers is 10^10, or 10 billion.
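The phone-number arithmetic itself checks out, for what little that is worth; a two-line Python sketch of the counting rule (function name is mine):

```python
# N possibilities per position, L positions: N**L total arrangements
def arrangements(n: int, length: int) -> int:
    return n ** length

print(arrangements(10, 10))  # 10000000000, i.e. 10 billion possible 10-digit strings
```

The dispute that follows is not about this count, but about whether it measures anything relevant.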
Which is an irrelevant number for all the reasons previously stated, you unfathomable clown. How can you not fathom this?
What fraction of numbers are usable, and how are they clustered? The total number of possible phone numbers is irrelevant. For the millionth time, how are you this utterly blasted in the skull?
I know how to interpret an expression in the context of combinatorics, Bill. You could have picked a better example than one where the base and the exponent happen to be the same, but that’s neither here nor there. Anyway, I’m still not seeing an equation. Just randomly plugging in values for N and L does nothing to illustrate any point. And for someone who claims to have studied mathematics, you seem to struggle embarrassingly with actually articulating yours.
If there are L residues and \frac1N of them are substituted, then the number of substituted residues is \frac LN. The number of possible substitution patterns in that case (discounting what any residue was substituted with, mind you, merely counting whether each residue was substituted or not) is {L\choose{L/N}}=\frac{L!}{(L/N)!\,\cdot\,(L-L/N)!}. In our case, with N=2 and L=371, this number is at least somewhere in the right ballpark of between 10^{108} and 10^{112} (what’s a factor of ~34k between friends, right?). Specifically for N=2 (and no other value of N, so your analogy with the phone numbers is completely off base) there certainly exist worse approximations than 2^L, to give you some credit – though something tells me you couldn’t render or comprehend a proof of that approximation becoming more accurate for higher values of L, but I digress…
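That ballpark claim is easy to verify with exact integer arithmetic; a quick Python check, assuming L/N = 371/2 rounds down to 185 substituted residues:

```python
import math

L = 371
k = L // 2  # 185 substituted residues for N = 2 (odd L rounded down; my assumption)

ways = math.comb(L, k)  # choose WHICH residues are substituted, ignoring what with
total = 2 ** L          # the 2^371 approximation discussed above

# Orders of magnitude = digit counts minus one
print(len(str(ways)) - 1)   # ~110, inside the stated 10^108 to 10^112 ballpark
print(len(str(total)) - 1)  # ~111
```

So the binomial count and the 2^L approximation do land within a few orders of magnitude of one another at these values.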
Of course, because we are discounting what each residue in each case is substituted with, we are severely underestimating the size of the possible outcome space. One might naively think this may be an argument in favour of your position. However, it displays a profound misunderstanding: You measure this sequence space by a primitive counting measure. This may at first seem like an intuitive approach. Unfortunately, unlike mathematics, or logic more broadly, science is not rooted in intuition, but in data. You have presented no argument as to why this is an appropriate measure given the problem at hand, and there seem to be very strong arguments against it, such as:
It is not the case that there exists only one capital-C “Correct” sequence, all deviations from which are fundamentally unworkable.
It is not the case that deviations towards unworkable sequences are as likely as deviations towards equally workable, better, or marginally less workable sequences.
Since you love analogies so much, your “math” is the equivalent of assuming that, since between the two colours there are some twelve different piece types on a chess board and 64 squares, the space of all possible chess positions is therefore 13^{64}. Never mind that this includes such ridiculous positions as ones with pawns on the back rank, more than eight pawns of one or both colours, more pieces in general than both sides had at the outset, fewer than one king of each colour, or more than one king per colour.
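The naive chess count in that analogy is straightforward to reproduce. For comparison, published computational estimates (e.g., John Tromp's) put the number of legal chess positions somewhere near 10^44 to 10^45, so the naive counting measure overshoots by dozens of orders of magnitude, which is exactly the failure mode being described:

```python
# 12 piece types (6 per colour) plus "empty square" = 13 states per square,
# raised to the 64 squares of the board
naive_positions = 13 ** 64

print(len(str(naive_positions)) - 1)  # ~10^71: vastly overcounts, includes illegal boards
```

Counting raw arrangements without any legality (or, for proteins, functionality) constraint inflates the space in precisely this way.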