It’s only dangerous because the answer disagrees with your characterization of the study you cited. Specifically, the answer could range from 1 to the sum of the numbers you list later in your response to my question. (If you don’t understand how this statement is correct, feel free to ask.)
In the 10+ years since the Durrett and Schmidt article came out, how many other papers have been published that expand on (or correct) their assertions? What (if any exists) is the consensus in the field? @bjmiller, surely you know that Durrett and Schmidt are not the last (or first) word on this subject.
We’re not talking about an “argument” and we’re not talking about “extreme protein rarity.” We’re talking about the ratio of protein sequences that are functional to the total number of protein sequences.
Here’s an example of you misrepresenting the data. While the library was 2.7 billion, they had 5 hits, meaning that the ratio (what you’re allegedly, but not really, addressing) is on the order of 1 in roughly 540 million.
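The arithmetic is trivial (a back-of-envelope check; the 2.7-billion library size and 5 hits are the figures cited above):

```python
# Back-of-envelope: ratio of functional sequences in the screened library.
library_size = 2.7e9   # sequences screened (figure cited above)
hits = 5               # functional sequences recovered

ratio = library_size / hits
print(f"about 1 in {ratio:.2e}")  # about 1 in 5.40e+08, i.e. ~1 in 540 million
```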
If Doug Axe’s extrapolation from lousy binary data had any validity, they would have never found any. Note that Axe didn’t measure enzymatic activity. Why didn’t he, Brian?
You’re misusing both “designed” and “stable” here. Are you saying that if the H and L V regions were entirely positively-charged residues, that the C regions would still maintain a “stable structure,” whatever that means? Is more stability always associated with better function?
Therefore, Axe’s sloppy extrapolation from an enzyme with the identical activity is wrong.
There’s no contrast. Only some do, and even worse for you, abzymes and many other enzymes have the amino-acid residues that form the catalytic site on separate subunits!
Gee, apparently you missed this paper:
https://www.pnas.org/content/93/11/5590.long
Now, given that Doug Axe, the first author, wrote this:
“Of the active mutants produced, several have no wild-type [of 13] core residues. These results imply that hydrophobicity is nearly a sufficient criterion for the construction of a functional core and, in conjunction with previous studies, that refinement of a crudely functional core entails more stringent sequence constraints than does the initial attainment of crude core function.”
How can you make the obviously and objectively false claim that “amino acids throughout the protein structure are specified,” Brian? Are you really not familiar with Axe’s own work, or are you cynically cherry picking?
Abzymes do not require “generating a new fold.” They are made up of repeats of the immunoglobulin fold, a very common fold. The immunoglobulin structure brings those amino-acid residues, which are on two different proteins, together properly in 3D space and provides structural support.
No difference there. You clearly don’t know what you are talking about.
How much grant money will the DI bet? You’ve stumbled onto a testable hypothesis that I’ll bet none of you have the slightest interest in testing.
I don’t see any contrast there. Why would you use the segue “in contrast” when there’s no contrast?
Again, how much money would you (and the DI) like to bet on your claim? Do you have faith in it?
We don’t need to “evolve a new protein fold.” A single fold has many different functions and different folds can have the same function. A single protein, functioning normally, can adopt two different folds; that’s how many proteins work. It’s a structural classification, not a functional one.
The first is demonstrated well in this paper:
https://www.pnas.org/content/107/32/14384
Note that the authors use the correct term, “protein fold.”
It all comes back to this ludicrous idea of pre-specification.
I’m pretty sure that the average waiting time for the origin and fixation of two particular pre-specified mutations is a meaningless calculation in the context of particular evolutionary transitions. Evolution isn’t (and wasn’t) searching for those two mutations. It’s searching for any adaptive mutations. So what we really need to know is how often adaptive double mutants could be expected to occur on average.
The rate of fixation of neutral alleles is equal to the rate of mutation, which would be somewhere in the range of 100 to 150 every generation in the human population.
So it seems to me the real question is, on average, how many of those 100-150 individually-neutral mutations are adaptive in combination? That would tell you how many double mutants that are neutral alone but adaptive together you could expect to become established in a population over a given number of generations.
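To make the shape of that question concrete, here is a toy expectation sketch. The 100-150 neutral fixations per generation is the figure from the post; the number of generations and the fraction of pairs that turn out to be jointly adaptive are hypothetical placeholders, the latter being exactly the unknown quantity that would need to be measured:

```python
# Toy sketch: expected number of jointly-adaptive pairs among
# individually-neutral mutations. All values below except the 100-150
# range are illustrative assumptions, not measured quantities.
neutral_per_gen = 125          # midpoint of the 100-150 range cited above
generations = 10_000           # arbitrary illustrative horizon
p_jointly_adaptive = 1e-6      # hypothetical fraction; the real unknown

# Unordered pairs among the new neutral mutations in one generation.
pairs_per_gen = neutral_per_gen * (neutral_per_gen - 1) // 2  # 7750

expected_adaptive_pairs = pairs_per_gen * generations * p_jointly_adaptive
print(pairs_per_gen, expected_adaptive_pairs)
```

The point of the sketch is that everything hinges on the empirical value of `p_jointly_adaptive`, which no pre-specification argument supplies.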
Can it be that most of them are adaptive?