Gauger and Mercer: Bifunctional Proteins and Protein Sequence Space

You seem to place more emphasis on negative data than on positive data, while the rest of us do the converse. Is there a reason why you do this?

That is perhaps a good starter for a follow-up topic.

Assuming the proportion of genes that can be substituted is generalizable, can we take it as a reasonable estimate of the proportion of missing enzymatic activities that could be supplied by the available starting enzymes? If so, and it is the best estimate I know of, since it was actually measured, then roughly 80% of the time a new enzymatic need would go unmet.

You see, everyone assumes such plasticity of enzyme function was easy when evolution was setting things up, so that new chemistries could be obtained in just a few mutations. But evidence for such plasticity now is limited to highly promiscuous enzymes, or to enzymes with activity toward related compounds. I have looked at examples of “newly evolved” enzymes, enzymes that evolved in response to man-made chemicals. The ones I have seen can be traced back to a pre-existing sequence that acquired a modified substrate preference after one or two mutations.

I had an experiment planned to expand this idea, but it proved very technically challenging given our resources. The question I wanted to ask was this. Assume a starting minimal cell with the machinery of replication, transcription, translation, cell division, and so on: all the essential genes of E. coli, but lacking the biosynthetic pathways. We already know from Patrick et al.'s research which genes can be substituted for directly, and which can't.

Which biosynthetic enzymes could be rescued by one of the 300 minimal genes after a single mutational change?
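
Just to get a feel for the scale of that question, here is a back-of-the-envelope count of the one-mutation neighborhood around the minimal gene set. The gene count comes from the thread; the average gene length and everything else are my own illustrative assumptions, not numbers from the actual experimental plan.

```python
# Back-of-the-envelope size of the one-mutation neighborhood to be screened.
# All values are illustrative assumptions, not figures from the planned experiment.
n_genes = 300           # essential "minimal cell" genes assumed available as starting points
avg_gene_len_nt = 900   # assumed average gene length in nucleotides (~300 codons)
subs_per_site = 3       # possible single-nucleotide substitutions at each position

single_mutants_per_gene = avg_gene_len_nt * subs_per_site
total_single_mutants = n_genes * single_mutants_per_gene

print(f"~{single_mutants_per_gene:,} single-nucleotide variants per gene")
print(f"~{total_single_mutants:,} variants across all {n_genes} genes")
# ~2,700 per gene and ~810,000 overall: that is the sequence space sitting
# one mutation away from the minimal gene set, for each missing activity.
```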

We ended up dropping it because it would receive the same criticism as the bioF/Kbl work: “We don't expect modern enzymes to have the right sequence or plasticity to be close enough to make the transitions. Yet we know they must have, because they did.” That's a paraphrase, not a quote.

Believe me, I’d rather have an experiment with a positive result.


Even though we merely enlarged the binding pocket of our myosins, I’m certain that charge plays an equally important role in substrate specificity.

I don’t think it’s very generalizable.

Generalizing anyway, I would say this experiment suggests that >20% of new needs can be met.

For me the positive proportion is a minimum, as we can’t capture all the possibilities in a single experiment or series of experiments. Technical issues are almost always going to increase the negatives at the expense of the positives.

Yes, charge does, and so does the orientation of key catalytic side chains. I worked with a class of enzymes known as PLP-dependent transferases. They all used PLP as a cofactor; actually, it's more than a cofactor, because it stays bound in the active site.

The unparalleled catalytic versatility of PLP, the active form of vitamin B6, originates from its unique electron-sinking properties, which stabilize reaction intermediates, thus lowering the activation barrier during catalysis. At least five different protein scaffolds arose during evolution to bind PLP and harness its catalytic functionality. The role of the apoenzyme scaffolds is to assist in the proper orientation of the substrate’s reacting groups relative to the π-electrons of PLP, to promote reactivity and control reaction specificity. In addition, the active site residues interacting with the leaving groups provide either stabilizing or destabilizing interactions to direct catalysis [1].

As a consequence, PLP-dependent enzymes are unrivaled in the variety of reactions they catalyze and in the highly diverse metabolic pathways they are involved in, including the conversion of amino acids, one-carbon units, biogenic amines, tetrapyrrolic compounds, and amino sugars. These biocatalysts also play a key role in sulfur assimilation and its incorporation into cysteine, biotin, and S-adenosylmethionine.

Some of these enzymes reacted with more than one substrate, and some were very specific.

@T_aquaticus

And are those small steps selectable? One mutation at a time? I know of one experiment that demonstrated that, and it was for enzymes that already had overlapping function (I think; it's been years since I read it).

No, I am not concerned with knocking out beta-lactamase. I am asking what the first cell encountering penicillin did to survive. That, by the way, is an easy problem to solve, since all that’s needed is hydrolysis.

I don’t see why they would all need to be selectable. Interactions between neutral mutations and new mutations can certainly change specificity. There are also large recombination events that can bring two folds from different proteins together in the same protein. We can also add protein-protein interactions that can produce enzymatically active proteins with multiple subunits.

It probably died.


I agree. They could even be deleterious, or null in diploids, as long as there's no haploinsufficiency.

Not true. It all depends on concentration. The first cell encountering penicillin likely encountered it at very low concentrations and faced essentially zero selective pressure to adapt.


@Mercer @T_aquaticus

How many deleterious or near-neutral mutations can be within reach of a random walk? Remember, back mutation and gene loss (while the gene is non-functional) are a problem. Crossing a valley of no function longer than three mutations is highly unlikely.
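
To make that intuition concrete, here is a toy Monte Carlo sketch of a single lineage trying to drift across such a valley. It ignores population dynamics entirely, and every probability in it is a made-up illustrative value rather than a measurement; it only shows how quickly the odds fall off as the valley gets longer.

```python
import random

# Toy model of a single lineage drifting across a "valley of no function":
# to gain the new activity it must accumulate k specific mutations, each of
# which can revert, while the non-functional gene can also be lost outright.
# The probabilities below are illustrative assumptions only.
P_FORWARD = 1e-6   # gain the next needed mutation
P_BACK    = 1e-6   # revert one of the mutations already gained
P_LOSS    = 1e-5   # lose the gene entirely (deletion, decay into a pseudogene)

def crosses_valley(k):
    """True if the lineage reaches k needed mutations before the gene is lost.
    Only the order of events matters, so we draw the next event directly
    rather than stepping generation by generation."""
    steps = 0
    while True:
        w_forward = P_FORWARD
        w_back = P_BACK if steps > 0 else 0.0
        total = w_forward + w_back + P_LOSS
        r = random.random() * total
        if r < w_forward:
            steps += 1
            if steps == k:
                return True
        elif r < w_forward + w_back:
            steps -= 1
        else:
            return False   # gene lost before the valley was crossed

trials = 100_000
for k in range(1, 5):
    hits = sum(crosses_valley(k) for _ in range(trials))
    print(f"valley of length {k}: crossed in {hits} of {trials:,} lineages")
# The success count drops by roughly an order of magnitude per extra step,
# which is the intuition behind "longer than three mutations is highly unlikely."
```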

That’s a good point. There could have easily been a concentration gradient surrounding the fungi producing the penicillin, and non-resistant bacteria could have existed somewhere in that concentration gradient.

Obviously, beta-lactamase genes are not ubiquitous. If they were, then fungi would have lost the genes for making beta-lactams at some point.

According to my understanding of population genetics, the rate of fixation of neutral mutations is determined by the mutation rate per unit of time. The more time there is, the more fixed neutral mutations you will have.
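
For reference, the textbook result behind this statement (Kimura's neutral substitution rate) is a one-line calculation; the symbols are the standard population genetics ones, not anything specific to this thread.

```latex
% Neutral substitution rate, per generation:
% new neutral mutations arise at rate 2N\mu in a diploid population of size N,
% and each new mutation fixes with probability 1/(2N), so
k = 2N\mu \cdot \frac{1}{2N} = \mu
```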

Why?

For a human being, how many mutations are fixed per year or per generation? Technically they don't have to be fixed; they just have to co-occur.

Durrett and Schmidt estimated 6 million years to fix a single mutation completing a DNA binding site of 8 nucleotides, where a specific sequence is required and 7 of the 8 positions are already correct, anywhere within a 1 kb stretch of DNA. This is meant to model the evolution of a new transcription factor binding site, not a protein. But still.

About 60,000 years for the mutant to appear, and roughly 100 times that to fix, which is where the 6-million-year figure comes from. Their paper caused a stir. How do they solve the problem? By saying the site could have evolved in front of any of 20,000 genes. Is that realistic?

Pop gen isn’t my strong suit, but if I understand it correctly the number of neutral mutations fixed per generation is the mutation rate, independent of population size. Therefore, about 50 to 100 mutations become fixed per generation in humans.

Of course, a mutation doesn't need to become fixed in order for another mutation to interact with it. If a mutation is found in 5% of the current human population, that is about 350 million people. You only need about 200 million people, on average, to produce all possible substitution mutations at every position in the human genome. Therefore, if a beneficial mutation requires a prior neutral mutation, you don't have to wait for that neutral mutation to fix.
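
Rough arithmetic behind those figures, using common textbook approximations rather than exact values:

```python
# Rough arithmetic behind the "every point mutation gets sampled" claim.
# Parameter values are textbook approximations, not exact measurements.
genome_size = 3.2e9          # haploid human genome, base pairs
mu_per_site = 1.2e-8         # point mutation rate per site per generation
subs_per_site = 3            # possible substitutions at each position

de_novo_per_copy = genome_size * mu_per_site      # ~38 new mutations per transmitted genome copy
de_novo_per_birth = 2 * de_novo_per_copy          # both parental genomes contribute, ~70 to 80

possible_point_mutations = genome_size * subs_per_site   # ~9.6e9 distinct substitutions

people_needed = possible_point_mutations / de_novo_per_birth
print(f"~{de_novo_per_birth:.0f} de novo mutations per birth")
print(f"~{possible_point_mutations:.2e} possible point mutations")
print(f"~{people_needed:.2e} births to match that count, on average")
# ~1.3e8, i.e. on the order of 100 to 200 million births, in the same ballpark
# as the figure above (ignoring that some mutations recur while others never
# appear; a coupon-collector correction would raise the number somewhat).
```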


I think the difference between your estimate and theirs is the specificity of the sequence they are asking for, which seems more realistic than just asking how many mutations anywhere become fixed.

Good pop genetics questions, @Agauger. I'll answer elsewhere in a few days, when this thread closes.

That’s where the Sharpshooter fallacy comes in. You are focused on just one protein and possibly on just one amino acid in that protein. What is actually happening is that all proteins are accumulating mutations in parallel. You need to determine how many beneficial mutations require an initial neutral mutation across the entire genome before you can calculate the probability of getting just one of them.


@T_aquaticus

You like the Sharpshooter fallacy, don’t you?

How are you going to estimate how many beneficial mutations there are? Are you just estimating how many neutral pairs there might be within a kb of each other, and saying that one out of a thousand of those pairs is beneficial? OK, then what's the number for three? Four? What's the waiting time?

Don’t forget back mutation and epistasis.

I don't think it can be calculated, which is why I don't put much stock in those who claim that these features are improbable under an evolutionary model. It's a bit like calculating the probability that one specific lottery winner would win while ignoring all of the losers. If we did that, then we should see a winner in only one of every 200 million drawings, but we don't.
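
The lottery analogy can be put in numbers. The 1-in-200-million odds and the assumption that 200 million tickets are sold per drawing are illustrative, not actual lottery statistics.

```python
# The lottery analogy in numbers: the chance that one prespecified ticket wins
# is tiny, but the chance that *some* ticket wins is not.
# Both parameters below are illustrative assumptions.
p_specific = 1 / 200_000_000    # probability a given ticket wins a given drawing
tickets_sold = 200_000_000      # assumed tickets sold per drawing

p_someone_wins = 1 - (1 - p_specific) ** tickets_sold
print(f"P(a prespecified ticket wins) = {p_specific:.1e}")
print(f"P(at least one ticket wins)   = {p_someone_wins:.2f}")   # ~0.63
# Focusing on the prespecified ticket (one protein, one residue) is the
# sharpshooter error; evolution only needs some path, somewhere, to pay off.
```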
