Gotta get back to work. I’ll move this into a new thread. See if you can pull a good set of proteins from SwissProt or something. Perhaps we limit to cytosolic proteins, because the predictors work best on them.
Sure, but it’s sleepy time for me now so we’ll have to wait till tomorrow.
It’s also true that mutations that increase the stability of a purified protein in solution can easily decrease the stability of the complex in which most of that protein functions.
So absent encyclopedic knowledge, 50/50 is the best estimate.
Note that all of these cardiomyopathy mutants are shifted, in other words MORE stable than wild-type (normal). Two cause hypocontractility, the rest cause hypercontractility; opposite functional effects. None are good mutations to carry.
I can see how a mutation with one effect in one environment can have another effect in another environment; that makes perfect sense to me. But it’s not obvious to me why your latter statement should be assumed.
Attempts have been made at estimating the universal distribution of the stability effects of mutations on proteins. From figure 2 of the paper linked earlier:
They’re clearly shifted more towards destabilizing than towards stabilizing. The data comes from reference 31, which is this paper: The Stability Effects of Protein Mutations Appear to be Universally Distributed (ScienceDirect).
I’d just like to add that I don’t really have any dog in this fight. The conclusions aren’t mine, I’m just relaying what I read in these articles. If their predictions are contradicted by better data, I have no compunction accepting that and moving on. Don’t shoot the messenger.
I don’t care about “improved function”. I’m perfectly fine with improved function or different function, or less function or no function. I’m even fine with mutations that increase stability. All I’m saying is that if you continue to lose stability on average, you eventually won’t be stable. That’s not really a controversial claim, and does not warrant extreme extrapolation in the opposite direction. Can’t I hypothesize this without also having to believe that natural selection will turn every protein into a neutron star?
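The “lose stability on average, and eventually you won’t be stable” point can be put as a toy random walk. This is a sketch with illustrative numbers of my own choosing (a mean destabilizing shift of +1 kcal/mol per mutation and a starting folding energy of −5 kcal/mol are assumptions, not values from the cited papers), and it deliberately ignores selection:

```python
import random

# Toy model: cumulative folding free energy under repeated mutation
# with NO selection. Each mutation shifts DDG by a Gaussian draw whose
# mean is mildly destabilizing (+1 kcal/mol). All numbers illustrative.
random.seed(1)

def cumulative_ddg(n_mutations, mean_ddg=1.0, sd=1.7):
    """Sum of n random DDG draws; positive = destabilizing."""
    return sum(random.gauss(mean_ddg, sd) for _ in range(n_mutations))

# A marginally stable protein (DG_fold ~ -5 kcal/mol) crosses zero
# (i.e., no longer favors the folded state) after only a handful of
# unselected mutations in most trials.
dg_fold = -5.0
trials = [dg_fold + cumulative_ddg(10) for _ in range(1000)]
unfolded_fraction = sum(dg > 0 for dg in trials) / len(trials)
print(f"fraction unfolded after 10 unselected mutations: {unfolded_fraction:.2f}")
```

The point of the sketch is only the direction of the drift, not the particular numbers: any per-mutation average above zero eventually exhausts the stability margin if nothing pushes back.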
If you are scoring for similarity toward things that by definition already fold, then the closer you are to that, the closer you are to something that folds and has so far been sustainable, as opposed to everything else. To characterize “everything else”: I think studies show that most mutations decrease stability, and I read at least a few that seemed to agree that about 30% of mutations actually prevent folding on average. This would seem to indicate an extreme sensitivity to sequence (most proteins are not 3 amino acids long), and gives me no reason to believe a mutation shouldn’t be expected to decrease stability even in a protein it doesn’t break.
I would agree, in general. The two main properties are the stability of the tertiary structure and the chemical properties of amino acids in the active site.
What I am not sure of is whether PolyPhen can differentiate between stability and change in tertiary structure. I could see a scenario where PolyPhen labels something as damaging that is in fact beneficial. There is also the strong possibility of compensatory mutations preventing damage from other mutations.
In the most extreme general sense, this is probably true. Of course, the reality is complicated. Different parts of the protein are going to have different sensitivities to changes, and it also depends heavily on what the new amino acid is. Swapping out a leucine for an isoleucine may not have any effect, but swapping out a leucine for a proline could have a drastic effect.
I don’t think you looked at the data.
There are two problems with that:
- We’re not talking about continuing to lose stability.
- Instability plays an essential role in protein function, particularly in regulation, so not being stable is not necessarily nonfunctional. The disease-causing conformation of prions (PrP) is incredibly stable. That’s not good.
I wouldn’t. Even random proteins fold. It’s what they do.
Again, “something that folds” is a non sequitur. You didn’t look at the data, did you?
Then how did I just show you 8 killer mutations that increased stability?
I think that you’re not understanding the basis of folding.
That’s because you are discounting the importance of instability in regulation. Proteins don’t just sit around in solution to be scored by a computer program. To understand these principles, I would suggest that you do a deep dive into the sarcomere, because it’s one of our best-understood systems, and it is fine-tuned to sit on a razor’s edge (instability), to be turned on or off by minute changes in intracellular calcium. Even the most structurally stable component, tropomyosin, is “detuned” to help it function as part of that razor’s edge.
I agree with all of this. I’m really interested in what you guys discover in the simulations you are going to run (by the way, it’s really encouraging to see people who want to experiment instead of just pontificate). I feel like Joshua’s hypothesis accurately represents the evolutionary perspective, that if sustainability has been achieved, then there should be roughly a 50/50 split, but I feel like you are more likely correct, and the literature seems to support the idea that even conserved mutations tend to reduce stability in contrast to ancestral sequences. The question then becomes: does purifying selection have the strength to offset these reductions? I have backup hypotheses which are harder to falsify and so are weaker. One of those is that many proteins seem to be designed with a certain amount of binding affinity that tunes them to behave in beneficial ways, like fibrin or fibrinogen or thrombin or whatever the one is that falls apart seconds after being activated so blood only clots locally.
crap I just realized that’s not what you said. You do believe conserved sequences offset that reduction.
It’s because proteins interact with themselves, other proteins, substrates, etc. All of these things are missed when scoring stability of a protein, either in vitro or in silico, when it’s not interacting with anything.
But FoldX is only being run on the protein doing absolutely nothing in aqueous solution. What if the wild-type protein becomes less stable when bound to substrate, but the mutant, while less stable in solution, becomes more stable but less functional when bound to substrate? What if the converse occurs? Can you think of any biochemical reason why all four combos wouldn’t occur?
Protein structure predictions are notoriously difficult, so I’m not sure how far we can really get. I have done very little work in this area, but have worked with others who have. Good protein modeling often requires time on supercomputers, so the best we may be able to do is simple comparisons for alpha helices and beta sheets.
One thing I noticed in the original paper and also the Wikipedia writeup on ApoB: “Mice overexpressing mApoB have increased levels of LDL ‘bad cholesterol’ and decreased levels of HDL ‘good cholesterol’. Mice containing only one functional copy of the mApoB gene show the opposite effect, being resistant to hypercholesterolemia.” This would seem to indicate that more is going on here than a simple A-takes-care-of-B type of analysis. It would almost seem to argue in the opposite direction from a gene-dosing perspective. In other words, it may be that blunting the effects of the protein product actually improves the ability of polar bears to deal with the required increase in dietary fat. Of course, that would seem to vindicate Behe’s rule, but it doesn’t exactly address my concerns or the way they overlap with his in terms of stability.
So I’m having trouble keeping up with all the quotes and redundancy here. If I’m replying at the wrong levels and duplicating responses, I’m terribly sorry. This format is killing me.
Well, yes, starting with the important fact that the adjectives “good” and “bad” only have that meaning in the context of one’s susceptibility to atherosclerosis, not for all the rest of biology involving cholesterol.
That is right, which is why the whole notion of degradation of genes is poorly defined. What you are saying with stability is different. It has better definition, but still suffers, as @Mercer has explained.
As @Rumraket clarifies, he means there is about a 65–70% chance that a mutation will reduce stability in a folded protein. I don’t object to this. That is essentially 50/50, and it is self-limiting too: unstable proteins are more likely to get a mutation that increases stability. I had thought he meant that mutations that increased stability were exceedingly rare (that isn’t true).
If you keep negative selection in mind, we expect that proteins will remain as stable as they need to in order to work. I’m not seeing how this is a problem for evolution.
And they often need to be less stable.
Right, but why should that skew the distribution of stability effects of mutations towards more destabilizing?
You appear to be suggesting that if we put a protein back in its natural environment in a cell, this will reverse the effects of destabilizing mutations toward stabilizing more often than it reverses the effects of stabilizing mutations toward destabilizing. I see zero reason why this should be the case.
I would agree that putting any given protein in another context will reverse SOME of the stability effects of certain mutations, but I don’t see why the cellular environment would bias this potential reversal towards stabilizing, compared to the distribution outside of that context.
No, I agree all four combos can occur, but they can occur for both stabilizing and destabilizing mutations.
Suppose mutations are ~30% more likely to be destabilizing than stabilizing in vitro. Now we move the protein to its “natural” environment where it evolved, inside a cell, and we predict that some of the mutations will have reversed effects. Some mutations that are destabilizing in vitro are stabilizing in vivo, but why would it not also be the case that some mutations, in at least equal proportion, are stabilizing in vitro but become destabilizing in vivo?
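The symmetry point reduces to back-of-envelope arithmetic. Suppose (as an illustrative assumption) a 65/35 destabilizing/stabilizing split in vitro, and suppose the cellular context flips the sign of the same fraction of each class:

```python
# Illustrative arithmetic (numbers are assumptions, not measurements):
# in vitro, suppose 65% of mutations are destabilizing, 35% stabilizing.
p_destab_vitro = 0.65
p_stab_vitro = 0.35

def in_vivo_destab_fraction(f_reverse):
    """Fraction destabilizing in vivo if the cellular context flips the
    sign of the same fraction f_reverse of each class of mutation."""
    return p_destab_vitro * (1 - f_reverse) + p_stab_vitro * f_reverse

for f in (0.0, 0.2, 0.5):
    print(f"reversal fraction {f:.1f} -> "
          f"{in_vivo_destab_fraction(f):.2f} destabilizing in vivo")
```

A symmetric reversal shrinks the bias toward 0.50 but can never flip it; only an asymmetric reversal (more destabilizing-to-stabilizing flips than the converse) could, and that asymmetry is exactly what would need to be demonstrated.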
I don’t see why the intracellular environment should produce a counterbiasing effect on the stability effects of mutations.