Detwiler: Questions Behe, PolyPhen, and Ratchets

You realize I don’t have a problem believing that things are breaking down, right? We already covered that some proteins have a balanced level of stability. I’m sure there are even quite a few neutral mutations that increase the stability of these as well. I’m just trying to see if there’s a large-scale trend.

What premises were those?

Could you be wrong about this “seeming” and what follows from it?

Their level of internal “integration,” in terms of each member’s contribution to the position of the others and of the active sites, seems equally extreme.

It’s not clear what this “extremity” is about. I have to note here that your posts are very full of a sort of emotional content, but very short on rational argumentation. You speak in generalities, and in notions, and in seemings, but rarely do you quantify any of these things and show what follows logically. This is a problem.

The hierarchy in which they are controlled, and the interdependencies of their functions with each other, would seem to reduce the probability of their ever being found randomly by countless orders of magnitude.

And yet they are. So what “seems” to be the case to you, is not in fact the case, so your sense of what things seem like is not in any way a relevant point of data.

The VP40 gene in Ebola is a great example of this. It has three distinct conformations that all found targets in an environment of just seven genes. It’s basically enough evidence to make me question alternatives, and not just assume a maximum likelihood because “other organisms have it too”.

What does it mean to “assume a maximum likelihood”?

I have no doubt that larger population sizes slow down Muller’s ratchet. In fact, I have a notion that bacteria may be able to overcome it entirely, although I have my doubts.

So far, all you have is notions and seemings, but zero actual evidence.

This exhibits dichotomous thinking. You speak of stability as if it is all or nothing, and of the mechanism that provides it in the same way. These things actually come in degrees, and even low or occasional stability can sometimes be a sufficient basis for natural selection to improve it.

Proteins can start out with some low level of mutual affinity, where they spend only a fraction of the time associated with each other, sometimes being knocked apart by Brownian motion. But this fraction of time can be incrementally increased by subsequent mutations and natural selection, if the time spent together has even a slight positive effect on fitness.
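
To make that concrete, here is a toy simulation (Python; the population size, selection strength, and mutational effect sizes are all illustrative assumptions, not measured values) of how a weak, transient association can be ratcheted up by mutation and selection, so long as time spent bound carries even a slight fitness benefit:

```python
import math
import random

# Toy model: a protein pair spends a fraction f of its time bound.
# Fitness increases weakly with f, so mutations that nudge f upward
# can be fixed by selection. All parameter values are assumptions.

N = 10_000            # effective population size (assumed)
S_PER_UNIT_F = 0.01   # fitness gain per unit increase in f (assumed)

def fixation_prob(s, n):
    """Kimura's approximation for the fixation probability of a new mutant."""
    if abs(s) < 1e-12:
        return 1.0 / (2 * n)  # neutral case: fixes by drift alone
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * n * s))

random.seed(1)
f = 0.05  # start: the complex holds together only 5% of the time
for _ in range(200_000):               # mutations arising over deep time
    delta = random.gauss(0, 0.02)      # small random effect on f
    f_new = min(max(f + delta, 0.0), 1.0)
    s = S_PER_UNIT_F * (f_new - f)     # selection coefficient of the change
    if random.random() < fixation_prob(s, N):
        f = f_new                      # the mutation fixes in the population
print(f"final fraction of time spent bound: {f:.2f}")
```

Run it and f climbs from 5% toward near-permanent association through many small, individually fixed steps, while the deleterious steps almost never fix. That is the opposite of all-or-nothing.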

But you’re stuck in this mode of thinking where, if the whole thing is not absolutely tightly packed and locked together one hundred percent of the time, you appear to think it can have no functional benefit at all, and that there is no way it could ever be gradually improved towards the present state from some less stable beginning state.

You have to understand that function very often comes in degrees, and that even a weak one is often better than none at all, in a way that measurably affects organismal fitness.

Inordinately often, they find matching patterns on themselves, which I have yet to find a good explanation for, since most evolutionary explanations for proteins are that they get lucky with a site already existing in a network.

Your problem is you don’t seem to understand the mechanisms of intermolecular bonding all that well. Mutual affinities at a low level are actually ubiquitous at the molecular scale.

The reality of that phenomenon is what provides the basis for the function of the adaptive immune system. Even very weakly fitting antibodies can be mutationally and selectively improved, with the binding affinity and the structural match between antibody and antigen incrementally increased, until we get to a state that looks like the kind of “perfect match” shown in the nice figure you linked.

It is just not the kind of all-or-nothing thing you imagine.

I realize that your problem is conceptual. Symptoms of that are your repeated equivocations between structure and function, along with your use of hopelessly imprecise terms like “breaking down” and “bad.”

For example, do you realize that all 8 of the mutants for which I showed increased thermal stability are functional, and that most of them cause cardiac HYPERcontractility?

Because if you are approaching a question scientifically, data that don’t support your hypothesis are far more relevant than those that do.

Yup.

That those species evolved from a common ancestor. There is good evidence for it: multiple lines that seem to fall under the umbrella of matching patterns, but are distinct versions of just one type of matching pattern. Evolution is not a bad naturalistic theory. It’s really good.

Of course. I don’t know everything. It just seems that the more I learn about it, the more it seems to support it rather than refute it.

In other words, just because a residue may make up only 0.3% of a protein does not mean its chance of affecting other residues is correspondingly small. It has a roughly 100-fold higher effect on destabilizing a protein, according to the paper cited above. It also has a high chance of affecting the position of scaffolding or the arrangement of active-site constituents.

From the paper you cited: “Comparative genomic study supports the notion that novel protein genes derive from preexisting genes or parts of them”. As I said, and as the authors of your paper demonstrate, the more data you incorporate, the harder it is to discuss it in a way that is not a notion. In this discussion, I’m just trying to quantify one small part of a much larger picture to see how it might fit.

What convinces you that it wasn’t lost in other lineages, a product of HGT, or specially created for its niche? It does contain a read-through stop codon. Wouldn’t that be a great indicator of it being eliminated in most lineages, even in an evolutionary scenario?

Picking the most likely phylogenetic tree: the one predicted to most likely produce results closest to what you observe, if the species in question evolved from a common ancestor. It already assumes common descent. It is question-begging.

I tend not to provide evidence of things I don’t think are controversial. Do you really think that larger population sizes don’t slow down Muller’s ratchet? Do you think it would be more difficult for bacteria to overcome it?
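
Since you ask: here is a bare-bones Wright-Fisher sketch (Python; the mutation rate and selection coefficient are assumed values, picked only to make the effect visible) showing the ratchet clicking fast in small asexual populations and slowing dramatically as N grows:

```python
import random

# Toy Wright-Fisher model of Muller's ratchet in an asexual population.
# Each individual carries a count of deleterious mutations; the ratchet
# "clicks" whenever the least-loaded class is lost. U and S are assumed.

U = 0.1    # deleterious mutations per genome per generation (assumed)
S = 0.02   # fitness cost per mutation (assumed)

def poisson(rng, lam):
    """Small-lambda Poisson sampler (Knuth's method)."""
    threshold, k, p = 2.718281828 ** (-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def clicks(n, generations=1000, seed=0):
    rng = random.Random(seed)
    pop = [0] * n                      # mutation counts; start mutation-free
    best, n_clicks = 0, 0
    for _ in range(generations):
        weights = [(1 - S) ** k for k in pop]
        pop = rng.choices(pop, weights=weights, k=n)   # selection + drift
        pop = [k + poisson(rng, U) for k in pop]       # new mutations
        if min(pop) > best:                            # least-loaded class lost
            n_clicks += min(pop) - best
            best = min(pop)
    return n_clicks

for n in (50, 500, 5000):
    print(f"N={n:5d}: {clicks(n)} ratchet clicks in 1000 generations")
```

With these numbers the least-loaded class has an expected size of roughly N·e^(−U/S), so at N = 50 it amounts to a fraction of an individual and is lost constantly, while at N = 5000 it holds dozens of individuals and is rarely lost.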

It does seem that it can often be very granular. A change in one residue is about as discreet as you can get. Again, I’m not denying a range of stability. But wouldn’t the fact that you can generate crystal structures at all seem to indicate there is a sort of spike around the “nativeness” of a protein? Or maybe that’s why some can’t.

Even at this point, wouldn’t most evolutionists argue that the coefficient of selection on something like this would be completely drowned out by larger-scale processes like duplication, recombination, etc.? Isn’t that the explanation for why there is such a search on for clear de novo proteins?

I was talking about the overrepresentation of dimers.

I don’t deny the possibility of what you are saying, or that there are site-directed experiments that can demonstrate it to some extent in culture with careful controls. But if so, where are all the de novo genes? Why are duplication, recombination, and HGT the explanation 99.99% of the time? Are those established domains ubiquitous because they are just mushing around with just barely reliable shapes or promiscuous binding sequences?

The pattern of similarities and differences supports the conclusion that these endosymbionts share a common ancestor that existed millions of years ago. This is a scientific conclusion based on evidence, not an assumption.

You do deny there is evidence every time you claim they are making assumptions.

They are looking at the evidence, not choosing the premises. The evidence is consistent with common ancestry and evolution.

Let’s use a murder trial as an analogy. The prosecution presents mountains of forensic evidence linking the defendant to the crime and crime scene. The evidence includes a bloody fingerprint matching the defendant on the knife sticking out of the victim’s chest, the defendant’s DNA at the crime scene, bloody shoe prints matching the defendant’s shoes, tire prints matching the defendant’s car, fibers from bloody clothes found at the defendant’s home, and so forth. In response, the defense attorney claims the prosecution is just assuming the defendant is guilty. Does the defense attorney sound credible?

Antibodies are really good at binding to other proteins, and yet they are the result of random shuffling of short DNA sequences. How do you square your claim that function is extremely rare with random DNA sequences so easily producing binding domains?
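
Just to put rough numbers on that shuffling (these are approximate human gene-segment counts, and the junctional factor is a loose, commonly quoted figure, not something from this thread):

```python
# Back-of-the-envelope repertoire arithmetic. Segment counts are
# approximate functional human values; treat them as assumptions.
V_H, D_H, J_H = 40, 25, 6   # heavy-chain V, D, J segments (approx.)
V_L, J_L = 70, 9            # combined kappa/lambda light-chain segments (approx.)

combinatorial = (V_H * D_H * J_H) * (V_L * J_L)
print(f"combinatorial diversity alone: ~{combinatorial:.1e}")  # ~3.8e+06
# Imprecise joining (junctional insertions/deletions) multiplies this by
# several more orders of magnitude, often quoted as >1e11 possible receptors.
```

So random shuffling of a few hundred short segments routinely yields binders against essentially any antigen.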

This is a false creationist canard.

No, it is rarely that way. And the term is “discrete,” not “discreet.”

No and no. Perhaps you should consider the fact that many proteins are mutagenized to ALLOW crystallization.

I suggest that you consider T_aq’s question to you carefully, as what happens in real time in the immune system contradicts most of what you are claiming.

How is it that, starting with a library of only 10^8 antibodies, we can select multiple antibodies that after a few rounds of variation and selection can bind any antigen presented with incredibly high affinities? Affinities that are typically much higher than what we see in most functional protein-protein interactions?

How can this occur in only two weeks?
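
To see why two weeks is enough, here is a toy affinity-maturation loop (Python; the round count, mutation effect size, and starting affinities are all illustrative assumptions, not measured immunological parameters) that alternates hypermutation with selection for the tightest binders:

```python
import random

# Toy affinity maturation: start from a naive repertoire of weak binders,
# then alternate mutation and selection. All numbers are assumptions.

random.seed(42)
POP = 1000                     # B cells carried between rounds (assumed)

# Naive repertoire: log10(Kd in molar), centered on weak micromolar binding.
cells = [random.gauss(-6, 0.5) for _ in range(POP)]

for rnd in range(8):           # ~8 germinal-center cycles in about two weeks
    # Somatic hypermutation: each daughter's affinity shifts a little.
    mutants = [kd + random.gauss(0, 0.3) for kd in cells for _ in range(4)]
    # Selection: only the tightest binders (lowest Kd) survive the round.
    cells = sorted(mutants)[:POP]
    print(f"round {rnd + 1}: best Kd ~ {10 ** cells[0] * 1e9:.3f} nM")
```

Starting from a repertoire centered on micromolar binding, repeated small steps of variation plus selection drive the best clones to sub-nanomolar affinity within a handful of rounds. No single lucky jackpot mutation is required.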

Oh man, I am so sorry. I completely missed this post. Thank you very much for this link. I think it didn’t show up because it was a reply to Joshua. I’ll check it out and get back maybe in a few days.

Probably for a lot of reasons. Two that would seem to be very important are that most epitopes are small, 5-6 aa in length, and also because binding strength is precisely the variable selected for.

Could this also be because they are primarily selected for their binding strength, i.e. that is their function? Still, I would be interested if you could link me to a resource about this.

Fair enough.

This is Figure 2 from that paper; I have changed it to more clearly show the dividing line between stabilizing and destabilizing mutations.

[Figure 2: distribution of mutational effects on protein stability, with the dividing line between stabilizing and destabilizing mutations marked]
While destabilizing changes make up the larger share of all possible mutations, stabilizing mutations clearly aren’t rare at all, with a proportion of about 15-30% of all mutations depending on the species in question (hyperthermophilic organisms living near superheated hydrothermal vents already have highly stable proteins, so the proportion of stabilizing mutations is lower for them, at ~15%, which does make some sense). With one in six to one in three of all mutations being stabilizing, they are really not a challenge for random sampling to find.
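
And a quick sanity check on what those proportions mean for a random search (straight arithmetic, using the figure’s own 15-30% range):

```python
# If a fraction p of random mutations is stabilizing, the chance that a
# sample of k independent mutations contains at least one is 1 - (1 - p)**k.
for p in (1 / 6, 1 / 3):          # the ~15-30% range read off the figure
    for k in (5, 10, 20):
        print(f"p={p:.2f}, k={k:2d}: "
              f"P(at least one stabilizing) = {1 - (1 - p) ** k:.3f}")
```

Even at the hyperthermophile end of the range, a mere 10 random mutations give an ~84% chance of hitting at least one stabilizing change.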

I don’t follow. Wouldn’t small size make it harder/slower to generate tight binding by variation and selection?

Why would other variables give different results? Would you predict that catalysis would be different?

I have no idea what you mean here. My point is that variation and selection generate incredibly tight binding in only two weeks.

By contrast, the dissociation constant for the physiologically relevant binding of tropomyosin to actin, from the paper I pointed you to after you claimed to be interested in data, is a much higher 210 nM.

Huge difference, wouldn’t you say?
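
To translate those dissociation constants into something tangible (the 210 nM figure is from the paper above; the ~1 nM antibody value is an assumed, typical post-maturation affinity, since no exact number was given in this thread):

```python
# Equilibrium fractional occupancy: f = [L] / ([L] + Kd).
# 210 nM is the tropomyosin-actin Kd cited above; 1 nM is an assumed
# typical Kd for an affinity-matured antibody.
def occupancy(ligand_nM, kd_nM):
    return ligand_nM / (ligand_nM + kd_nM)

for kd in (210, 1):
    print(f"Kd = {kd:>3} nM: occupancy at 10 nM ligand = {occupancy(10, kd):.2f}")
```

At 10 nM free ligand, the 210 nM interaction is under 5% occupied while the 1 nM one is over 90% occupied. That is the scale of the difference.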