I think the inconsistency with the data you are talking about comes from the assumptions you are using to filter it. Maybe I am wrong here, but I am interested in why you so passionately think your position is correct.
Natural selection helps with fixation. Certainly that was dramatically demonstrated in the Lenski experiment but that selection came late in the game. In order for that type of selection to occur a sequence has to find a strong functional benefit. Until then the changes are random and subject to genetic drift.
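The drift-until-benefit dynamic described there can be illustrated with a toy simulation. This is a minimal Wright-Fisher sketch of my own (population size, effect sizes, and seed are arbitrary illustrative choices, not anything from the LTEE): an allele with no benefit wanders by drift alone, while one with a selective advantage is pushed up on average.

```python
import random

# Minimal Wright-Fisher sketch (illustrative parameters, not LTEE data):
# a neutral allele changes frequency only by drift, while an allele with
# a selective benefit s > 0 rises deterministically on average.
def wright_fisher(n=1000, p0=0.05, s=0.0, generations=200, seed=1):
    random.seed(seed)
    p = p0
    for _ in range(generations):
        # selection step: weight the allele by its relative fitness 1 + s
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        # drift step: binomial sampling of n individuals
        p = sum(random.random() < p_sel for _ in range(n)) / n
    return p

print("neutral   :", wright_fisher(s=0.0))
print("beneficial:", wright_fisher(s=0.05))
```

Run it a few times with different seeds: the neutral allele sometimes rises and sometimes is lost, while the beneficial one climbs toward fixation far more reliably.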
Sanford’s model, to the extent he can model it, always predicts net fitness decline, under all realistic conditions of population size and mutation rate. Without exception. Given Sanford’s view of what the “correct distribution” of the fitness effects of mutations is, mean fitness can only ever go downhill. Fitness increase should be essentially impossible.
How do I know this? Here’s how. From his book:
That’s what Sanford thinks the DFE for mutations is like. Consider that this is supposed to be an exponential distribution: the decline is exponential on both sides, so the probability of occurrence of beneficial mutations declines exponentially with the magnitude of their effect. Sanford clearly thinks that a beneficial mutation with a positive effect on fitness of 0.1% or more is completely outside the realm of what can reasonably be expected to occur. Bill, can you calculate the odds for me? Using an exponential decline of the magnitude depicted in that figure, what is the probability of a beneficial mutation with a 0.1% positive effect on fitness, that far out on the tail of an exponential decay?
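The tail probability in question is a one-liner if you grant the exponential shape. The figure gives no numeric scale, so the mean effect size below is purely my own illustrative assumption; the point is only how fast an exponential tail collapses:

```python
import math

# Illustrative sketch: P(effect > s) for an exponentially distributed
# beneficial effect. The mean effect size is an assumed value chosen
# only for illustration -- Sanford's figure gives no numeric scale.
mean_effect = 1e-4   # assumed mean fitness effect of a beneficial mutation

def tail_probability(s, mean):
    """Survival function of an exponential distribution with the given mean."""
    return math.exp(-s / mean)

p = tail_probability(0.001, mean_effect)
print(f"P(benefit > 0.1%) = {p:.3e}")  # ~4.5e-05 under this assumed mean
```

With a smaller assumed mean the tail probability falls off even more brutally, which is the whole point of the question posed to Bill.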
Not only are beneficials pretty much non-existent on that curve (and in fact Sanford claims he has deliberately exaggerated the size of the beneficial region on the right side of the curve just to make it visible, because he insists it would otherwise be invisible to the naked eye), but to the extent they exist at all they invariably fall inside the so-called “zone of no selection”. The zone of no selection is the region where the effects of mutations are so small that it requires GINORMOUS population sizes for their effects to become visible to natural selection.
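To put a number on “GINORMOUS”: under the common rule of thumb that a mutation is effectively neutral when |s| is below roughly 1/(2Ne), the population size needed for selection to see a mutation scales inversely with its effect size. A minimal sketch (the effect sizes are illustrative, not read off Sanford’s figure):

```python
# Rule-of-thumb sketch: selection overcomes drift only when |s| > 1/(2*Ne),
# so the smallest effective population size that can "see" a mutation of
# effect s is about 1/(2*s). Effect sizes below are illustrative only.
def min_effective_population(s):
    """Smallest Ne at which a mutation of effect s escapes the zone of no selection."""
    return 1.0 / (2.0 * s)

for s in (1e-3, 1e-5, 1e-8):
    print(f"s = {s:.0e}: selection needs Ne > {min_effective_population(s):,.0f}")
```

So a 0.1% benefit is visible to selection in populations of mere hundreds, while an s of 10⁻⁸ really does demand tens of millions, which is why the claimed width of the zone matters so much.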
Sanford clearly believes this. He uses a distribution like this when he “simulates” evolution in his Mendel’s Accountant program. And sure enough, in his program, fitness only ever declines using his “correct distribution”. Sanford appears to think the DFE is essentially unchanging (the relative proportions don’t change with changes in fitness), and he thinks it’s overwhelmingly deleterious, with pretty much all beneficials being invisible to selection. Hence, obviously, fitness due to mutation accumulation, even despite natural selection, can only ever go down over time.
But that’s not what happens in reality. In reality we see things like this:
Fitness continues going up, but the rate slows down. So there are plenty of beneficial mutations visible to selection, and the DFE cannot be a constant otherwise the rate of adaptation would not be declining.
Hence, inconsistent with observation. Both of his assumptions are wrong, and his DFE is a complete fantasy. The distribution isn’t constant, the area under the curve for beneficials is not invisible to selection, and the zone of no selection does not extend that far out. Oh yeah, while on that topic: he also insists he has deliberately understated the width of the zone of no selection. He thinks it should be even wider. It’s all in his book.
Life being 4 billion years old without having succumbed to genetic entropy is just another observation that contradicts his fantasy.
Sanford has trash excuses for why this happens of course, but his trash excuses are not part of his theory of genetic entropy. His theory of genetic entropy is effectively that figure.
This is most probably incorrect, for since the beginning of the SARS-CoV-2 story, one has often observed the decline of one variant before the emergence of a new one, and this even before vaccines were available.
Are you aware that Raoult is one of the top world experts in infectious diseases, and that his CV and achievements are out of all proportion to those of most of the people on this site?
You are supporting your claim with bacteria from the Lenski experiment.
Where do you see the curve going in the next 50000 generations?
When Dr Sanford discusses genetic entropy are his targets more than viruses and vertebrates?
What is your evidence for that? I don’t speak French. Where’s the data?
By golly he must be an oracle with hotline to God then. Who are we mere maggots to question anything he says? Did you get permission to even use his name?
And Sanford’s GE says they should be declining in fitness. That’s not happening.
More up, at a declining rate. The rate fits a power-law relationship so well that fitness data from just the first 5000 generations of the experiment can be used to predict the next 50 000 with high accuracy. That relationship predicts fitness increasing indefinitely into the future at ever-declining rates.
You can read more about that here:
We have also characterized the grand-mean fitness trajectory across the replicate populations and 50 000 generations (Wiser et al., 2013). Over this timeframe, the bacteria reached a relative fitness of ~1.7, meaning they grew ~70% faster than their ancestor when competing in the LTEE environment (Figure 1b). Their rate of improvement slows over time, as it becomes more difficult to achieve further gains after taking the low-hanging fruit. Does this slowdown imply that the bacteria have, or eventually will, hit some limit on their fitness? Wiser et al. (2013) compared the fit of two simple models with these data, each of which has two free parameters. One model, a rectangular hyperbola, has an asymptote. The other model, a power-law, has no upper bound, but the rate of increase declines with the logarithm of time. Both models fit the data well, but the power-law model fits the data better. Moreover, the power-law relation is much better at predicting the future than the hyperbolic model. When a truncated data set was used to predict the future trajectory, the hyperbolic model systematically underestimated the potential for further fitness gains. By contrast, the power-law model predicted with impressive accuracy the improvement out to 50 000 generations using only 5000 generations of data (Wiser et al., 2013).
The power-law model has no upper bound, and so one might reasonably worry that it predicts the bacteria will eventually grow at a rate that is biophysically implausible. However, it does not predict implausible growth rates for the foreseeable future because the rate of improvement scales with the logarithm of time. The ancestral strain used to found the populations had a doubling time of ~55 min in the glucose-limited minimal medium of the LTEE (Vasi et al., 1994). Wiser et al. (2013) extrapolated the model to 2.5 billion generations, which corresponds to 50 000 generations of scientists running the LTEE for 50 000 generations each. At that point, the projected doubling time is ~23 min. That value would be impressive for bacteria growing in a minimal medium, but it is no faster than many E. coli strains can grow in nutrient-rich media, and some species can grow even faster. Thus, the power-law model generates plausible predictions far into the future.
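The qualitative difference between the two models quoted above is easy to sketch. The parameter values here are my own illustrative picks, not the fitted values from Wiser et al. (2013), chosen only to show that the hyperbola plateaus while the power law keeps climbing, ever more slowly:

```python
# Sketch contrasting the two models compared by Wiser et al. (2013).
# Parameter values are illustrative only, NOT the paper's fitted values.
def power_law(t, a=0.1, b=0.005):
    """Unbounded model: fitness keeps rising, ever more slowly."""
    return (1.0 + b * t) ** a

def hyperbola(t, a=0.75, b=2000.0):
    """Bounded model: fitness approaches the asymptote 1 + a."""
    return 1.0 + a * t / (b + t)

for t in (5_000, 50_000, 500_000):
    print(f"t = {t:>7,}: power law {power_law(t):.3f}, hyperbola {hyperbola(t):.3f}")
```

With these toy parameters both curves pass near a relative fitness of ~1.7 around generation 50 000, but by 500 000 generations the hyperbola is pinned near its asymptote while the power law is still gaining, which is the shape difference the truncated-data test exploits.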
If one variant is better at transmitting between hosts, it will outcompete other less transmissible variants and this might even drive the less transmissible strains extinct. This is a fact and happened with Delta which outcompeted other variants to become the dominant one.
Luc Montagnier, a Nobel Laureate with an impressive research record, believes water has memory and that DNA emits electromagnetic waves. I hope this makes you realize that one’s credentials or accomplishments don’t immunize them from crankery.
Great. This will be interesting to watch.
It is entirely legitimate to dispute Raoult or any scientist’s idea, of course. But there is a difference between challenging someone’s thesis and seeking to ridicule him.
You probably regard him as a hero and are therefore not willing to consider any evidence to the contrary that anyone mentions.
However, because you’re here on the forum and talking, I’m going to hope that you are in fact open to evidence. So consider:
Raoult's HcQ paper is rife with grave procedural errors
In strong clinical trials, the control group (who are given a placebo) and the treatment group (who are given the drug) should be as similar as possible so scientists can be confident any effects are from the medication alone.
Bik pointed out that patients should be of similar age and gender ratio, be equally sick at the start of treatment, and analysed in the same way, with the only difference being whether they received treatment or not. She said the treatment and placebo groups in Raoult’s study differed in important ways that could have affected the results.
Six patients enrolled in the treatment group at the beginning of the study were not accounted for by the end, missing from the data.
“What happened to the other six treated patients?” Bik said.
“Why did they drop out of the study? Three of them were transferred to the intensive care unit, presumably because they got sicker, and one died. It seems a bit strange to leave these four patients who got worse or who died out of the study, just on the basis that they stopped taking the medication … which is pretty difficult once the patient is dead.”
I consider the erasing of 4 test subjects with no good reason to be the moral equivalent of fabrication of data.
Do you have any good answers for Bik’s questions and objections, @Giltil ?
His publisher denounced his HcQ paper
After identifying ten (10) major flaws in the paper, the Elsevier reviewer concluded:
This is a non-informative manuscript with gross methodological shortcomings. The results do not justify the far-reaching conclusions about the efficacy of hydroxychloroquine in Covid-19, and in the view of this reviewer do not justify any conclusion at all. [emphasis added]
PLOS One retracted his 2013 paper because of problems with "integrity of data"
Twenty-two of his papers contain images whose integrity is in doubt
He has apparently performed human studies without following ethics procedures
His employer no longer thinks he is fit to serve
His colleagues think he is unethical
The professional society to which he belongs (SPILF), which has 500 members, has lodged an ethics complaint against him.
The data are overwhelmingly against Raoult. That you would extol such a scientist, @Giltil, makes me question your good judgment about anything and everything, to put it as gently as possible.
(1) This is true.
(2) It is also true that Lenski’s team has identified several series of mutations leading to increased fitness for the environment they are in.
(3) Statement #2 above is extremely strong evidence against the genetic entropy hypothesis. In other words, Lenski has given very strong evidence that the fact that selection may not act immediately is irrelevant. The empirical data show that mutation-driven increases in fitness happen anyway.
Hundreds of people do great work on infectious respiratory disease. If you insist on personalities, may I suggest, for expert, renowned, distinguished, acclaimed, and authoritative insight into influenza and COVID-19, that you spend perhaps less time on Raoult and more time familiarizing yourself with papers from Jeffery Taubenberger and Trevor Bedford.
I think you might misunderstand Bill there. In his usual style, I think he’s trying to say one thing but ends up saying something else. I assume he’s not talking about the LTEE and genetic entropy when he says “selection came late in the game”, but rather about de novo gene evolution. Otherwise his statement doesn’t make sense. Selection through competition was in effect from the very first generation of the LTEE.
His point appears to be that selection only makes a difference if you have a selectable function already, and hence selection can’t be an explanation for how a sequence goes from nonfunctional to functional. That part would have to occur simply through accumulation of mutations.
His statement that a sequence has to find “a strong functional benefit” is wrong, the benefit can be incredibly small and still be visible to selection. Once a weak benefit is found it can of course be further enhanced by mutation, recombination, and selection.
But we’re again back to Bill’s fundamental premise he can’t ever let go of: He thinks functions are impossibly rare so can never be discovered with the “chance” accumulation of mutations. He’s just wrong about that and we can show that with evidence. When we do, he just moves the goalposts. Same exact thing happens every time. Then he just wants us to model or experimentally demonstrate the origin of some increasingly complex function he likes. It’s not enough to just select for a simple function like peptides that give antibiotic resistance, or bind small molecules. If we show that, we are asked about multi-domain proteins 2500 amino acids long, entire flagella, the spliceosomal complex, multicellularity, eukaryotes from prokaryotes, etc. etc.
He wants us to empirically disprove his assumption with a model that implicitly uses his assumption, and then still evolves a very complex function from nothing. You can’t start with something that’s near a functional sequence, even if it begins nonfunctional, and you can’t start with something already functional that’s near something more complex. We are to prove evolution using assumptions about the relationship between function and sequence space that make evolution impossible, for which no empirical evidence exists, and which literally contradict the laws of physics and chemistry.
You showed him the paper on de novo gene birth using a population genetic model, and he dismissed it because, in his view, it assumed we were near a function in sequence space. That’s why it worked. Yes, genes can evolve when we are near them, but we have to assume we’re near them. If genes evolve in your model, it’s because you’ve assumed you’re near enough for that to happen. But in Bill’s imaginary world functional sequences are virtually impossibly rare, so if you can show in some model that they evolve, that merely proves the model is using the wrong assumptions. Checkmate! Heads I win, tails you lose!
What he wants is a model that uses HIS assumption that we are FAR from functional sequences, and then only if we use HIS assumption that functions are almost impossibly rare, and yet still evolves, will he change his mind and think new functions can evolve.
That is of course completely irrational. And I’m being generous and pretending that would actually change his mind. He would of course then just claim it’s somehow evidence of design or something.
He is the responsible party for papers that contain fabricated data. Whether he did the fabrication himself is irrelevant.
Yes. Do you think he will win?
No, I am not.
Really? How many of our CVs have you examined to support this claim?
Did you pay attention to what I said? I said that D. Raoult is one of the world’s top experts in infectious diseases. This means neither that he is the sole expert nor even the best one. But it means that he should be taken seriously in his area of expertise, that’s all. And note also that I don’t mean that everything he says should be accepted without critical examination, even on matters of infectious diseases, for I know too well that expertise doesn’t protect against error, which, as Darwin said, can only be avoided by fully stating and balancing the facts and arguments on both sides of each question.
You say a lot of things. Evidence supporting this claim?
The shine has gone off Raoult recently. He’s the subject of an enquiry and I suspect retirement is in the offing.
He also seems to have a reputation for manipulating performance measures to inflate his apparent importance.