Larger doses are required because most of the virions you’re exposed to won’t do anything. The number of virions that actually start the infection can be quite small. Probably in the single digits for SARS-CoV-2.
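To put rough numbers on that, here's a toy "independent action" calculation (the per-virion probability below is invented purely for illustration, not a measured value): if each virion independently has a small chance p of starting an infection, the dose needed for infection to be likely is large, yet the expected number of virions that actually found the infection stays small.

```python
import math

# Toy "independent action" model: each virion independently starts an infection
# with a small probability p, so P(infection | dose d) = 1 - (1 - p)**d and the
# expected number of virions that actually found the infection is d * p.

def p_infection(dose, p_per_virion):
    """Probability that at least one of `dose` virions starts an infection."""
    return 1.0 - (1.0 - p_per_virion) ** dose

def id50(p_per_virion):
    """Dose giving a 50% chance of infection under this toy model."""
    return math.log(0.5) / math.log(1.0 - p_per_virion)

p = 1e-3  # hypothetical per-virion success probability, purely illustrative
for dose in (10, 100, 1_000, 10_000):
    print(f"dose={dose:>6}  P(infect)={p_infection(dose, p):.3f}  "
          f"expected founders={dose * p:.1f}")
print(f"ID50 ≈ {id50(p):.0f} virions for p={p}")
```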
As others have said, that makes no sense. The transmission bottleneck is drawing from a large population of virus in which selection has been acting very effectively. The probability that a less fit virus is transmitted is determined by its frequency in the source population as well as by the size of the bottleneck.
Sure. “Particle” is (or at least was) typically used to refer to something that looks like a virus but may or may not (almost always the latter for RNA viruses) be infectious. Particles are more easily quantified today as genomes.
So a particle-to-PFU ratio of 100 means 100 genomes (by PCR) or particles (by electron microscopy) per 1 virus capable of forming a plaque in an assay on cultured cells. Plaque purification, followed by infection at a low MOI (multiplicity of infection), is pretty universally done to create viral stocks because it dramatically decreases the ratio.
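As a quick illustration of that arithmetic, with made-up numbers:

```python
# Illustrative particle-to-PFU calculation; both input values are invented.
genome_copies_per_ml = 2.0e9   # e.g. from RT-qPCR (hypothetical value)
pfu_per_ml = 2.0e7             # e.g. from a plaque assay on cultured cells

ratio = genome_copies_per_ml / pfu_per_ml
print(f"particle-to-PFU ratio ≈ {ratio:.0f} : 1")   # ~100 genomes per plaque-forming unit
```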
Another way to titer is by ID50 in cultured cells or in whole organisms. For the same aliquot of virus stock, the latter is almost always significantly lower than the former.
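For anyone curious how an ID50 is actually read off an endpoint-dilution assay, one common approach is the Reed-Muench method; here's a minimal sketch with invented well counts (not data from anything discussed in this thread):

```python
# Minimal Reed-Muench calculation with invented endpoint-dilution data.
# Each row: (log10 of the dilution, wells showing infection, wells not infected).
rows = [(-3, 8, 0), (-4, 6, 2), (-5, 3, 5), (-6, 0, 8)]

n = len(rows)
# Cumulative infected are summed from the most dilute row upward,
# cumulative uninfected from the most concentrated row downward.
cum_inf, cum_uninf = [0] * n, [0] * n
running = 0
for i in range(n - 1, -1, -1):
    running += rows[i][1]
    cum_inf[i] = running
running = 0
for i in range(n):
    running += rows[i][2]
    cum_uninf[i] = running

pct = [100.0 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]

# Find the pair of adjacent dilutions that bracket 50% infected.
for i in range(n - 1):
    if pct[i] >= 50 > pct[i + 1]:
        # Proportionate distance between the bracketing dilutions.
        pd = (pct[i] - 50) / (pct[i] - pct[i + 1])
        step = rows[i + 1][0] - rows[i][0]          # log10 dilution step (negative)
        log_endpoint = rows[i][0] + pd * step
        print(f"50% endpoint at a 10^{log_endpoint:.2f} dilution "
              f"(titer ≈ 10^{-log_endpoint:.2f} ID50 per inoculated volume)")
        break
```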
After ID50s, one can do LD50s (not on people), which are typically orders of magnitude greater than ID50s, so anything that lowers the dose you receive keeps you further from the lethal range even if it doesn't prevent infection. This is why we should wear masks!
Exactly. The innate immune system often mops up smaller doses so that no response by the acquired immune system ever occurs, but this is very difficult to study quantitatively.
On the contrary, it makes perfect sense. There is abundant experimental evidence that Muller’s ratchet operates in many viruses when strong bottlenecks occur, leading to fitness decline.
See section 6.5 and fig 6.4 of the review below.
Let me emphasize the following passage, because it is aligned with what Sanford says: “Fitness decrease due to the accumulation of mutations, proposed by H.J. Muller in general terms, is expected to be accentuated in the case of viral populations with continuous generation of new mutant genomes. In mutant spectra of viruses, the least mutated class of genomes will be the one displaying the highest replicative fitness and will correspond to the master sequence that dominates the population. With a plaque-to-plaque transfer design, there is a probability that in each plating (a step or click in the ratchet) the least mutated class of genomes is lost. Therefore, the viral population is forced to regenerate a distribution from which again the least mutated class will be lost in the next ratchet click. The system is doomed to rapid deterioration and extinction unless mechanisms operate to restore fitter genomes.”
There’s plenty of evidence that Muller’s ratchet operates in viruses when you artificially restrict the population size and minimize the effect of purifying selection. For example, from the 1992 paper on VSV: “Great care was taken to pick virus from plaques selected …”
So yeah, deleterious mutations often (but not always) accumulate if you restrict the transmission bottleneck to exactly one virion, keep the total population small (a single plaque), and minimize purifying selection. In the real world, though, viruses don’t have these restrictions and don’t experience Muller’s ratchet.
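For intuition only, here's a crude toy simulation of exactly that contrast (my own sketch with made-up parameters, not the model from any of the cited papers): genomes are tracked only by their count of deleterious mutations, and each "transfer" forces the population through a bottleneck before regrowth. With a bottleneck of one the ratchet tends to click and fitness decays; with even modestly larger bottlenecks, purifying selection usually keeps hold of the least-mutated class.

```python
import random

def passage_experiment(bottleneck, pop_size=1000, transfers=50,
                       mut_rate=0.1, s=0.2, seed=1):
    """Toy plaque-to-plaque passaging. A genome is represented only by its count
    of deleterious mutations; fitness = (1 - s) ** count. Each transfer draws
    `bottleneck` founders (fitness-weighted, so purifying selection still acts)
    and regrows the population, each offspring gaining a new deleterious
    mutation with probability `mut_rate`. All parameter values are made up."""
    rng = random.Random(seed)
    pop = [0] * pop_size
    for _ in range(transfers):
        weights = [(1 - s) ** k for k in pop]
        founders = rng.choices(pop, weights=weights, k=bottleneck)
        pop = [rng.choice(founders) + (1 if rng.random() < mut_rate else 0)
               for _ in range(pop_size)]
    mean_load = sum(pop) / len(pop)
    return mean_load, (1 - s) ** mean_load

for b in (1, 5, 100):
    load, fitness = passage_experiment(bottleneck=b)
    print(f"bottleneck={b:>3}: mean deleterious load={load:.1f}, "
          f"relative fitness≈{fitness:.2f}")
```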
Once again you are confusing a host-to-host transmission bottleneck with a total population bottleneck. The coronavirus quickly spread to thousands, and then to millions of people, and is still going, with worldwide case numbers continuing in the hundreds of thousands to millions. The total viral population never comes remotely close to being as small as a person-to-person transmission bottleneck.
They don’t seem to be arguing that Muller’s ratchet is the inevitable fate of viruses, but rather that steps that might be able to facilitate it should be explored.
Again, it must be emphasized that to the extent that imposing a persistent population-wide bottleneck on some organism facilitates a decline in fitness, this isn’t a result that is at odds with a conventional understanding of population genetics.
But Genetic Entropy is not a hypothesis of “Muller’s ratchet due to persistent bottlenecks”; it’s the hypothesis that all natural populations must go extinct across the entire range of their natural population sizes and mutation rates. The idea is that evolution is supposed to be impossible, not that under limited and synthetic constraints RNA viruses will experience Muller’s ratchet and suffer decreased virulence.
Well, that would be interesting if they provided any evidence for their differing view. But they don’t. The key sentence is ‘Modeling suggests that such bottlenecks likely drive down the virulence of a pathogen due to stochastic loss of the most virulent phenotypes’, in support of which they reference this theoretical study. If you look at that study, you’ll find that their main conclusions are:
Reduction of virulence because of transmission bottlenecks should be much larger for vertically transmitted viruses than for horizontally transmitted ones. Note that all of the viruses we’re talking about are horizontally transmitted.
Experimental plaque-to-plaque studies show that bottlenecks as small as five virions have little or no impact on viral fitness. Since purifying selection is lower in these studies than in vivo, this is good evidence that quite small bottlenecks should have little effect on viral virulence in the real world.
Even bottlenecks of size 1 have essentially no effect if the number of hosts is large, because of competition between viruses in different hosts. Based on their simulation, ‘large’ means about 25 infected individuals.
In short, for any kind of realistic infectious disease, the cited work in fact strongly undercuts the particular claim of the authors of the letter.
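And to see why between-host competition blunts the ratchet even when every transmission passes through a bottleneck of one, the same toy model sketched above can be extended (again a rough sketch with invented parameters, not the cited study's actual model): each host is seeded by a single virion, but the next round of hosts is seeded preferentially from the fitter infections, so the least-mutated lineages keep winning as long as more than a handful of hosts are infected at once.

```python
import random

def epidemic_toy(n_hosts, generations=50, within_host_pop=300,
                 mut_rate=0.1, s=0.2, seed=2):
    """Toy between-host model. Every host is seeded by ONE virion (bottleneck
    of 1). Each generation the virus expands within each host (possibly gaining
    deleterious mutations), and the next set of hosts is seeded from existing
    infections chosen in proportion to their overall fitness, which is where
    between-host competition enters. All parameters are made up for illustration."""
    rng = random.Random(seed)
    hosts = [0] * n_hosts            # mutation load of each host's founding virion
    for _ in range(generations):
        # Within-host expansion: each host grows a cloud of progeny genomes.
        clouds = [[load + (1 if rng.random() < mut_rate else 0)
                   for _ in range(within_host_pop)] for load in hosts]
        cloud_weights = [[(1 - s) ** k for k in cloud] for cloud in clouds]
        # Between-host competition: fitter infections seed more of the next hosts,
        # and within the chosen infection, fitter genomes are more likely transmitted.
        host_weights = [sum(w) for w in cloud_weights]
        hosts = []
        for _ in range(n_hosts):
            i = rng.choices(range(len(clouds)), weights=host_weights, k=1)[0]
            hosts.append(rng.choices(clouds[i], weights=cloud_weights[i], k=1)[0])
    mean_load = sum(hosts) / len(hosts)
    return mean_load, (1 - s) ** mean_load

for n in (1, 5, 25):
    load, fit = epidemic_toy(n_hosts=n)
    print(f"{n:>2} host(s): mean founder load={load:.1f}, relative fitness≈{fit:.2f}")
```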
Clearly, you’re not familiar at all with Sanford’s work on GE, for if you were, you wouldn’t assert such a false claim. Indeed, Sanford made it clear that the susceptibility of natural populations to GE depends on many factors, and that certain combinations thereof can protect populations from GE, for example the combination of low mutation rate, small genome size, large population size, intense selection, etc.
Here is an interesting report supporting the idea that Muller’s ratchet may have operated in the Covid19 epidemic in India.
Significance: Epidemiological features are intricately linked to evolutionary diversity of rapidly evolving pathogens, and SARS-CoV-2 is no exception. Our work suggests the potential of the average stability of complexes formed by the circulating spike mutational variants and the human host receptor to track the severity of SARS-CoV-2 infection in a given region. In India, the stability of these complexes for recent variants tends to decrease relative to their ancestral ones, following the countrywide declining fatality rate, in contrast to an increasing mutation rate. We hypothesize such a scenario as nascent footprints of Muller’s ratchet, proposing a large-scale population genomics study for its validation, since this understanding could lead to therapeutic approaches for facilitating mutational meltdown of SARS-CoV-2, as experienced earlier for influenza A virus.
All of which, he argues, basically never occur in nature. Of course, so have you, with your repeated references to analogies about rusting cars, “many more ways to break than to improve machines”, and similar such vague blather.
Please do not pretend that this argument we are having is somehow taking place in a vacuum, and that the concept of GE is not at all related to the idea that, under naturally occurring ranges of population sizes and mutation rates (as you are so fond of saying, “given only mutation and natural selection”), all populations are doomed to extinction, that the history of life cannot extend hundreds of millions of years into the past, and that therefore evolution is false.
It is extremely curious to see GE proponents now suddenly arguing that GE is a weirdly limited hypothesis that is only relevant to analyzing dubious and hypothetical cases of Muller’s ratchet operating on RNA viruses under a very restricted set of circumstances: extremely small, repeated, and consistent bottlenecks.
What exactly is your goal here? What do you think anyone is going to learn by your dredging through the thousands of SARS-CoV-2 preprints, looking for anything that might support your preconceived ideas, even if you don’t understand the paper? How are you ever going to learn anything if you only read the things that tell you what you want to believe?
Looking at your latest offering… There is so much wrong with this paper. The authors want to use spike-receptor stability (which they estimate computationally) as a proxy for viral fitness, and suggest that decreasing stability because of mutations accounts for decreasing viral load and fitness, and therefore fatality rate. Looking at Figs. 2 and 3, though, it’s clear that most of the difference in estimated stability between regions must come from the different prevalence of the ancestral (D) and mutated (G) forms of the spike at position 614; D614G is a single mutation that increased transmissibility and therefore has nothing to do with Muller’s ratchet. That mutation also points up the uselessness of relying on computational estimates of stability. The software they’re using predicts that the mutated (G) form should be more stable (lower HADDOCK score), while experimental results (see this paper) show that the mutation actually decreases the stability of spike-receptor binding, even while increasing transmissibility.
This is a meaningless paper in support of a daft attempt to explain away a failed prediction of a completely unsupported and unscientific hypothesis that was developed solely to support the insane idea that life on Earth is only 10,000 years old.
Also, @Giltil, do you think there might be anything to the fact that the preprint was posted in August, but today, 7 months later, it hasn’t been published anywhere?