I’ll channel Walter: “You’re misrepresenting me by clearly reading what I wrote instead of what I intended to mean, which anyone can readily understand by reading my self-published book.”
According to the history given at Wikipedia, genetic drift was first recognised in 1921 and was given a theoretical treatment by Sewall Wright. (He used the term random drift for what we would now label drift or genetic drift.) Kimura advocated for its prevalence (neutral theory), motivated in part, I presume, by the observation of the frequent occurrence of isozymes in wild populations.
Just to quibble a bit: Genetic drift is “prevalent” in all species that do not have an infinite number of individuals in their populations (which is of course all species). Kimura’s important advocacy was for the prevalence of mutations that are neutral, enough of them that they explain most genetic polymorphism and most nucleotide substitution. By the way, in their textbook in 1970 Crow and Kimura tried to establish “random genetic drift” as the general term for genetic drift, as “drift” is also used in physics for some nonrandom changes.
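The point that drift operates in any finite population can be illustrated with a minimal Wright–Fisher sketch (all parameter values here are illustrative, not taken from anything above): a neutral allele’s frequency wanders by binomial sampling alone, with no selection, and is eventually fixed or lost purely by chance.

```python
import random

def wright_fisher(n_individuals, p0, generations, seed=0):
    """Track a neutral allele's frequency under pure binomial resampling."""
    rng = random.Random(seed)
    p = p0
    trajectory = [p]
    for _ in range(generations):
        # Each of the 2N gene copies is drawn at random from the previous generation.
        copies = sum(1 for _ in range(2 * n_individuals) if rng.random() < p)
        p = copies / (2 * n_individuals)
        trajectory.append(p)
        if p in (0.0, 1.0):  # allele lost or fixed by chance alone
            break
    return trajectory

traj = wright_fisher(n_individuals=50, p0=0.5, generations=10_000)
print(len(traj) - 1, traj[-1])  # generations elapsed, final frequency
```

With N = 50 the allele almost always absorbs (hits 0 or 1) within a few hundred generations, despite starting at 50% and experiencing no selection at all.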
@Joe_Felsenstein do you dispute any major points in this narrative?
Depends on what is a “major point”. It is a long (and not badly explained) tour of neutral theory and issues about gene duplication and loss, in Paying Tribute mode to Chase Nelson’s late thesis advisor. There are a number of points where I would differ. Saying that
Darwinism asserts that natural selection is the driving force of evolutionary change. It is the claim of the neutral theory, on the other hand, that the majority of evolutionary change is due to chance.
will mislead many readers. Change of gene frequencies may be mostly due to genetic drift, but adaptation is mostly due to natural selection. Nelson’s and Hughes’s argument also seems to say that there is a net decrease of genes owing to gene loss, not compensated by gain of genes. Some time I need to check that by seeing whether life went completely extinct in the Precambrian.
It’s also lousy language. People make claims. Theories make predictions. “-isms” are ideologies, not theories.
I think there is an important opportunity with neutral theory. It seems that creationists in every camp agree that neutral evolution is real. We can observe it in the lab and in both simple and complex simulations, and it is disconnected from debates about how innovation arises. At the same time, this is where some of the strongest evidence of common descent can be found. Explaining this well could do a lot to create understanding.
“Creationists in every camp agree that neutral evolution is real”? I suggest that this is wrong. Most creationists (and ID advocates) are at great pains to deny that most of the genome evolves neutrally. See, for example, Jonathan Wells’s book The Myth of Junk DNA (2011). They have been enormously encouraged by the ENCODE Consortium’s leadership’s announcement that 80% of the human genome is “functional”, but seem not to have noticed that the same leadership subsequently revised that figure drastically downwards. Explaining this well has been tried (most notably at Larry Moran’s blog Sandwalk) and it has made no dent in creationists’ opinions about junk DNA. (And, alas, there are many molecular biologists who are equally sure that the great majority of the genome will be found to have “function” and who have also averted their eyes from those explanations – in spite of almost all molecular evolutionists trying to point out that evidence.)
I agree with Joe. Creationists (and IDers) tend to reject neutral evolution, first because it implies that God would create messy genomes, and creationists also because it’s evolution. Creationists often attack all evidence for evolution of any sort, including the microevolution that many of them claim to accept. Need I mention Biston betularia?
I have sort of a related general question. When biologists talk about “junk DNA” vs. coding regions (not sure those are mutually exclusive, but bear with me), is it generally believed that “junk DNA” was at some point “functional”? In other words, can we think of the genome as a large pot of “raw materials” from which some percentage is taken to form functional sequences subject to natural selection, or would it be more like various parts of the genome dynamically (over long stretches of time, I guess) moving between coding and non-coding, between functional and “junk”? Or perhaps I’m nowhere in the ballpark.
Coming from a creationist background I have tended to assume a more “everything has a purpose, and that purpose doesn’t change” (mis)conception about the genome. I’d like to develop a better intuition about it.
Yes, ultimately all DNA, whether nonfunctional today or not, derives from something that used to do something useful for some sort of “organism” (loosely including a virus or selfish element as an organism here).
In other words, can we think of the genome as a large pot of “raw materials” from which some percentage is taken to form functional sequences subject to natural selection, or would it be more like various parts of the genome dynamically (over long stretches of time, I guess) moving between coding and non-coding, between functional and “junk”? Or perhaps I’m nowhere in the ballpark.
Seems correct to me, and those two are not mutually contradictory. A large portion of the genome can be considered (not to be confused with having been “selected to function as”) a large pot of genetic raw material that sometimes mutates to become functional over deep time, while other parts become nonfunctional junk as they deteriorate through deleterious mutations, or simply stop being beneficial and so are no longer kept functional by natural selection.
The proper comparison would be junk DNA vs functional DNA. There are functional non-coding regions, so it would be incorrect to contrast junk DNA with coding DNA. Functional DNA is usually defined as a region where deleterious mutations can destroy something the cell needs.
Junk DNA can have several sources. Pseudogenes are vestigial DNA derived from once-functional DNA; this can be due to gene duplication or changes in environment. Junk DNA can also have function of a sort, as in the case of transposons, which carry DNA capable of copying itself and inserting elsewhere in the genome but lack any function the cell needs. A big chunk of the human genome is filled with active and defunct transposons. Other junk DNA is simply random bits of DNA that have been copied or recombined.
I would suspect that the vast majority of newly functional DNA comes from gene duplication or some other recombination of functional DNA. This is because stop codons occur quite often in random DNA sequences, which results in relatively short peptides. Recombination of functional DNA will tend to lack premature stop codons and result in longer proteins, which have a higher chance of having function. It is also possible for random DNA close to genes to acquire promoter activity or to function as a non-coding RNA molecule, which could change gene expression or affect other genes.
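The stop-codon point can be made concrete with a back-of-the-envelope sketch. Three of the 64 codons are stops, so under the simplifying assumption of uniform, independent bases, a random reading frame is expected to run about 61/3 ≈ 20 codons before hitting one:

```python
import random

STOPS = {"TAA", "TAG", "TGA"}  # the 3 stop codons out of 64

def random_orf_length(rng):
    """Count codons read before the first stop in a random uniform sequence."""
    length = 0
    while "".join(rng.choice("ACGT") for _ in range(3)) not in STOPS:
        length += 1
    return length

rng = random.Random(1)
trials = [random_orf_length(rng) for _ in range(20_000)]
mean = sum(trials) / len(trials)
# Geometric expectation: 61/3 ≈ 20.3 codons before a stop
print(round(mean, 1))
```

A ~20-codon open reading frame yields only a very short peptide, which is the reason given above that random sequence rarely supplies long new proteins directly.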
Needless to say, there is a lot to consider. As a simple rule, if a segment of DNA is accumulating mutations consistent with neutral drift, then that is strong evidence that it lacks function in that genome and in that lineage. Conservation of sequence is by far the best evidence we have for functional DNA, but it isn’t perfect.
Added in edit:
The Ship of Theseus is applicable in this case. If you replace each piece of a ship one by one until the entire ship is made of new material, at what point did it become a new ship? The same would apply to some junk DNA. If you keep mutating a once functional piece of DNA, at what point does it become random DNA?
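As a toy illustration of that question (purely illustrative parameters, not a model of any real genome): repeatedly substituting random bases drives identity to the original sequence down toward the ~25% expected between two unrelated sequences, at which point nothing recognizable of the “original ship” remains.

```python
import random

def identity_after_mutations(length=1000, rounds=5000, seed=2):
    """Randomly substitute bases, then report fractional identity to the start."""
    rng = random.Random(seed)
    original = [rng.choice("ACGT") for _ in range(length)]
    seq = original[:]
    for _ in range(rounds):
        i = rng.randrange(length)
        seq[i] = rng.choice("ACGT")  # may re-draw the same base
    matches = sum(a == b for a, b in zip(original, seq))
    return matches / length

ident = identity_after_mutations()
print(ident)  # settles near 0.25, the random-sequence baseline
```

After an average of five hits per site, identity sits near the 25% baseline: at that point there is no principled way to call the sequence “the same DNA” rather than random DNA.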
Those statements can both be true.
Gauger, Sanford, Carter, Rana, Roberts, and Ross all propound neutral evolution. All references to mitochondrial Eve and Y-chromosomal Adam imply neutral evolution too.
So do Axe, Meyer, Behe, Wells…indeed pretty much every ID theorist of my acquaintance.
What they deny is that neutral evolution is constructive, in the sense of “constructive neutral evolution,” as articulated by W. Ford Doolittle, Michael Lynch, and many others.
Rupe and Sanford’s “Haldane’s Ratchet” model treats near-neutral deleterious mutations as Kimura did.
They show how bad genes can end up fixed in all individuals of a population. They computationally confirm what Ohta and Kimura did the old-fashioned way, with pencil and paper (so to speak).
Previous analyses have focused exclusively on beneficial mutations. When deleterious mutations were included in our simulations, using a realistic ratio of beneficial to deleterious mutation rate, deleterious fixations vastly outnumbered beneficial fixations. Because of this, the net effect of mutation fixation should clearly create a ratchet-type mechanism which should cause continuous loss of information and decline in the size of the functional genome. We name this phenomenon “Haldane’s Ratchet”.
Their results are consistent with Ohta and Kimura’s formula:
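The formula itself did not survive above (presumably it was an image in the original post). For reference, the standard diffusion result usually invoked in this context — stated here as general background, not quoted from Rupe and Sanford — is Kimura’s fixation probability for a mutant with selection coefficient $s$ starting at frequency $p$:

```latex
u(p) = \frac{1 - e^{-4 N_e s p}}{1 - e^{-4 N_e s}},
\qquad p = \frac{1}{2N} \ \text{for a new mutant},
```

which reduces to the neutral value $u = 1/(2N)$ as $s \to 0$. Mutations with $|4 N_e s| \ll 1$ therefore fix at nearly the neutral rate even when slightly deleterious, which is the “effectively neutral” range that Ohta emphasized.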
So, independent of Rupe, Sanford, and ReMine, if one is willing to invoke Ohta and Kimura, one will see the same problem.
The problem of near neutrals led Kondrashov (who worked at the NIH) to ask, “Why have we not died 100 times over?”
Thus, if the genome size, G, in nucleotides substantially exceeds the Ne of the whole species, there is a dangerous range of selection coefficients, 1/G < s < 1/(4Ne). Mutations with s within this range are neutral enough to accumulate almost freely, but are still deleterious enough to make an impact at the level of the whole genome. In many vertebrates Ne ≈ 10^4, while G ≈ 10^9, so that the dangerous range includes more than four orders of magnitude. If substitutions at 10% of all nucleotide sites have selection coefficients within this range with the mean 10^-6, an average individual carries approximately 100 lethal equivalents. Some data suggest that a substantial fraction of nucleotides typical to a species may, indeed, be suboptimal. When selection acts on different mutations independently, this implies too high a mutation load. This paradox cannot be resolved by invoking beneficial mutations or environmental fluctuations.
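The arithmetic in the quoted abstract can be checked directly (the values below are the ones the abstract itself gives):

```python
import math

Ne = 1e4   # effective population size (vertebrate-scale, per the abstract)
G = 1e9    # genome size in nucleotides

lower = 1 / G         # below this, a mutation is invisible even genome-wide
upper = 1 / (4 * Ne)  # above this, selection removes the mutation efficiently

span_orders = math.log10(upper / lower)
print(lower, upper, round(span_orders, 1))  # 1e-09 2.5e-05 4.4
```

The dangerous range runs from 10^-9 to 2.5 × 10^-5, i.e. about 4.4 orders of magnitude, matching the abstract’s “more than four orders of magnitude.”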
Rupe and Sanford haven’t said anything that isn’t quietly acknowledged by others as a real problem. They just use the phrase “Haldane’s Ratchet”, but the name doesn’t matter, whatever one calls it.
It’s interesting that you left off the last sentence of the abstract:
Sure, it reminds me of Behe’s first rule of adaptive evolution. Functional compromise ends up being “beneficial.” So creatures lose genes and organs, but the losses are “beneficial”. Problem solved!
That doesn’t relate to the material that was quoted. Soft selection means that multiple deleterious mutations can combine in a way that allows natural selection to select against them. Epistasis is a known and real thing where a deleterious mutation in a protein stops being deleterious due to another mutation elsewhere in the protein.
Thanks for your comment, but I don’t think soft selection or synergistic epistasis has been shown to arrest growing functional compromise in lab and field experiments. There is an equivocation going on as to what is being improved and in what context.
“Deleterious” and “beneficial” are misleading terms with respect to functional (functionally integrated) vs. non-functional.
These population genetic “solutions” to evolutionary conundrums don’t go to the heart of the real problem, and that is the problem of functional compromise – Spiegelman’s monster is a metaphor for what happens in real organisms. This has been confirmed in direct observations in the lab and field. We can call this Reductive Evolution in the general sense.
If 99% of the beneficials are functional compromises, and most of the deleterious mutations are functional compromises, this doesn’t bode well for evolutionary theory. Population genetics ends up not answering real questions. To quote one of our very own:
“Mathematics vs. Evolution,” Science, 17 Nov 1989, vol. 246, no. 4932 (via ProQuest):
many evolutionists will fail to find the clear and simple messages that population genetics theory once seemed to promise