I realize that the population explosion is a recent phenomenon, and my understanding is that this means the time to fixation (and the probability of fixation) for already-existing alleles reflects an effective population size considerably smaller than the census population, so fixation takes considerably less than 4× the census population size in generations, because those deleterious alleles are already present in a great many members of the existing population as a result of common ancestry. But I don’t think that makes a difference for new mutations which arise now. For one of those, the effective population size would be much closer to the census population, because in order for such an allele to reach fixation it will have to spread to the rest of the population, and the existing degree of common ancestry won’t help. Isn’t that right?
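The two quantities being contrasted here can be sanity-checked against the standard diploid Wright-Fisher results: a brand-new neutral mutation starts as one copy among 2N alleles, so it fixes with probability 1/(2N), and conditional on fixing it takes about 4Ne generations. A minimal sketch (the N and Ne values are illustrative, not real human figures):

```python
# Sanity check using standard diploid Wright-Fisher results: a brand-new
# neutral mutation is one copy among 2N alleles, so P(fix) = 1/(2N);
# conditional on fixing, it takes ~4*Ne generations.
# N and Ne here are illustrative assumptions, not real human figures.

def neutral_fixation(N, Ne):
    p_fix = 1.0 / (2 * N)  # fixation probability of a single new copy
    t_fix = 4 * Ne         # expected generations to fixation, given fixation
    return p_fix, t_fix

p, t = neutral_fixation(N=10_000, Ne=10_000)
print(f"P(fix) = {p:.2e}, E[time to fix | fix] ~ {t} generations")
```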
I think this is a really interesting point, and I am going to do a little research (amateur research, not the real stuff) before I respond substantively.
Could you explain why you think it’s not a viable alternative?
As for me, I can explain why IMO it’s a viable one. First, it comes from the observation that there is no example of an organism with complex ontogeny having a low ratio of non-protein-coding to protein-coding DNA (let’s call this ratio R), whereas there are many examples of organisms with simpler ontogeny having a low R. This suggests that complex ontogeny requires a high R. Secondly, isn’t it perfectly logical to hypothesize that organisms with complex ontogeny requiring myriad cell fate decisions, such as humans, also require much more regulatory sequence than organisms requiring far fewer cell fate decisions, such as C. elegans?
I am not sure, but isn’t it the case that the fact that onions have 5 times more DNA than humans is mostly due to ploidy? In that case, the ratio R of non-protein-coding to protein-coding DNA would be higher in humans than in onions.
In any case, as I said to @John_Harshman above, it seems that all animals with complex morphology/ontogeny have a high R, suggesting that a high R is required for the implementation of complex ontogeny. This doesn’t mean that organisms with simpler ontogeny necessarily have a low R, amoebae being possibly a case in point.
This pufferfish, with its 400 Mbp genome, laughs at that statement:
Sure, it is logical to posit that more regulatory programming is required to specify more diverse cellular behavior, and that increased programming would require more storage space.
But notice that this merely predicts a requirement in terms of relative size (A requires more than B); it doesn’t predict an absolute value (A requires 3 Gbp whereas B only requires 1 Gbp), and it completely fails to explain the observed variance within and between different clades.
If all your genome were functional, most mutations to it would be deleterious. You have somewhere around 150 new mutations per genome per generation. The mutational load would very quickly become unsustainable. That’s the “nonviable” part. Other than that, there’s good data showing that most of the genome is not conserved and that genome size has no particular correlation with organismal complexity.
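The arithmetic behind “very quickly unsustainable” can be sketched with the classic Haldane/Kimura load result: at mutation-selection equilibrium, mean population fitness is roughly e^(-U), where U is the number of deleterious mutations per genome per generation. The 150 figure is from the post; treating either all or 10% of those as deleterious is my illustrative assumption:

```python
import math

# Back-of-the-envelope mutational load: at mutation-selection equilibrium,
# mean population fitness ~ exp(-U), where U is the number of deleterious
# mutations per genome per generation (Haldane/Kimura). 150 mutations per
# generation is from the post; the deleterious fractions are illustrative.

def mean_fitness(U):
    return math.exp(-U)

for frac_deleterious in (1.0, 0.1):
    U = 150 * frac_deleterious
    print(f"U = {U:5.1f}  ->  equilibrium mean fitness ~ {mean_fitness(U):.3g}")
```

Even with only 10% of those mutations deleterious, mean fitness collapses to a level no plausible reproductive excess could compensate for.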
Not true at all. Fugu, for example, have genomes 1/8 the size of yours with more or less the same complexity. Birds, almost universally, have genomes around 2/3 the size of mammal genomes. There really is no correlation. The main thing driving this supposed trend is that prokaryotes have small genomes while eukaryotes have large ones. But multicellular eukaryotes have genomes no larger than do unicellular eukaryotes.
Could be logical, but the data don’t back it up. It may in fact be that humans have more regulatory sequences than nematodes, but even if so, they don’t take up much of the genome. Perhaps as much as 8% of your genome is regulatory, 2% is protein-coding sequence, another percent or so is structural RNAs, and the remaining 90% or so is junk.
Somewhere, perhaps at second or third hand, you have been exposed to the writings of John Mattick, who originated the bogus tale about the correlation between organismal complexity and the amount of non-coding DNA. Again, this correlation does not exist and relies on egregiously cherry-picked data.
As for the term “dog’s ass”, specifically, it comes from a notoriously bad figure in one of Mattick’s articles; here:
It is not, though that’s an excuse that’s been offered on occasion by Mattick. You have been grossly misinformed.
I would be interested to hear where you think the “soft” limit lies, because I can think of a few places, but I still can’t understand why there isn’t also a “hard” limit. Every deleterious mutation must be deleterious with respect to one or more traits of an organism, and there are only so many traits. As those deleterious mutations pile up with respect to those traits (which affect fitness by definition), each decrease in fitness must inevitably have increasing effects as it degrades that trait more and more. It doesn’t matter what the trait is: at some point, if it affects survival, being bad enough at it is going to kill you. (Though you will hit the limit well before that, because all that is necessary is for the deleterious mutation’s effect on the trait to affect your fitness by more than s = -1/(4Ne); it only has to reach the point where it affects your fitness by a tiny amount, and the mutation stops being nearly neutral.)
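The threshold being invoked here can be illustrated with Kimura’s diffusion approximation for the fixation probability of a new mutation (initial frequency 1/(2N), taking Ne = N): while 4Ne|s| is small the allele drifts essentially like a neutral one, and once 4Ne|s| is large its fixation probability collapses. A sketch with illustrative numbers:

```python
import math

# Kimura's diffusion approximation for the fixation probability of a new
# mutation at initial frequency 1/(2N), taking Ne = N. While 4*N*|s| << 1
# the allele behaves almost neutrally; once 4*N*|s| >> 1 its fixation
# probability collapses. Numbers are illustrative.

def p_fix(s, N):
    if s == 0:
        return 1.0 / (2 * N)  # neutral case
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 10_000
for s in (0.0, -1 / (4 * N), -1 / 400):
    print(f"s = {s:+.2e}  ->  P(fix) = {p_fix(s, N):.3e}")
```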
The Time Bomb Model
Though I do understand that you are concerned with the “lobster pot” model and not the “time bomb” model of GE, both arguments are made by the people I will be debating, so I want to address it as well. Maybe you will rip it apart anyway, and I want to test it in your harsh crucible (see what I did there?).
In terms of the “time bomb” model, the paper I referred to earlier,
suggests there is a mathematical model, developed by Crow and Kimura and expanded upon in that paper, for calculating the mutation-selection balance (and presumably s coefficients for traits dependent on multiple alleles). But as a dirty, unworthy non-scientist, who is also too cheap to buy a subscription for this purpose, I am not able to access that information, so I am going to have to shoot from the hip. I acknowledge in advance that I am full of it.
I nonetheless propose that for the “time bomb” version of GE, this model, which I have not reviewed but nonetheless pretend to understand in detail, answers this straw man that I have made of your objection and does provide a hard limit (for this case that you have not put forward): there will be mutation-selection balance in respect of complex genetic diseases that depend on multiple alleles. That is, genetic diseases, even when dependent on multiple recessive alleles that are individually effectively neutral, still reach mutation-selection balance.
So there, scientist! (or, um… creationists…)
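For what it’s worth, the textbook single-locus version of Crow and Kimura’s mutation-selection balance (not the multi-allele extension in the paper, which I also haven’t seen) looks like this; the u, s, and h values are purely illustrative:

```python
import math

# Textbook single-locus mutation-selection balance (Crow & Kimura):
# equilibrium frequency q of a deleterious allele with per-locus mutation
# rate u, selection coefficient s, and dominance h. Values are illustrative.

def msb_freq(u, s, h):
    if h > 0:
        return u / (h * s)   # (partially) dominant: q ~ u/(h*s)
    return math.sqrt(u / s)  # fully recessive: q ~ sqrt(u/s)

u, s = 1e-6, 0.01
print("fully recessive (h = 0):       q ~", msb_freq(u, s, 0))
print("partially dominant (h = 0.25): q ~", msb_freq(u, s, 0.25))
```

Note how even fully recessive alleles settle at a finite equilibrium frequency rather than accumulating without bound.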
The Lobster Pot Model
In the lobster pot model, I don’t have any papers to rely on (YET! Someone smarter than me, like @dsterncardinale, might be aware of something), but I do think there should still be a limit.
If the nearly neutral deleterious mutations are mostly dominant (which, as I understand it, is not what the theory or the observations support (https://www.pnas.org/doi/10.1073/pnas.1424949112); that is, most deleterious mutations are recessive), then the argument becomes fairly simple. Because two people are likely to be carrying mostly different alleles that deleteriously affect the same trait, all of the deleterious alleles passed on from both sides are going to be expressed in the offspring, and so there will be exponential growth in the number of deleterious alleles being expressed in each subsequent generation that inherits deleterious mutations with respect to any given trait. This will cause significant variation in fitness across the population with respect to those traits. The exponential growth in the variance in fitness between the individuals who carry those individually neutral but additive deleterious mutations and those who do not quickly exposes those alleles to selection as a group.
With recessive mutations, things are more complicated, and the limit is going to be further away, but I still think it exists.
As I said before with the example of the energy budget, wasting a few ATP when you have millions to spare is nearly neutral; wasting any when you have little or none to spare is not. Or, as @Rumraket said in his post a few days ago, where mutations affect the stability of a protein’s structure, the degree to which such a decrease in function affects an organism’s fitness is going to depend upon how far the protein is from its optimal function. As more and more of the function is compromised, each decrease has the potential to take the protein down to a level where it is unable to fully fulfill all of its functions. Where that begins to occur, the incremental fitness effect of each change is going to start to increase, to the point where an individual step will result in a change in fitness that is large enough for selection to act on, even though the same mutation would have been effectively neutral otherwise.
To put it in terms a non-scientist would understand: as a resource becomes scarcer, its value increases. Ten dollars is nothing to a millionaire, but it could mean the difference between life and death to a homeless man. One will notice the loss far more quickly than the other, and the same is true for deleterious mutations.
Another way to think of it is as follows:
In his most recent video, Zach Hancock points out that the scientific literature suggests that the further you are from an adaptive peak, the greater the effect on fitness any beneficial mutation will have. The corollary should also be true: where a gene that contributes to a beneficial trait experiences a deleterious mutation, the effect of that deleterious mutation should, at least generally, be greater the further you stray from the adaptive peak. If that is true, then each incremental step taken away from such an adaptive peak is successively more likely, on its own, to have a selective effect beyond the threshold of s = -1/(4Ne). At that point, selection will start to see the mutation and purifying selection will begin to occur. Thus, even in the worst-case scenario, there will be a “hard limit” to GE.
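That corollary can be illustrated with a toy Gaussian fitness function w(z) = exp(-z²/2): the relative fitness cost of a deleterious step of fixed size grows with the current distance from the optimum, so the same mutation that is effectively neutral near the peak becomes selectable far from it. The numbers are purely illustrative:

```python
import math

# Toy Gaussian fitness landscape w(z) = exp(-z^2 / 2): the relative fitness
# lost to a deleterious step of fixed size grows with the distance z from
# the optimum, so the same step is nearly neutral at the peak but strongly
# selected against far from it. All numbers are illustrative.

def fitness(z):
    return math.exp(-z ** 2 / 2)

delta = 0.1  # size of one deleterious step away from the optimum
for z in (0.0, 0.5, 1.0, 2.0):
    ds = fitness(z + delta) / fitness(z) - 1  # relative fitness change
    print(f"distance {z:.1f}: one step changes relative fitness by {ds:+.4f}")
```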
You could equally say that as drift takes a population away from an adaptive peak, even in the event that purifying selection does impose a limit on that decline, the departure from the peak also increases the room for compensatory mutations to move the population back towards it, which imposes its own limit. Notably, such compensatory mutations are going to be strongly selected and will fix very quickly. I don’t think this is quite the same thing as what I said above, but it is perhaps yet another mechanism by which practical realities limit the amount of damage that GE can do (I acknowledge, with thanks, the work of @dsterncardinale, who has explained this idea elsewhere more eloquently than I have here).
I look forward to your insight into why this isn’t so, and if you could provide me with some of the examples you alluded to in order to help me understand the concept, I would appreciate it.
I see possibly two problems with your fugu example. First, although I’m not sure, or at least I’m unable to quantify it, I wouldn’t be surprised if the ontogeny of fugu was significantly less complex than that of mammals, for example, therefore requiring less regulatory DNA. But even if that turned out not to be true, the fraction of non-protein-coding DNA in fugu would still be quite high, probably around 85%, which would probably be enough for a fair amount of ontogenetic complexity.
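That 85% figure is easy to check under one assumption (mine, not from the posts above): that fugu carry roughly the same absolute amount of protein-coding DNA as humans, i.e. about 2% of a ~3,100 Mbp genome:

```python
# Checking the "around 85% non-coding in fugu" figure. Assumptions (mine,
# not from the posts): human genome ~3,100 Mbp, ~2% of it protein-coding,
# and fugu carrying roughly the same absolute amount of coding DNA.

human_genome = 3100                  # Mbp
coding = 0.02 * human_genome         # ~62 Mbp of protein-coding DNA
fugu_genome = 400                    # Mbp, from the post

noncoding_frac = 1 - coding / fugu_genome
R = (fugu_genome - coding) / coding  # non-coding : coding ratio
print(f"fugu non-coding fraction ~ {noncoding_frac:.1%}, R ~ {R:.1f}")
```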
It’s the nature of a soft limit that it doesn’t lie anywhere. What’s the border between red and yellow?
True, but selection happens long before that point. You persist in thinking in black and white, all or nothing. This genetic entropy thing doesn’t have to kill anyone. It just has to reduce expected reproductive success below replacement level. And you also persist in thinking about absolute fitness rather than relative fitness. Natural selection acts on the latter.
Again you persist in this black-and-white scenario. That just has nothing to do with the real world, even less than genetic entropy does.
That’s because you naturally consider your species to be the pinnacle of evolution. Since fugu are not human, they must therefore be less complex. That way lies the dog’s ass plot.
So you have just admitted that at least 80% of the human genome is junk, and that there is no particular correlation between genome size and complexity.
Oh, I should point out that if the human genome were reduced to the part we consider functional (the conserved bits), the genome would still be 80% non-coding. Functional non-coding sequences far outnumber functional coding sequences. This is no help for your claims.
I don’t really see why the fact that most of the genome is not conserved would be evidence that most of the genome is junk. It could be that these genomic differences are foundational to the morphological differences between organisms.
I see definitely that what you explicitly claimed was an observation:
…was not an observation at all. I don’t see a linguistic reason for you to do that, as “observation” is a true French/English cognate.
So why did you present hearsay and/or assumptions as observations? Do you truly not realize that this was merely an empirical prediction of your ID hypothesis, which is clearly falsified by real observations?
There’s a long distance between your claim of an observation and your current “I wouldn’t be surprised,” no?
And no, I don’t see any reason why there would be significant variation in ontogenetic complexity among vertebrates at all, but then unlike you, I have studied developmental biology.
“Probably” is not an observation. Why not make real observations before pontificating?
And you are employing the standard creationist ruse of conflating noncoding DNA with nonfunctional DNA, when the latter is a subset of the former.
Because the lack of conservation is not only directly observed between (entre) species, it also is obvious within (à moins de) species, so your reasoning is once again based on objectively false assumptions.
Sure, you can believe that if you’re a creationist who thinks that each species is separately created. But if you accept any degree of common descent among species, that claim is untenable.
It’s close enough to an observation. All we have to do is assume that fugu have around the same amount of protein-coding DNA as we do, which is a pretty safe bet. Nor is he conflating noncoding DNA with nonfunctional DNA, because he thinks that there is no nonfunctional DNA. Though in light of the fugu genome he’s been walking that back.
That’s a straightforward entailment of the fact that functional regions can’t tolerate an unlimited amount of mutation while remaining functional. You seem to be saying something that conflicts with so much other ID apologetics concerning genetics: effectively, that the amount of change the genetic material can undergo while remaining functional is basically without limit.
No matter how many exons of a protein-coding gene are deleted, how many domains are cut in half, reversed, shuffled, or truncated in alternatively spliced proteins, no matter how many frameshifts or premature stop codons it suffers, no matter how mutated beyond recognition some obscure binding spot within endlessly mutated copies of GAG, POL, and ENV we find, you’re basically just assuming it can tolerate all that and yet still remain functional.
Yet in other threads, discussing related matters about the relationship between the size of sequence space and the fraction of it that is functional, the slow divergence of duplicate genes, and the meandering of the random walk in sequence space over geological time, you have no problem turning around and stating that you think the functional parts are extremely intolerant of mutation, because only an unfathomably infinitesimal fraction of sequence space is functional, and sequences would break down long before a new function is found.
I don’t think you have ever explicitly considered the implied contradiction here. I don’t think you have any real numbers, and only by keeping these ideas at a vague and verbal level can you sustain them in their compartmentalized states.
That statement ignores the actual data on the genetic variance between species, both in terms of genome size differences and the degree of conservation. And then there’s the fact that there is just no good evidence that it takes that much change in the genetic material to produce the relatively small morphological differences we see between many species.