A Genetic Entropy Analogy

I have been doing a little bit of debating in the YEC community recently, and one of the people that I have taken on is Donny B from SFT.

I am also scheduling a follow-up debate with Donny, and it is for that debate that I am seeking assistance.

Donny is fond of the analogy of a classroom, where even if one eliminates half of the students, if the students keep getting dumber every year, then eventually the whole class is going to fail.

We know that this is not the case, but I am thinking about an example with simple math that can show him how this is wrong.

I know that the math in the example below is not being done “correctly,” in that I am taking shortcuts rather than calculating things properly, but what I do not know is whether those shortcuts get close enough to the right answer to serve the purpose I am trying to use the example for.

If they aren’t, would anybody like to help me massage this example so that it is valid, and/or give me some pointers on the easiest way to do these calculations? I have watched a number of videos from Zach Hancock, so I understand how to do the calculations by hand, but that is going to take a long time, and I do not know how to program a model to do them for me.

At any rate, here is the analogy:

Donny, I have thought about how to tweak your classroom analogy to help explain to you what’s going on. I’m going to use fairly big, fairly round numbers for this example so that we can do the math in our heads, but we’ll talk about what happens when you play with the values at the end.

I want you to imagine that there is a class at a very prestigious school, and it has 10,000 students. The students have different abilities to get different test scores on a scale from 1 to 100, and let’s assume the class average is 55%, and that everybody below 50% fails the class. That’s our purifying selection.

For the sake of simplicity, let’s just assume that the distribution of test scores for these students is a standard quintile distribution. So 20% of students score between 30% and 40%, 20% score between 40% and 50%, 20% score between 50% and 60%, 20% score between 60% and 70%, and 20% score between 70% and 80%.

They all take the test, and 40% of them fail (that’s our purifying event). They do not get to graduate. The others do.

Now this school has a strange rule. If you graduate, you get to pick a friend to attend the school the next year. If there are extra seats left in the class, then you can pick again, in order of your test scores until we have 10,000 students again for the next year. (that represents our relative fitness because the fittest individuals have the most offspring, up to the ecological carrying capacity of the environment)

Let’s use the same quintile distribution to determine how smart the friends each kid picks are. So each kid has 5 friends: one is 10% smarter than the kid, one is the same, one is 10% dumber, one is 20% dumber, and one is 30% dumber. (If the kid gets to pick twice, she chooses from a new set of 5 friends the second time.)

So on average, each student’s friends are 10% dumber than they are, and so, on average, the whole class gets 10% dumber each year. That represents our deleterious mutations.

So you would think the class is slowly going to get dumber over time, until eventually they all fail out, right?

But that’s not what happens.

In the first year, everyone takes the test, and 40% fail. Then the class is going to choose their replacements to fill the class for the next year, but before they do, what’s the class average? It was 55% before the test, but you have eliminated the bottom 40% of test takers, so the distribution now is 33% 50-60, 33% 60-70, 33% 70-80, so now the class average of those who get to pick a friend is actually 65%.

Then, everybody gets 10% dumber, bringing the class average down to 55%, right? Not so fast. We have to get back up to 10,000 students, so the top 2 quintiles get to pick again. Now the class average of the students making choices, per choice, looks like this:

20% 50-60, 40% 60-70, 40% 70-80. So the average of all the choosers is now about 70%.
Then the class gets 10% dumber, as we pick the crop of new students.

So 10% 40-50, 20% 50-60, 40% 60-70, 30% 70-80

The class average is now 60%.

This time, 10% are going to fail.

The top 10% are going to get to pick twice to fill the class, and the numbers are going to look about the same again, and they will stay that way, forever IN THIS EXAMPLE, using simplified quintiles.

If you want to get more complicated, use precise mathematics, and model this, then the individual students’ scores are going to move around a little more inside their quintiles, and the point at which it stabilizes might be a bit lower, as we may need to have more kids picking more than one friend. But this gives you an idea of how this works and why it happens.
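For anyone who wants to check my shortcut bookkeeping, here is a rough simulation sketch of the rules above. This is my own quick translation, and several details are my assumptions: the starting scores are spread uniformly from 30 to 80 (mean around 55, like the quintile setup), friend scores are clamped to 0-100, and extra picks go round-robin down the score order.

```python
import random

random.seed(42)  # deterministic run for checking

def simulate(years=30, class_size=10_000, pass_mark=50):
    # Start with scores spread uniformly from 30 to 80 (mean ~55),
    # roughly matching the quintile setup in the analogy.
    scores = [random.uniform(30, 80) for _ in range(class_size)]
    offsets = [+10, 0, -10, -20, -30]  # each kid's five friends, mean -10
    averages = []
    for _ in range(years):
        # Purifying selection: everyone below the pass mark fails.
        graduates = sorted((s for s in scores if s >= pass_mark), reverse=True)
        if not graduates:
            break  # everyone failed; the class "goes extinct"
        new_class = []
        i = 0
        # Everyone picks once in score order; if seats remain, the top
        # scorers pick again (round-robin) until the class is full.
        while len(new_class) < class_size:
            picker = graduates[i % len(graduates)]
            friend = picker + random.choice(offsets)
            new_class.append(min(max(friend, 0.0), 100.0))
            i += 1
        scores = new_class
        averages.append(sum(scores) / class_size)
    return averages

averages = simulate()
```

When I run this with these numbers, the class average settles into a stable band rather than declining to zero, which is the balance point the analogy is trying to illustrate; changing the offsets or the pass mark moves that balance point around.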

If you use different numbers, you are going to get different rates of change and different balance points, and if you choose values that are extreme enough in one direction or the other, of course you can hit a limit where the class average is almost 100% all the time, or where everybody fails out super fast. But in general, this is the pattern you would expect to see for most reasonable numbers.

And it’s the pattern that we DO see in nature.

Thanks in advance for the help, scientists!

Oh, also, in the original debate, I think that I did a fairly good job of beating Donny over the head with the fact that genetic entropy is going to be impossible for a simple reason: even if each deleterious mutation is individually unselectable, once they reach a point epistatically where they have a selection effect, the last piece of the puzzle is going to have an s coefficient that is sufficiently large to be “seen” by evolution, if the population is large enough.

I would love any feedback the community cares to give about my performance.


I don’t think that’s actually true, or not necessarily. It depends on whether individuals differ enough for their differences to be selectable. If, for example, it takes 100 tiny deleterious mutations to produce a significant negative effect, but the mutation rate (and drift rate) is such that the population members differ by at most 80 such mutations, there will be no selection within the population; the absolute fitness of individuals in the population will just keep on decreasing with time, eventually resulting in extinction. Of course if the parameter values that result in genetic decay were at all common, species would be going extinct right and left, and few would still be around. But there’s no theoretical barrier to it.

Hi John, thanks for the response, I really appreciate the feedback. If you are correct, then I am confused (and in trouble for the next debate!).

However, would it not be the case that if it takes 100 mutations to cause a significant effect, then sure, 80 could build up (heck, 99 could build up), because even together they have no measurable effect given the population size, but that 100th mutation, which in your example has a catastrophic effect, is never going to reach fixation precisely because it has a catastrophic effect?

Essentially, if there are multiple alleles that together have a selection coefficient greater in magnitude than 1/Ne, then those alleles will never all reach fixation in the same population, because purifying selection will begin to remove those alleles at the point where their collective selection coefficient exceeds that threshold.

The simple example would be to look at how Ohta’s probability of fixation formula for nearly neutral mutations would deal with an allele that was strongly deleterious in the presence of one other allele which was already fixed in the population. In such a circumstance, the new allele would obviously have a near-zero probability of fixation.
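To make that concrete, here is the standard diffusion-approximation formula for the fixation probability of a new mutation (Kimura's result, which the nearly neutral framework builds on), as a small sketch. The parameter values are purely illustrative.

```python
import math

def fixation_prob(s, N):
    """Kimura's diffusion approximation for the probability that a new
    mutation with selection coefficient s fixes in a diploid population
    of effective size N (initial frequency 1 / (2N))."""
    if s == 0:
        return 1 / (2 * N)  # neutral case
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

neutral = fixation_prob(0, 1_000)                    # 1/(2N) = 5e-4
nearly_neutral = fixation_prob(-1e-4, 1_000)         # |s| < 1/(2N)
strongly_deleterious = fixation_prob(-0.01, 1_000)   # |s| >> 1/(2N)
```

With N = 1,000, an allele at s = -0.0001 (inside the nearly neutral zone) fixes at close to the neutral rate, while one at s = -0.01 has an essentially zero chance of ever fixing, which is the point about the "last piece of the puzzle" never reaching fixation.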

Likewise, where the second allele has not reached fixation, the selection coefficient for each is going to depend on the prevalence of the other allele in the population, and as one drifts toward fixation, purifying selection increases its pressure on the other until you reach mutation-selection equilibrium.

If you think I am missing something here, please let me know as this is exactly the kind of criticism that I was hoping for. Better to have it here than when I am debating a creationist!

Mistake

Bigger mistake.

They don’t. That’s why his analogy is dumb. He assumes that entropy is true to show it is true. It’s a dumb analogy.

It’s not your responsibility to engage with his dumb analogy, it’s his responsibility to demonstrate his analogy is valid. Which he can’t do, because genetic entropy isn’t a thing.

There is however a practical barrier to it, in that there is no class of strictly additive deleterious mutations capable of producing fitness effects in a biologically relevant number of generations.


That is essentially my statistical argument against GE. It’s common in ID/Creationism to present evolution as something that happens to individuals, which is wrong. Evolution happens in populations.


Ultimately I think the reason GE is false is because it requires the assumption that the DFE of mutations remains constant or nearly so. But this has been shown empirically to be false. Diminishing returns epistasis is a reality.

As an example consider the fitness-effects of mutations on many protein structures. Some proteins rely on certain structures to carry out their functions, and those structures have to have a certain degree of stability. If they’re too rigid they can’t bend and accommodate binding partners, if they’re too unstable they can misfold and degrade so the cell has to waste energy producing new proteins to replace the ones that constantly fall apart.

Some mutations alter the stability of protein structure up or down, and these have fitness effects that are proportional to their degree of effect on stability. GE basically posits that there must be a pool of mutations that can invisibly destabilize (by making them too strong or too weak) such cellular proteins all the way to where the cell stops functioning. But this is just not physically possible.

The more mutations occur that alter protein stability away from whatever the optimum is for the protein in question, the bigger the effect its loss of function has on organismal fitness. But when the protein is much too unstable (for example), many more mutations are possible that can strongly re-stabilize the protein.

Consider pairwise interactions between residues in different secondary structure elements in the protein. The more of these you disturb in a deleterious way, the more possibility you open up for restoring one by a mutation (so the probability of a beneficial mutation increases the further away from the optimum stability the protein is). Also, as you disturb more and more of these pairwise interactions, the more beneficial a restoration of a strong interaction becomes. Very near the fitness peak, mutations that strongly stabilize or destabilize the structure will take the protein too far away from its optimum level. However, if you weakly destabilize 10 such pairwise interactions by mutations that individually are invisible to selection, a single strong interaction can now in principle compensate for the 10 weak ones by taking the protein back near the optimum. So the probability of large-effect beneficials that fully compensate for the tiny deleterious ones has also increased.

This phenomenon is ubiquitous at the molecular level. It works for everything from enzyme activity, and binding affinity, to structural stability, and so on. To the organism that has only 1 copy of a beneficial gene but could use more, gaining a single duplicate can mean a large increase in fitness. To the organism that has 100 copies already, gaining one more can have an imperceptible effect.

What this shows is that as you move further down the fitness landscape, more mutations become beneficial, and their positive effects become larger. This is why the DFE cannot physically remain constant, and why GE is impossible in principle. Physics!
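A toy numerical sketch of that claim (my own one-dimensional construction; the Gaussian mutation effects and the specific stability values are arbitrary modeling choices, used only to illustrate the point):

```python
import random

random.seed(1)
OPTIMUM = 0.0  # optimal stability, in arbitrary units

def beneficial_fraction(current_stability, trials=100_000):
    """Fraction of random mutations that move stability closer to the
    optimum, with mutation effects drawn from a standard Gaussian."""
    hits = 0
    for _ in range(trials):
        effect = random.gauss(0.0, 1.0)  # random stability shift
        if abs(current_stability + effect - OPTIMUM) < abs(current_stability - OPTIMUM):
            hits += 1
    return hits / trials

near_peak = beneficial_fraction(0.5)      # protein close to optimal stability
far_from_peak = beneficial_fraction(5.0)  # badly destabilized protein
```

Running this, the badly destabilized protein sees roughly half of random mutations as beneficial, versus about a third for the near-optimal one. This is essentially the one-dimensional intuition behind Fisher's geometric model: the further you fall from the peak, the larger the fraction of mutations that point back up.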


You misunderstand. The hundredth mutation doesn’t have a catastrophic effect. The hundred mutations, taken together, have a small but selectable effect. But selection is relative. In this case, individuals with the 100 mutations would decrease in frequency relative to individuals with zero mutations. But that requires such an imbalance to exist in the population. If individuals differ by fewer than 100 of these mutations, there is no selection within the population. Again, it’s not the total number of deleterious mutations per individual that counts for selection, it’s the difference in that total among individuals. If that difference remains small as deleterious mutations continue to happen, the absolute fitnesses of individuals will continue to decrease while the relative fitnesses remain unchanged, invisible to selection. That’s the scenario the creationists are advocating, though they may not understand it.

One can come up with examples. For example, given that junk DNA has some fitness cost, why do we have so much of it? It’s because that cost is relative to that incurred by other members of the population. It’s possible that an individual with 90% junk would be at a reproductive disadvantage relative to an individual with no junk, but that individual doesn’t exist. The differences in quantity of junk in the population are on the order of a few thousand bases, and any new mutations are on the same order, so are invisible to selection. Of course, if there were indeed some massive cost to having 2.7 billion bases of useless DNA, our species (and all other mammal species) would be extinct by now, so apparently there is not. But the point is that natural selection would not save a species from genetic entropy if in fact it were a real thing.
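A toy model of that relative-versus-absolute distinction (my own sketch, not a quote of anyone's method; the per-mutation cost and the mutation counts per generation are arbitrary illustrative values):

```python
import random

random.seed(0)
POP, GENS = 1_000, 50
S_PER_MUTATION = 1e-4  # fitness cost per deleterious mutation (illustrative)

# Everyone accumulates mutations each generation at similar rates,
# so individuals never differ by very much.
loads = [0] * POP
for _ in range(GENS):
    loads = [n + random.choice([0, 1, 1, 2]) for n in loads]

# Absolute fitness keeps falling with total load...
absolute = [(1 - S_PER_MUTATION) ** n for n in loads]

# ...but selection only sees fitness relative to the best individual,
# so the load that everyone shares cancels out.
best = min(loads)
relative = [(1 - S_PER_MUTATION) ** (n - best) for n in loads]
```

After 50 generations, every individual's absolute fitness has declined, yet the relative fitnesses all sit within a fraction of a percent of each other, which is exactly the "invisible to selection" scenario described above.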

This is an excellent point actually, that I think independently busts the whole idea of GE wide open.

First, before I completely agree with you, I should say in fairness that it is my understanding that there is some evidence for the existence of additive deleterious mutations occurring in populations inasmuch as we see the effects of mutation load, a load that we can and do bear in our population.

(see for example Mutation-selection balance with multiple alleles - PubMed)

We could also point to specific examples of genetic diseases that require the mutation of multiple genes before the disease arises (for example, cystic fibrosis: [Analysis of the spectra of mutations and polymorphic loci of cystic fibrosis transmembrane conductance regulator in the population of Bashkortostan] - PubMed)

This having been said, you are correct in saying that these instances of additive deleterious mutations are not infinitely cumulative. Moreover, as the cystic fibrosis example demonstrates, alleles like this are not all going to reach fixation in a population, because they are sometimes going to affect fitness (i.e., in the children who are born where the alleles occur together and cause cystic fibrosis), and therefore purifying selection will reduce their prevalence.

Because most deleterious mutations are suspected to be recessive, and further because that recessive trait will only be expressed in circumstances where the multiple alleles co-exist in the individual, there can be a surprisingly high level of deleterious alleles floating around in the genome. However, as I mentioned above, it is my understanding that there is a limit imposed by selection, in that these alleles, while they will reach a certain level in the population, will never reach fixation.

So, while these deleterious mutations will impose a cost in terms of the number of offspring each person needs to have in order to maintain a stable population, it is a cost that populations can and do bear, assuming that all of these mutations are kicking around the population at the same time, and further assuming that the population is of sufficient size to make selection efficient enough to “see” the deleterious mutations.

Now I’m ready to agree with you… one rejoinder to the argument made above is that it does assume that all of the deleterious mutations are kicking around the genome at the same time, and that does not necessarily have to be true. What if mutation one happens in the genome, reaches fixation, and then mutation 2 happens, reaches fixation, and so on? Then, couldn’t the last mutation drift to fixation and destroy the species?

The answer is no, for two reasons. The first, as I mentioned before, is that the last mutation would never reach fixation in that situation, and the second reason is that even if I am wrong, that fear would not be biologically relevant in humans, because the time to fixation of a new mutation is roughly 4 times the effective population size, in generations. If a new mutation arose tomorrow, and has not already been kicking around, then it has to spread across 8 billion people, which is going to take about 32 billion generations. On that timescale, we need to worry more about what we are going to do when the sun burns out than we do about genetic entropy.

At least, that’s my argument, but I’m not an expert. Can you see anything wrong with my understanding? I appreciate this dialogue, because I want to make sure my grasp on this subject is correct.

I appreciate that, but I guess what I really want to know, is: Is my analogy also dumb, or does it demonstrate the point I am trying to make about the balance that ends up arising despite the accumulation of deleterious mutations?

I’m worried about the possibility that the approximations that I use with respect to the changing percentages, which simplify the math, are also just straight up wrong, and that you can’t do the math that way. I’m not enough of an expert in this area to know whether I have made a mistake. Would you mind giving me some peer review? (Not that you and I are peers. You know much more about this subject than I do).

For a lay perspective, I recently posted this at Biologos.

Genetic Entropy [GE], the signature model of John Sanford, is incoherent with observed everyday reality. An aeronautical scientist can publish a book full of arcane math, intimidating physics, computer simulations, and overflowing bibliographies to prove that “bumblebees cannot fly”, and a layperson who has little grasp of the technicalities can confidently pronounce that he is wrong. Flying bumblebees can readily be observed. Similarly, despite Sanford’s status as a geneticist, it is ridiculously apparent by the most basic observations that GE as presented is a farce.

Photosynthetic algae can double in less than a day and serve as the base of the ocean’s food chain. Rats progress from pink little pups to parents in less than three months, mice a bit quicker. Houseflies go from eggs to laying eggs in a matter of a couple of weeks. Bacteria can double several times in an hour. Viruses hijack host cells to explosively multiply. The popular aquarium and research model zebrafish has a generation time of about three months. Such examples can be found endlessly, but suffice it to say that a representative cross section of life, vertebrate and invertebrate, terrestrial and aquatic, plant and animal, microbial and complex, will have undergone at least several tens of thousands of replications even given the six thousand years allowed by YEC.

GE predicts that all these populations should have inescapably suffered error catastrophe and become extinct, as with each generation slightly deleterious mutations accumulate and permeate the gene pool. That is the whole idea. But far from these species being long gone, or even teetering on the brink, instead what we have are populations which are thriving and robust, often despite our best efforts at eradication. Anybody, scientist or layperson, can see for themself that the reality could not be more contrary to the expectation of GE.

As this contradiction is bound to be frequently raised, Sanford associate Robert Carter penned an article to allow the usual “if they read our literature, they would know we have already addressed those criticisms” line YECs love to spout. In it, Carter carves out an exception from GE for the uncooperative bacteria, because “their genomes are simpler, they have high population sizes and short generation times, and they have lower overall mutation rates.” Oddly, except for the mutation rate, the same can be said for influenza virus, which he and Sanford offered as their principal case study for GE wiping out a simple organism in a matter of a few decades.

It gets even better. Carter devotes a paragraph to consider mice, which have a similar genome size to humans - how do they escape the ravages of GE? He maintains they actually haven’t. The common house mouse has “much more genetic diversity than people do”, and is “certainly experiencing GE”. Now, the conclusion that any clear-thinking individual would draw from a vigorous, fecund, thriving population displaying high genetic diversity is that mutation does not simply correlate with decline into extinction. Carter presents no evidence whatsoever that mice are experiencing GE; what they are actually experiencing is mutation, which is something everyone agrees on. This observation fits with the expectations of mainstream population genetics and natural selection. And if the rodents are doing fine, we humans who have undergone far fewer replications have nothing to worry about.

So it is not surprising that population geneticists, at least those who may be aware of Sanford’s existence given his paltry journal output, find glaring shortcomings in his understanding. Scientists who have done recognized work specifically in population genetics and soundly critique Sanford include Zach B. Hancock, Michael Lynch, Joe Felsenstein, and Dan Stern Cardinale, joining other biologists such as Joel Duff. There is plenty for those who want to get into the weeds of drift, selection, the distribution of fitness effects, and mathematical modeling. The point here, however, is that armed only with a rudimentary lay understanding of nature, it is apparent that the GE idea is absurd. Apply the bumblebee test. Remember that the next time you shoo away an annoying fly, whose genes have replicated one hundred thousand times from the one that buzzed Adam six thousand years ago.


Dr. Harshman,

Thank you again for your time in responding to me and helping me to understand this issue. Given our relative differences in education on this specialized issue, I had a feeling that if you and I disagreed, there was a pretty good chance that I was the one with a misunderstanding.

I think I understand what you are saying with respect to the 100 mutations: so long as the new mutations are building up slowly enough, and so long as the decline in fitness is so incrementally small that it is less than the variation within the population (that is, so long as the cumulative selection coefficient of all of the deleterious mutations in circulation at any given time is smaller in magnitude than 1/N), then the deleterious mutations will not have enough of an effect on fitness for selection to act on them. But then, in that case, for humans, wouldn’t it still result in two limit problems?

The first one that I see is a practical limit. Given the human population size as it stands now, the rate at which our genome is undergoing entropy has effectively reached zero. This is because the time to fixation for a new deleterious mutation is going to be 4Ne generations, and in context Ne for a new mutation is more than likely going to be close to the census population size. So, if genetic entropy is true, then it is making us all worse at a rate of about 1/8B every 32B generations (or would that be s = -1e-8B per generation?). Either way, if that is true, who cares? The sun will have burned out long before it is going to hurt us. Any faster, and selection will see the mutations, because s will get too large and the relative fitness differences will be detectable.

The second limit I see is still theoretical. Even if we assume that we are going to run this “forever” and we ignore the fact that getting worse in any appreciable way is going to take a bajillion years, then we can continue to get worse and worse, and the relative difference will not be enough for selection to detect it (as you have kindly pointed out). At first, I was thinking that there would be a limit based on the fact that there are only so many base pairs to alter, and that you would run out of bad changes that you can make to the genome, such that back-mutations would come into play and establish an equilibrium. But then I realized that you have already pointed out that the size of the genome is not fixed, and we can still always get worse by adding more useless or inefficient code, and if the marginal cost of that addition is small enough, and we wait long enough, then it can still get worse forever.

But hang on, is there not still a limit here, based not on the relative difference, but on the absolute limit of what is viable for the organism? Let’s say, for example, that we keep adding more and more junk DNA to the genome. The marginal cost is going to remain low for a while, maybe a long while, but eventually that marginal cost is going to start going up, such that any further changes (additions to the code, in this case) are not going to have a small effect anymore; they are going to have a big effect, because this next tiny bit of ATP cost is one of the LAST ONES that we have available. As you continue to make the organism worse and worse, each further negative change will necessarily have a larger effect (like how losing a hundred dollars affects a poor person much more than it does a rich person), and as such the selection coefficient for those changes will hit a limit where it cannot be changed by a small amount anymore; it must be changed by a large amount. In that scenario, those last few deleterious mutations still will not reach fixation, because they will inevitably be seen by selection.

Obviously, I am still missing something, but I promise I am not being deliberately pig-headed. I look forward to your response.

The examples you give are not of additive deleterious mutations. Additive deleterious mutations are 1) individually deleterious and 2) additive in their fitness effects. You’re talking about mutations that are neutral individually but highly deleterious when some particular set of them is found in the same genome. That’s something else entirely and not, as far as I see, relevant to the creationist GE model, such as it is. The proliferation of junk DNA is a much better example, or would be if junk were sufficiently deleterious.

I see two problems here. First, the current population size is a very recent phenomenon. Second, there’s no need for particular alleles to become fixed as long as there are enough alleles in circulation and the difference in number of bad alleles among people isn’t enough to be noticed by selection. Whether your bad alleles are different ones from mine is not relevant.

No, I don’t think so. There’s no hard limit, only a soft one. There’s no “last straw”; the camel just gets more and more burdened, increasing the probability that it will collapse. Remember, the claim is that these mutations are of additive effect, not increasing effect. One can envisage all sorts of scenarios, with all sorts of effects on fitness, but none of them are logically necessary.


Deleterious mutations aren’t all removed instantaneously from the population every time they occur. If a set of mutations all reduce the probability of reproduction by 10%, then all will be removed from the population eventually, but it’s unlikely for any of them to be removed in the first generation. So we get a reasonably constant load imposed by the balance of newly occurring mutations against the rate of selection against those mutations. In short, all of the mutations from your references are under effective selection, just not ‘immediate’ selection. So they are not nearly neutral, meaning they are irrelevant to genetic entropy.
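For reference, the classic single-locus mutation-selection balance formulas make that "reasonably constant load" quantitative (standard textbook results; the mutation rate here is just an illustrative value):

```python
import math

u = 1e-6  # per-locus deleterious mutation rate (illustrative)
s = 0.10  # the 10% reduction in probability of reproduction from the example

# Mutation-selection balance: the equilibrium allele frequency is roughly
# u/s for a dominant deleterious allele, and sqrt(u/s) when the allele is
# fully recessive (because it is hidden from selection in heterozygotes).
q_dominant = u / s
q_recessive = math.sqrt(u / s)
```

With these values the recessive case equilibrates at a frequency hundreds of times higher than the dominant case, even though both classes of allele are under effective selection; neither is nearly neutral.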

This is wrong, or at least wrongly phrased. It’s true that more recessive deleterious mutations will persist for a number of generations, but it’s unlikely that most deleterious mutations that ‘occur’ are recessive.

This would require the previously mentioned class of strictly additive deleterious mutations that are nonetheless below the threshold of effective selection. To my knowledge, the only such mutations are additional energetic demands from something like increased junk DNA, and that will not accumulate in biologically relevant timescales.

The effective population size of humans is several orders of magnitude lower than 8 billion.


An alternative being that the so-called junk DNA is not junk. Consider, for example, that the ratio of non-protein-coding to protein-coding DNA increases with morphological complexity, suggesting that increased complexity is associated with the expansion of regulatory information.

Well, that’s not really true. The onion has about 5 times as much DNA as humans do, and there are amoebas with over 200 times more DNA than we have. Are you suggesting that all of their genomes are functional as well? Why would it take 5 times more regulatory genes to regulate an onion than it does to regulate a human?


That’s an alternative, but it’s not a viable one. Literally.

You would have a hard time supporting that assertion. Are you familiar with the term “dog’s-ass plot”?


You got that idea from one of Mattick’s dog’s-ass plots, right?


2 posts were split to a new topic: Is Genetic Entropy the 2LoT Argument in disguise?

Actually, for the person I am debating at least, they include both of those instances. There is what he describes as a kind of “time bomb” model, which is the individually neutral but collectively deleterious model, and also the “lobster pot” model, which is the more traditional model of individually slightly deleterious mutations adding up over time. So I have to be able to address both of these.