Daniel Arant: Questions about Evolution and Design

Paradoxically, perhaps, it helps.

Think about it this way. Let’s say that to “win” you have to be able to sink 100 free throw baskets (i.e. get 100 specific mutations). If you have 100 throws, how difficult is this? Pretty hard. Even professionals might struggle.

Now, what if you have 100,000 throws to get 100 baskets? Well, now it is much, much easier. The vast majority of mutations can be off in random directions, and it doesn’t really matter. Though the number of random throws is dramatically increased, the chance of getting the mutations you need has increased a great deal.

So yes, most mutations are not useful, and are not selected. But only a few mutations are required for important evolutionary changes, and there are a lot of extra shots available to find those mutations.
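To put rough numbers on the free-throw analogy, here is a quick sketch. The 1% chance per throw is purely an assumed, illustrative figure, not anything from the discussion above:

```python
import math

def log_binom_pmf(k, n, p):
    """Log of the binomial probability mass function, computed in log space
    so the huge binomial coefficients don't overflow."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log1p(-p))

def prob_at_least(k_min, n, p):
    """P(at least k_min successes in n trials, each with success probability p)."""
    logs = [log_binom_pmf(k, n, p) for k in range(k_min)]
    m = max(logs)
    tail = math.exp(m) * sum(math.exp(l - m) for l in logs)  # P(fewer than k_min)
    return 1.0 - tail

p = 0.01  # assumed chance that any single throw lands (illustrative)

# 100 throws, all 100 must land: essentially impossible
p_exact = p ** 100

# 100,000 throws, any 100 landing is enough: essentially certain
p_many = prob_at_least(100, 100_000, p)
```

With the same per-throw odds, going from 100 attempts to 100,000 attempts takes the outcome from astronomically unlikely to effectively guaranteed.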


There’s no reason why it should make it more difficult to discover new functions, but to understand why, you need to separate the two questions of novel function and increased complexity from the question of fitness.

Natural selection has to do with the fitness effects of mutations (natural selection doesn’t “care” how mutations achieve those effects, whether increases or decreases in functions or complexity). Do they increase survival and reproduction? A mutation doesn’t have to produce a new function, or more genes, to help survival and reproduction. It may simply change the degree of some existing function up or down, and this can help survival and reproduction.

And a mutation resulting in a new function might even be deleterious in rare cases. Suppose an existing transcription factor mutates so now it can bind a new place on the chromosome, but this new binding spot happens to block expression of another important gene. In this case, new functionality would be deleterious.

So when it comes to discovering new functions, it doesn’t matter what the fitness effect of the average mutation is. The specific proportion of these mutations that result in novel functions is largely independent of their fitness effects. So when scientists discovered that most mutations are pretty much selectively neutral, this didn’t change anything about how likely it is for a mutation to result in a novel function.

However, when it comes to the question of complexity, there are classes of high-probability mutations that can quickly result in increases in complexity. And ironically, this tendency for complexification can actually provide the basis for increased speed of discovery of novel functions. It’s basically the scenario I described above in the figure with all the squares.

Think of it this way: two classes of mutations are thought to be very frequent: gene duplications and insertion-type mutations (such as transposons), and deleterious point mutations of relatively small effect.
These two types of mutations, in combination, allow for an increased rate of exploration of sequence space. Here the exploration is driven by inherent mutational tendencies, meaning which types of mutations are most likely to occur (and by types I mean what happens biochemically at the molecular level, not their fitness effects): the tendency for repetitive segments to undergo duplication, and for transposons to facilitate their own copying and random insertion, combined with the tendency for “degenerative” mutations to accumulate in the extra gene copies.

This means the number of genes that are exploring sequence space by accumulating mutations can build up over time, leading to an increased rate of exploration of that space because more and more genes are mutating in parallel. Instead of just one gene waiting for new beneficial mutations to also have novel functions, you get lots of copies of genes that just accumulate lots of mutations of relatively small effect, and so with many genes mutating in parallel you get a much higher rate of sampling sequence space for new functions.

So complexity builds up over time while being mostly selectively neutral, but the increase in complexity in turn increases the rate of discovery of novel functions.

New functionality still depends on mutations and genome rearrangements, so whether most mutations that are fixed are beneficial or neutral doesn’t make any difference to the probability of discovering a new function by mutation.

It is a well-known result in population genetics that the rate of fixation of neutral mutations is equal to the rate of mutation. This is pretty well explained in this 12 minute video:

There’s not anything special in some “scenario” that has to happen for this to occur, it simply follows mathematically.
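The algebra behind that result is short: in a diploid population of effective size N with per-site neutral mutation rate μ, 2Nμ new mutations enter the population each generation, and each has a 1/(2N) chance of eventually fixing, so N cancels. The numbers below are purely illustrative:

```python
N = 10_000    # diploid effective population size (illustrative)
mu = 1e-8     # neutral mutation rate per site per generation (illustrative)

new_mutations_per_generation = 2 * N * mu   # new allele copies entering the population
p_fix_neutral = 1 / (2 * N)                 # fixation probability of one new neutral allele
fixation_rate = new_mutations_per_generation * p_fix_neutral

# The population size cancels: the neutral fixation rate equals the mutation rate.
```

Change N to any value you like and the fixation rate stays equal to μ, which is why no special scenario is required.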


Not so. I merely point out the implications of your statement. If you would like to withdraw the statement, that would be a fine response.

It would be useful to consider the difference between quantity and quality. Given that most of the genomes of most eukaryotes are junk, evolving neutrally, then most mutations that become fixed in a species do so through drift. That doesn’t mean that selection is not important, in fact dominant, in the small portion of the genome under selection and subject to adaptation.

Generally not, since fixation is slow, and one would need to survey a very sizeable fraction of the population. But theory should do: if I recall, it should take on average around 4nµ generations for a new mutation that eventually becomes fixed to go from mutation to fixation, where n is the effective population size. That’s quite a long time for most populations. However, the number of fixations in any one generation is also the same as the number of mutations per individual, so they do add up.

Whoops, I think I meant just 4n, not 4nµ.
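For concreteness, with the corrected figure and an illustrative effective population size:

```python
# Mean time for a new neutral mutation that is destined to fix to actually
# go from appearance to fixation is about 4N generations, where N is the
# diploid effective population size. The value of N here is illustrative.
N = 10_000
mean_generations_to_fixation = 4 * N
```

Even for a modest effective population size of ten thousand, that is tens of thousands of generations, which is why one would not expect to watch a fixation happen in real time.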


Glad to hear it.

I’m sorry, but that makes no sense at all.

I would strongly recommend that you learn the basics. It will be impossible to understand even the most basic concepts in population genetics without being able to distinguish between genes and alleles. Alleles get fixed, not genes. That’s not pedantic.


It’s not assumed. The evidence is consistent with homologous proteins having common ancestors, not one evolving into another. It’s an important distinction.


You’re oversimplifying.

Nope, still major.

No, it increases it.

How many randomly-generated antibodies must one screen to get measurable beta-lactamase activity?

Here’s a chance for you to put a hypothesis to the test. Predict a number.

I would also suggest that you’ll get more constructive responses if you don’t resort to regurgitation of deliberately vague terms like “body plan.”


I think the bigger problem, skeptics would say, is that there was precious little time for such a change to take place. Is it not the case that mutations in developmental genes are similar to any other, in that they are overwhelmingly deleterious? Are there any observed examples of undirected mutations to developmental genes producing a useful, or at least neutral, change?

Yes, they are similar, but no, they are not overwhelmingly deleterious.


Your problem there is “undirected”. How could it be shown that a mutation is or is not directed? I can’t think of a way.

Consider the mutation that caused industrial melanism in peppered moths (Biston betularia). It was certainly to a developmental gene. It was certainly advantageous (in a particular environment). Was it undirected? How would you tell?


Precious little time for what change? What is it you think there isn’t enough time for in the scenarios I have described? Gene duplications are very frequent types of mutations, particularly in repetitive regions of the genome. Deleterious mutations of small effect are also quite likely, and since their effects will be masked by gene-dosage effects resulting from duplications, the duplications can pile up, leading to a sort of neutral ratchet of genome expansion, enabling this larger-scale parallel exploration of sequence space.

I’m not aware of any such evidence.

First I have to ask what @John_Harshman asked, which is how I would know whether a mutation was undirected or not?

I can certainly give examples of developmental mutations with neutral and beneficial effects though. I recommend this simple video, easily accessible to laymen, where a handful of developmental mutations that cause particular phenotypes in different breeds of dogs are explained:

This also highlights that the fitness effects of mutations depend on the environment. It has been extremely beneficial to the reproductive success of different breeds of dogs that they have suffered mutations that humans who like to keep pets find attractive. Dogs have essentially secured their future by piggybacking on the human population, having adapted to us in ways that make us obsessed with keeping them safe and happy. Once you realize this, you can see how the same thing has happened to sheep, pigs, cows, and everything else we have domesticated. In the environment with humans, some mutations are highly beneficial to many different species. The same goes for all mutations: there is almost never such a thing as a mutation that is unconditionally beneficial in all environments.

This is true even within the context of a single protein sequence, as some mutations interact with other mutations in the protein in a phenomenon called epistasis. A mutation that would normally destroy the function of a protein can suddenly become beneficial if another mutation has happened first.
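A toy illustration of the kind of interaction described here. The fitness values are invented for the example and represent nothing measured:

```python
# Hypothetical relative fitnesses on four genetic backgrounds, showing
# sign epistasis: mutation B is deleterious on its own, but beneficial
# once mutation A is already present.
fitness = {
    "wild type": 1.00,
    "A alone":   1.02,
    "B alone":   0.90,   # impairs function by itself
    "A then B":  1.10,   # the same mutation B now improves fitness
}
```

The same mutation B sits on both sides of neutral depending on its genetic background, which is the point about fitness effects being conditional.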


25 posts were split to a new topic: SCD asks questions on design and evolution

Welcome Dan. It’s great to have you here. I am a layman too, and PS is a wonderful place to learn a lot about evolutionary biology.

Now I see where your “bias” is playing out.

Before we knew what caused lightning, people asked, why can’t science-minded meteorologists simply admit that intelligent design is the best current explanation we have for lightning formation?

Before we knew what caused disease epidemics, people asked, why can’t science-minded doctors simply admit that intelligent design (and/or bad humors) is the best current explanation we have for disease epidemics?

The simple answer: it is not useful to scientific inquiry.

Thankfully we don’t always have to imagine. We see beautiful demonstrations of these transitions in the fossil record. Follow the whale bones, they have a lot to tell.

There is no magic here. Exaptation can lead to inefficient new complex systems, but with time as that puny tinkerer called evolution works its magic (hehehe), these new systems may get better at what they do. A beautiful case study is the evolution of pentachlorophenol detoxification in certain bacteria. Look it up.

It would also be nice if you looked up constructive neutral evolution.


Hi Daniel, welcome!
According to neutral theory, the rate of mutation is about equal to the rate of fixation in a population. All that being said, there is a lot of fixation that needs to be explained over evolutionary time.

Asking for a step by step explanation is not necessary if the mechanism of change is credible. In order to build a flagellar motor, approximately 100,000 nucleotides of DNA sequence need to be organized to build this motor, out of 4^100,000 possible arrangements. The mechanism needs to explain this organization. Behe’s argument entails that a mind is required to account for this organization.

Yes. That explains why biological systems are so often messy and inefficient, with lots of unnecessary and redundant components.

I don’t really see why you find that as an argument against evolution.

Do you mean an example in which all the steps in the development of such a system have been directly observed as they happen? Do you not realize how many thousands of years this requires under the evolutionary model?

Actual numbers are missing.

Yes, there is an enormous amount of possible permutations and combinations of DNA sequences.

Few of these will be functional.

However, there are also enormous numbers of organisms over enormous amounts of time sampling those sequences through mutation to find the functional ones (not with any deliberate intent, of course).

So is the frequency with which new functional systems arise more than would be expected under those conditions? You haven’t presented any numbers that would justify your position that it is.

This is the equivalent of the question you are asking: If you know that the odds of any particular number being drawn in a lottery are one in 50 million, how likely is there to be a winner in next week’s draw?

Can you figure it out from the information provided? Or do you need more?
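To make the lottery analogy concrete: the missing piece of information is how many tickets are in play. With one-in-50-million odds per ticket and a hypothetical 100 million tickets sold:

```python
p_win = 1 / 50_000_000   # odds of any particular ticket winning

def p_any_winner(tickets_sold):
    """Probability that at least one ticket wins the draw,
    treating tickets as independent."""
    return 1 - (1 - p_win) ** tickets_sold

# With 100 million independent tickets, a winner is very likely even though
# each individual ticket is a 1-in-50-million long shot.
p_winner = p_any_winner(100_000_000)
```

The odds per ticket alone tell you almost nothing about whether someone wins; the number of attempts is what dominates, which is the parallel to mutation across huge populations.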


I would like to hear what experts think of Kenneth Miller’s “refutation” of each of Behe’s chosen examples of irreducible complexity. Starting point of Miller’s comments on Behe’s IC examples.

Welcome, Daniel! I enjoyed hearing about you.

I won’t enter into the specific points being argued here, as most of them have been raised here before. Your questions are good ones. I and others have raised those questions, or questions like them, many times. The answers we have received here are more or less along the lines that you are getting. These origins discussions often seem to put the participants (on all sides) on a treadmill from which they can’t step off!

As for this:

I make no comment about the particular exchange in question, but only a general one about dealing with disagreeableness in tone and conversational style. If you stick around here long enough, you may find that some conversation partners tend to exhibit a “take no prisoners” attitude. Actually, the person you were responding to here is much less aggressive than several others here! So this place is not for the faint of heart or the easily offended. If you defend either ID or creationism here, or even if you don’t defend them, but just call for more balance or fairness in criticisms of them, you will find that you won’t get much sympathy, and you will get some pretty aggressive responses. This is standard on “origins” websites, from BioLogos through Panda’s Thumb through Skeptical Zone to here. It’s part of the “internet culture” of origins discussions.

Anyhow, I wish you the best of luck in your conversations here. And if you find Michael Behe’s position somewhat reasonable, other writers who may interest you: Michael Denton; J. Scott Turner; and the authors of The Mystery of Life’s Origin, which is now out in a new and improved edition.


My perspective here is that the questions @Daniel_Arant has been asking are entirely fair and sensible questions to ask. He doesn’t have to agree with anything that’s been stated here so far, and I’d welcome any criticisms or commentary he may have to offer, or further questions. This doesn’t have to take the form of a debate, and I don’t find it hostile or insulting if people aren’t convinced.

I’d flip this question around on its head and ask: if we can see that two proteins that share something like only 15% sequence similarity are still capable of performing essentially the same function (binding the same molecule or catalyzing the same biochemical reaction), then why should we not think this?

[Image: typical protein similarity pattern]

If they can be THAT different yet still remain functional in the same way, then why should we think somewhere in the range of similarities between 100% and 15% there is some gap of nonfunctionality?

Presumably two proteins which are totally identical would not merit any doubt at all. So if we make them 99% identical, they would still plausibly both remain functional. Even if there is some constraint on how much they can change before they stop being able to function, 99% seems similar enough that it shouldn’t be a problem. And the same at 98%.

We can imagine we keep going, and at some point maybe we start to think they’re becoming so dissimilar it might impact the function? Say at 50% similarity we have some intuition that a lot of the protein sequence has changed, so how can it still retain the same structure or function?
But then we discover a variant of that protein that is only 15% similar to the other one, and yet it still functions. Doesn’t that immediately imply the entire range spanning 100% to 15% must contain functional sequences? Shouldn’t that cause us to think there really is a pathway all the way from 100% to 15% similarity? We thought maybe 50% was some lower limit of similarity we couldn’t go beyond without breaking the function or structure, and yet now we find a protein with only 15% similarity that remains able to perform the function.

If we can find two proteins that adopt a similar structure which are 90% similar, and others that are 80% similar that still adopt the same structure and function, and then others that are 15%, how is this not exactly what we would expect to have if these proteins really did evolve from a common ancestor over very long periods of time?
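For readers wondering how "percent similarity" figures like these are computed, here is a minimal sketch over a pair of already-aligned sequences (the example sequences are made up; real comparisons first align the sequences with dedicated tools, which is outside this sketch):

```python
def percent_identity(a, b):
    """Percent of positions that match between two equal-length aligned
    sequences; gap characters ('-') count as mismatches."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(x == y and x != "-" for x, y in zip(a, b))
    return 100.0 * matches / len(a)
```

So "15% similarity" means roughly that only 15 of every 100 aligned positions still carry the same residue, yet the fold and function can survive that much divergence.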

And if these proteins are part of some larger structure, and other proteins in that structure exhibit the same pattern, and they show the same sort of gradual decrease in similarity with the distance of relationships between the species that carry them, then what could really justify the idea that they can’t have diverged gradually over that enormous period of time?


Because it is not an assumption. The process has been replicated in the lab: