Bilbo Defends Behe's Irreducible Complexity

When I discuss your examples, I will explain how you think they refute IC1. But you don’t want to hear my explanations? Just Behe’s? But what if my explanations fully refute your examples, based on what Behe has written? Perhaps Behe hasn’t responded to your examples because he expects you to be able to figure out for yourself why they don’t refute him, especially someone who claims to be familiar with his works.

Why did Behe change his definition? The one time I remember him suggesting a different definition was as a way to measure the amount of ICness of a system. It was not because he thought his original definition had been refuted. If you have evidence that it was because he thought his original definition had been refuted, please show it. Otherwise, you have just accused Behe of intellectual dishonesty with no evidence.

I did not accuse him of dishonesty. I said one possible explanation was silent omission, which is not dishonesty.

If you can accurately explain why I think it refutes IC1, I’d love to hear where you think I went wrong in that assessment.

“I think he knows it is falsified by evidence, but doesn’t want to be clear about that publicly.”

That is an accusation of intellectual dishonesty.


I did not say intellectual dishonesty. Still, if it were true, it would not be trustworthy, and it would certainly merit the reaction that he has received from the scientific community. That is why I’d like him to clarify, and why I’ve invited him into conversation on this: Inviting Behe and Axe into Dialogue.

Maybe I am wrong, but it seems he is best able to clear it up. I’d like to know what he actually thinks here. Right now it does not look like he is being upfront (I did not say dishonest). If he can clear it up, that would be great. Maybe it is just a misunderstanding. Right now, the appearance is very bad, and I want him to clarify.

@Bilbo IIRC, the definition was changed in Behe and Snoke (2004). That might be a good place to start.

And then maybe Bilbo can explain where Behe went wrong in the redefinition.


The two places I quote Behe’s changing definition:

Note that the second definition pops up in 2000. Was there yet another definition here?

Exactly where in the Behe/Snoke paper does he change the definition?

As I understand it, there is no mention of IC1, but just IC2, in the Snoke papers. Remember also that Black Box’s IC is exclusively IC1. The malaria “complexity cluster” of the Edge of Evolution is not IC1, but is only marking out the boundaries of IC2.

In the paper you cite, Dr. Swamidass, Behe does not reject his first definition of IC. He offers the second definition tentatively, as a way of trying to help scientists focus on how many evolutionary steps must be crossed before one reaches an IC system. If you don’t believe me, try reading from this part of the paper:

VI. An Evolutionary Perspective on Irreducible Complexity

As to EoE, he does not claim that the malaria resistance is IC. He is trying to determine the maximum that Darwinian evolution can be expected to evolve on its own, via random mutations. He eventually comes to the conclusion that it can be expected to come up with two proteins at most. If a system requires three or more different proteins in order to function, it will be beyond the limits (“edge”) of Darwinian evolution.

This is my recollection, which could be wrong. Behe does not reference Irreducible Complexity by name in this paper, but instead refers to “multiresidue” (MR) features.

In most models of the development of evolutionary novelty by gene duplication, it is implicitly assumed that a single, albeit rare, mutation to the duplicated gene can confer a new selectable property (Ohta 1987; 1988a, b; Walsh 1995). However, we are particularly interested in the question of how novel protein structural features may develop throughout evolution; not all structural features of a protein may be attainable by single mutations. In particular, some protein features require the participation of multiple amino acid residues. Perhaps the simplest example of this is the disulfide bond. In order to produce a novel disulfide bond, a duplicated gene coding for a protein lacking unmatched cysteines would require at least two mutations in separate codons, and perhaps as many as six mutations, depending on the starting codons. We call protein characteristics such as disulfide bonds which require the participation of two or more amino acid residues “multiresidue” (MR) features.

@Bilbo, there is no reference for that quote. There was more than one Snoke paper, so you should include a link. Also, this quote exactly contradicts what he wrote in 2000 (and also several times later):

He can be self-contradictory if he wants, but it is legitimate for us to request that he clarify what he is saying. The pattern in his engagement seems to be switching at will between different definitions. Perhaps it is unintentional. Perhaps not. I can’t speak to the internal state of his mind. I can, however, legitimately ask him to clarify…

I’ve already shown this is not the issue. But to be crystal clear, I am by no means the originator of Muller’s argument. Other scientists, many of them, have been calling attention to this for 20 years. There has been no meaningful response to this, and it does not even appear in his new book Darwin Devolves.

Behe does not have to explain himself to me specifically. He does have to explain himself to someone.

And to be clear, making this clear right now, before I publish a review of his book, is out of kindness to him. If he fails to respond, this will not be a pretty review. If he does fix these errors, I would respect it. I have nothing personal against him, but there are consequences to being this confusing (intentional or not) in such a charged area.

That is how I understand it. However, in 2000, and then later on, he refers to this as “another” definition of Irreducible Complexity. Though, he is never clear about the switch in definitions, which is exactly my objection.


I am going to channel another poster from another thread. The claim was not that Mike was selective in his responses; of course he was selective. The charge was that he had not responded to critics.

Dr. Swamidass, I was referring to the article that you referenced, where Behe was replying to Miller, Doolittle, and Robison. If you read part VI of that article, entitled “An Evolutionary Perspective on Irreducible Complexity,” you will see that Behe has not rejected his first definition of IC. Instead, he has supplemented it with a second “evolutionary” definition. He explains that he is doing this in order to focus attention on the number of unselected evolutionary steps that would need to be taken in order to evolve an IC system. The more unselected steps, the more difficult it would be to evolve a particular IC system.

Behe is not being self-contradictory.

Then he should have no problem clarifying himself. I’ll look forward to seeing it done. I would be happy to find out I was wrong.

Dr. Swamidass, have you bothered to read part VI of that article?

Yes I have @Bilbo. I think you have made your point. I have made mine. Not much more for the two of us to hash out at this time.


I recently wrote an article discussing IC and cooption:

I did not have a chance to nuance all of my points, so I will expand on one of them here. One key issue is the rarity of functional proteins, and this article describes how the tolerance of proteins to mutations drops off exponentially, or more quickly, with increasing numbers of mutations:

The study demonstrates that after 1 to 2 mutations to a protein, about 2/3 of the subsequent possible non-synonymous mutations could be “tolerated”. However, after a few more mutations (around 5-6), the likelihood of the next non-synonymous mutation being tolerated is roughly 1/3. This result is comparable to Axe’s result for the rarity of functional sequences in the vicinity of a functional beta-lactamase: (1/3)^150 is around 1E-72, in the same regime as Axe’s estimate of roughly 1 in 10^77.
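To make the exponent arithmetic explicit, here is a quick sketch of my own (not from the cited papers), under the strong simplifying assumptions that the protein is about 150 residues long and that each residue independently has a 1/3 chance of tolerating a substitution:

```python
import math

# Illustrative assumptions (mine, not taken from the cited papers):
p_tolerated = 1 / 3   # chance a heavily mutated residue tolerates a substitution
n_residues = 150      # approximate length of the beta-lactamase domain

# Under the independence assumption, the fraction of random sequences
# expected to remain functional is the per-residue tolerance raised to
# the number of residues:
p_functional = p_tolerated ** n_residues

print(f"(1/3)^{n_residues} = {p_functional:.1e}")   # 2.7e-72
print(f"log10 = {math.log10(p_functional):.1f}")    # -71.6
```

Running this gives roughly 2.7 x 10^-72, which shows the order of magnitude such a simple extrapolation actually produces; how well that independence assumption holds for real proteins is a separate question.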

In addition, the Chatterjee et al. paper I cite analyzed targets in which only half (c=0.5) of a sequence must match the target (e.g. 500 out of 1000 nucleotides in a gene). In contrast, the Tokuriki and Tawfik results suggest that a protein becomes nonfunctional after considerably fewer than 30 mutations, none of which individually would necessarily disable a gene, and that number corresponds to c=0.2. In other words, Chatterjee et al. demonstrate the impossibility of a random search finding a target which is vastly larger than that represented by actual proteins.

After you read my article and the referenced articles, all of my comments should make sense. I will try to respond to comments and questions over the holidays. However, I will only have time to respond to those who have made a sincere effort to thoroughly understand my arguments and the content of the research I cite.

As a side note, I mentioned to Mike Behe the following comment related to the upcoming responses to his new book:

Let the 23rd annual Behe games begin.
May the odds ever be in your favor.

And to all, have a wonderful holiday season.


Likewise to you @bjmiller. Thank you for posting here. It will take some time to engage with it more deeply.

I propose that we discuss that claim, concentrating on the math (since you’re a physicist) and the assumptions baked into it. The first and most obvious problem is that you are incorrectly presenting an extrapolation as a result.

More interestingly, if Axe’s result can truly be extrapolated as you did, then starting with a library of only 4 x 10^8 protein sequences that are constrained to fit into a completely different protein fold, how many would you predict would have measurable beta-lactamase activity?