Hossjer: Using statistical methods to model the fine-tuning of molecular machines and systems

Yes, I have to admit I cringed at that one. It seems they were desperate to find some reason to pin blame on the authors, and in their flailing came up with a really silly one that wouldn’t constitute a valid reason for rejection anyway, rather than admit that their own peer-review process failed.


This is in fact a core issue. The disclaimer makes it an ad hominem, “creationists can’t publish here!”

That’s obviously untrue, and it presumes an almost belief-statement sort of control over a scientist’s private beliefs. That is not how science works. There is no requirement to disclose personal beliefs or all of one’s affiliations. Association with an advocacy group is also not a conflict of interest, though failing to disclose funding could be an issue (though not an important one in this case).

The most deceptive part of the submission, from what we can see so far, is leaving ID out of the keywords, abstract, and title. That is something of a gray area, because the text itself is extremely clear. Of course they don’t cite work that rebuts ID, and that’s a problem too, but it should have been caught in review. I wouldn’t call that part deceptive in the same way.

The key point is that the disclaimer is not really appropriate. The journal needs to look at this more carefully.


This is pretty bizarre. The paper is very clear to anyone who read it … which makes me wonder if anybody did. I think this is totally on the journal.

I was actually looking forward to reading the paper, but I can’t exactly find much of any science in there. Maybe it’s my experimentalist bias, but it looked more like a philosophy of science review paper than any sort of actual analysis of fine-tuning in molecular machines.


@Joe_Felsenstein, you’ve published in the Journal of Theoretical Biology. Was there anything at all odd in the review process?


Have to agree with this. The referees the journal assigned to this paper dropped the ball big time. Either they were not qualified to judge it or they just rubber-stamped it for some unknown reason. The journal should bite the bullet, admit it screwed up, and retract the paper.

Not that I recall. It was two papers, 30 and 39 years ago.

By the way, people should accept that, short of someone successfully suing them, they are not going to release the reviewers’ names. Doing that would mean they would thereafter have trouble getting people to agree to review for them. And it being a terrible paper is not itself sufficient grounds for retraction – lots of journals publish papers that turn out to be terrible.


From the second paragraph of the Introduction:

We define fine-tuning as an object with two properties: it must a) be unlikely to have occurred by chance, under the relevant probability distribution (i.e. complex), and b) conform to an independent or detached specification (i.e. specific).

To me, this is basically nothing more than a restating of Dembski’s ideas. In other words, there isn’t any novelty or innovation here, and nothing really about fine-tuning in biology (as we know the concept from physics). Rather, the article would seem to be a review (not a research paper) of ID thought.
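To put a toy number on property (a) of that definition, here is a minimal sketch (my own illustration, not from the paper) of how small the probability of one fixed sequence is under a uniform "pure chance" distribution:

```python
import math

# Toy illustration (mine, not from the paper) of property (a):
# the probability of hitting one specific 100-residue protein
# sequence under a uniform chance distribution over the 20
# standard amino acids.
p = 20.0 ** -100
print(round(math.log10(p), 1))  # about -130.1
```

Of course, the whole dispute is over whether a uniform distribution is "the relevant probability distribution" at all once selection is in play.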

I would agree with Jason Rosenhouse: it’s not a very good paper. Even as a review, it seems to me disjointed and barely coherent. But I would welcome the authors coming here and explaining some of the ideas and sections.

I also think it’s way past time for the editor and the journal to try to “fix” this. They went through the submission, review, acceptance, and publication process, and likely cashed the check. They shouldn’t be trying to backtrack now.


Given the number of disastrously bad COVID-19-related papers that have passed peer review this year, I’m not even sure this registers on the same scale.


I should tell one story about my JTB submission. It was for an Unsolved Problems special issue, and I was invited to submit a paper discussing an unsolved problem. So I wrote about one that I did not know how to solve, but which seemed to be of importance. When the issue came out, I was surprised to find that only one other person had discussed a problem that they didn’t know how to solve. Everyone else had just taken a problem that they had recently solved and (retroactively) declared it to have previously been an important Unsolved Problem. It seemed to me that I and the one other author were the only ones who were honest about it.


What unsolved problem did you submit? What was the other problem submitted? Can you link the papers here?

Presumably Joe is referring to this paper: https://www.sciencedirect.com/science/article/pii/0022519382901527

From this issue:


I’m late to the party as usual, but I finally had a chance to catch up. Echoing Jason Rosenhouse on Panda’s Thumb, there is nothing new here; we already know how to do these calculations, but not how to fill in the probabilities for how life evolves. Dembski (2005) used a much simpler formula and ignored any influence of selection on probabilities. Making the formula more complicated doesn’t change anything.
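For readers unfamiliar with the earlier work being referenced: a hedged sketch of the simpler formula from Dembski (2005), with entirely made-up toy inputs, shows why complicating it changes nothing. The formula is specified complexity, chi = -log2(10^120 * phi_S(T) * P(T|H)); the hard part is P(T|H), the probability of the target under the chance hypothesis, which is exactly what nobody knows how to fill in for a real evolutionary process with selection.

```python
import math

# Sketch of Dembski's (2005) specified-complexity formula:
#   chi = -log2(10^120 * phi_S(T) * P(T|H))
# where 10^120 is his universal bound on probabilistic resources,
# phi_S(T) counts patterns at least as simple as the target T, and
# P(T|H) is the probability of T under the chance hypothesis H.
# All concrete numbers below are toy assumptions, not from any paper.
def specified_complexity(phi_s, p_t_given_h, resources=1e120):
    return -math.log2(resources * phi_s * p_t_given_h)

# Uniform-chance hypothesis for one 100-residue protein, ignoring
# selection entirely -- the very omission the post complains about:
chi = specified_complexity(phi_s=1e5, p_t_given_h=20.0 ** -100)
print(round(chi, 2))  # about 16.95
```

The arithmetic is trivial either way; the substance lies wholly in the P(T|H) one chooses to plug in, and adding terms to the formula does not supply that missing probability.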
