Read on …
Everybody is wrong, including @T_aquaticus
The root of this problem is that the very premise, inferring anything from this sort of probability calculation, is flawed. Counter-examples can demonstrate that there is a flaw, but not why all such arguments are flawed.
Bear with me for a bit of simplified Bayesian statistics, and I’ll do this as non-mathematically as I can:
Suppose there are two possible explanations, A and B, for an event E. Given some data for E, we might be able to evaluate the probability of E under each explanation, A and B. Bayesian statistics allows us to make some assumption about the probability before we calculate it (a prior assumption, or just a “prior”). Usually this prior assumption is “weak”, meaning it doesn’t greatly influence the final probability calculation (the posterior probability), BUT nothing prevents the use of a strong prior that completely determines the resulting posterior probability.
We would like to calculate the probability of E under A and under B for the same data, and compare them to see which posterior probability is larger. In Bayesian statistics the ratio of these probabilities is called a Bayes Factor, and it is related to the Likelihood Ratio from the usual “Frequentist” statistical theory. If the Bayesian prior is very weak (non-informative), then the Likelihood Ratio and the Bayes Factor are the same thing. With the preliminaries out of the way, we can get on with the statistics.
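To make the relationship concrete, here is a minimal sketch in Python with made-up numbers (nothing here comes from any actual ID calculation): the posterior odds of A over B are the prior odds times the likelihood ratio, and with a flat prior the Bayes Factor and the Likelihood Ratio coincide.

```python
# Toy illustration with invented numbers: posterior odds of A vs. B
# equal the prior odds times the likelihood ratio P(E|A) / P(E|B).

def posterior_odds(prior_a, prior_b, lik_e_given_a, lik_e_given_b):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior_a / prior_b
    likelihood_ratio = lik_e_given_a / lik_e_given_b  # equals the Bayes Factor when the prior is flat
    return prior_odds * likelihood_ratio

# Weak (flat) prior: the data dominate, and the posterior odds
# are just the likelihood ratio.
print(posterior_odds(0.5, 0.5, 1e-10, 1e-12))  # ≈ 100: A is favored 100-to-1
```

Note that both likelihoods are tiny here; what matters for the comparison is only their ratio, which is the point the post is building toward.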
Given a sequence of data, the ID argument puts a probability on the event E based on an explanation from evolution (A). This is usually a flawed explanation of evolution, but that doesn’t matter here, because it’s the wrong flaw anyway! Next we do the same probability calculation for E based on an explanation from ID (B) and compare the two.
Except there is no calculation of probability for E based on an explanation from ID (B). ← This is where the ID argument goes wrong.
Sure, the probability based on A might be incredibly small, but the probability based on B could be even smaller. We don’t know, because it can’t be calculated.
That’s not where the ID argument stops, though - the argument goes on to claim the probability of A is so small that B must be the answer. AND this is where the prior assumption comes in. The ID argument is making a claim - an inference for Design - that has the same form and interpretation as a Bayes Factor does to a statistician.
Now for my own inference. The ID argument has the form of a Bayes Factor, yet it concludes ID is more likely than evolution without ever calculating a probability for ID. Therefore, I conclude the ID argument is making an unstated prior assumption that the probability of ID must be greater than that of evolution. This must be a very strong prior, because the data don’t influence the posterior probability at all. This unstated prior assumption forces the conclusion favoring ID, always. It’s not really a method or argument at all, merely a sneaky way of restating the hidden assumption.
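A strong prior doing exactly this is easy to demonstrate. The sketch below uses invented numbers (no real ID or evolution probabilities are being claimed): even when the likelihoods favor A by a factor of a million, a near-certain prior for B still delivers a posterior favoring B. The data never had a chance.

```python
# Toy illustration with invented numbers: a near-certain prior for B
# swamps any likelihood evidence, so the "conclusion" just restates the prior.

def posterior_b(prior_b, lik_a, lik_b):
    """Posterior probability of explanation B given event E, by Bayes' theorem."""
    prior_a = 1.0 - prior_b
    return (prior_b * lik_b) / (prior_b * lik_b + prior_a * lik_a)

# The data favor A by a factor of a million...
lik_a, lik_b = 1e-6, 1e-12

# ...yet a 10-million-to-1 prior for B forces the posterior toward B anyway:
print(posterior_b(0.9999999, lik_a, lik_b))  # ≈ 0.91: B "wins" despite the data

# With a flat prior the same data point overwhelmingly to A:
print(posterior_b(0.5, lik_a, lik_b))        # ≈ 1e-6: B loses badly
```

The first line is the ID argument in miniature: the answer was baked in before any data arrived.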
TL;DR: This ID argument tacitly assumes that ID is the only possible conclusion.