That’s as close as Bill has ever come to admitting common descent, though I suppose he doesn’t know what he’s saying.
Or it could be that, at a deeper, perhaps subconscious, level, Bill doesn’t think of denying Common Descent as being so much about what is true, and what he can find evidence for, as about who he identifies with. Saying that he denies Common Descent is perhaps the equivalent of wearing team colors to a sports game – it doesn’t say what is true so much as which side he supports.
Which would be consistent with the fact that, after @stcordova at least hinted that he is open to UCD, Bill suddenly became much more open to it as well. A key difference being that Sal at least seems to understand what UCD is. Bill has yet to demonstrate that he does.
Yet Behe’s acceptance of UCD seems to have no similar effect – is it perhaps because Cordova is currently present on this forum, but Behe isn’t?
This, if true, would make Bill’s level of acceptance of UCD (entirely?) dependent on the level of acceptance of UCD among the ID advocates in his immediate vicinity (virtually speaking) at any given time. So to find out what Bill’s thinking on UCD is at any given moment, one might as well ask the other ID advocates active on the forum as ask Bill himself.
Bill in fact denies that Behe accepts UCD, at least as anything more than “for the sake of argument”.
It’s not all that easy to determine what Bill does and does not understand at any given moment. He might be aware that Behe has said and written some words about UCD. It does not follow that, at any given moment, Bill understands what those words meant. I find it most helpful to think of Bill as a malfunctioning ChatGPT.
AU: artificial unintelligence?
It was perhaps unwise for me to mention Bill and “fact” in the same sentence.
I suppose that makes a certain sort of sense. Bill believes that ID supporters wear ‘denies UCD’ team colors. Michael Behe is an ID supporter. So, lacking Behe being right here to explicitly state his position, Bill will assume (all evidence to the contrary) that Behe must deny UCD.
La moquerie est souvent indigence d’esprit (“Mockery is often a poverty of wit”)
Jean de La Bruyère
If you think Bill has made any good, coherent argument here, then by all means explain it to the rest of us.
Hi John
I agree with your point here.
A model is at least a step forward from where we are, which is no model to empirically confirm.
Perhaps, but isn’t it well deserved in this case?
You’re asking @Giltil to say something possibly mildly critical of one of his fellow teammates. You know that will never happen. Loyalty over truth, always, with this crowd.
Progress. So what were you trying to write with that false word salad?

A model is at least a step forward from where we are, which is no model to empirically confirm.
Utterly, objectively false. We have gone far beyond model/hypothesis to theory, repeatedly confirmed by the mountain of data that you ignore.

Yet the hypothesis was not confirmed with a population genetics model in over 700 posts.
Reminder of this post:
Bill it’s just a rate of fixation of neutral mutations, which would be equal to the rate of occurrence in a population of constant size. So for chromosomal fusions (which we know empirically are mostly neutral), you just take the rate at which they occur, and then the mean rate of fixation is the mean rate of occurrence (there would be some variation around the mean, of course). Same would be true for gene gains and losses (duplications and deletions/pseudogenizations), which generally have very small fitness values in the small effective populations of large animals. So you look at the phylogeny, see that a handful are different between individual deer species. Can a handful of those occur and fix in the couple of million years that separate them? Yep. That’s it. That’s the “population genetics model” you’re asking for. The End.
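For anyone who wants the textbook arithmetic behind “the rate of fixation equals the rate of occurrence”, here is a minimal sketch of the standard neutral-theory result. The symbols are generic (μ = per-generation rate at which a given class of neutral mutation arises, N = diploid population size, T = generations elapsed); none of them are estimates taken from this thread.

```latex
% Sketch of the classic neutral substitution-rate argument (generic symbols, not thread data).
\begin{align*}
  \text{new neutral mutations entering the population per generation} &= 2N\mu \\
  \Pr(\text{any one new neutral mutation eventually fixes})           &= \frac{1}{2N} \\
  \text{long-run fixation (substitution) rate } k &= 2N\mu \cdot \frac{1}{2N} = \mu \\
  \mathbb{E}[\text{fixations over } T \text{ generations}] &\approx \mu\, T
\end{align*}
```

Population size cancels, which is why the quoted post can equate the long-run fixation rate with the occurrence rate; only the waiting time to fixation of any particular mutation, not the rate of substitutions, depends on N.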

La moquerie est souvent indigence d’esprit (“Mockery is often a poverty of wit”)
Jean de La Bruyère
Ridicule is often caused by something being ridiculous.
@Tim
But thank you for the ad hominem tone policing.
Bill it’s just a rate of fixation of neutral mutations, which would be equal to the rate of occurrence in a population of constant size. So for chromosomal fusions (which we know empirically are mostly neutral),
You need to look more closely at the data. Chromosomal mutations reduce reproduction efficiency.
The claim that mutation gain/loss toward new function is neutral is also not empirically supported. It also does not make sense mathematically, given the ratios of functional to non-functional sequence space.

You need to look more closely at the data. Chromosomal mutations reduce reproduction efficiency.
Really? You’ll bring me papers that show all or a large majority of chromosomal fusions reduce reproduction efficiency? (No, your paper on hybrids from already considerably diverged populations doesn’t show that chromosomal fusions reduce reproduction efficiency.)
In order to reduce the rate of fixation to the point where even a handful of them can’t fix over a span of, say, 2 million years, a large majority of individual fusions would have to be deleterious. Especially if there are population bottlenecks, as we find in founder events, where even deleterious mutations have an easier time going to fixation.
But you can show no such data. In fact we’ve already brought you papers that show the majority have no measurable effect.
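As an aside not in the original exchange, the standard diffusion approximation makes the point about weak selection in small populations explicit. The symbols below are generic, not estimates for any real deer population: s is the selection coefficient of a new mutation (negative if deleterious, additive selection assumed) and N_e is the effective population size, taken here to be roughly the census size.

```latex
% Fixation probability of a single new mutation under additive selection,
% diffusion approximation, assuming effective size N_e is close to census size N:
\begin{align*}
  P_{\mathrm{fix}}(s) &= \frac{1 - e^{-2s}}{1 - e^{-4N_e s}},
  \qquad P_{\mathrm{fix}}(0) = \frac{1}{2N_e} \\
  |4N_e s| \ll 1 \;&\Longrightarrow\; P_{\mathrm{fix}}(s) \approx \frac{1}{2N_e}
\end{align*}
% i.e. once |s| is small relative to 1/(4 N_e), as in a founder bottleneck,
% selection barely shifts the odds of fixation away from the neutral value.
```

So being mildly deleterious is not enough to block fixation; |s| would also have to be large relative to 1/(4N_e) in the relevant populations.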

The claim that mutation gain/loss toward new function is neutral is also not empirically supported.
Right, it’s often outright beneficial too. By the way, will you be contacting Sal Cordova to tell him he shouldn’t be harping on gene loss as a definition of genetic entropy?
Bill you need to use your scroll wheel. The data is in this very thread.
I quote to you again:
Gene loss and dispensability
The pervasiveness of gene loss throughout evolution leads to the fundamental question of how many genes can readily be lost in a given genome. Intuitively, the answer to this question depends on how many genes are actually essential for a given organism, and therefore cannot be lost, and how many genes are to some degree dispensable, and therefore susceptible to being lost because their loss has no impact or only a slightly negative impact on fitness, at least under certain circumstances (FIG. 2).

The knockout paradox. Gene dispensability is a measure that is inversely related to the overall importance of a gene (that is, gene essentiality), and this measure has been approximated by the fitness of the corresponding gene knockout strain under laboratory conditions38,39. Understanding which genes are dispensable or essential by linking genotypes with phenotypes is one of the most challenging tasks in the field of genetics and biomedicine in the twenty-first-century post-genomic era. This understanding is important both theoretically, such as when defining the minimal genome for a free-living organism40, and practically, such as when identifying all essential genes that are responsible for human diseases41. Historically, Susumu Ohno not only pioneered the idea that gene duplication was an important evolutionary force, but in 1985 he also pondered the concept of gene dispensability and suggested that “the notion that all the still functioning genes in the genome ought to be indispensable for the well-being of the host should be abandoned” (REF. 42).

The emergence of large-scale gene targeting approaches has facilitated the calculation of the number of genes that are globally dispensable in a given genome in certain conditions. Thus, systematic large-scale approaches that involve single-gene deletions in Escherichia coli and other bacterial species showed that only a few hundreds of genes are essential, suggesting that nearly 90% of bacterial genes are dispensable when cells are grown either in rich or minimal mediums39,43,44. The high degree of global gene dispensability found in bacteria is consistent with findings from systematic gene deletion screens in Saccharomyces cerevisiae and Schizosaccharomyces pombe. These screens revealed that approximately 80% of protein-coding genes are dispensable under laboratory conditions45,46. Following the same trend, large-scale RNA interference approaches in C. elegans47,48 and D. melanogaster49 suggested that 65% to 85% of genes, respectively, are dispensable in these organisms, and similar figures were obtained in mice by the Sanger Institute Mouse Genetics Project50. Recent attempts to test for gene essentiality in humans using gene trap and large-scale CRISPR–Cas9 screens suggest that approximately 90% of tested genes are dispensable for cell proliferation and survival, at least in human cancer cell lines51–53. These surprisingly high values of seemingly dispensable genes in different organisms and their tolerance to inactivation have been referred as the ‘gene knockout paradox’ (REF. 38). Two main factors have been provided that may account for this observed gene dispensability54: mutational robustness and environment-dependent conditional dispensability.
Will you remember it this time?
YOU are wrong about the data. YOU.

Will you remember it this time?
YOU are wrong about the data. YOU.
What I will remember is that you are making arguments that contradict the data. You are not looking at both sides of the argument, and you are proposing a very poorly thought-out model.
Mutations that involve amino acid substitutions are not generally neutral, especially in the hydrophilic core.
Chromosome mutations often cause reproduction problems.
Sequences break down when randomly changed.
Gene knockout experiments are single knockouts, not the loss of dozens or more.
Bacteria and insects are not vertebrates.
Neutral mutations based on gene duplication are unlikely to find new function due to lack of directional force.