What new genes? Isn’t most of that particular pattern best explained by gene loss, especially in the muntjac lineage? And why can’t new genes be formed by neutral mutations? I would presume that most of the new genes result from gene duplication. Isn’t duplication generally a neutral event?
A lack of evidence for these types of changes in existing populations. Dramatic gene loss without purifying selection stopping the process. Gene gain with no real working model of how it works.
If you think the numbers do add up, make a proposal.
There is no empirical evidence supporting this, and there is plenty of reason to doubt it, given that forming a new gene by duplication and random mutation has rarely, if ever, been empirically demonstrated.
Doesn’t gene duplication form a new gene by definition? And why do so many genes look as if they form gene families, even fitting into a nested hierarchy that shows the point at which duplication happened?
Are all genes essential? Don’t known mechanisms produce large gene families with partially overlapping functions? You know, like the myosins?
Fabrication.
If you think they don’t, show your math. You’ve never done any.
Your ignorance of evidence is not a data point.
Have you looked for the evidence? Duplication happens all the time. Are you claiming to know that one or both of the duplicates are then somehow immune to mutation?
The fact that you have not directly observed something occur is not a reason to think it can’t happen. That’s a straightforward non sequitur. We need know nothing else to see that your position stands on thin air.
Second, those aren’t lacking. Changes in chromosome numbers, loss of genes through both wholesale deletion and pseudogenization (or gain by duplication), and these having no obviously deleterious phenotypic effect, are all observed realities.
Gene loss and dispensability
The pervasiveness of gene loss throughout evolution leads to the fundamental question of how many genes can readily be lost in a given genome. Intuitively, the answer to this question depends on how many genes are actually essential for a given organism, and therefore cannot be lost, and how many genes are to some degree dispensable, and therefore susceptible to being lost because their loss has no impact or only a slightly negative impact on fitness, at least under certain circumstances (FIG. 2).
The knockout paradox. Gene dispensability is a measure that is inversely related to the overall importance of a gene (that is, gene essentiality), and this measure has been approximated by the fitness of the corresponding gene knockout strain under laboratory conditions38,39. Understanding which genes are dispensable or essential by linking genotypes with phenotypes is one of the most challenging tasks in the field of genetics and biomedicine in the twenty-first-century post-genomic era. This understanding is important both theoretically, such as when defining the minimal genome for a free-living organism40, and practically, such as when identifying all essential genes that are responsible for human diseases41. Historically, Susumu Ohno not only pioneered the idea that gene duplication was an important evolutionary force, but in 1985 he also pondered the concept of gene dispensability and suggested that “the notion that all the still functioning genes in the genome ought to be indispensable for the well-being of the host should be abandoned” (REF. 42). The emergence of large-scale gene targeting approaches has facilitated the calculation of the number of genes that are globally dispensable in a given genome in certain conditions. Thus, systematic large-scale approaches that involve single-gene deletions in Escherichia coli and other bacterial species showed that only a few hundred genes are essential, suggesting that nearly 90% of bacterial genes are dispensable when cells are grown either in rich or minimal media39,43,44. The high degree of global gene dispensability found in bacteria is consistent with findings from systematic gene deletion screens in Saccharomyces cerevisiae and Schizosaccharomyces pombe. These screens revealed that approximately 80% of protein-coding genes are dispensable under laboratory conditions45,46. Following the same trend, large-scale RNA interference approaches in C. elegans47,48 and D. melanogaster49 suggested that 65% to 85% of genes, respectively, are dispensable in these organisms, and similar figures were obtained in mice by the Sanger Institute Mouse Genetics Project50. Recent attempts to test for gene essentiality in humans using gene trap and large-scale CRISPR–Cas9 screens suggest that approximately 90% of tested genes are dispensable for cell proliferation and survival, at least in human cancer cell lines51–53. These surprisingly high values of seemingly dispensable genes in different organisms and their tolerance to inactivation have been referred to as the ‘gene knockout paradox’ (REF. 38). Two main factors have been provided that may account for this observed gene dispensability54: mutational robustness and environment-dependent conditional dispensability.
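As a rough sanity check on the percentages quoted above, here is a trivial sketch; the gene counts are approximate, illustrative values rather than figures taken from those studies:

```python
# Rough sanity check on the dispensability percentages quoted above.
# Gene counts are approximate, illustrative values, not exact study figures.
organisms = {
    # name: (approx. protein-coding genes tested, approx. essential genes)
    "E. coli K-12": (4300, 300),
    "S. cerevisiae": (6000, 1100),
    "human cancer cell lines": (18000, 2000),
}

for name, (total, essential) in organisms.items():
    dispensable = total - essential
    print(f"{name}: ~{100 * dispensable / total:.0f}% dispensable "
          f"({dispensable} of {total} genes)")
```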
Your second point does not work, because when we look at the divergent gene families that find new functions, the changes are way beyond a few sequence differences.
From UniProt:
WNT4 mouse vs. WNT6 mouse: over 180 sequence differences
WNT4 mouse vs. WNT4 rat: 4 sequence differences
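For what it’s worth, counts like these are easy to reproduce once the two sequences are aligned. A minimal sketch, assuming the sequences have already been aligned to equal length; the FASTA file names are placeholders, not real paths:

```python
# Minimal sketch: count amino acid differences between two *aligned*
# protein sequences of equal length (gaps as '-'). For paralogs such as
# WNT4 vs WNT6, a proper pairwise alignment is needed first; the file
# names below are placeholders.

def read_fasta(path):
    """Return the sequence from a single-record FASTA file."""
    with open(path) as handle:
        return "".join(line.strip() for line in handle
                       if not line.startswith(">"))

def count_differences(seq_a, seq_b):
    """Count positions at which two equal-length sequences differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    return sum(a != b for a, b in zip(seq_a, seq_b))

wnt4_mouse = read_fasta("wnt4_mouse_aligned.fasta")  # placeholder path
wnt4_rat = read_fasta("wnt4_rat_aligned.fasta")      # placeholder path
print("WNT4 mouse vs WNT4 rat:", count_differences(wnt4_mouse, wnt4_rat))
```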
Rum’s citation about gene loss does not consider anything close to the magnitude of the number of genes you attribute to gene loss.
Yes, and then there are two genes where there once was one. How is that not clear?
Doesn’t that depend on how long it’s been since the split? Clearly the genes you see were duplicated long before the split between “mouse” and “rat”. So?
So it’s just the number that concerns you, not the mere existence of such events. That’s a big change from your previous position. Doesn’t time fix that problem? As long as gene loss happens at all, there will be more of them after more time passes. So now you need to figure out if the rate is plausible. Have you tried?
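To make the question concrete, here is a back-of-envelope sketch of what an assumed loss rate implies; every number in it is an assumption for illustration, not a measured value:

```python
# Back-of-envelope sketch: expected gene losses in one lineage under an
# assumed, constant per-gene loss rate. All numbers are illustrative
# assumptions, not measurements.

ancestral_genes = 20000   # assumed ancestral gene count
loss_rate = 2e-4          # assumed losses per gene per million years
divergence_myr = 50       # assumed time since the lineages split (Myr)

expected_losses = ancestral_genes * loss_rate * divergence_myr
print(f"Expected losses per lineage: ~{expected_losses:.0f} genes")
# With these assumptions: ~200 losses per lineage. Whether that matches the
# Venn-diagram counts depends entirely on the real rate and timescale.
```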
What new function? You compare two homologous genes, see they are not identical, and then you assume one has a function that is new or different from the other.
But you have no idea how many duplicates have novel functions.
And?
They are not identical =/= they have new functions.
They have many differences =/= many differences are hard to evolve (that just takes more time).
Also, you have no idea where along their divergence a new function emerged, assuming one did.
Assuming a new function evolved, the genes diverge after the split, differences accumulate over generations, and at some point you don’t know, a new function evolves. But that could have happened at any point. With the first mutation, or at the 180th difference, or anywhere in between.
Retrospective calculations tell you nothing about how rare novel-function-creating mutations are.
Genes diverge increasingly with time, unavoidably. The WNT family of genes is really, really old, and its members have been duplicated, pseudogenized, lost, and diverged many, many times.
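To illustrate how much timescale alone matters, here is a rough sketch of expected amino acid differences under an assumed constant replacement rate; the rate, protein length, and divergence dates are all illustrative assumptions, not estimates for WNT specifically:

```python
import math

def expected_differences(length_aa, rate_per_site_per_myr, time_myr):
    """Expected number of differing positions between two sequences that
    have each evolved independently for time_myr, with a Poisson correction
    so repeated hits at the same site are not double-counted."""
    subs_per_site = 2 * rate_per_site_per_myr * time_myr  # both branches
    p_site_differs = 1 - math.exp(-subs_per_site)
    return length_aa * p_site_differs

LENGTH = 350   # assumed protein length, roughly that of a WNT protein
RATE = 4e-4    # assumed replacements per site per Myr (slowly evolving protein)

# Orthologs: mouse vs rat, assumed split ~15 Myr ago
print(expected_differences(LENGTH, RATE, 15))   # ~4 differences

# Paralogs: assumed duplication ~700 Myr ago, long before the rodent split
print(expected_differences(LENGTH, RATE, 700))  # ~150 differences
```

With the same per-site rate, a handful of differences between recent orthologs and well over a hundred between ancient paralogs is roughly what clock-like, largely neutral divergence predicts.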
The problem is that the fixation of all these changes is way beyond any population-genetic model. The proteins need to be able to bind to a matching receptor, fizzled.
There is no change to my position here. The magnitude of the gene loss is the problem, not showing that a single gene-loss mutation can happen.
We need to get beyond plausible and determine whether gene gain/loss is a likely explanation for the pattern.
Still counts as two, and as a difference in the gene set between two species. So they would contribute to those numbers in the Venn diagrams you keep bringing up.
This has been explained to you before, so it can’t be news to you.
Show your math.
And?
You haven’t shown that it is a problem. You just believe it’s a problem. When questioned you offer no good reasons for thinking it is a problem.
But you keep claiming it is a problem, or that it is unlikely, despite having no good reason to think so. Once again, the real issue is to be found in your own psychology. There is no reason in any of the data that indicates there is a problem. You just hold this belief as some sort of axiom that you demand we disprove. This is a problem with you, not the data or any existing model.
I would suggest that this ‘appearance’ is likely merely due to the superficial nature of your analysis. Bill Cole’s Gene Arrangement Test™ appears to simply involve counting up the number of identical genes from the (heavily summarised) Venn diagram. A closer look at the non-identical genes may provide information on similarities that explain how the phylogenetic relationship was established.
Actually Bill, my “claim” assumes nothing of the sort. Genes could have been “gained and/or lost” by a designer adding or removing them from the design.
But if we look at the individual genes I rather suspect we will find evidence that these gains or losses occurred naturally. But until you actually look, you won’t know, and all your huffing and puffing on the subject is nothing but hot air.
I would suggest that you cannot provide “strong evidence” of anything unless and until you are willing to dive into the gene data at greater depth than the shallow and superficial level that your Venn diagrams allow.
This tends to be a fairly pervasive trend in any empirical-based argument (whether scientific, economic, or whatever). The side with the greater depth of understanding of the underlying data will almost always be at a strong advantage. Know your data. If you don’t, you are simply setting yourself up to lose.
Bill, that’s another fabrication. You haven’t done any math and you clearly have no intention of ever doing any. Why?
Why did you use the singular article “a”?
Bill, it’s very hard not to conclude that at this point you’re simply, and deliberately, lying. There is no receptor called “fizzled.” There is a FAMILY OF 10 DIFFERENT RECEPTORS IN HUMANS called “Frizzled.”
This is how life works. Families with fuzzily, partially overlapping functions. This is a consequence of duplications followed by mutations.
It is not how human designers design anything. But you have once again stumbled into an empirical prediction of your ID hypothesis. You might learn something if you didn’t simply assume that your prediction (a single receptor) was true.
Why are you pretending to have the slightest idea what you are talking about?
Whatever are you talking about? Do you even know what proteins do?
Here is an image of the WNT pathway with the WNT ligand binding to frizzled. This initiates transcription. If you compare sequences of frizzled in different cells, the number of differing amino acid positions is in the hundreds. If you compare rats and mice for the same cell, the sequences are almost identical. Neutral mutations cannot explain this.
I showed you already. What’s a plausible rate of gain and loss? And again, what are pseudogenes if there’s no such thing as gene loss?
I do not think you can make a feasible model to show this magnitude of gene loss, but you can show me I am wrong. The bigger problem is that, even if you can show this is feasible, you then have to show that gaining hundreds of new genes is feasible, and there is currently nothing that will help you with that.
The sequence and waiting-time problems are a showstopper for the single-origin claim, given all the gene Venn diagrams that are currently available.
Hi Ron
This specific argument is about the lack of explanatory power of neutral mutations finding new function.
John is appealing to neutral mutations to explain the gene patterns in the Venn diagrams: neutral gene loss, plus gene duplication and neutral mutations finding new function. The WNT example shows that, if neutral evolution is the cause of the different sequences between cells, not only does WNT have to change (to generate a new cell type), it has to change and still bind to a receptor (frizzled) that is also changing.
The better explanation is frizzled and WNT were designed as matched pairs with different sequences depending on the cell type.