@bob_o_hara, I saw your comment on Uncommon Descent. Welcome to Peaceful Science. You can continue the conversation here, or even start a new thread. Peace.
Thanks for setting up a dialogue, but it would help if you didn’t then close it! The discussion started by @sygarte on Table 4 contains several misunderstandings of what Bayes Factors are. FWIW, when I saw Table 4 my reaction was that the values are barely plausible: some are several orders of magnitude larger than any (log) Bayes Factor I’ve seen elsewhere.
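For context on the scale: a quick sketch (my own illustration in Python, not from the paper) of how a natural-log Bayes factor translates into posterior odds, assuming equal prior odds for the two models:

```python
import math

def log_bf_to_odds(log_bf: float) -> float:
    """Convert a natural-log Bayes factor into an odds ratio,
    assuming equal prior odds for the two models."""
    return math.exp(log_bf)

# A log-BF of 5 already corresponds to roughly 148:1 odds,
# which is conventionally regarded as very strong evidence.
print(log_bf_to_odds(5))  # ~148
```

A log Bayes factor several orders of magnitude beyond this implies astronomically lopsided odds, which is why such values invite scrutiny rather than acceptance at face value.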
I’d be curious why you object to my explanation.
Do you disagree?
(S. Joshua Swamidass)
The idea of common descent isn’t merely about a grouping pattern but includes a temporal component as well. Single-celled organisms preceded multicellular life. Invertebrates appeared before vertebrates. Therapsids and synapsids preceded mammals, and so on. A priori, from a design viewpoint, the temporal appearance of various groups did not have to occur this way.
And, compared to other factors, the time since apparent divergence correlates reasonably well with sequence variation. If one interprets the “dependency graph pattern” as arising from a designer, how does that mesh with the temporal patterns and relationships that are also in the data?
Without second-guessing Ewert’s own way of thinking about this, it would appear that he’s talking about the reuse of modules to modify existing forms (whether those forms are the physical manifestations or some kind of informational blueprint, as in the old Owen or Agassiz schemes, is another matter).
In other words there is some kind of chronological ordering presupposed in it.
The same could be true in other design views based on less fashionable philosophies. For example, on the principle of plenitude held by Linnaeus, every form that could be created would be created. It’s easy to imagine that a rational God might, in a creation-extinction over time model based on plenitude, create the variations sequentially rather than randomly. Hence you might get a sequence of similar moths, but probably not a Precambrian rabbit. This is particularly apt as an explanation for nested clades, because logically people tend to start with a global category and then branch out to fill its sub-classes in increasing detail, rather than starting bottom up.
Come to think of it, the fact that taxonomic groups appear to start with major groups and diversify fits that rather better than the idea of one major group slowly evolving into another major group randomly, which isn’t a common pattern in life. No doubt evolutionary theory has a fix for this, but at the instinctive level top-down tree taxonomy fits design better than RV&NS. And it is at that instinctive level of what design ought to look like that your point was made.
The overall patterns of increasing complexity might also be explained, on design, in the same way that evolutionary theory sometimes does - early organisms terra-formed the earth (and in design models were created for that): plants prepared the land for land creatures, etc.
I can see faults in all those ideas, but am also aware that (a) I’ve drawn them out of thin air as possible explanations and (b) any model has faults when nobody’s spent a lot of time refining it.
Yes, he is talking about reuse of modules. I’m wondering how such reuse left temporal signatures. To me, this suggests a pattern of largely incremental modification over time rather than special creation of groups or species, i.e., common descent.
This applies to Ewert’s work as well, which compares the patterns to human software implementation. I’m wondering how the temporal patterns we see will integrate with this model. Ultimately it’s not what design ought to look like but what the data tells us it must look like. It’s likely to be design implemented via modification with descent over time, without massive horizontal transfer, at least among many ‘higher’ eukaryotes.
Aside: If we take software design as a guide, not only do we see module reuse but also massive horizontal transfer between individual programs. This is not common in many branches of life, again suggesting small modification with descent rather than ad hoc or de novo modes of creation.
My first comment here, and I haven’t quite figured out the comment threading yet …
I have a technical issue with how Ewert is calculating the Bayes Factors. He is maximizing the probability for the Dependency Graph, but taking a fixed value for the Tree Graph (assumed to be maximum). Even with penalties for added complexity I would expect this to bias towards the DG.
I think he should be maximizing both the numerator and denominator, adding/subtracting nodes as appropriate to the DG and TG hypotheses, excluding the common “tree” nodes between models, and testing the remaining “dependency” nodes.
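A toy simulation (entirely hypothetical, not Ewert’s actual procedure) illustrates the bias I have in mind: if both “models” are pure noise, but one side is allowed to pick its best-scoring candidate while the other is held fixed, the ratio favors the flexible side by construction:

```python
import random

# Toy illustration of optimizer bias in a likelihood-ratio comparison.
# Both "models" here are identical noise; the only difference is that
# one side gets to choose its best-scoring candidate structure.
random.seed(0)

def fake_log_likelihood():
    # stand-in for the log-likelihood of one candidate structure
    return random.gauss(0.0, 1.0)

trials = 10_000
wins = 0
for _ in range(trials):
    # "numerator": best of 50 candidate dependency graphs
    num = max(fake_log_likelihood() for _ in range(50))
    # "denominator": one fixed tree
    den = fake_log_likelihood()
    if num > den:
        wins += 1

# Far above the 50% expected for genuinely equivalent models:
print(wins / trials)
```

The point is only that maximizing one side of the ratio and not the other manufactures apparent evidence; whether Ewert’s complexity penalty fully compensates for this is exactly the open question.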
However, all this seems to be reinventing the wheel. There are existing Bayesian phylogenetic methods which are well developed and accepted (see Theobald 2010). If ID is to become an acceptable scientific concept, then proponents should approach the problem using acceptable scientific methods. By trying to tackle an unpopular idea with unproven methods published in a (shall we say?) fringe journal, Ewert is setting himself up for failure.
Theobald, D. L. (2010). A formal test of the theory of universal common ancestry. Nature, 465(7295), 219–222.
Welcome @Dan_Eastwood. The correction factor (the prior) and the optimization procedure do need a careful look. I have not had time to do it yet.
Sure, but I’m not sure this helps them here. They need a well-controlled experiment (same implementation) that compares trees and dependency graphs. The existing Bayesian phylogenetic methods might be useful as a control experiment to ensure things are working properly. Regardless, the result should be about the same either way if they implemented it correctly, so that would make a good control experiment.
Not if the penalty is correct. It should be neutral then. Though I can’t yet comment if the penalty is correct.
The part that is more likely to bias towards the DG is noise in the data, and factors already known to violate nested clades under common descent. The data may actually be better represented by a dependency graph. The real problem with the argument is that common descent does not predict perfect nested clades.
A couple of quick notes about evaluating the Dependency Graph Model:
The DGM and common ancestry could coincide in different circumstances. For instance, if some species in the Precambrian experienced large infusions of information which led to radical new body plans, a signal could exist in the background for common ancestry, and a DG distribution of complex traits could exist in the foreground. This possibility should be studied at some point.
The most interesting similarities are complex traits which share multiple genetic modifications such as echolocation in dolphins and bats and vocalization in humans and birds. The only standard explanation for such similarities would be convergence: natural selection directed the formation of similar structures/genetic modifications independently. This explanation faces many challenges which will have to be addressed head-on:
Limited effectiveness of natural selection in small populations.
Timescales required for new highly specific, coordinated, and nearly neutral mutations to accumulate in a single organism and then spread through the population.
The large number of changes often required before a given trait becomes highly advantageous. An interesting example is the origin of the melon needed to focus sound in echolocation. It is composed of special lipids which are actually toxic to an organism if not properly contained. And, its ability to focus sound requires the right 3D distribution in the organ.
The rarity of mutations which affect early development in such a way as to create changes in the fundamental architecture of an organism.
What would be the best papers which attempt to address these challenges for some complex traits?
A future study will need to examine the tree versus dependency graph relationships for trait-gene combinations which are too distantly related to be explained by non-convergence processes.
Great to see you after all this time @bjmiller. Welcome back.
You are making the correct point that the DGM and Common Ancestry (CA) can both exist at the same time. There can be a signal for both CA and the DGM. That is entirely correct.
I will also add that the DGM is not equivalent to only an “infusion of designed information.” Perhaps there are both natural and supernatural explanations for such a pattern.
Sure, but convergence happens at multiple levels. It has to be parsed out quantitatively to make any sense of it. As I understand it with echolocation, for example, there are a couple (or just one) identical point mutations, but then most of the other changes (on a genetic level) are all different (and therefore not genetically convergent). That points to independent origins, and is not at all difficult to explain with standard theory. I hasten to concede that this is just a high-level description. A more rigorous treatment is possible, but my point is that high-level descriptions of “convergence” have limited utility in making sense of the questions you raise.
I’ll briefly answer your questions here though:
I’m not sure how that is relevant. A more realistic model is a large number of small populations, where only one population needs to hit some target that is eventually selected. The chances go way up.
You have several implied claims in here that I would dispute: (1) highly specific mutations are the rule, not the exception, (2) coordinated changes, with no intermediate benefit, are the rule, (3) the changes must all appear in a single organism, and (4) it takes a lot of time for a successful change to spread through the population.
I’m just not sure if any of these implied claims are correct. In fact, they appear to be false.
This seems to confuse the final result of a long process with intermediates that are each individually useful, but less useful than the final result. Much stronger evidence is required to claim these changes must be “coordinated.” Maybe they are. I’m not going to insist one way or the other, but you have to do more than merely conjecture that this all had to come in one fell swoop, and that no intermediately useful steps are possible.
I am not sure there is evidence that the fundamental architecture of these organisms changed. Isn’t it the point that whales, for example, have the same fundamental body plan as the walking-whale?
The bias I expect arises because the DG is allowed to test an arbitrarily large number of possible nodes (degrees of freedom) in a Dependency Graph, but the Tree Graph is not optimized in a similar way (given the restrictions of tree structure). Both need to be penalized for added complexity (e.g., AIC or SBC). Non-informative or minimally informative priors can be used for both.
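As a sketch of what that penalization looks like in practice (with made-up log-likelihoods and parameter counts, not Ewert’s values):

```python
# Complexity penalty via AIC: AIC = 2k - 2*ln(L_max), lower is better.
# The numbers below are hypothetical, purely for illustration.
def aic(log_likelihood: float, k_params: int) -> float:
    return 2 * k_params - 2 * log_likelihood

# Suppose the dependency graph fits slightly better than the tree,
# but spends many more free parameters (nodes) to do it:
tree_aic = aic(log_likelihood=-1000.0, k_params=20)  # 2040.0
dg_aic = aic(log_likelihood=-990.0, k_params=60)     # 2100.0

# After the penalty, the simpler tree is preferred despite its worse fit:
print(tree_aic, dg_aic)
```

The same comparison can be run with SBC/BIC, which penalizes parameters more heavily as the data grow; the substantive question is whether Ewert’s own penalty term plays this role correctly.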
But this is still leading straight back to existing methods (Theobold again), and Ewert is reinventing the wheel.
On a more critical note, Theobald gave an example of applying his method to test a hypothesis about design. His test rejected design in favor of common ancestry FOR THIS ONE CASE, but it opens the way for more testing because this is well-accepted statistical methodology. IF it had favored design instead, it would have to be taken seriously. Advocates for ID should be jumping at the chance to put their ideas to the test, but fail to do so.
Yes, I made this point to @Winston_Ewert already. It is not clear his penalization factor is correct. The strategy he used does not enable verification, and it may be in error.
Can you link to that paper, and any critiques offered of it?
I’d also caution that design and common descent are not mutually exclusive. Design is poorly defined (not our fault), but there are some conceptions of design that could overlay on top of a common descent signal. So showing evidence for common descent is not showing evidence against “design” in a generic sense, though it might rule out a defined class of design theories.