Continuing the discussion from Rob Carter Responds to TMR4A:
@dsterncardinale and @GutsickGibbon, you’ve been having some fun engaging with Rob Carter. In his last response to you, he states this:
Figure 2, after Sanford et al. (2018): The normalized distributions of chromosome 22 SNP data from the 1000 Genomes project, our Evolutionary Model, the Evolutionary Adam and Eve Model, the Designed Alleles Model, and the Designed Gametes Model. Clearly, a number of different biblical models align surprisingly well with the actually observed allele frequency data.

Regarding this graph, they complained (elsewhere, not in this video) that we changed the values on the y-axes between the various graphs in our paper, as if they did not understand the process of normalization. It would have been nice to have all the world’s computing power at our disposal so that we could model going from Adam and Eve to 7.5 billion people over the course of six thousand model years, all the while tracking every mutation that happened in duplicate 3-billion-letter digital genomes. We would then have had as many mutations to track as seen in the 1000 Genomes data. Barring that, the data had to be normalized to be comparable. My point is that models are always limited, ours and theirs included.
I think he has a point here. It is legitimate to normalize these curves.
Increasing the population size or the number of sampled genomes increases the total number of mutations, so normalization is the correct thing to do. Two caveats, though: (1) the normalization should be by area (so each curve integrates to one), not by peak height, and (2) I’m sure improved evolutionary models would fit the data just fine. A quick sketch of the area-vs-height distinction is below.
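To make that concrete, here is a minimal sketch (the counts and bin width are invented for illustration, not taken from the paper). Height normalization rescales so the tallest bin equals 1, so the whole curve’s scale hangs off a single bin; area normalization rescales so the bins integrate to 1, which is what makes curves from samples of very different sizes directly comparable:

```python
import numpy as np

# Hypothetical binned allele-frequency counts from two samples of very
# different size (values invented purely for illustration).
small_counts = np.array([120., 60., 30., 15., 8., 4.])
large_counts = np.array([24000., 11900., 6100., 2950., 1600., 840.])
bin_width = 0.05  # assumed width of each allele-frequency bin

def normalize_by_height(counts):
    # The tallest bin becomes 1; shape comparisons can be distorted
    # because only that one bin anchors the scaling.
    return counts / counts.max()

def normalize_by_area(counts, width):
    # The bins now integrate to 1 (a proper density), so curves from
    # differently sized samples are directly comparable.
    return counts / (counts.sum() * width)

for counts in (small_counts, large_counts):
    print("height:", np.round(normalize_by_height(counts), 3))
    print("area:  ", np.round(normalize_by_area(counts, bin_width), 3))
```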
Ultimately, I think just about any date for Adam and Eve can be made to work by tweaking the simulation and the sampling. There just isn’t enough of the right information in the site frequency spectrum (SFS), and the simulations have far too many free parameters.
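For anyone who wants to poke at that claim, here is a rough sketch using msprime (my choice of tool, not anything from Carter’s paper; all parameter values are invented). It returns the area-normalized folded SFS for a given demographic setting, so you can turn the knobs yourself and see how much, or how little, the normalized shape actually moves:

```python
import msprime

def normalized_sfs(pop_size, growth_rate, seed, n_individuals=50):
    """Simulate one demographic scenario and return its folded SFS,
    normalized to sum to 1 (invented parameter values throughout)."""
    demog = msprime.Demography()
    demog.add_population(initial_size=pop_size, growth_rate=growth_rate)
    ts = msprime.sim_ancestry(
        samples=n_individuals, demography=demog,
        sequence_length=1_000_000, recombination_rate=1e-8,
        random_seed=seed)
    ts = msprime.sim_mutations(ts, rate=1e-8, random_seed=seed)
    sfs = ts.allele_frequency_spectrum(polarised=False, span_normalise=False)
    sfs = sfs[1:-1]            # drop the monomorphic end classes
    return sfs / sfs.sum()     # normalize to unit area

# Two of the many knobs one can turn; compare the resulting shapes.
print(normalized_sfs(pop_size=10_000, growth_rate=0.0,  seed=1)[:5])
print(normalized_sfs(pop_size=1_000,  growth_rate=0.01, seed=2)[:5])
```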
Can you clarify what exactly you think his mistake was? Thanks.