Cooper: Assumptions in mutation rate

Would be interested in seeing what is available here. Again, if the samples come from only a limited number of populations in a few areas, it seems like we might miss the outlier populations (like Group A in my scenario).

Sorry, I should have phrased my question better. I meant to ask what the DNA models would show as the bottleneck size and date. Yes, I provided you with the size and date, but what I was really asking was, “What if this wasn’t handed to you, and all you had was the current Group C sample and maybe a little from Group B?”

I guess this is my answer:

That doesn’t instill me with a lot of confidence. =)

Why not? I’m still pretty confused.

Yes, I realize Group A might be detectable as a ghost population, but what if Group A was so small, and so much mixing took place, that it was fully homogenized within Group C? It seems like Group A would be lost at some point, and thus you might come up with much earlier and much larger estimates of the bottleneck in that case, right? Maybe the 5,000 people in Group B at 10,000 years ago would be the calculated bottleneck. But that would be wildly off compared with the actual answer of 100 people at 20,000 years ago.
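To make the worry concrete, here is a toy neutral-drift sketch. Every number in it is an assumption lifted from my scenario (Group A of 100, Group C of 5,000), not real data: an allele private to Group A gets merged into Group C at a tiny frequency, and plain Wright–Fisher resampling often drifts it to loss, at which point nothing in Group C points back to Group A at all.

```python
import numpy as np

rng = np.random.default_rng(1)

def wright_fisher(freq, n, generations, rng):
    """Neutral drift: binomially resample the allele count each generation."""
    for _ in range(generations):
        freq = rng.binomial(2 * n, freq) / (2 * n)
        if freq in (0.0, 1.0):  # allele lost or fixed
            break
    return freq

n_a, n_c = 100, 5000                       # assumed sizes for Groups A and C
p0 = (2 * n_a * 0.5) / (2 * (n_a + n_c))   # frequency just after full merging
trials = 200
lost = sum(
    wright_fisher(p0, n_a + n_c, 2000, rng) == 0.0 for _ in range(trials)
)
print(f"frequency just after merging: {p0:.4f}")
print(f"replicates where the Group A allele drifted to loss: {lost}/{trials}")
```

In most replicates the Group A marker is simply gone after a couple thousand generations, which is exactly the “lost ghost” situation I’m asking about.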

Maybe I’m barking up the wrong tree, but with these answers, I really don’t see how anyone can claim “heliocentric certainty” on population bottleneck estimates.

I’m just not feeling warm and fuzzy about what seems to be a lack of scenario modeling on social/group assumptions to find upper and lower bounds on mutation/mixing rates, and then, based on the results of those models, providing confidence levels for ranges of bottleneck sizes and dates.
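For what it’s worth, the bounds exercise I’m describing seems mechanically simple. A hedged sketch, with every input made up purely for illustration: take Watterson’s estimator, theta = S / a_n, back out Ne = theta / (4 * mu) under low, middle, and high assumed mutation rates, and you immediately get a range instead of a point estimate.

```python
# All inputs below are illustrative assumptions, not real data.
n_samples = 50        # sampled chromosomes
S = 12_000            # segregating sites observed in the sample
L = 10_000_000        # sequence length in base pairs

# Watterson's estimator: theta_W = S / a_n, with a_n the (n-1)th harmonic number.
a_n = sum(1 / i for i in range(1, n_samples))
theta_per_site = S / a_n / L

# Scenario-model the mutation-rate assumption across plausible extremes
# (roughly the spread between pedigree-based and phylogenetic rates).
mut_rates = (1.0e-8, 1.25e-8, 2.5e-8)
estimates = {mu: theta_per_site / (4 * mu) for mu in mut_rates}
for mu, ne in estimates.items():
    print(f"assumed mu = {mu:.2e} -> estimated Ne ~ {ne:,.0f}")
```

The spread between the low-rate and high-rate scenarios is the honest error bar, and the same move could be repeated for the mixing-rate assumptions.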

Maybe someone in the OEC or ID camps has similar questions and can back me up? Or they could also let me know if this is not an area of concern for them.