Cooper: Assumptions in mutation rate

Not exactly. We use a mutation rate that is, at first, assumed to be stable (though not constant). We can then test this assumption, verifying against the data whether the rate really is stable. It is. So, in the end, it is not an assumption. The model is established by data, through and through. Perhaps there is another way to explain this data, but I have not yet figured out how to do so, and I don’t see anyone else close to figuring it out.
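To make the "assume, then test" logic concrete, here is a minimal sketch (all numbers are invented for illustration, not taken from any study): if the per-generation mutation rate is stable, de novo mutation counts across independent trio cohorts should scatter around a shared mean, so comparing cohort means is a simple stability check.

```python
import random

# Illustrative only: simulate de novo mutation counts in parent-child trios
# under a stable per-generation rate, then check that two cohorts agree.
random.seed(0)

MEAN_DE_NOVO = 70   # hypothetical mean de novo mutations per trio
N_SITES = 2000      # hypothetical number of mutable "slots" per simulation

def sample_trio_counts(n_trios, mean=MEAN_DE_NOVO):
    """Simulate de novo counts for n_trios under a stable mutation rate."""
    p = mean / N_SITES  # per-slot mutation probability
    return [sum(1 for _ in range(N_SITES) if random.random() < p)
            for _ in range(n_trios)]

cohort_a = sample_trio_counts(500)
cohort_b = sample_trio_counts(500)

mean_a = sum(cohort_a) / len(cohort_a)
mean_b = sum(cohort_b) / len(cohort_b)

# Under a stable rate, the two cohort means should agree closely;
# a drifting rate would show up as a systematic gap between cohorts.
print(mean_a, mean_b)
```

This is only a cartoon of the real analyses, which compare observed de novo rates across many sequenced families, but it shows why stability is a testable claim rather than a bare assumption.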

We have thousands of genomes at this time. I’m not sure, from memory, how many have been used to calculate mutation rates, but there are several studies. If there are any anomalies, we can be certain they will be discovered soon.

In this scenario (assuming it is not a trick question), the sharpest bottleneck at 20,000 years ago is 100 individuals.

We may or may not be able to detect this in DNA; it would depend on the data. Since this is an imaginary scenario, we just cannot say for sure. Maybe we could detect it. Maybe not. Maybe, if it didn’t happen, we could rule it out. Maybe not.
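One reason "it depends on the data" is that a bottleneck's genetic footprint depends on how long it lasts, not just how small it is. A back-of-the-envelope sketch using the standard expectation that heterozygosity decays each generation as H(t+1) = H(t) × (1 − 1/(2N)) in a population of size N (the durations below are hypothetical, not from the scenario):

```python
# Sketch: expected loss of heterozygosity through a bottleneck of N = 100,
# using the standard drift formula H_t = H_0 * (1 - 1/(2N))^t.

def heterozygosity_after(h0, n, generations):
    """Expected heterozygosity after `generations` at constant size `n`."""
    return h0 * (1 - 1 / (2 * n)) ** generations

H0 = 1.0  # heterozygosity before the bottleneck, in relative units
for gens in (1, 10, 50):  # hypothetical bottleneck durations
    print(gens, heterozygosity_after(H0, 100, gens))
```

A one-generation bottleneck of 100 removes only about 0.5% of heterozygosity, which could easily be lost in noise, while a 50-generation bottleneck removes over 20%, which is a much stronger signal. That spread is why detectability is an empirical question.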

In the scenario above, we have samples from Group B and Group C. We still might be able to infer that Group A exists. Inferences of ghost populations like this have been made several times already, and then confirmed by subsequent studies.

No. I do not think so.

You are no dummy. These are good questions.