Thanks for inviting me in. I’m sorry this is so late, as I have been working especially hard during the COVID era (several of our doctors have retired or cut back, and one died of COVID), and have also been spending time with the isochron/mixing line thread.
Ignostic (C1) outlined the problem fairly well. He notes that coal has been found to contain radiocarbon at levels beyond what would be expected from laboratory contamination. His references are accurate, although the major thrust of Lowe’s article was that one should not use coal as a radiocarbon blank because it contained radiocarbon. The proposition that fungi in particular, but also bacteria, might have contributed carbon-14 to the coal was an attempt to explain why coal would have the observed carbon-14.
I’m not sure what point is being made by the reference to extensive wheatering (presumably weathering) of coal. As I read the article by Mitchell and Matthews, some samples were taken fairly rapidly from the mines, made into samples under an inert atmosphere, and then stored in drums filled with slightly pressurized argon. Reading the original Baumgardner chapter (2005, http://www.icr.org/i/pdf/technical/Carbon-14-Evidence-for-a-Recent-Global-Flood-and-a-Young-Earth.pdf , pp. 605, 606), it appears that these drums are where the RATE group got their coal. Weathering should not be a problem.
As far as microorganisms being washed in, I agree that Farrell and Turner found apparently fresh organisms in coal. But bacteria were not everywhere. As they noted on p. 160,
The absence of bacteria in the samples from the sixth level west is consistent with the fact that this coal was very compact while the coal from the sixth level east was filled with minute fracture cracks.
We’ll come back to this point later.
jammycakes (C2), as we have already discussed in another thread ( Paul Giem: Isochron Dating Rocks and Magma Mixing - #99 by RonSewell ),
It is not true that “every step in sample preparation will introduce modern carbon to the samples, typically in the 0.14-0.25% range.” First, that value is for oxidation of a sample followed by reduction, which is multiple steps. Second, in the best labs, contamination in all of those steps can easily be brought below 0.1%, and Baumgardner et al. (at my urging) used one of the best labs. Third (and this is not in the other discussion), if this were all laboratory contamination, Lowe would not have singled out coal as being unreliable; he would have had to say that everything is unreliable. And finally, two of the researchers who know the RATE group data the best both agree that laboratory contamination is not the answer for their data. It is important enough to repeat here:
Kirk Bertsche ( RATE’s Radiocarbon: Intrinsic or Contamination? ) has stated,
While this conclusion [laboratory contamination] explains the higher values for the biological samples in general, it does not account for all the details. Some biological samples *do* have radiocarbon levels not explainable by sample chemistry. These samples are mostly coals and biological carbonates ….
Unlike the literature values, Baumgardner’s coal samples do show significant radiocarbon above background, inviting explanation.
(italics his). He blames the carbon-14 found on “in situ contamination”, but he at least agrees that it is not just laboratory error.
And Harry Gove, as summarized by Kathleen Hunt ( Carbon-14 in Coal Deposits ), stated:
The short version: the 14C in coal is probably produced de novo by radioactive decay of the uranium-thorium isotope series that is naturally found in rocks (and which is found in varying concentrations in different rocks, hence the variation in 14C content in different coals). Research is ongoing at this very moment.
Note that they have different explanations for the data, which may or may not be correct singly or in combination, but they both agree that there is real carbon-14 in coal. In other words, they agree with Lowe.
You are correct that AMS dating is being applied, and that cosmic rays (which are a major problem with decay counting) do not interfere significantly.
I have trouble standing behind anything Kent Hovind says unless it can be independently verified.
You can go back to 80,000 years under very special circumstances. That is a carbon-14 level of 0.005 pMC, and was done by Taylor and Southon in 2007 on a diamond (available at Use of natural diamonds to monitor 14C AMS instrument backgrounds - ScienceDirect ). If one is using a 50 microgram target, contaminating it with just 2.5 nanograms of modern carbon (actually ~2.3 ng, as we are now at ~110% modern carbon since the atmospheric nuclear bomb tests) is enough to produce these results, and I think anyone will concede that it is possible, if not highly probable, that we are dealing with contamination here. Well, maybe not some creationists, but they should.
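For those who want to check the arithmetic, here is a minimal sketch (my own, not from any of the papers cited; it assumes the Libby half-life of 5568 years, against which conventional radiocarbon ages are defined, and the ~110% modern contaminant mentioned above):

```python
import math

# Mean life of carbon-14 from the Libby half-life (5568 yr);
# conventional radiocarbon ages are defined against this value.
MEAN_LIFE = 5568 / math.log(2)  # ~8033 yr

def age_from_pmc(pmc):
    """Conventional radiocarbon age (years) for a given percent modern carbon."""
    return -MEAN_LIFE * math.log(pmc / 100.0)

def pmc_from_contamination(sample_ug, contaminant_ng, contaminant_pmc=110.0):
    """pMC produced in an otherwise carbon-14-free target by a trace of
    modern contaminant (sample in micrograms, contaminant in nanograms)."""
    return (contaminant_ng / 1000.0) / sample_ug * contaminant_pmc

print(age_from_pmc(0.005))              # just under 80,000 yr
print(pmc_from_contamination(50, 2.3))  # ~0.005 pMC
```

So ~2.3 ng of post-bomb carbon on a 50 µg target is indeed enough to mimic the 0.005 pMC floor.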
However, that does give you a good idea of what is possible with modern techniques. There was a set of experiments done on subfossil wood dated at 2 million years. Let me quote from my 2001 paper ( Geoscience Research Institute | Carbon-14 Content of Fossil Carbon ):
Perhaps the most interesting experiment was reported by Kirner et al. (1997). Part of the background is as follows: R. E. Taylor was aware that short-age constant-decay theories predicted that there should be >0.005 pmc in fossil carbon (Giem 1997a, p 180-187). Taylor believed that he should be able to obtain 14C/C ratios lower than those commonly published, and that could possibly match or even surpass those obtained from graphite. The results his group obtained include several measurements with an average of 0.162 pmc. The lowest value they obtained was 0.056±0.004 pmc. Their conclusions were that the data were best explained as the sum of a constant amount of contamination by modern carbon regardless of sample size, plus a constant proportion of carbon-14 equivalent to 0.12±0.02 pmc. The constant proportion of carbon-14 “could arise if our wood blank was not truly 14C dead either due to a finite age or the result of the presence of residual contamination not removed by chemical treatment.”
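Kirner et al.’s two-component interpretation can be sketched numerically. This is my own illustration: the 0.12 pMC intrinsic component comes from the quote above, but the 1 µg contamination mass and the sample sizes are hypothetical values chosen only to show the behavior.

```python
def measured_pmc(sample_mg, contamination_ug, intrinsic_pmc=0.12, modern_pmc=100.0):
    """Two-component blank model: a fixed mass of modern-carbon contamination
    (diluted more in larger samples) plus a constant intrinsic pMC."""
    contamination_pmc = (contamination_ug / 1000.0) / sample_mg * modern_pmc
    return contamination_pmc + intrinsic_pmc

# With a constant (hypothetical) 1 ug of modern contamination, small samples
# read high, while large samples approach the 0.12 pMC floor:
for sample_mg in (0.1, 1.0, 10.0):
    print(sample_mg, round(measured_pmc(sample_mg, contamination_ug=1.0), 3))
```

This is why plotting measured pMC against the reciprocal of sample mass is a standard way to separate a constant-mass contamination component from an intrinsic one.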
So our best information is that it is not just coal that has the problem of residual radiocarbon. This radiocarbon has to come from contamination in situ, from nuclear synthesis in situ, or from both, or else some of it is actual residual radiocarbon, which has implications for age. Can anyone give any estimate for the first two? Or can anyone suggest experiments to differentiate among the three? For example, could one look for the bacteria themselves under a microscope, or culture organisms from various parts of a coal seam, and see whether radiocarbon levels correlate with the bacterial/fungal burden?
jammycakes, you say (C11),
This is a side to science that I’m pretty sure a lot of radiometric deniers haven’t a clue about.
Perhaps, but not all. I’ve taken a tour of the lab where the videos were made, among other labs, and before going to medical school I ran a lab doing radioimmunoassays for aldosterone. I’m familiar with crazy quirks; for example, our lab could only use methanol from Mallinckrodt for paper chromatography (yeah, it was that long ago), because methanol from all the other manufacturers we tried would not give us good results. This may explain why I won’t rule out laboratory contamination as an explanation without convincing evidence that it is not the whole explanation for the radiocarbon measurement in question.