Paul Giem: Isochron Dating Rocks and Magma Mixing

RonSewell (C52),

Thanks for the tip. I had not noticed that offer before. Of course, it will be valid only after Canada feels comfortable about COVID.

jammycakes (C51)

PaulGiem:

Note that the two explanations of the data are not identical, although I suppose one could try a combination of them. The reason for pointing this out is not so much to argue who is right (that would take another post, if not more experiments) as it is to point out that the experts do not believe the laboratory contamination hypothesis. It would appear that the laboratory contamination hypothesis is in fact functioning as a rescuing device.

Well of course it will be a combination of them. That’s simply how measurement works in every area of science. You don’t just dismiss one source of error as a “rescuing device” just because it can’t account for the discrepancy in isolation. You have to add all the possible sources of error up together.

You can properly say that laboratory contamination is functioning as a rescuing device if it is being used as an explanation in isolation, especially when an experiment at a particularly well-functioning lab has shown it to be inadequate on its own.

Now I agree that one must add up all possible sources of error. But I would ask if you have made any estimates of the amount of error one can expect from either nucleosynthesis or in situ contamination. I’d like to assume that this is not just a handwaving argument. The preliminary data and calculations I have seen seem to indicate the causal inadequacy of these theories also, either individually or in combination. Perhaps you have other data and/or calculations.

With discussion of carbon dating here, this new thread may be of interest.

It is standard practice in every area of science to start off with a ball park estimate of how old, or how large, or how heavy, something is. This is simply so that you know to use the correct tool for the job. From that, you take measurements in order to narrow it down.

So for example, they will start off knowing that a rock formation is from Cretaceous strata. In other words, somewhere between 145 million and 66 million years old. This will simply tell them that they should be using uranium-lead, potassium-argon or rubidium-strontium methods, rather than, say, radiocarbon or thermoluminescence dating. From that, they will then be able to pinpoint the formation as being, say, 66.038 ± 0.011 million years old.
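To make that concrete, here is a minimal sketch of the "pick the right tool" step. The useful ranges are rough, illustrative figures of my own, not laboratory specifications:

```python
# A minimal sketch of the "right tool for the job" logic, with rough,
# illustrative useful ranges for each method (not lab specifications).
USEFUL_RANGES_YEARS = {
    "radiocarbon":        (1e2, 5.5e4),
    "thermoluminescence": (1e2, 5e5),
    "potassium-argon":    (1e5, 4.5e9),
    "uranium-lead":       (1e6, 4.5e9),
    "rubidium-strontium": (1e7, 4.5e9),
}

def candidate_methods(ballpark_age_years):
    """Return the methods whose useful range covers a ballpark age estimate."""
    return [name for name, (lo, hi) in USEFUL_RANGES_YEARS.items()
            if lo <= ballpark_age_years <= hi]

# A Cretaceous formation (~66-145 Ma) immediately rules out radiocarbon:
print(candidate_methods(1.0e8))
# ['potassium-argon', 'uranium-lead', 'rubidium-strontium']
```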

It’s exactly the same principle as if I were to measure some furniture, such as a desk, to see where I could put it. I would start off thinking that it must be roughly two metres by one metre by one metre. That would tell me to use a tape measure rather than an electron microscope, a micrometer, the odometer in my car, a handheld GPS device, or the Hubble Space Telescope.

Honestly, this is something I see happening quite frequently in young Earth arguments. They take techniques that are perfectly legitimate and standard practice in every area of science, and by ignoring the fact that there is actual maths involved, twist them to make them look like there’s some kind of fudging going on.

Well yes, I’d agree there. He does seem angry, and it is pretty off-putting. But he does make some important points about quality control and quality assurance, and he does provide quite a lot of detail about the standards that geologists are expected to meet in conducting their research. For example, he pointed out that if there are any questions about the integrity of a particular experiment, you are expected to discard those results and re-do the experiment, not to “correct” the results to account for “typographic errors” in the reports, as the RATE team did with a set of results from twenty years previously.

That must qualify as the understatement of the century. Accelerated nuclear decay doesn’t just “raise problems”; it is complete science fiction. The problems that the RATE team reported (and downplayed to a staggering extent) barely scratch the surface of the problems with their hypothesis. Besides the heat problem, it would have required the fundamental constants of physics to have changed. This would have had extremely far-reaching consequences. At best, you’re talking about the Earth being turned into a cloud of hot plasma. At worst, you’re talking about a false vacuum decay.

Well that may be so, but the question is still, just how discordant is the data? The discordances noted are still only a factor of two at most, and the biggest discrepancy is K/Ar, which is the most susceptible to leakage since argon is a gas. For the others, the disagreement is no more than twice the size of the error bars at most. I’m sorry, but to cite differences of that scale as evidence for errors of a factor of a million is still blowing things completely out of all proportion.

No you can’t say that laboratory contamination is functioning as a “rescuing device.” Period. End of story. No exceptions, no excuses. Fully and correctly accounting for contamination is a fundamental requirement in every area of experimental science. Especially when you are making extraordinary claims that hundreds of thousands of results taken using dozens of different methods are all consistently out by factors of a million. As I said, to describe laboratory contamination as a “rescuing device” is to demand an exemption from the basic rules and standards of laboratory technique.

Laboratory contamination has been measured. This can be done by preparing a sample for radiocarbon analysis twice and measuring the difference between the first and second analyses. The difference turns out to vary widely, with values between 0.14% and 0.25% of modern carbon being typical for each step in sample processing. And that is assuming that the experimenters concerned took great care in handling and preparing their samples – even relatively minor sloppiness, carelessness, or experimenter error would only increase the errors. Given that radiocarbon dating typically requires several steps of sample preparation (especially if you have to extract cellulose from wood or collagen from bone), contamination levels of 0.5% modern carbon are completely unsurprising.
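To put rough numbers on how those per-step figures add up, here is a small sketch. The per-step values are the ones quoted above; the age formula is the standard conventional radiocarbon age relation:

```python
import math

def apparent_age_years(pmc):
    """Conventional radiocarbon age, t = -8033 * ln(pMC/100),
    using the Libby mean life of 8033 years."""
    return -8033 * math.log(pmc / 100)

# Say three preparation steps, each adding ~0.2 pMC of contamination:
per_step = [0.2, 0.2, 0.2]
total_pmc = sum(per_step)                    # ~0.6 pMC in total
print(round(apparent_age_years(total_pmc)))  # 41097 -> a finite "age" of
# ~41,000 yr from contamination alone, even for radiocarbon-dead material.
```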

The RATE team might have been able to claim that something warranted explanation if they had consistently reported levels of contamination of around 5% modern carbon, with a clustering of results around that figure. But levels of up to 0.5% or so with a very wide variation most certainly do not. And if outliers greater than that only crop up occasionally rather than on a regular basis, then those would almost certainly be down to either poorly studied contamination vectors, or experimenter error, or both.

6 Likes

You have just expanded my knowledge of quantum physics by a small notch. I knew the heat problem was easily life-ending, but false vacuum decay is … not sure what to call it … universe-as-we-know-it-ending?

I wrote IF, Paul. You haven’t shown that anyone has done that.

2 Likes

jammycakes (C58),

It is standard practice in every area of science to start off with a ball park estimate of how old, or how large, or how heavy, something is.

That depends on how “ballpark” you are talking about. In medicine, we tell the lab that we want a blood test run, and don’t give them any more information than that it is blood. A quantitative hCG can run anywhere from 2 (or actually 0) to over 100,000 mIU/mL. That’s quite a ballpark.

So for example, they will start off knowing that a rock formation is from Cretaceous strata. In other words, somewhere between 145 million and 66 million years old. … From that, they will then be able to pinpoint the formation as being, say, 66.038 ± 0.011 million years old.

Yes, that would work if all one said was that it was Cretaceous. But what if they were told that the formation was late Maastrichtian, or even the K-T boundary? That would narrow the time period considerably. BTW, I doubt that any lab would give you the data you quote. That is far too low an error for a single lab report. I suspect you got it from the Wikipedia date for the K-T boundary, or from the same source they did, which is almost certainly a composite date.

So have you submitted geological samples, and if so did you tell them that they were Cretaceous?

PaulGiem:

I agree with you that accelerated decay during a Flood raises problems with heat (not to mention radiation) that may well be insuperable.

That must qualify as the understatement of the century. Accelerated nuclear decay doesn’t just “raise problems,” it is complete science fiction.

I’ll give you a little tip on how to sound less like Henke. Mostly let your data do the talking. If they don’t convince, do you really think that your statements are going to do the job? And when someone is agreeing with you, take the win graciously. (I am unlikely to ever say anything dogmatically, even agreeing with you, as I am cognizant that we have been surprised in the past; the bacterial cause of ulcers, the Bretz floods, and the theory of gravity itself come to mind.)

PaulGiem:

Chapter 6 of the RATE II book …

Well that may be so, but the question is still, just how discordant is the data? The discordances noted are still only a factor of two at most, and the biggest discrepancy is K/Ar, which is the most susceptible to leakage since argon is a gas. For the others, the disagreement is no more than twice the size of the error bars at most. I’m sorry, but to cite differences of that scale as evidence for errors of a factor of a million is still blowing things completely out of all proportion.

You don’t seem to understand the point I was making. I have offered reasons why isochron dating might not be trustworthy, and the major answer I am getting is something like “but the dates all match, so they must be trustworthy.” Documentation that the dates in some cases do not match is relevant to the discussion, even if they are only 30-50% different. BTW, if you read the original article you will discover that the “error bars” are actually 2σ error bars or approximately 95% confidence limits, so the dates are quite significantly different.
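To illustrate with hypothetical numbers what “twice the size of the error bars” means when the bars are already 2σ:

```python
import math

def z_score(date1, err1_2sigma, date2, err2_2sigma):
    """Standard deviations separating two dates quoted with 2-sigma bars."""
    s1, s2 = err1_2sigma / 2, err2_2sigma / 2
    return abs(date1 - date2) / math.hypot(s1, s2)

# Hypothetical example: 1000 +/- 50 Ma vs. 1100 +/- 50 Ma (both bars 2-sigma)
print(round(z_score(1000, 50, 1100, 50), 1))   # 2.8 -> the dates sit
# nearly 3 sigma apart, well beyond ordinary measurement scatter.
```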

If you are asking how different the dates are from (agreed to by all sides) real time, you might notice that, as noted in my paper and partially quoted in C46, Miocene to Recent lava in multiple instances dates in the 500 to 1,500 Ma range. Now we’re talking a factor of over 100. That is concerning. How can we (or can we) exclude such samples before we ever test them, so we don’t have to exclude them afterwards (which I agree with you is not best practice)?

No you can’t say that laboratory contamination is functioning as a “rescuing device.” Period. End of story. No exceptions, no excuses. Fully and correctly accounting for contamination is a fundamental requirement in every area of experimental science. Especially when you are making extraordinary claims that hundreds of thousands of results taken using dozens of different methods are all consistently out by factors of a million.

It helps your case if you know the position of your fellow discussant and don’t exaggerate. I realize that some whom you disagree with do believe that the dates are off by a factor of a million, but I would at present not put myself in that category. And dozens of different methods? Really? And you are right that contamination must be accounted for. But to use contamination as a complete explanation after it has been effectively debunked as a complete explanation is not proper, whether you like the term “rescuing device” or not.

Laboratory contamination has been measured. This can be done by preparing a sample for radiocarbon analysis twice and measuring the difference between the first and second analyses. The difference turns out to vary widely with values between 0.14% and 0.25% of modern carbon being typical for each step in sample processing.

Just wow. I can only conclude that you are getting your information from some internet site, and really don’t understand the primary literature (nor do they, assuming you repeated their material accurately). There are two estimates I can cite for contamination, and did so in my paper in Origins in 2001 (Geoscience Research Institute | Carbon-14 Content of Fossil Carbon):

Van der Borg et al. (1997) noted graphite to have 0.04±0.02 pMC when measured without reprocessing, and 0.18 pMC when tested after recycling. Arnold et al. (1987) reported a graphite sample having 0.089±0.017 pMC without recycling, and 0.34±0.11 pMC after recycling (statistically significant at p<0.025).

These happen to match almost exactly the figures you quoted (the only difference, a non-significant and probably not technically justified one, is 0.251 instead of 0.25 for the second reference). Thus the contamination quoted is not for each step, but for the entire oxidation-reduction laboratory process.
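As a quick sanity check on that p<0.025 figure for Arnold et al. (assuming the quoted errors are 1σ and normally distributed), a few lines suffice:

```python
from math import erf, sqrt

diff = 0.34 - 0.089                  # recycled minus unrecycled, in pMC
se = sqrt(0.11**2 + 0.017**2)        # combined 1-sigma uncertainty
z = diff / se
p_one_tailed = 0.5 * (1 - erf(z / sqrt(2)))
print(round(z, 2), round(p_one_tailed, 3))   # 2.26 0.012 -> p < 0.025 holds
```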

But there is a worse problem with what you said. Contamination varies from laboratory to laboratory. Natural gas measured by Beukens, for example (1992. Radiocarbon accelerator mass spectrometry: background, precision, and accuracy. In: Taylor RE, Long A, Kra RS, editors. Radiocarbon After Four Decades: An Interdisciplinary Perspective. NY: Springer-Verlag, p 230-239.) had a total measurement of 0.077 ± 0.005 pMC, putting a limit at that lab for the oxidation-reduction process of less than 0.09 pMC. And Bird et al. (1999: https://repository.arizona.edu/bitstream/handle/10150/654594/3802-3473-1-PB.pdf?sequence=1 ) noted that their treated and untreated samples of “Ceylon graphite” were not statistically different, and their highest value (for an untreated sample!) had 0.08 ± 0.03 pMC (their average was closer to 0.04 pMC). So at the best labs laboratory contamination can be reduced to quite a bit less than 0.14 pMC.

I agree that it is probably not zero (except possibly at Bird’s lab). But it does not account for all the results of Baumgardner et al., especially as their samples were done at one of the better labs, and an apparent background was in fact subtracted from their reported coal data.

I also agree that contamination underground, contamination during procurement, and nucleosynthesis should be explored, and if possible quantified, so that we can scientifically reach a conclusion as to whether they are adequate explanations (separately or together) for the carbon-14 we are finding in coal (over and above laboratory contamination). I am seeing what can be done to test these hypotheses.

Dan_Eastwood (C57),

I’ll take a look at that discussion.

1 Like

OK. E=mc^2.

The best labs do not even quote ages beyond 50,000 years. The calibration curve goes to 55,000 years, but the error bars broaden.

But by far the predominant source of contamination in coal occurs before the sample even makes it to the lab, or is even exposed by mining. Contamination is a complete explanation, and in fact an expected one.

3 Likes

How is this measured? If it involves in any way chromatography, ion-selective electrode electrochemistry, chemiluminescence, or any form of analog analytical instrumentation, I can guarantee you from a career in instrumentation that a turndown of 100,000 to one is a fantasy by at least two orders of magnitude. That is not even considering sample prep. In industry, where optimization can represent very large sums, typical expected accuracies for analytical instruments are in the range of 2%, which limits turndown to 50:1. Research analysis can do much better, but that requires exceptionally rigorous justification and calibration, and a printout does not qualify as such. @jammycakes’ comment is absolutely on point: you do not measure ingredients for a cake on a truck scale.

While I challenge your contention that hCG measurement has a 100,000:1 turndown, we are discussing AMS for 14C. It is ridiculous for the labs actually doing the measurements to state that their own results are only valid to their specified limit, and then have somebody from outside tell them, “no, you do not understand. Your turndown goes to zero!”

2 Likes
  1. What evidence do you have that geologists are actually doing this?
  2. Even if they were doing it, it would still be perfectly legitimate if they were doing so to narrow down the errors from, say, ±5% to ±0.5%.

I got it from this BBC news report:

I see no reason to doubt it. Pushing the boundaries of accuracy and precision is an active research topic in every area of science. So while you may not get figures that tight from commercial laboratories, it is only to be expected from university departments working on new cutting-edge techniques.

Besides, error bars that tight are very, very common in many areas of science. When I was at university, one of the very first practical classes in my very first term had as its objective to measure gravitational acceleration with a similar level of precision using nothing more than a pendulum, a ruler, and an electronic timer.
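For anyone curious, the calculation itself is trivial; all the precision comes from the measurement technique. A minimal sketch with illustrative numbers:

```python
import math

# For a simple pendulum, T = 2*pi*sqrt(L/g), so g = 4*pi^2 * L / T^2.
# Timing N swings divides the timing error by N, and relative errors
# combine as dg/g = dL/L + 2*dT/T, which is how per-mille precision
# becomes reachable with such simple equipment.
L = 1.000    # length in metres
T = 2.006    # period in seconds, e.g. from timing 50 swings
g = 4 * math.pi**2 * L / T**2
print(round(g, 3))   # 9.811 m/s^2
```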

In any case, what exactly do you mean by a “composite date”? And if it really were as unreliable as you insist, how would being a “composite date” make the error bars smaller rather than larger?

22,000°C, Paul. Twenty. Two. Thousand. Degrees. Centigrade. And remember, that was the RATE team’s own admission.

And you don’t seem to understand the point that I was making. The point is that it is one thing to show that some radiometric methods fail to give correct results in some situations. It is a completely different matter altogether to show that every radiometric method is so badly out of whack that all of them consistently fail to differentiate between thousands and billions every single time.

Nobody is claiming that all the dates match all of the time. We are only saying that the dates match sufficiently often to rule out the second possibility. If radiometric dating really were so badly out of whack that every single method consistently failed to differentiate between thousands and billions, we would almost never see different methods agreeing with each other. Not just some of the time.

Well yes, but that only applies to samples that are basic graphite, such as non-biological Precambrian samples, which contained levels of radiocarbon up to 0.12% modern carbon – fully consistent with these contamination levels. Phanerozoic (biological) samples are significantly more complicated because additional steps need to be taken, such as extracting cellulose from wood or collagen from bone, and those will bump up the laboratory contamination levels significantly. The larger quantities reported by the RATE team – up to 0.7% modern carbon – were all from Phanerozoic samples.

6 Likes

Well precisely. That’s why you can’t just dismiss contamination as a “rescuing device.” And that’s why contamination is the best explanation. If there really were an intrinsic component, the readings would all cluster round a point significantly larger than the spread of the results. A spread between 2.2% and 2.7% might be something to write home about. A spread between 0.2% and 0.7% is not.

Again, precisely. The total level of contamination will be the contribution from these vectors plus contamination from sample processing. As I said, you need to demonstrate that the measured levels exceed the sum total of these contributions, not just one of them individually.

The whole point is that if you are going to claim that hundreds of thousands of other radiometric data points are consistently out by several orders of magnitude, you bear the burden of proof to demonstrate that any of the radiocarbon is intrinsic rather than contamination. That is the way it works in every area of science. If this weren’t the case, then we would be granting a free pass to astrology, homeopathy, water divining, reading tea leaves, feng shui, and tobacco companies claiming that smoking is good for you.

5 Likes

Apples and oranges. The hCG test has a far different range of sensitivity than the radiometric dating methods. (Even if the hCG range has been stretched two orders of magnitude too far, about which I have no information to contribute.)

As you know, this implies that 5% of radiometric dating results will fall outside the error bars.

I have the impression that those 5% will be cited one-by-one in a RATE report, or some other YEC publication, as proof that the earth really is ~7000 years old.

This is interesting. So you acknowledge publicly that the Earth is at least millions of years old?

Why are you going through this whole exercise if you think that standard geology textbooks are more accurate than YEC textbooks? This is not intended as an implicit criticism. I am genuinely curious.

Blessings,
Chris Falter

4 Likes

I think he is open to Young Biosphere Creation. Right @PaulGiem ?

It would help me know the position of my fellow discussant if my fellow discussant made that position clear. Are you trying to claim that the Earth is six thousand years old, or that it is x billion years old after all but things might have happened at different times and/or in a different order from what radiometric ages suggest?

Because if you’re claiming that the Earth is six thousand years old, but that the dates aren’t consistently out by factors of up to a million, you are contradicting yourself.

Apologies for the Gish Gallop, but start here:

Even if you don’t agree with the validity of these different reasons, there’s no denying that there are dozens of them. And that’s just a partial list.

Laboratory contamination in isolation may have been debunked as a complete explanation. But total contamination (laboratory + in situ) has not.

2 Likes

How young? Given that the biosphere is several billion years old by standard methods, the difference between an old earth and an old biosphere can’t be all that accommodating.

2 Likes

off-topic, but in my college physics lab my team finished the pendulum experiment early. Gathering up all the string we could find, we collected one more data point by hanging the pendulum out a 3rd floor window. :grinning:

1 Like

YBC is an odd (crocko-) duck, with an OEC planet and a YEC biosphere. You might think they would be more open to the idea of old life, but no. It does avoid the 22,000°C heat problem discussed above, but not the heat problems of a global flood.

1 Like

It’s very puzzling that they can accept dates for the earth yet reject such dates for life, which after all has been around for 4/5 of that time. I’ve never seen any explanation from an OEYL advocate for how that’s possible.

3 Likes

On the 8th of August, 1916, a discharge of lightning struck a mining car rail line, which conducted the electrical energy deep into the mine shaft. Three workers lost their lives in the explosion; one of them was my wife’s great granduncle.

This piece of family lore actually ties into the question of 14C contamination in coal. Methane builds up in coal mine shafts because methanogenic microbial communities colonize the exposed coal and timbers, so any extracted coal is already tainted. But it is well known that even coal sitting undisturbed deep underground is populated by a pervasive ecosystem of fungi, bacteria, and archaea. Coal seams are porous, fractured, and permeable to groundwater and aquifers. That methane is produced in coal seams has long been recognized in industry, and stable isotope analysis is routinely used to ascertain its biogenic origin. The coal may date back to the Carboniferous era, but the subterranean biome lives in situ now. Methane seepage into mine shafts, where it can build past the lower explosive limit, is enabled by the permeability of coal. Coal is not even remotely close to pristine and hermetically sealed awaiting 14C dating.

Research has moved on from the question of whether there is significant biological activity in coal to much more detailed characterization: the potential for commercial utilization and enhancement in exploration and production, the hydrology of aquifers and surface influence, and more academic interest in the diversity, phylogeny, interactions, and particular habitat preferences of the various microbial species. Understanding of subsurface biomes has progressed a great deal since 1989, when David Lowe was already warning: “even with these precautions, considering the ubiquitous occurrence of fungi and microbes (bacteria have been found in a drill hole 3 km underground apparently living on granite!), the use of coal samples as routine 14C laboratory background test samples should probably be avoided.”

It is expected, then, that trace amounts of 14C would be ubiquitous in coal, to which may be added contamination during extraction and handling before the sample ever reaches the lab. Then there is sample preparation. Then there is the AMS itself. AMS is a marvelous piece of technology which yields reliable results, but like any instrument it has inherent limits on sensitivity and requires recurrent zeroing and range calibration. Added together, the actual outcomes are pretty much in line with what one would expect: anywhere from no significant 14C to very little.

His bio on both the CMI and AiG sites has him stating:

I believe that if we do our homework carefully enough, and without succumbing to bias, we will find that the Book, including a literal 6-day creation, will stand.

Pertinent to 14C dating, in the same article Paul writes:

Carbon-14 dating was the most fascinating method of all. Fossil carbon, with a conventional age of up to 350 million years, repeatedly dated to less than 55,000 radiocarbon years. This is compatible with a date of as low as 4,000 years in real time (the date of the Flood would have to be determined on other grounds). It is incompatible with an age of millions of years, or even realistically with an age of over 100,000 years or so. It basically forces one into accepting a short chronology for life on earth.

@PaulGiem , as this was written back in 2001, I would be interested to know whether you stand by that statement, because I find the assertion of AMS accuracy with vanishingly minuscule 14C way out on the decay asymptote to be incredibly dissonant with the idea that “radiocarbon years” are a whopping order of magnitude inaccurate in “real time”.
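To put the “decay asymptote” in numbers, here is a short sketch using the standard conventional radiocarbon age relation:

```python
import math

# Conventional radiocarbon age: t = -8033 * ln(pMC / 100) (Libby mean life).
# Out on the asymptote, large changes in 14C content move the "age" little:
for pmc in (1.0, 0.5, 0.1, 0.05):
    print(f"{pmc:5.2f} pMC -> {-8033 * math.log(pmc / 100):7.0f} radiocarbon yr")
# 1.00 pMC ->   36993 yr
# 0.50 pMC ->   42561 yr
# 0.10 pMC ->   55490 yr
# 0.05 pMC ->   61058 yr
# A 20x drop in 14C content shifts the apparent age by only ~24,000 yr,
# which is why tiny contamination dominates the result in this regime.
```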

2 Likes

That’s exactly what a YBC would say.

Do you refer to a six-day, recent creation following a Genesis 1:1 creation billions of years earlier? If so, that seems impossible to fit into the rest of Genesis 1, not to mention reality.

1 Like