Stupid Question about Radiocarbon Dating

Lucian

I am reading a book called Anglo-Saxon Deviant Burial Customs by Andrew Reynolds (Oxford UP, 2009). I came across a statement that I found puzzling. Reynolds says, "C14 dating in the Anglo-Saxon period is fraught with problems. The fifth and sixth centuries do not provide accurate dates, nor do the eighth and ninth centuries, but opportunities for accurate dating are provided during the seventh and tenth centuries" (p. 126). He cites as his source a personal communication from Alex Bayliss.

So, my question is, why would it be particularly difficult to narrowly and accurately date material from certain time periods or geographical locations? My understanding is that (very broadly) Carbon 14 decays at a predictable and known rate. I've never seen anything that suggests it goes a bit wonky in the fifth, sixth, eighth and ninth centuries. I did Google around a bit. I found some articles, including some co-written by Bayliss, that discuss ways of narrowing the dating using grave goods and Bayesian statistics, but that doesn't really help me. I did see a couple of references to possible dietary effects on the radiocarbon dating of bones (and although Reynolds's statement doesn't mention bones, his entire book is about human burials).

I should point out that I am a complete idiot, so if you could explain radiocarbon dating in a way that an English major or an unusually slow two year old could understand, I'd really appreciate it.
 
Sounds like a rattling good read! What were their procedures for burying deviants?

It depends on the time period (pagan vs. Christian), but typical features of deviant burials include: prone burials, decapitation, bound limbs, burial on boundaries, mutilation, and placement of stones or other heavy objects on top of the corpse. In the Christian period, there are separate execution cemeteries.
 
I'm no expert, but this seems odd. I know that the current addition of fossil carbon to the atmosphere is introducing (and has already introduced) a bias into carbon dating of relatively recent things, but I've never heard of anything from the 5th, 6th, 8th, or 9th centuries. I can find loads of articles using radiocarbon dating on objects from those time periods that don't mention anything about problems. If it is fundamentally flawed for some reason, I'm surprised that there aren't other dating methods that would work well instead -- tree rings, perhaps?

The basic idea of C-14 dating is what you wrote. Carbon in land-based biological systems comes from the atmosphere, via plants. We can date any relatively recent terrestrial organic matter by measuring the ratio of C-14 to C-12 (the far more common and entirely stable variety). C-14 decays quickly, in a geological sense, so it is useful for dates in the hundreds to tens of thousands of years range. It is continually produced in the atmosphere by the interaction of cosmic rays with nitrogen-14. So, N-14 atoms are converted to C-14, some of that C-14 gets taken up by a living thing, that thing dies and is preserved, and over time the C-14 decays back into N-14. As long as some remains, we can tell how long ago the thing died from the relative amount of C-14 left.
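To put toy numbers on that last step, here's a minimal sketch of the decay arithmetic in Python (the 83.5% figure is invented for illustration, and real labs actually report "radiocarbon years" using the older Libby half-life and then calibrate the result, so treat this as the idea rather than the lab procedure):

```python
import math

HALF_LIFE = 5730.0  # years (the "Cambridge" half-life of C-14)

def age_from_fraction(fraction_remaining: float) -> float:
    """Years since death, given the C-14 left as a fraction of a modern sample."""
    # N(t) = N0 * (1/2)**(t / HALF_LIFE)  =>  t = -HALF_LIFE * log2(N / N0)
    return -HALF_LIFE * math.log2(fraction_remaining)

# A sample retaining ~83.5% of its original C-14 died roughly 1,500 years
# ago -- squarely in the Anglo-Saxon period under discussion.
print(round(age_from_fraction(0.835)))  # ~1490
```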

There are a number of known limitations with C-14 data. A couple of the best known have to do with carbon that is not atmospheric. Scientists who use C-14 dating are aware of these limitations though, and they will either adjust dating for the situation or avoid C-14 dating entirely in favor of a method more appropriate to the sample.

For instance, much of the carbon in the ocean comes from geological rather than atmospheric sources. It has almost no C-14 in it at all, making a specimen appear much older than it really is. This is a common creationist argument against C-14 dating -- that living snails date to 100,000 years ago and such. Similarly, land animals that derive a large amount of their diet from sea creatures (an otter, let's say) are deficient in C-14 and so will also appear older than they ought.

A second commonly mentioned issue has to do with C-14 generated by terrestrial radiation rather than cosmic rays. If a sample happens to be buried in a location that is rich in radioactive minerals, C-14 can be generated and produce anomalously young ages for ancient samples (fossils in coal that are hundreds of millions of years old can contain C-14, for instance.)

Another problem is that humans have messed around with the carbon in the atmosphere, both lowering the natural C-14:C-12 ratio by burning loads of C-14-depleted fossil fuels and raising it by exploding nuclear bombs in the 20th century.

We've got some experts on here who will hopefully let you know if I'm way off base, but that's my understanding of the process. In short, it works well for land-based specimens up to roughly 50,000-60,000 years old, provided they are plants or derive most of their carbon from plants.
 
So, my question is, why would it be particularly difficult to narrowly and accurately date material from certain time periods or geographical locations?

As I understand it, the level of C-14 in the atmosphere is not constant, and any variations can throw off the dating process. In Britain I believe they get around this by studying the growth rings of trees. To do this you have to begin with a piece of wood you know the date of, then compare samples from that.

So it may be that at some point they got lucky and have wood they know for a fact grew in year X. For the other centuries they have not been so fortunate.
 
I am reading a book called Anglo-Saxon Deviant Burial Customs by Andrew Reynolds (Oxford UP, 2009).
Where I live everything Anglo-Saxon is regarded as deviant.

So, my question is, why would it be particularly difficult to narrowly and accurately date material from certain time periods or geographical locations?
I think this might be down to variability in solar activity; at times of high variability the signal gets confused. No doubt somebody more knowledgeable will put us straight.
 
As I understand it, the level of C-14 in the atmosphere is not constant, and any variations can throw off the dating process. In Britain I believe they get around this by studying the growth rings of trees. To do this you have to begin with a piece of wood you know the date of, then compare samples from that.

So it may be that at some point they got lucky and have wood they know for a fact grew in year X. For the other centuries they have not been so fortunate.

Wouldn't they be able to recreate a complete sequence of tree growth by matching the tree rings of several different trees with overlapping time ranges? It sounds odd that there would be a gap in the dendrochronology, let alone two gaps.

Secondly, what intended accuracy are we looking at? E.g., the radiocarbon dating of the Shroud of Turin places it between 1260 and 1390 - that's a 95% confidence interval spanning 130 years for something roughly half the age the OP is asking about. That's maybe an extreme example, but I've never heard that C14 could give you a precise calendar date (which dendrochronology can do, if you have an adequate wood sample).
 
It may have to do with the error built into the calibration curve for those time periods. Wiki has this:
Over the historical period (from 0 to 10,000 years BP), the average width of the uncertainty of calibrated dates was found to be 335 years – in well-behaved regions of the calibration curve the width decreased to about 113 years, while in ill-behaved regions it increased to a maximum of 801 years. Significantly, in the ill-behaved regions of the calibration curve, increasing the precision of the measurements does not have a significant effect on increasing the accuracy of the dates.

This has more to do with having artifacts available with a known age (and of similar materials and circumstances) than it does with changes in background rates of radioactive carbon formation. I see that bone in particular isn't a good candidate for testing because of exchanges/absorption from the soil, but that recent techniques have overcome this problem.
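To see why more precision doesn't help in the ill-behaved stretches, here's a toy illustration in Python -- the curve shape and every number in it are invented for the sake of the demonstration, not taken from the real IntCal data:

```python
# Toy model: in a flat ("ill-behaved") stretch of the calibration curve,
# the calendar range is set by the curve's shape, so halving the lab error
# barely narrows it; in a steep stretch, halving the error halves the range.
def toy_curve(cal_year: int) -> float:
    """Made-up calibration curve: radiocarbon age for a calendar year AD."""
    if 500 <= cal_year < 700:
        return 1550.0                       # plateau: 200 calendar years, one C-14 age
    return 1550.0 - 2.0 * (cal_year - 700)  # steep: 2 C-14 years per calendar year

def calendar_range(c14_age: float, error: float) -> tuple[int, int]:
    """Calendar years (AD 400-900) consistent with c14_age +/- error."""
    hits = [y for y in range(400, 901) if abs(toy_curve(y) - c14_age) <= error]
    return min(hits), max(hits)

for err in (50.0, 25.0):                    # halve the measurement error...
    print("plateau, +/-", err, "->", calendar_range(1550.0, err))  # barely narrows
    print("steep,   +/-", err, "->", calendar_range(1400.0, err))  # halves
```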

How old is the book you are reading?
 
Wouldn't they be able to recreate a complete sequence of tree growth by matching the tree rings of several different trees with overlapping time ranges? It sounds odd that there would be a gap in the dendrochronology, let alone two gaps.

Secondly, what intended accuracy are we looking at? E.g., the radiocarbon dating of the Shroud of Turin places it between 1260 and 1390 - that's a 95% confidence interval spanning 130 years for something roughly half the age the OP is asking about. That's maybe an extreme example, but I've never heard that C14 could give you a precise calendar date (which dendrochronology can do, if you have an adequate wood sample).

Been done. There are tree-ring databases for most countries.
The 14C inaccuracy claims are just that -- claims.
Still, taken together with other evidence such as strata and the associated archaeological record, the date ranges are pretty much accepted.
I suspect the "gaps" are an artefact in someone's mind.
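For what it's worth, the matching itself is conceptually simple. Here's a toy sketch in Python with made-up ring widths and dates (real dendro labs match long, climate-driven sequences across many trees and apply significance tests, not five rings): slide the undated "floating" sequence along a dated master chronology and keep the offset that correlates best.

```python
from statistics import correlation  # Python 3.10+

master = [1.2, 0.8, 1.5, 0.9, 1.1, 0.4, 1.3, 0.7, 1.6, 1.0, 0.6, 1.4]
master_start = 600  # AD year of master[0], anchored by living/historically dated trees

floating = [0.5, 1.2, 0.8, 1.5, 1.1]  # ring widths from, say, an undated coffin plank

# Try every alignment of the floating sequence against the master and
# keep the offset with the highest correlation of ring widths.
best = max(range(len(master) - len(floating) + 1),
           key=lambda off: correlation(master[off:off + len(floating)], floating))
print("outermost ring grew in AD", master_start + best + len(floating) - 1)  # AD 609
```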
 
marplots said:
This has more to do with having artifacts available with a known age (and of similar materials and circumstances) than it does with changes in background rates of radioactive carbon formation. I see that bone in particular isn't a good candidate for testing because of exchanges/absorption from the soil, but that recent techniques have overcome this problem.

How old is the book you are reading?
Thanks to everyone who has replied.

The book was published in 2009, but his citation comes from 1997. I did notice the discrepancy, and from what I found when I investigoogled, I wondered if this problem had been corrected. However, many of the C14 tests he discusses were performed in the 1990s.

Most of the C14 dating Reynolds is discussing does come from bone. There are a few exceptions. At a couple of the execution cemeteries, bits of wood have been found (presumably not large enough for dendro-dating), and some of these have been dated.

The other possible exception may be the Sutton Hoo execution burials. I'm not sure what they found to date there. Very little bone survives at Sutton Hoo. Most of the execution burials left behind only "sand-bodies": crusty stains in the soil.
 
My vague understanding is that because of variations in the atmosphere, the relationship between radiocarbon age and calendar age isn't one-to-one, and a single value can give you multiple possible ranges.

http://c14.arch.ox.ac.uk/embed.php?File=calibration.html

[Image: radiocarbon calibration curve]
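Here's a toy demonstration of that in Python -- the "wiggle" is invented rather than taken from the real calibration curve -- showing a single lab measurement intersecting the curve in three separate places:

```python
import math

def wiggly_curve(cal_year: int) -> float:
    """Made-up curve: a steady decline in radiocarbon age plus a periodic wiggle."""
    return 2200.0 - cal_year + 40.0 * math.sin(cal_year / 20.0)

measured, error = 1446.0, 8.0  # hypothetical lab result: 1446 +/- 8 radiocarbon years
hits = [y for y in range(650, 850) if abs(wiggly_curve(y) - measured) <= error]

# Group consecutive calendar years into disjoint ranges.
ranges = []
for y in hits:
    if ranges and y == ranges[-1][1] + 1:
        ranges[-1][1] = y       # extend the current range
    else:
        ranges.append([y, y])   # start a new range
print(ranges)  # three disjoint ranges, roughly AD 712-722, 746-762, 786-796
```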
 
My vague understanding is that because of variations in the atmosphere, the relationship between radiocarbon age and calendar age isn't one-to-one, and a single value can give you multiple possible ranges.

That's part of it. More specifically, C14 is created when N14 is struck by neutrons produced by cosmic rays. This creates some unique problems, inasmuch as above-ground nuclear testing increased the amount of C14 in the atmosphere. C14 requires calibration, which is why dendrochronology (tree rings) is so important--it's an independent test that can be used to calibrate the radiocarbon dating.

My question is, is this unique to that area? I don't know much about stuff that young. If it's world-wide, it could be one of a number of issues; if it's local, there's another set of explanations. I'm almost certain it's a local issue; if it were global I should have heard about it (you'd think a book three inches thick on the topic would mention it!). The whole above-ground nuclear testing thing did give false dates for newer stuff, but that shouldn't impact ancient materials.

Local taphonomic factors are almost certainly more important. You can't test what ain't there. Bone is somewhat problematic in that regard--"Archaeology of Human Remains" discusses numerous processes that can make bone not be there anymore, specifically discussing that time period and area, and there are numerous ways it can become contaminated (either with dead or fresh carbon). Charcoal is useful, though it can over-estimate the age (the tree that became the charcoal had to die before it was burned, so the date the wood gives is somewhat older than the date of the fire--in a centuries-old building, this can be a serious issue). I've used remarkably small bits of wood for radiocarbon dating in the past; it's almost all carbon, so it works really well.

If they find silica remains from that time period, they could try optically-stimulated luminescence dating. It works for about the same time period, though the requirements for collection are a bit more stringent--for one thing, samples have to be collected and kept in the dark. Plus, it's a destructive test, and silica artifacts of that time would be rare.
 
I don't know anything about this, but here's something from an article called Radiocarbon Calibration in the Anglo-Saxon Period: AD 495–725, by McCormac et al. (2004):

From the Abstract:
"Radiocarbon dating has been rarely used for chronological problems relating to the Anglo-Saxon period. The “flatness” of the calibration curve and the resultant wide range in calendrical dates provide little advantage over traditional archaeological dating in this period."

From the Introduction:
"Radiocarbon dating is presently rarely used for archaeological dating in western Europe during the migration period (about AD 400–700). This is because the calibrated dates produced in this period have usually been insufficiently precise to refine existing archaeological chronologies (principally those based on artifact types).

The shape of the existing calibration curve (Stuiver et al. 1998) suggests that the atmospheric concentration of 14C was changing rapidly in the period AD 570–720, and so high-precision 14C measurements might produce calibrated dates spanning 50 yr or less (at 95% probability)."

Stuiver et al. refers to:
IntCal98 radiocarbon age calibration, 24,000–0 cal BP. Radiocarbon 40(3):1041–83

They have apparently been working to fix this.
 
