
"Does the Shroud of Turin Show Expected Elongation of the Head in 2D?"

OK, I'll bite.

Reviewing this kind of work takes time, and IRL I don't have much to spare, so expect delayed responses from me - reading the paper you cited and writing this post used a lot of time.

I'm hesitant to get involved because you have used three distinct statistical approaches to support your proposition that the sample results are so heterogeneous as to indicate subterfuge.
  1. In #70 you used non-overlap of 1 sd intervals. I addressed that briefly in #98.
  2. Next, in #159 you relied on the "chi^2 test" saying "If the carbon 14 dating fails the chi^2 test, then the results are no good" without giving any more detail of the "chi^2 test". In #176 I pointed you to Bray and said "If you want to continue with your heterogeneity argument, you really need to address that expert opinion." In #196 you said "Look at table two [in Damon et al], and the reported X^2 value of 6.4". I addressed that in #203.
  3. Next, in #206 you said "How about a paper with two statisticians contributing?" linking to a paper you apparently hadn't read (#215) and I'm practically certain you hadn't understood, and tried to reverse the burden of proof ("Can you explain why it is wrong?").
Overall it appears that you use statistics as a drunk uses a lamppost — for support rather than illumination. If I see signs that you are moving goalposts, or trying a gish gallop, or evading my questions and comments, then I'll stop engaging with you - I value my time above your approval.

OK, on to the statistics...

Firstly I'll address the crux of your proposition: that data differences ("dispersion") between labs suggest that the samples did not come from one source. In #76 you said:
That doesn't matter. No matter what date, if the samples don't agree, then the samples were not from the same thing.
The control samples do agree on the date, that means the control samples were from the same items.

This evidences a fundamental misconception (not restricted to you!): that heterogeneity is binary - things are either heterogeneous or not. In reality nothing is perfectly homogeneous; there are always real differences between subsamples - not just measurement errors. Those differences may be too small to matter but, no matter how small, they can be demonstrated given enough data and appropriate data analyses. If a data analysis doesn't show a difference, that does not mean there is no difference - only that the data and/or analysis are inadequate to show it.

This illustrates one of the problems with null hypothesis significance tests (NHST): we don't believe the null hypothesis anyway (see, for example, link). We know the SoT is not perfectly homogeneous; it is at least somewhat heterogeneous. If NHST does not demonstrate heterogeneity, then either the data or the data analysis are inadequate.
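The point can be illustrated with a toy calculation (my own sketch, not from any of the papers discussed): given a real but tiny 1-year difference between two subsamples, a z-test goes from "nothing there" to "highly significant" purely as a function of how much data you collect.

```python
import math

def z_test_pvalue(true_diff, sigma, n):
    """Two-sided p-value for comparing two group means that truly differ by
    `true_diff`, each mean estimated from n measurements with per-measurement
    sd `sigma`. Noise-free sketch: the test statistic is set to its expected value."""
    z = true_diff / (sigma * math.sqrt(2.0 / n))
    # two-sided tail probability of a standard normal
    return math.erfc(z / math.sqrt(2.0))

# A real but tiny 1-year difference, with 30-year measurement errors:
print(round(z_test_pvalue(1, 30, 10), 3))         # 0.941 - invisible with 10 measurements
print(round(z_test_pvalue(1, 30, 10_000_000), 6)) # 0.0   - "significant" with enough data
```

Same real difference both times; only the amount of data changed. That's why "the test didn't show heterogeneity" only tells you about the data and the test, not about the cloth.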

To support your proposition you would need to show that the sample dispersion was so great as to be incompatible with "samples ... from the same thing". But we do not know the dispersion expected of "samples ... from the same thing". The control samples give some relevant information, but it would be a great assumption that the heterogeneity of the SoT and the controls were similar.

The non-overlap of standard deviations you used first (#70) and the chi^2 analysis used by Damon et al. compare the between-lab dispersion to (estimates of) within-lab dispersion, which is not relevant to your proposition. link is relevant here.
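For anyone wanting to see where that chi^2 value of 6.4 comes from: as I read it, the Table 2 statistic is the standard Ward & Wilson style chi^2 of the lab means against their quoted (within-lab) errors. A minimal sketch, using the shroud lab means and errors as I read them from Damon et al. Table 2 (646±31, 750±30, 676±24 years BP):

```python
import math

def chi2_homogeneity(means, sigmas):
    """Ward & Wilson style check: inverse-variance weighted mean, and chi^2
    of the lab means against their quoted (within-lab) errors."""
    w = [1.0 / s**2 for s in sigmas]
    mu = sum(wi * m for wi, m in zip(w, means)) / sum(w)
    chi2 = sum(wi * (m - mu)**2 for wi, m in zip(w, means))
    return mu, chi2

# Shroud lab means as I read them from Damon et al. Table 2 (years BP)
mu, chi2 = chi2_homogeneity([646, 750, 676], [31, 30, 24])
p = math.exp(-chi2 / 2)  # survival function of chi^2 with 2 degrees of freedom
print(round(mu), round(chi2, 1), round(p, 3))  # 689 6.4 0.042
```

Note that this statistic only compares the between-lab spread to the quoted within-lab errors - exactly the limitation described above.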

Even worse, in this data set samples and labs are inextricably confounded; it isn't possible to distinguish sample effects from lab effects. Again, the control samples give some relevant information, but it would be a great assumption that between-lab differences in treatment of SoT samples were the same as differences in their treatment of controls.

To add to the difficulties, Damon et al. Table 1 gives only 12 results for the SoT (4 from Arizona, 3 from Oxford and 5 from Zurich). With such a small data set no statistical test will have useful power; you really have no chance of showing undue heterogeneity unless it is massive.
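To put a number on that lack of power, a quick Monte Carlo sketch (my own toy setup, not from the papers): three lab means with quoted errors of roughly 31, 30 and 24 years, plus an extra true between-lab spread of `tau` years, tested with the usual chi^2 at the 5% level.

```python
import math
import random

def chi2_stat(means, sigmas):
    # Weighted mean and chi^2 of lab means against their quoted errors
    w = [1.0 / s**2 for s in sigmas]
    mu = sum(wi * m for wi, m in zip(w, means)) / sum(w)
    return sum(wi * (m - mu)**2 for wi, m in zip(w, means))

def power(tau, sigmas=(31, 30, 24), n_sim=20_000, crit=5.99, seed=1):
    """Fraction of simulations in which chi^2 (2 df, 5% critical value 5.99)
    detects an extra true between-lab spread of `tau` years."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        means = [rng.gauss(0.0, math.hypot(s, tau)) for s in sigmas]
        hits += chi2_stat(means, sigmas) > crit
    return hits / n_sim

print(power(0))    # ~0.05: the false-alarm rate, as designed
print(power(100))  # well short of 1: a whole extra century of real spread is often missed
</```

With only three lab means, even a century of genuine between-lab spread gets missed a fair fraction of the time - which is what "no useful power" means in practice.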

OK, on to the paper about which you asked "How about a paper with two statisticians contributing?"

TL;DR This paper really is rubbish. Its approaches are "different" enough to be considered wacky. Even if their conclusion ("... a decrease in age BP as x1 increases ...") were correct, it would not support your conclusion that the 3 samples did not all come from the SoT.

There are (at least) three versions of the paper:

1) You linked to Fanti et al. (2010). That 5-page document is from a Workshop Proceedings and "The purpose of this paper is to summarize the results obtained in Ref. 2". Ref 2 is Riani M., Atkinson A.C., Fanti G., Crosilla F.: “Carbon Dating of the Shroud of Turin: Partially Labelled Regressors and the Design of Experiments”, which I'll refer to as...

2) Riani et al. (2010). The link to this in Fanti et al <https://www.lse.ac.uk/collections/statistics/research/RAFC04May2010.pdf> gives a 404 error, but I found it at ResearchGate. The coincidence of dates (May 4, 2010) suggests this 20-page document is from the same workshop as Fanti et al (2010).

3) Riani et al (2013), by the same authors, is an 11-page article in Stat Comput (2013).

My comments apply to all three, but I've concentrated on Riani et al (2010) as being the most complete. Pulling it apart point by point would be tedious and pointless, so I'll highlight some of the worst bits; if you want more justification of my opinion, indicate what would satisfy you.

The Riani et al approach is wildly unusual and, in my opinion, fatally flawed.

They have only 12 data points, far too few to show excessive heterogeneity. Their section on heterogeneity (section 2 in Riani et al 2010) really just demonstrates that they don't have a good model of the errors, which is required for any realistic analysis.

Damon et al. say "The results, together with the statistical assessment of the data prepared in the British Museum, were forwarded to Professor Bray of the Istituto di Metrologia 'G. Colonetti', Turin, for his comments. He confirmed that the results of the three laboratories were mutually compatible, and that, on the evidence submitted, none of the mean results was questionable".
Riani et al ignore this and work on the basis that there is heterogeneity. In 2010 they said:
Fanti et al.: "The 12 datings, furnished by the three laboratories, show a lack of homogeneity"
Riani et al.: "The twelve results from the 1988 radio carbon dating of the Shroud of Turin show surprising heterogeneity. We try to explain this lack of homogeneity by regression on spatial coordinates."
By Riani et al 2013 the heterogeneity was not only undeniably present, it was "egregious" - last sentence of section 2, Heterogeneity: "We now use a spatial analysis to try to discover the source of the egregious heterogeneity in the readings on the TS."

They fit a spatial model when they don't have good spatial data. Their generation of "387,072 possible cases to analyse" is really just fiction; they don't seem to realise that all except one (at most!) of these cases are wrong - there is only one "real" arrangement and they don't know which (if any) of the "possible" cases it is.

They fit a rectilinear model with no clear reason (though that's not surprising considering they have only 3 "real" x values). They conclude (Riani et al 2010 p14) "The effect is that of a decrease in age BP as x1 increases. The effect is not large over the sampled region; between x1 = 43 and 81, our estimate of the change is about two centuries. Extrapolation of this linear trend to unsampled values of x1 eventually leads to meaningless negative results". Such an impossible conclusion should have been taken as clear evidence that the model is just plain wrong.
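To see how quickly that linear trend breaks down, a bit of arithmetic on the quoted numbers (the ~690 BP age at the midpoint of the sampled region is my illustrative assumption, not a figure from the paper):

```python
# Slope implied by the quoted trend: about two centuries over x1 = 43..81
slope = -200.0 / (81 - 43)  # roughly -5.3 years BP per unit of x1

def age_bp(x1, mid=62.0, age_mid=690.0):
    # age_mid: assumed ~690 BP at the centre of the sampled region (illustration only)
    return age_mid + slope * (x1 - mid)

print(round(age_bp(43)), round(age_bp(81)))  # 790 590 - the quoted ~two-century change
print(round(age_bp(200)))                    # -36 - already a cloth from the future
```

Not far beyond the sampled strip, the fitted line dates the linen to after the present day, which should have been treated as a red flag for the model rather than an interesting footnote.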

Overall this work illustrates the phrase "if you torture the data long enough, it will confess to anything".
 

Thank you for this.

Although as I mentioned earlier I just don't see how three Carbon 14 test results that overlap within 2 standard deviations, and two that overlap within one, are extremely or even moderately heterogeneous. Such results are far from rare in carbon 14 tests. I just don't get why this would be considered unusual. All the statistical analysis in the world to find something extremely "wrong" with this just baffles me.
 
Saint Jabba took a completely different tack: He asserted that because the C14 tests agreed so well -- but let me quote -- "you guys should have been suspicious."

Of course, His Holiness Jabba I relied on many other virtual proofs that the Shroud was genuine, but his most ironclad of all was that the fix was in, and those scientists (so-called! indeed!) were in a conspiracy to discredit a relic which claimed his utter faith & belief, amen. Thus a pack of skeptics' lack of suspicion WAS FURTHER PROOF that the Shroud was miraculous.

He rested his case, as I recall, and be danged if even a single skeptic here could refute him.

I for one sure as hell gave up.
 
I've long wondered why this claimed relic gets so much attention compared to all the other relics that pepper churches throughout the world.
Hypothesis: because there is only one of it.
I have no reason to doubt the scientific and historical findings showing the shroud to be a medieval forgery. You've given me no reason I haven't already investigated. I have considerable reason to doubt the evidence purporting to shroud to be a 1st century Jewish burial cloth, and further purporting that the forgery evidence is non-credible. I have given you those reasons and invited you to address them.
So you're saying there's a chance...
 
Thank you for this.

Although as I mentioned earlier I just don't see how three Carbon 14 test results that overlap within 2 standard deviations, and two that overlap within one, are extremely or even moderately heterogeneous. Such results are far from rare in carbon 14 tests. I just don't get why this would be considered unusual. All the statistical analysis in the world to find something extremely "wrong" with this just baffles me.
I really don't have time to go into it, but looking for overlapping confidence intervals (not simple sd multiples) is sometimes, somewhat, related to tests for difference. There's lots on the web about it; try reading (slowly and repeatedly) this link
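A sketch of why the two things differ (my own illustration, with made-up dates): non-overlap of ±1 sd intervals requires |difference| > s1 + s2, while a two-sided 5% z-test requires |difference| > 1.96·sqrt(s1² + s2²). With roughly equal errors the first threshold is the smaller, so 1 sd intervals can fail to overlap while the difference is nowhere near significant.

```python
import math

def one_sd_intervals_overlap(m1, s1, m2, s2):
    # mean +/- 1 sd intervals overlap iff the gap between means is < s1 + s2
    return abs(m1 - m2) < s1 + s2

def significant_at_5pct(m1, s1, m2, s2):
    # two-sided z-test on the difference of two independent estimates
    return abs(m1 - m2) > 1.96 * math.hypot(s1, s2)

# Hypothetical dates: a 70-year gap, 30-year error on each
print(one_sd_intervals_overlap(600, 30, 670, 30))  # False: the 1 sd intervals miss each other...
print(significant_at_5pct(600, 30, 670, 30))       # False: ...yet the gap would need to exceed ~83
```

So "the 1 sd bars don't overlap" is a much weaker (and looser) criterion than an actual test for difference, which is why eyeballing sd multiples keeps misleading people.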
 
Excellent! I was going to cover the stats and @bobdroege7's inane comments on them, but I'll leave it to you.
 
That's why the Archbishop would need to prove the shroud was a fake.
Oh look, more conspiratorial bollocks.
I have no reason to doubt the scientific and historical findings showing the shroud to be a medieval forgery. You've given me no reason I haven't already investigated. I have considerable reason to doubt the evidence purporting the shroud to be a 1st century Jewish burial cloth, and further purporting that the forgery evidence is non-credible. I have given you those reasons and invited you to address them.
You refuse to accept the overwhelming proof that you are wrong. You rail against the radiocarbon dating with ludicrous claims of fraud. You made assertions about Jerusalem limestone that have been debunked. You claim the medieval weave of the shroud was in common use in the first century Middle East, and fail to support your bald assertion.
So why did the Arizona lab combine 8 dates into 4?
That was covered in Nature some decades ago. Try and catch up.
 
Thank you for this.

Although as I mentioned earlier I just don't see how three Carbon 14 test results that overlap within 2 standard deviations, and two that overlap within one, are extremely or even moderately heterogeneous. Such results are far from rare in carbon 14 tests. I just don't get why this would be considered unusual. All the statistical analysis in the world to find something extremely "wrong" with this just baffles me.
Desperation.



A more general point.
Look, I've made it clear before that I consider religious belief to be a form of mental illness.

The shroudies are a subset of this, people who have a need to believe in some sort of god, the xian one for example, but who lack the faith to do so. So they need props, something to justify, usually to themselves, that their belief, nonsensical as it is to an objective observer, is true. Something to silence the inner dissent that keeps telling them that "it's all bollocks". This is unlike many religious believers, who can simply accept, or at least ignore, all the factual problems with their belief. In their case their blind faith is strong enough to overwhelm all doubts. They can accept that the bible is littered with errors, that there is no evidence for Jesus, and all the other problems with xianity.
Shroudies are fundamentally weak; they need their prop, a dubious piece of cloth that's a pretty obvious fake. Believers can accept the science - that it's a medieval fake.
 
The shroudies are a subset of this, people who have a need to believe in some sort of god, the xian one for example, but who lack the faith to do so. So they need props, something to justify, usually to themselves, that their belief, nonsensical as it is to an objective observer, is true. Something to silence the inner dissent that keeps telling them that "it's all bollocks".
This was actually Jabba's claimed position: he didn't believe, but he wanted to, so he was trying to prove that the shroud was genuine (and, elsewhere, that he was immortal).
 
Given that @bobdroege7 doesn't appear likely to be addressing the supposed evidence for his prior assertions and @KAJ is ably demolishing the mythical statistical issues let's take a look at the Pray Codex.

Just what is the 'Pray Codex'?
Also known as the Sacramentarium Bolvense, it's a codex, basically a book with pages bound together, as opposed to a scroll (a cutting edge concept as late as the eighth century in Europe). It's dated to the period 1192-95 CE, though this doesn't guarantee that all the inclusions were from that period.

The 'Pray' bit comes from the Jesuit who found it, György Pray, a Hungarian scholar, historian and librarian. He found the book in 1770.
It's currently in the National Library in Budapest, where I saw it years ago, and is considered important, not for any Jesus nonsense or its supposed connection to a certain faked shroud, but as an element of Hungarian history. The book contains the oldest known Hungarian text.
Like many such codices it's a bit of a scrapbook of odds-and-ends, a prayerbook, notes on the legal system of Coloman the Learned, musical notes and lyrics of songs, notes on the descent of the Hungarian crown, and more.
It was created at the Benedictine Monastery at Boldva in Hungary.

What's that to do with a shroud created in France?
Nothing really. However there is an illustration in the codex that Certain People (cough, cough, shroudies) claim confirms the existence of the shroud.

The illustration shows the body of Jesus being prepared for burial (and also the supposed resurrection, complete with empty tomb and angel, very Mt28:2).

Now there are some similarities with the shroud image, the body is shown naked with the arms on the groin for example, but these are characteristic of the artistic style of the period.
  • I'd like it appreciated by my audience that I am exerting great self-control to avoid digressions. There are numerous other matters I could be rambling on about in this post.
  • But I'm going to allow myself one digression (that's all SO#2 is allowing me) into the genitalia (or lack thereof) of Jesus, because it meshes rather well with a gaming scenario I'm working on atm that involves Russian crypto-xian castration cults and their alien overlords.
  • Me? Weird?
  • Why do the image in the Pray Codex and the one on the shroud share one notable similarity, the hands crossed over the groin? Because of a detail of the crucifixion that tends to be forgotten today, the emasculation of Christ. It was accepted (AND NO, I'M NOT MAKING THIS UP, go look at paintings of the crucifixion by people like Lorenzetti and di Martini) that Christ lacked the usual male 'equipment', either because of the 'son of god' bit or because, as part of the whole being-put-to-death business, Jesus was castrated before the nailing. Medieval art (i.e. the illustrations of the codex and the shroud) fits into the artistic tradition of de-emphasis of his genitals (coverage by hand or cloth et cetera) until the revolution that was 'Ostentatio genitalium', which reversed this trend in the Renaissance. I blame the Franciscans.
  • OK, at the risk of drawing the wrath of C, I'm going to mention that the existence of Christ's genitals was debated by the church in the Middle Ages. If anyone plans a The Name of the Rose knock-off with this premise I expect a cut.
  • Pun not intended.
Back to the main story. I promise there will be no further reference to the genitalia of any dubiously real godlings.

Some shroudies claim that the cloth in the illustration shows a herringbone pattern, akin to the weave of the Shroud. However a quick glance shows that the pattern on the cloth in the illustration isn't a true herringbone, and it's nothing like the shroud weave in size or configuration.
It's wishful thinking in short.

Next there is the supposed comparison of the burn holes between the manuscript and the shroud. This really needs an illustration, and I'm borrowing this one from Ray Downing.

Shroud & Codex.jpg
On the left is the shroud (note the herringbone weave is not visible) and on the right the codex. Who considers them, to re-use a phrase, an "unusually close match"? Even after scaling and reorienting the images to suit, it's no such match.

Now most people looking at the codex don't actually see supposed burns in the supposed shroud of the supposed son-of-god; they see part of the tomb. You can check out a picture on wiki, here.

And there, friends, the story of a book important to Hungarian nationalism draws to an end. No doubt @bobdroege7 will be posting his usual well-thought-out refutation, with sources and citations. Or he'll Gish Gallop on to the Sudarium of Oviedo or the Image of Edessa.
I need to go shopping and....fraternise. That's all folks.
 
So what is the claim relating to the Pray Codex?

Is it that an illustration in a medieval manuscript has stylistic similarities to the image on an artifact, and allegedly shows a weave pattern similar to the artifact, and this shows that the artifact is not a medieval forgery? Or is it that the codex illustration is of the same piece of cloth as the artifact but predates the carbon dating, which shows that the carbon dating is wrong but also, since it doesn't show the miraculous image, that the image is a medieval forgery?
 
I think that depends on what the shroudie in question is attempting to gloss over.....
Usually it boils down to: the image is obviously the shroud, so it existed before the C14 dating, so the dating is wrong, so the shroud is real, so Jesus was real, so god.
With a side of "herringbone-woven cloth was obviously common before the twelfth century, so the shroud is genuine", et cetera.
Basically a medieval illustration outweighs actual science because god.
 
I hadn't even noticed before that the stick-figure illustrations in the Pray Codex don't show an image of Jesus on the "shroud", post-resurrection, if a shroud is even what is supposed to be shown. So I guess the Pray Codex shows that the image of Jesus would have had to be added later?

And the Pray Codex supposedly indicates four small holes matching fire damage seen in the Turin Shroud. Not only is it missing the much larger fire-damaged areas, but there was no such fire at the time of the resurrection, which is the scene depicted.

So... wow. The Pray Codex works against Shroud authenticity more than for it.
 
I think the fire damage was much later than the Codex. So the Codex is supposedly showing damage that hadn't happened yet.
 
Much later, and historically documented. But the Pray Codex fan club seems to think that just means more evidence of fraud.
It's straight out of Jabba's playbook: anything that can be framed as casting even the slightest doubt on a medieval origin is claimed as evidence that it's a particular piece of 2,000-year-old cloth. Even if it's actually evidence that the shroud is not that particular piece of cloth.
 
It's so weird, because you would expect a sincere True Believer in God to be eager to stamp out any whiff of fraudulent relics. Yet this obvious forgery is being revered.
 
Saint Jabba took a completely different tack: he asserted that the C14 tests agreed so well that -- let me quote -- "you guys should have been suspicious."

Of course, His Holiness Jabba I relied on many other virtual proofs that the Shroud was genuine, but his most ironclad of all was that the fix was in, and those scientists (so-called! indeed!) were in a conspiracy to discredit a relic which claimed his utter faith & belief, amen. Thus a pack of skeptics' lack of suspicion WAS FURTHER PROOF that the Shroud was miraculous.

He rested his case, as I recall, and be danged if even a single skeptic here could refute him.

I for one sure as hell gave up.

Oh, not quite true. He's still waiting for you to give him the go-ahead. :D

His last post here:

- If Sackett hadn't asked, I was going to quit bothering you guys. I have assumed that he has just been jerking my chain, but just in case he wasn't I kept paying attention...
- Anyway, unless he says something today to convince me otherwise, I'll never bother you guys again.
 
Or the shroud and the fire damage happened centuries earlier than thought and the later fire (and the reported damage to the cloth) didn't happen.
It's conspiracies all the way down.
 
