Does the Shroud of Turin Show Expected Elongation of the Head in 2D?

Now @bobdroege7, instead of trying to distract attention with your chronic lack of understanding of statistics, why not address the questions I've repeatedly put to you?
Now, let's set aside your pathetic attempts at distraction and demonstrations of your ignorance of stats, and get back to the questions I actually asked of you.
1. Where are the examples of herringbone weave fabric dating to your date for the creation of the alleged shroud?

2. When was the fire that caused the damage to the Lirey cloth that you claim is shown on the Pray Codex, and where is your evidence?

3. Where is the evidence for the secret AMS testing of a single alleged thread from the Lirey cloth?
 
The numbers are there in the paper, the insinuation follows from those numbers.
No, that just begs the question. The numbers are in the paper, but your interpretation of them (and/or that of the authors you're relying upon) as necessarily invalidating the results has nothing to do with what Damon et al. said or should have said. You have wrongly attributed a series of conclusions and judgments to Damon et al. that they plainly did not make. Pretending that those judgments are some inevitable consequence of the numbers just jumps to your desired conclusion while skipping the statistical and contextual arguments you've demonstrated are hard for you. It doesn't somehow make your prior attribution truthful.

Can you calculate...?
We'll come back to this, because the mess you've made in that paragraph may take a whole post by itself to clean up.

And it is still the chi^2 test that makes it invalid, that's still in the Damon paper.
The χ² value is in the paper, but again your interpretation of it as necessarily invalidating the test is not a judgment Damon et al. made, or deceptively failed to make. It is therefore incorrect for you to attribute it to them. You don't get to paste your interpretations and conclusions on Damon et al., either to pretend they made them when they plainly did not, or paradoxically to try to call them liars for not making them when you think they should have. Nor do you get to pretend your interpretations simply follow naturally from the numbers with no further thought and that the authors you're pillorying are somehow pinioned by them. You have a right to disagree with Damon et al. You don't have a right to misrepresent them.

Keep in mind that I don't keep fallacies firmly in mind.
I have no idea what fallacy you're talking about.

Science is not a popularity contest.
Nobody has said it is. Credibility for ideas in science is based on the rigor with which those ideas are supported, which includes such things as conformance to established norms in the relevant fields and a foundation of previous experience and expertise. A number of single-issue authors outside the field have attempted to pick away at Damon et al. to accuse them of all manner of statistical shenanigans. Yet those pickings have failed to rise to a cognizable level of rigor in the relevant sciences of archaeology and radiocarbon dating. Your authors want to pretend there's some cabal suppressing their views. They don't seem amenable to the notion that they just aren't doing good science, and that that's the real reason people ignore them.

You won't be able to explain anything to me unless you look at and discuss the data.
Which we have been doing systematically. But that means teaching you statistics, because statistics is how scientists look at data. So far you're a reluctant student. I've more or less committed to walking you slowly through a rebuttal to Casabianca, but stunts like shoving words into Damon's mouth thin my patience and make me question your good faith and whether I'm wasting my time.
 
Now @bobdroege7, instead of trying to distract attention with your chronic lack of understanding of statistics, why not address the questions I've repeatedly put to you?
Now, let's set aside your pathetic attempts at distraction and demonstrations of your ignorance of stats, and get back to the questions I actually asked of you.
1. Where are the examples of herringbone weave fabric dating to your date for the creation of the alleged shroud?

2. When was the fire that caused the damage to the Lirey cloth that you claim is shown on the Pray Codex, and where is your evidence?

3. Where is the evidence for the secret AMS testing of a single alleged thread from the Lirey cloth?
1. They rotted away
2. The fire was before the Pray Codex was written.
3. It was secret, so not published.

I will not answer these questions again, I have given you all the information I have.

Your conclusion is different from mine.
 
Damon et al also calls it a chi^2 test in their paper.

" a X2 test was applied to the dates for each sample"

And still, they do draw a conclusion from the results.

"The results of this test, given in Table 2, show that it is unlikely that the errors quoted by the laboratories for sample 1 fully reflect the overall scatter."

If there are errors in your measurements that you don't account for, that may invalidate your measurements, or it may not.

One thing you are still avoiding talking about is whether the data shows sufficient asymmetry to even calculate anything. See fig 1 in Damon et al.
 
Except the radiocarbon dating has stood up to examination.

And yet when only a tiny fringe of "scientists" (many, if not most, of whom aren't actually scientists) supports a theory, while the vast majority consider it drivel, it probably is drivel.

Ah yes, holding fast in the face of facts.....
A tiny fringe... that sounds like the ad populum fallacy.

I am trying to address the facts as presented in the Damon et al paper.
 
1. They rotted away
How very convenient.....
2. The fire was before the Pray Codex was written.
So you keep claiming. I asked when it happened, and for your evidence that it happened.
3. It was secret, so not published.
Right..... Except in a shroudie's book after the supposed investigator was safely dead.
Now, where is the evidence? What AMS facility? How did they test such a minuscule sample?
I will not answer these questions again, I have given you all the information I have.
Running away. Typical.
Your conclusion is different from mine.
Yep, mine is based on reality. Not wish fulfillment.
 
Damon et al also calls it a chi^2 test in their paper.
Yes, the Ward and Wilson test is "a chi-squared test." The Pearson test that you first erroneously referred us to is also "a chi-squared test." Any test that involves computing a test statistic that is meant to be χ2-distributed can be called "a chi-squared test." Again, just because χ2 values appear in the paper doesn't mean your interpretation of the purpose and implications of the test that produced those values is correct or that it necessarily follows from those values. Once again, you seem to be regarding these statistical tests as some kind of a machine where you put data in the top and turn a crank and a pat answer comes out the bottom that requires no further thought.

And still, they do draw a conclusion from the results.
But they don't draw your conclusions.

"The results of this test, given in Table 2, show that it is unlikely that the errors quoted by the laboratories for sample 1 fully reflect the overall scatter."
Asked and answered. A statement of the form, "X doesn't seem to explain all the variance in Y," is not at all equivalent to, nor does it entail, a statement of the form, "There is too much variance in Y for Y to be valid." You are purporting that the latter statement necessarily follows from the first in some way that cannot be questioned.

None of this tap dance fixes your misrepresentation of Damon et al. First you told us that the judgment that there was "too much error" in the dates and that the "dating was invalid" was "straight from the Damon paper." But when pressed, all you could come up with was the statement that one stated cause of scatter didn't seem to be the entire cause in this case. Then you switched sides and seemed to tacitly admit that the conclusions you were trying to pin on Damon et al. weren't actually in the paper, but that the authors were somehow dishonest for not stating them anyway. Now you're trying to fix that mess of contradiction by telling us the conclusions that you really want to put in Damon's mouth so obviously follow from the numbers themselves that there can be no question about them—straight-up begging the question.

The statement you made and attributed to Damon et al. is not in their paper. They are not dishonest in any way for not having drawn the same conclusions as you. Your conclusions do not inexorably follow from the χ2 values reported.

If there are errors in your measurements that you don't account for, that may invalidate your measurements, or it may not.
You're the one telling us it invalidates the measurement in this case, therefore it's your burden of proof. Begging the question does not carry that burden.

A root-cause analysis of what caused a purported outlier in a radiocarbon dating run is generally not possible according to what we find in the literature. But if you think it is, by all means describe how and give some examples. Accounting for unknown or unknowable error is exactly what statistics is for, and why we properly call it the study of uncertainty. If you could figure out all the root causes for why something didn't go the way you expected, you wouldn't have uncertainty—you'd have certainty. The entire raison d'être for distributions like the normal distribution is to help us reason mathematically about things we can't account for any other way.

What are you supposed to do when you have a χ2 value for radiocarbon dates that exceeds the critical value for your desired confidence interval? What is common in the archaeology field? What do Ward and Wilson recommend? You told us a few days ago that the dating was invalid as the result of the Ward and Wilson test, but now you seem to be softening and getting ready to accept that the Ward and Wilson test is not some black-and-white slam-dunk. (Hint: Ward and Wilson actually don't treat it that way in their own paper.) Damon et al. accounted for the scatter in the Sample 1 interlaboratory data by not combining the data directly as they could do for the other samples, and as advised in Ward and Wilson, but by applying a different statistical aggregation method supported by the t-distribution and common in the chemistry field.
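For anyone following along, here is a minimal sketch of the homogeneity check being discussed (inverse-variance pooling plus a χ²-distributed departure statistic, in the Ward and Wilson style). The three dates are the Sample 1 laboratory means commonly quoted from Damon et al., Table 2 (646±31, 750±30, 676±24 yr BP); treat them as illustrative and check them against the paper. They do reproduce the weighted mean of about 689 BP and the T ≈ 6.4 under discussion.

from scipy.stats import chi2

dates  = [646.0, 750.0, 676.0]   # radiocarbon ages, yr BP (illustrative values)
errors = [31.0, 30.0, 24.0]      # quoted 1-sigma errors

weights = [1.0 / e**2 for e in errors]                 # inverse-variance weights
pooled = sum(w * d for w, d in zip(weights, dates)) / sum(weights)
pooled_err = sum(weights) ** -0.5

# departure statistic: sum of squared standardized departures from the pooled mean
T = sum((d - pooled)**2 / e**2 for d, e in zip(dates, errors))
crit = chi2.ppf(0.95, df=len(dates) - 1)               # critical value for 2 degrees of freedom

print(f"pooled mean = {pooled:.0f} +/- {pooled_err:.0f} BP")   # ~689 +/- 16
print(f"T = {T:.2f}  vs  chi2(2, 0.05) = {crit:.3f}")          # ~6.35 vs 5.991

Note that nothing in that printout tells you what to do next; deciding how to treat a T value just over the critical line is exactly the judgment call the rest of this post is about.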

One thing you are still avoiding talking about is whether the data shows sufficient asymmetry to even calculate anything. See fig 1 in Damon et al.
I have no idea what you're talking about. And I don't recall you bringing it up before, so it's hardly something I'm "avoiding." But by all means keep trying to direct my rebuttal. It's hilarious how desperate you are to control the discussion and steer it away from things you can't fathom. Fig. 1 in Damon is a pretty picture. The actual data are elsewhere in the paper.

A tiny fringe... that sounds like the ad populum fallacy.
Hardly. Science is about vigorously challenging each other's methods and findings. Most of science is about doing just that, not even breaking new ground. Ward and Wilson spend two-thirds of their paper explaining why previous methods devised by their colleagues weren't any good. Christopher Bronk Ramsey at Oxford spends half his paper telling us why Ward and Wilson's method is no good. If the science in Damon et al. is so bad, where are the legions of radiocarbon dating experts that should be rising up to show those egregious flaws? Why are the only people offering criticism people from outside the field and, in their other work, focused only on the shroud of Turin?

I am trying to address the facts as presented in the Damon et al paper.
Not really. You're trying to foist a conclusion regarding those facts by vigorously begging its question and assiduously avoiding any sort of test of the reasoning that would ordinarily have to support such a conclusion.

Now let's go back (as promised) and look at your own attempt to analyze the lower-level data. I should warn you that it's a sign of bad faith when a person who has just learned a tidbit of new information goes off running in some direction with it according to a revised argument (now featuring the tidbit) but always in service of the same old preconceived notion.

Can you calculate the standard deviation of the 12 data points and tell me you get 1260 to 1390 using +/- 1.1 sd and +/- 2.6 sd?
No, because that doesn't follow. Why did you do it that way? The t5-distribution was part of a combined solution that kept the inverse-square weightings for intra-laboratory data while the pooled means were reckoned according to the t-distribution (but not according to t2) in order to maintain congruence with the other samples in Table 2. This is accounted for by the notations on Table 3. If you had just wanted to lump all the runs from all the labs together and treat them as one homogeneous data set, you should have used the factors from t9 (per Damon) or t11 (per your implied homogeneity), but either way something that doesn't entail any pooling. But then your results would not compare correctly with the control samples.
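As an aside, and assuming (as the notation suggests) that your ±2.6 factor is just the rounded two-sided 95% Student-t value for 5 degrees of freedom, here are the multipliers for the candidate choices of degrees of freedom. The snippet is only a reference check, nothing more.

from scipy.stats import norm, t

for df in (5, 9, 11):
    print(df, round(t.ppf(0.975, df), 3))   # 5 -> 2.571, 9 -> 2.262, 11 -> 2.201
print("normal", round(norm.ppf(0.975), 3))  # 1.96, the familiar ~2-sigma factor

The more degrees of freedom, the closer the multiplier gets to the ordinary normal value, which is why the choice of t5 versus t9 or t11 matters to the width of the interval.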

Using those 12 data points, I get a much wider range.
You don't show your work, so it's hard to imagine where all your mistakes might be. But yes, if you simply treat all 12 radiocarbon dates from all the runs for Sample 1 as t5-distributed data (and it's not correct to do so), you probably came up with 1102-1420 CE according to N(689, 61) and 95% CI giving you ±(2.6×61).
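(For the record, the arithmetic behind that interval, using the figures just quoted: 689 − 2.6 × 61 ≈ 530 yr BP, i.e. 1950 − 530 = 1420, and 689 + 2.6 × 61 ≈ 848 yr BP, i.e. 1950 − 848 = 1102; hence roughly 1102-1420 in uncalibrated "radiocarbon CE" terms.)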

As a check, do the same thing with Sample 4, using the ordinary ±2σ. Instead of 1263-1283 CE cal (Damon, Table 3), you'll get radiocarbon dates 1091-1354 CE. So there again, a much larger interval.

All you're showing with this is that using the wrong method gives you the wrong answer.

Also, I get 1259 for the unweighted mean in table 2, which is below the range 1260 to 1390, how is that possible?
It's possible because you're incorrectly comparing radiocarbon dates directly with calibrated calendar dates. Table 2 is radiocarbon dates in years before 1950. Table 3 is calibrated calendar dates. In Damon, Fig. 2, you see the calibration curve with the inside and outside dates at 95% CI for Sample 1 (the shroud) shown as intercepts. This is the proper method. You don't calibrate each radiocarbon date and then compute the confidence. You do all the statistical operations on the radiocarbon dates and then look up your calendar dates based on your final error (not just the mean).

In Table 3, the calibrated values for Sample 1 at 95% confidence are 1262-1312 CE and 1353-1385 CE. That's because of the two intercepts for the 660 yr BP lower limit on the radiocarbon date. (That's the mean radiocarbon date from Table 3, or the unweighted mean radiocarbon date from Table 2, minus its error: 691 − 31 = 660.) However, note that the paper states the values (meaning the calibrated calendar dates) have been "rounded up/down to the nearest 10 yr." (Damon, p. 614). From the calibrated dates, the outermost values, 1262 and 1385, are the lower and upper limits in calibrated calendar years. 1262 is rounded down to 1260 and 1385 is rounded up to 1390.

But the key concept is that if you attempt to find calendar age by resolving the radiocarbon BP epoch (1950 CE) against mean radiocarbon date—691 yr BP—with simple subtraction, what you're left with is still a radiocarbon date. You can't compare 1259 in radiocarbon years to 1260 CE (or 1262 CE) in calendar years.
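To line the numbers from the last few paragraphs up in one place (all of them already given above):

1950 − 691 = 1259 -> still a radiocarbon figure, not a calendar year
691 − 31 = 660 yr BP -> the lower radiocarbon limit that produces the two intercepts
calibrated at 95% confidence (Damon, Table 3): 1262-1312 CE and 1353-1385 CE
rounded to the nearest 10 yr: 1260 and 1390 CE as the outer bounds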

Here's a suggestion for you that might be a bit more revealing. Do a Ward and Wilson test on the intra-laboratory findings for each sample and then on the aggregated (non-pooled) data points from Table 1 for each sample. Put an X in the cell for each combination of laboratory and sample that fails the Ward and Wilson test at 95% CI. Then tell me what you think it might mean.

             | Sample 1 | Sample 2 | Sample 3 | Sample 4
Arizona      |          |          |          |
Oxford       |          |          |          |
Zurich       |          |          |          |
Aggregated   |          |          |          |
 
1. They rotted away
2. The fire was before the Pray Codex was written.
3. It was secret, so not published.

I will not answer these questions again, I have given you all the information I have.

Your conclusion is different from mine.
How do you know that any of the answers you gave are correct ones?
 
I never referred to Pearson's chi^2 test; it has always been the chi^2 test as performed in the Damon paper, and at least now you are willing to call it a test. But maybe you will admit that it shows heterogeneity, but maybe not.

There is no begging of the question, because I am not confirming the results of the test; quite the opposite: the test and the value published in Damon do not support the validity of the measurements.

Damon reported the calendar dates for the shroud as 1260 to 1312 and 1353 to 1384, which is not the kind of distribution expected for the radioactive decay of C-14. The distribution should be normal, not a distribution with two peaks and a hole in the middle, unless you have a sample of mixed ages.
 
I never referred to Pearson's chi^2 test
You attempted to instruct your critics about "the chi-squared test" by linking to the Wikipedia article for the Pearson test. You didn't specifically call it out by that name because you weren't aware that there was more than one kind of test and that calling it out by name is sometimes necessary.

...at least now you are willing to call it a test.
I never said it wasn't. Any references otherwise were trying to explain to you the difference between the χ²-distribution and the notion of a statistical test based on that distribution. You were laboring under the misconception that there could ever be only one kind of "chi-squared test."

But maybe you will admit that it shows heterogeneity, but maybe not.
I've given you my reasons for not accepting your conclusions. I'm engaged in an exercise to help you understand the scientific and mathematical principles behind those reasons. It is becoming apparent that you don't wish to participate in that exercise.

There is no begging of the question, because I am not confirming the results of the test; quite the opposite: the test and the value published in Damon do not support the validity of the measurements.
You are simply demanding that we accept your conclusion as if it were an inevitable consequent—that the premise and the conclusion are essentially identical.

Damon reported the calendar dates for the shroud as 1260 to 1312 and 1353 to 1384, which is not the kind of distribution expected for the radioactive decay of C-14. The distribution should be normal, not a distribution with two peaks and a hole in the middle, unless you have a sample of mixed ages.
This is completely ignorant of a major, well-known factor in radiocarbon dating. See Stuiver and (a completely different) Pearson. See also R.E. Taylor's Radiocarbon Dating: An Archaeological Perspective (for a scientific audience) and J.D. Macdougall's Nature's Clocks (for a lay audience) for more information.

Further, a mixed sample won't give you two different radiocarbon dates, each corresponding to a different age of the constituent material. A run on a sample of mixed ages will simply give you a single radiocarbon date skewed proportionally according to the fraction of each kind of sample. If different runs draw differently from the allegedly mixed sample, then that is accommodated in the pooling step.
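To put a rough number on that point, here is a toy calculation. The two component ages are invented, the 50/50 split is arbitrary, and calibration and fractionation corrections are ignored; the only thing it is meant to show is that mixing yields one intermediate age, not two.

import math

MEAN_LIFE = 8033.0   # Libby mean-life behind conventional radiocarbon ages

def frac_modern(age_bp):
    # fraction of modern 14C activity implied by a conventional radiocarbon age
    return math.exp(-age_bp / MEAN_LIFE)

old, young = 1920.0, 650.0    # hypothetical component radiocarbon ages, yr BP
mixed = 0.5 * frac_modern(old) + 0.5 * frac_modern(young)   # 50/50 mix by carbon mass
print(round(-MEAN_LIFE * math.log(mixed)))   # ~1260: a single intermediate age

A run on such a mixture reports something near 1260 BP; it does not hand you a 1920 BP date and a 650 BP date side by side.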

At this point it's clear you don't know what you're talking about and have very little interest in learning.
 
How do you know that any of the answers you gave are correct ones?
He has faith.....

Basically if his convolutions aren't true then @bobdroege7 must be wrong, and he can't accept that.
Hence the handwaving as to why the 3:1 herringbone cloth must have been available in the first century, but no other samples survived. And no evidence of the necessary looms either of course. Not one mention anywhere.

The ironic thing here is that there are actually 'herringbone' cloth samples from the first century (and centuries earlier). In fact there is one from Pompeii in 79 CE!!!

But now I must be permitted a textile nerdery digression; the pattern of the fabric of the Lirey cloth is actually a 'chevron' rather than a 'herringbone' weave (I mentioned this previously; naturally @bobdroege7 paid no attention). Specifically, it's a 3:1 rather than a 2:2 weave. The former requires a four-treadle loom. Shockingly, there is no evidence of such a loom in the first century, or indeed for centuries afterwards.
All this demonstrates that @bobdroege7 not only hasn't a clue about the fabric of the Lirey cloth, but is also utterly ignorant about weaving in general.


Now, skipping over @bobdroege7's claims of a fire that somehow damaged the non-existent cloth in a manner that supposedly resembles an illustration in the Pray Codex that doesn't actually resemble a shroud at all, there is the secret radiocarbon test.

I don't know if Case made the entire story up himself and attributed it to Heller in a pathetic attempt to give it credibility, but I suspect that is what happened. I think Heller, despite his numerous faults, retained too much credibility to have fabricated the story. Rather telling is Case's failure to publish his alleged interview with Heller until the latter was safely dead.
Now, let's look at what was published (and no, @bobdroege7, there was no "paper", just Case's awful book).

In that year [1982] physician and biophysicist John Heller of the New England Institute for Medical Research in Ridgefield (CT, USA) sent to the University of California a thread of the Shroud extracted from the area of the Raes sample. The thread was divided into two parts and dated: one half turned out to date back to 200 A.D. and the other half to 1000 A.D. It should be pointed out that one of the two halves was starched.
Now, anyone familiar with the basics of AMS carbon dating will notice this is complete bollocks. There was no UC AMS facility in 1982 for a start. And the reference to a single thread from the Raes sample is another dead giveaway. As an aside, this nonsense is one reason I suspect that Case rather than Heller fabricated (!) the story; Heller wouldn't have made the basic mistakes.

Now, how much would a single thread from the Raes sample weigh?
Max length would be 4 cm (the longest dimension of the sample). The density of the shroud is approximately 40 mg/cm², with approximately 25 threads per centimetre (or 38 for the warp, but let's give the most optimistic case).
So the thread would weigh approximately three milligrammes.
Which is far lower than the ability of AMS to date in 1982.
Oops.
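For reference, one way to reconstruct that estimate from the figures above (a rough cross-check only):

40 mg/cm² ÷ (2 × 25 ≈ 50 cm of thread per cm² of cloth) ≈ 0.8 mg per cm of thread
0.8 mg/cm × 4 cm ≈ 3 mg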
 
Yes, do you understand what "I don't know" means?
Of course I do. However, that wasn't the only answer you presented. You said, "I don't know," but then you also said, "It may have something to do with the inflection point of the curve." Out of an abundance of deference I wanted to ascertain which answer you were going to stick with. Since my charity was met with derision, I vow to be a much more demanding teacher from now on.

Theoretically, it is a continuous curve with two inflection points, so I don't see any fundamental shifts.
But the question wasn't about theory. It was about the empirical reality. I get that it's very attractive to reason about the elegant shapes of the curves and the similarly elegant math that defines them. In fact that's where statisticians spend most of their time. The question was what we would see in the data that corresponds to these divisions.

The answer is: absolutely nothing.

"I don't know," is a perfectly good answer for a student to give to that question. But being able to say confidently that there is no visible reflection of the statistical abstractions in the actual measured data marks an important step from student to practitioner. It underscores that the notion of categorical "error" in experimental statistics is entirely normative and not at all the product of some empirical, analytical exercise. This flies firmly in the face of your assertions based on the rigidity in marginally related concepts as regulatory specifications and industrial tolerances.

Not even Ward and Wilson agree with you. In their paper they characterize a χ² value of 6.16 as "borderline" acceptable for χ²(ν=2, α=0.05) ≈ 5.991. Incidentally they also go on to show how adding more samples (and thereby more degrees of freedom) brings the χ² value back down below the critical value without dismissing or discarding prior data that initially seemed like an outlier. It also buttresses Damon et al. in their procedure for re-reckoning the Sample 1 data first by relying on the t-distribution (which is appropriate when you don't know where your population mean and standard deviation might be) and applying a procedure from experimental chemistry that distributes the weight of the errors proportionally among pooled and non-pooled data points. This gives them enough degrees of freedom to accommodate what might be an outlier. Keep in mind that in a purely normative categorization, the inherently probabilistic nature of that determination decays to an arbitrary yes-no answer.
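To make the degrees-of-freedom point concrete, here is a toy recalculation. The four dates below are invented purely for illustration; the pattern, not the numbers, is the point.

from scipy.stats import chi2

def ww_T(dates, errors):
    # Ward and Wilson style departure statistic: squared standardized departures from the pooled mean
    w = [1.0 / e**2 for e in errors]
    pooled = sum(wi * d for wi, d in zip(w, dates)) / sum(w)
    return sum((d - pooled)**2 / e**2 for d, e in zip(dates, errors))

dates, errors = [650.0, 755.0, 680.0], [30.0, 30.0, 30.0]           # hypothetical lab means
print(ww_T(dates, errors), chi2.ppf(0.95, 2))                        # ~6.50 vs 5.991: "fails"
print(ww_T(dates + [690.0], errors + [30.0]), chi2.ppf(0.95, 3))     # ~6.52 vs 7.815: "passes"

One additional, consistent determination barely moves T but raises the critical value, which is exactly the behaviour Ward and Wilson describe.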

But the underlying problem is that the very nature of radiocarbon dating makes outliers an inevitability even in the most careful circumstances. And this makes identifying true outliers difficult because the scatter in good data provides only a limited basis from which to fit a distribution. The Ward and Wilson method applies a weighted error moment in the form of an inverse-square factor. And while this is certainly defensible from the mathematics of pooled, normally-distributed measurements, it's not the only way to reason about the probability of an outlier.

Casabianca et al. attempt to analyze the raw data and the reported data using the OxCal software. But they disingenuously characterize the results they got. In his 2004 paper, Bronk Ramsey (who wrote OxCal) cautions against mechanically rejecting findings that fall below 60%, citing the expected variation even in valid data. Other criteria must be used to reject an outlier, he writes. Bronk Ramsey himself bases his work on J. Christen's 1994 chapter, "Summarizing a Set of Radiocarbon Determinations: a Robust Approach." This is the foundation of the Bayesian approach. Bayesian statistics are entirely different from frequentist approaches such as Libby or Ward and Wilson. In his chapter, Christen analyzes the Nature data for the shroud of Turin and shows how his approach concludes that the Sample 1 data is markedly less likely to be an outlier than Ward and Wilson's method suggests. He then goes on to explain why, and to reanalyze other classical radiocarbon datings. And rather than a categorical approach, Christen (and then subsequently Bronk Ramsey) agree to stay faithful to the probabilistic nature of the underlying data. Nothing empirical in the data supports such a categorization; it's purely normative, and erases information from consideration! And they say, "Don't just apply this rigidly or mechanically!"

The takeaway is that the Ward and Wilson test is not the only useful, defensible, and appropriate method of summarizing radiocarbon dating. And not even Ward and Wilson apply it as rigidly as you seem to think is foregone, inevitable, and indisputable. You won't even acknowledge that a line of reasoning is needed to get from χ² ≈ 6.352 to "Invalid! Fraud!" None of the responsible authorities here agree with that question-begging.

I did the Ward and Wilson analysis that I challenged you to do on the intra-laboratory data. Casabianca et al. do this too, but they obscure it behind a lot of handwaving about the raw data and their various conspiracy theories. Instead of a tedious table of numbers, I'm just putting an X in the cell for every intra-laboratory test that fails the Ward and Wilson test (i.e., T ≥ χ²(ν, α=0.05) for the appropriate degrees of freedom).

(rows: laboratory; columns: Samples 1-4)
Arizona: X X X
Oxford: (no failures)
Zurich: (no failures)
Aggregated: X X X

I've left out the actual figures, but I can supply them upon request if you want them. What I took away from the inter-laboratory figures (not represented in this table) is that the T values for Samples 2-4 were well below the critical values, while the Sample 1 T value was only just above the critical value (something like 4%). In the non-normative, non-categorical world that archaeology actually operates in for radiocarbon dating, that's a salient factor. It's not that archaeologists suddenly started thinking differently in 1994. It's that archaeology took that long to find a statistical model that better represented how they had been thinking all along—properly reflecting the variance they expected in the data, regardless of the arbitrary categorizations that erased the smooth, probabilistic nature of the underlying data.

Now what does that table suggest to you? Casabianca et al. want to attribute the high χ² value for Sample 1 to contamination. And they wind up their paper suggesting all the different things allegedly found in Sample 1 that might have skewed the results older. But for that to be the case, we should see scatter (as measured in the Ward and Wilson test) more prevalently in the Sample 1 column. We would expect the other two labs to have more scatter in their various runs, depending on the degree of contamination allegedly present in the sample. Instead the broad scatter is more closely associated with the Arizona lab. The scatter in their intra-laboratory findings is singularly responsible for the scatter in the aggregated, non-pooled data (bottom row).

The Ward and Wilson test is especially sensitive to error and will disproportionately represent scatter in a particular pool. This won't be immediately apparent in a ±1σ illustration such as Damon et al., Fig. 1. And the major complaint with the Ward and Wilson test was that it was too sensitive to the expected empirical variation in radiocarbon dating, especially with a very few degrees of freedom. The effect of few degrees of freedom is that there are only a few values that χ² can take on, and therefore a "borderline" categorical reckoning really requires additional thought—the next available χ² might lie well within the confidence interval, nudged outside it only by the coarse behavior of any model with few degrees of freedom. Ward and Wilson explicitly wrestled with this; it's not as if they kept it a secret. And they give advice, which Casabianca ignores. Christen finally escaped the frequentist woes altogether and confirmed according to Bayes what everyone had had to correct by other means previously, including in Damon. Damon compensated for the suspected oversensitivity of the Ward and Wilson test by remodeling the data using the t-distribution and a somewhat less sensitive error moment. The price he paid for this was the broader confidence interval, which is intuitively appropriate. You won't learn any of this from Casabianca.

Now we can speculate endlessly about what might be going wrong in the Arizona lab that causes them to have broad scatter in three of four samples. But it doesn't matter. What matters is that the effects of scatter are correlated with the lab, not with the sample. Casabianca et al. would rather spin conspiracy theories about how the raw data was massaged than confront the simple statistical fact that their claims are contradicted by the data. They can speculate all they want about contamination in the shroud sample. And you can stomp and whine all you want, saying that the only way you can get scatter like that is from different fabrics in the sample. But the data plainly disagree.

Damon et al. remain good science not because they are popular or mainstream, but because they adhere to defensible conventions and principles of the archaeological sciences, up to and including giving statistical analysis its proper role in the process. Casabianca et al. remain relatively obscure not because they advocate an unpopular or religiously-motivated interpretation, but because they mask their poor reasoning behind conspiracy theories and inappropriate statistical formalism that actually ignores much of what their cited authorities caution against.
 
Damon et al do not remain good science because they did not release their data until it was FOIAed by Casabianca.

Their reported range of 1262-1312 and 1352-1384 is clearly incorrect because the historical age is ~1350, so they should have dismissed that part.

When you did your calculations, which data did you use? I am suggesting that Damon did the calculations wrong.

Van Haelst calculated a different value. Which one is correct? https://www.shroud.com/vanhels5.pdf

Damon et al reported 4 measurements for the Arizona data, yet they did 40 measurements, and the 4 they did report were 8 measurements combined into 4.

And you asked a question implying there was a specific answer; "I don't know" is just as good as "absolutely nothing."
If you aspire to be a good teacher, don't ask trick questions.
 
Damon et al do not remain good science because they did not release their data until it was FOIAed by Casabianca.
That assertion is, as I've already shown, a blatant and rather silly lie.
Their reported range of 1262-1312 and 1352-1384 is clearly incorrect because the historical age is ~1350, so they should have dismissed that part.
Untrue. As has been shown before.
When you did your calculations, which data did you use? I am suggesting that Damon did the calculations wrong.
Oh good grief....
And you asked a question implying there was a specific answer; "I don't know" is just as good as "absolutely nothing."
If you aspire to be a good teacher, don't ask trick questions.
Is this supposed to mean something?
 
Damon et al do not remain good science because they did not release their data until it was FOIAed by Casabianca.
No, that's just Casabianca's conspiracy-mongering.

Their reported range of 1262-1312 and 1352-1384 is clearly incorrect because the historical age is ~1350, so they should have dismissed that part.
No, that's not how it works. The potential age range must include 1262 because the calibration is unambiguous for that end of the interval. Because the calibration is ambiguous for the other numbers, the upper limit of the interval gets a reasonably hard cutoff at 1384 CE. Then the correct way to interpret what you're wrongly considering to be an excluded subinterval is that according to the probability distribution underlying the Stuiver and Pearson method, the upper limit of the calendar date interval is not likely to fall within the date range 1312-1352 CE to 95% confidence. The absolutely wrong way to interpret it would be that the lower limit of the calibrated calendar interval can be 1352 CE.
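If it helps, here is a toy sketch of why a single radiocarbon age can map onto more than one calendar window. The "calibration curve" below is completely made up (a declining trend with an exaggerated wiggle); it is meant only to show the geometry of multiple intercepts, not to reproduce Stuiver and Pearson.

import numpy as np

cal_years = np.arange(1200, 1451)                       # hypothetical calendar years CE
x = cal_years - 1200.0
rc_age = 700.0 - 0.5 * x + 40.0 * np.sin(x / 20.0)      # made-up wiggly calibration curve, yr BP

target = 660.0                                          # one radiocarbon age to calibrate
print(cal_years[np.abs(rc_age - target) < 2.0])         # several separate clusters of calendar years

With a wiggly curve, the horizontal line at the target age crosses it more than once, so the calibrated result comes out as disjoint calendar intervals. The gap between them is not a hole in the data; it is simply a stretch of calendar time that the curve never assigns to that radiocarbon age.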

Honestly, you either need to grapple with the complicated reality of what you're trying to argue or admit that it's over your head and concede. Every post from you brings a new attempt to shoehorn what is rather messy and difficult science into your pidgin understanding of it, and every day you're just as confident as you were the day before that one of the most highly scrutinized examples of radiocarbon dating is "clearly" wrong on that new basis.

You're really falling all over yourself trying to make this your new slam-dunk. Yesterday the multiple intercept was irrefutable evidence that Sample 1 had multiple ages of fibers—a non sequitur of truly flabbergasting magnitude. Today the multiple intercept is irrefutable evidence that the radiocarbon dating was incompatible with dating by historiographical methods. Neither of these has the slightest grounding in reality. Imagine what desperate new mud-against-the-wall claim we'll be treated to tomorrow.

When you did your calculations, which data did you use? I am suggesting that Damon did the calculations wrong.
As I probably already stated, I used the data in Nature, as did Christen. When you say that Damon "did the calculations wrong," is that your opinion based on your own work, or are you just restating other people's claims?

You proffered a rejection of Damon based on a single number reported in the paper: the test statistic T ≈ 6.4 > χ²(ν=2, α=0.05) ≈ 5.991. That requires you to accept Damon's math up to that point, because his math is what's giving you your smoking gun. So why are you equivocating now? Yes, I understand that a great deal of Casabianca's argument involves trying to stir up a mess in the raw data. But that's something you have to ignore for now if you're trying to stand by the argument that the computed chi-squared value alone tells you that you must reject the Sample 1 dating because it alone proves heterogeneity.
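As an aside on just how borderline that number is (a property of the χ² distribution itself, nothing to do with the shroud): for ν = 2 the survival function is exp(−T/2), so P(χ² ≥ 6.4) = exp(−3.2) ≈ 0.041, i.e., barely past the conventional 5% line. That is the size of the effect you are treating as a categorical disproof.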

You don't get to sidestep this step of the rebuttal by suddenly trying to invoke the raw data and your authors' enthusiastic flogging of Damon over it. You made a claim that relied on the Nature data being reliable enough to give you the number you used, so that's where we'll look for this step.

Van Haelst calculated a different value. Which one is correct? https://www.shroud.com/vanhels5.pdf
Ambiguous and irrelevant. I'm not going to wade through a meandering 28-page self-published manuscript trying to guess what "different value" you mean in my presentations versus that one.

I invited you to discover for yourself the more likely explanation for scatter in the Sample 1 data. When you proved incapable or unwilling to do it, I did it for you and laid out the explanation that Casabianca obscures behind fiddling with raw data and spinning conspiracy theories. All you can seem to do in response is attempt to wallow in the same minutiae and sidestep the salient answer.

Can you explain why the Ward and Wilson test for intralaboratory agreement clearly establishes that wide scatter in the Arizona lab results across the board tracks with wide scatter in the unpooled results? Can you explain why—if Sample 1 contained a mix of fibers, as you claim—the Oxford and Zurich labs got such good intralaboratory results for the same cloth, according to that supposedly telltale math? If you think my Xs are in the wrong spots, do your own math and put them where you think they belong. If you have a different theory for why the Xs are in the Arizona row rather than in the Sample 1 column (under your hypothesis that Sample 1 is mixed fabric) now's the time to say so.

If you want to talk about someone else's work, we can do that after we've finished with Casabianca and if you can supply a peer-reviewed, academically published version. (I'm entertaining Casabianca only because he is published.) But as usual, until then, stop trying to derail the rebuttal.

Damon et al reported 4 measurements for the Arizona data, yet they did 40 measurements, and the 4 they did report were 8 measurements combined into 4.
The sources I cited to earlier explain why the results have been (and should be) pooled for analysis across different laboratories. These are authors who have unquestionable qualifications in radiocarbon dating, and some of whom Casabianca accepts as experts in some respects. But his unwillingness to engage with their wisdom on how to aggregate the data for the most useful results is part of why Casabianca remains largely ignored. Neither he nor his coauthors have any experience or training in radiocarbon dating. It's all a bunch of statistics-only handwaving by inexperienced authors.

You'll be pleased to know that the Christen method is applied to the unpooled data, but that's because its numerical theory is entirely different from Libby's or Ward and Wilson's and derives its own model for probability that isn't based on distributions. As such, Christen's method identifies A1.1 and O1.1 as potential outliers. And, correctly, he simply removes those two runs as likely outliers without speculating about why they're outliers, without conspiratorially handwaving about how their presence somehow invalidates the whole process, and without committing some scientific cardinal sin. And with A1.1 and O1.1 excluded as outliers, all the remaining results for all laboratories for Sample 1 snap into place with a very high degree of agreement. In 1988, only two outliers out of 12 pools would have been a good result. No one disputes that outliers could have been present, or were likely present. But you and your authors desperately want the answer to outliers to be that we have to throw out the whole study as invalid due to incorrigibly heterogeneous data. That's just not how radiocarbon dating works.

Christen's is the modern (and more mathematically defensible) way of doing what Damon, Ward, Wilson, and Libby were already doing back in the 1980s with somewhat more crude methods. Contrary to what you think, and to what Casabianca thinks, the correct response in the field to inevitable outlier data is not a hair-on-fire exercise whereby you throw the whole study into doubt and rant about conspiracies to doctor the data and hide the evidence. What's sad is how Casabianca uses the Christen statistical method (via Christopher Bronk Ramsey's implementation in OxCal), but utterly ignores what these experts say to do with the answers it gives you. Casabianca just jumps to the conclusion that the sample itself must have been irredeemably compromised.

And you asked a question implying there was a specific answer; "I don't know" is just as good as "absolutely nothing."
If you aspire to be a good teacher, don't ask trick questions.
I don't aspire to be a good teacher; I am a good teacher, if my students' ratings were any indicator. And I wager I have more experience than you in teaching very difficult subjects at the college level, so I'll keep my own counsel about the best way to go about it.

I phrased the question like that because your argument from the very beginning presumed that normative intervals in distributions worked like some kind of real-world counterpart in actual data, akin to regulatory specifications or industrial tolerances. And you correctly noted that those tolerance-and-specification concepts in industry often do have implications in real-world measurement and operation. And some of them are calculated using statistics. But despite my best efforts to disconnect that rigidity concept declaratively in your mind from the very different concept of normative convention in science, you simply never considered that they could possibly be two different things.

Since I couldn't simply tell you they were different and have you believe me, I had to come up with a way for you to convince yourself that they were different. So I created a line of questioning designed to give you an incentive to try to identify and defend the sameness. Only when you realized that you couldn't did you come to believe what I and others had been telling you all along. And no, I don't take credit for that. Socrates invented this method.

Now quit griping and deal with my rebuttal as I stated it.
 