Does the Shroud of Turin Show Expected Elongation of the Head in 2D?

The answer is the 68-95-99 rule. It's not a trick question, and no, you didn't know any such thing ahead of time.

Because 95% is the most commonly used of the 68, 95, and 99 figures in the experimental sciences.

I first asked you why Casabianca used a 95% confidence interval. That was because you were reading χ2 critical values from a table that included many columns for different confidence intervals. It's important to know why Casabianca thought 95% was the appropriate confidence interval, because if you use a different one, the numbers you're using to drive the heterogeneity argument change. If a yes-no determination ends up being highly sensitive to a parameter, you had better be sure to set that parameter at a defensible place.

But you didn't get it. You answered that Casabianca used 95% because Damon used 95%. That's a true statement but an incomplete answer. It just kicks the question down the road to ask, "Why then did Damon et al. use a 95% CI?"

You still didn't get it. I had to finally lay the 99% (3 standard deviations) figure on you before you had enough to Google the answer.

Now that you know where the 95% CI comes from, I've asked the next question in my line of questioning. In an ordinary, valid data set, what observable, fundamental shifts occur in the underlying data at those {68, 95, 99, ...} CI boundaries that let you say something about data that falls outside them? And yes, when you finally come up with an answer to that question, I'll ask the next question and so forth until you finally understand my rebuttal to Casabianca.


Yes, the answer is that Damon used 68% and 95% because those correspond to μ±1σ and μ±2σ respectively in the normal distribution. Some physics needs 4σ or 5σ confidence to be accepted, while I occasionally want to aim for 6σ. (I.e., Six Sigma, but don't mistake engineering for experimental science. I don't want to have to relitigate that.)
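If you want to see where those percentages actually come from, a few throwaway lines of Python (nothing from Damon or Casabianca, just the normal CDF via math.erf) recover them:

```python
from math import erf, sqrt

# Two-sided coverage of mu +/- k*sigma under a normal distribution:
# P(|Z| <= k) = erf(k / sqrt(2))
for k in (1, 2, 3):
    coverage = erf(k / sqrt(2))
    print(f"{k} sigma: {coverage:.4%}")

# prints approximately:
# 1 sigma: 68.2689%
# 2 sigma: 95.4500%
# 3 sigma: 99.7300%
```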

No, this doesn't just hold for the normal distribution.

The normal (Gaussian) distribution is the Mother of All Distributions.

Properties like mean, standard deviation, standard score, and degrees of freedom are defined for the distributions that descend from it (the χ2- and t-distributions among them), but they're formulated differently in each case because the mathematical definition of each distribution is different. But the concepts transfer. The concept of standard deviation is generic enough to apply to all distributions that have one, and theories developed in those terms for one distribution generally apply to the others, because that's the way we've formulated them to work.

When I asked you why Casabianca used the 95% CI, if not for the 68-95-99 rule of thumb, you insinuated that only he knows—because it couldn't possibly be for that rule, in turn because (according to you) that rule only works for normally-distributed data. But then Damon also used 68% CI and 95% CI, also for reasons that have nothing to do with the 68-95-99 rule? And also for reasons that only he knows?

I'm going to give you a chance to retract that silly, knee-jerk answer. If Casabianca is just making up numbers to use in his analysis and not explaining where they came from, wouldn't that make him a poor scientist? Neither Damon nor Casabianca has to explain why they're using 68% and 95% CIs because everyone already knows. And yes, it's because of the 68-95-99 rule.

The next question then is to examine how those feel-good intervals relate to the measurable properties of the data they describe. We want to reason about coin tosses or fish eggs or carbon isotope atoms, not just abstract statistics. So what marks a change in the underlying data corresponding to those whole-number sigma boundaries? What happens in the data that lets us say—according to where they lie with respect to those intervals—that we should reject that data? The intervals effectively categorize the data. What similar signs of discrete categorization should I be able to see in the actual data that corresponds to those elegant intervals?


It's my rebuttal. You're desperately trying to take it away from me and drive it away from the weak part of the argument. The weak parts are the premises upon which you place such great faith in the rigidity of the statistical analysis and therefore its ability to claim that Damon et al. cheated. In turn, I've determined that your great faith is based on assumptions you've made about elementary statistics rather than an education in it.

You're repeatedly trying to bait me into answering questions that presume your premises are correct. My rebuttal is instead aimed at your premises, hoping to get you to understand why they're not as correct as you may think. You can either deal with that fact or you can admit you're not prepared to have your argument addressed in the proper manner. You aren't entitled to demand an answer that is both correct and simplified to fit your existing knowledge.


It's my rebuttal.

I'm sure you were good at your job, and I'm sure your understanding of statistics was sufficient at the time to allow you to be successful. But you clearly don't have the proper background in statistics to understand why Casabianca's claims are not considered credible by archaeologists.

As we've discussed, that's not really the issue. You've tried to declare the 68-95-99 rule irrelevant to Damon and Casabianca because you wrongly think it applies only to the normal distribution. In your haste to knee-jerk your way around every time you have to admit you just learned something, you've forgotten that your argument relies on those intervals being defined for the χ2-distribution (else where did that table come from and why does it matter?) and having the same meaning as they do for other distributions. And you ended up in a corner where you can't tell us where Casabianca, Damon, et al. got the intervals they used if not from the rule.

So the tl;dr answer to your question is, "It doesn't matter." The subtext is, "By even asking it you're displaying ignorance that we need to keep correcting."
First, your post is lengthy, appropriate, and almost completely true.

In my job, I did not have the luxury of working with normal distributions; I worked mostly with radioactive decay, which is usually near normal but skewed. There is a gray area between nice normal distributions and those where you cannot use statistics like the standard deviation and the others.

That's my point with the Damon data for the shroud. To paraphrase a fictional Japanese baseball coach talking to Tom Selleck's character: your distribution has a hole in it.

Yes, I knew about the 68-95-99.7 correlation with standard deviations, do not incorrectly round it to 99, but yes, I did not know it was considered a rule. And I knew, and know, that 6 sigma is one in a million.

You can not apply it to bad data.

And yes, Damon cheated, unless you consider averaging data points before reporting your data to be legitimate.

That was one of the things revealed when Casabianca et al. requested access to the data behind Damon et al.'s paper.

And it is true that I don't know what happens at one sigma, two sigma, etc. The inflection points on a normal distribution are close to the one sigma, that might not be the answer you are looking for, I don't know.

I will discuss anything you want, but the end goal for me is to discuss the distribution of the data as reported by Damon and Casabianca.

Because it is radioactive decay we are talking about, and it had better be normal or Gaussian to be acceptable.

That's the question for me, does the Damon data have an acceptable distribution?
 
You can not apply it to bad data.
This is a naive claim. Applying it to data is one way of assessing whether data are "bad." That's the premise behind Casabianca's argument. Goodness of fit has to be reckoned according to a certain measure of confidence. As our desired confidence varies, so does the probability that the fit is good, or, conversely, that there are outliers that might make for "bad" data.
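To make that concrete, here is a minimal sketch of the kind of chi-squared consistency check being discussed here, comparing the scatter of several lab means against their quoted errors. The three dates and errors are made-up illustration values, not the figures from Damon's Table 2:

```python
from scipy.stats import chi2

dates  = [650.0, 750.0, 680.0]   # hypothetical lab means (yr BP), NOT Damon's numbers
errors = [30.0, 30.0, 25.0]      # hypothetical quoted 1-sigma errors

# Inverse-variance weighted (pooled) mean of the lab results
weights = [1 / e**2 for e in errors]
pooled  = sum(w * d for w, d in zip(weights, dates)) / sum(weights)

# Test statistic: scatter of the lab means about the pooled mean,
# measured in units of the quoted errors
T = sum(((d - pooled) / e) ** 2 for d, e in zip(dates, errors))

dof      = len(dates) - 1
critical = chi2.ppf(0.95, dof)   # 95% critical value, ~5.99 for 2 degrees of freedom

print(f"T = {T:.2f}, 95% critical value = {critical:.2f}")
print("errors explain the scatter" if T < critical else
      "scatter exceeds what the quoted errors predict")
```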

In science, we publish according to agreed-upon confidence intervals derived from elegantly-expressed mathematical demarcations in the canonical Gaussian distribution. You seem to have finally come around to why we do this, so congratulations. What I want to know is how those intervals (which now have the effect of being categories) are reflected as a discretization or categorization that we can see in the actual underlying data. Please do your best to answer that question. "I don't know" is an acceptable answer that I promise not to hold in derision.

I use "bad" in scare quotes because you're still laboring under the "must fit the specifications" misconception and still trying to foist a conclusion.

I will discuss anything you want, but the end goal for me is to discuss the distribution of the data as reported by Damon and Casabianca.
That seems like just another way of saying you reserve the right to derail my rebuttal to suit you. And again you're trying to sidestep the real problem and suggest that your goals, not mine, get to decide the course to follow. I don't care what you're interested in. You challenged me to rebut Casabianca, so you're going to sit there and let me do it the way I think is right.
 
Yes, I knew about the 68-95-99.7 correlation with standard deviations, do not incorrectly round it to 99
That's a cosmetic quibble. All the figures are approximations. It's not somehow more correct to clip off one element at one decimal place and leave the others as integers. You'll see plenty of references to 68-95-99 without the additional decimal on the 99, and we don't call it the "68.26894921-95.44997361-99.73002039 rule" for obvious reasons. It's just a convenient name. Don't read too much into it.

Now it matters (cosmetically and arithmetically) when you start to include percentage equivalents to μ±kσ where k ≥ 3, which then means you're not really using the rule of thumb anymore. All those values lie between 99% and 100% which means you're computing them using an integral over a probability density function (blech!) or reading them off a table and paying a lot more attention to what comes after the decimal place. Those figures are all 99-point-something, therefore the "something" becomes the interesting part and you won't want to approximate them aggressively.
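If you want to see those decimals for yourself, the same math.erf one-liner from before does it (again, just a throwaway sketch, not anything from the papers):

```python
from math import erf, sqrt

# Beyond 3 sigma everything is "99-point-something", so the decimals
# are where the information lives.
for k in (3, 4, 5, 6):
    print(f"{k} sigma: {erf(k / sqrt(2)):.7%}")

# approximately 99.7300204%, 99.9936658%, 99.9999427%, 99.9999998%
```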

but yes I did not know it was considered a rule.
It's a rule of thumb, which in turn leads to an agreed-upon convention, which in turn allows us to compare findings across lots of sciences—insofar as that's useful. The whole point of descriptive statistics is to turn uncertain measurements expressed in domain-specific, raw-number, real-world values into things we can accurately and uniformly reason about with confidence in their congruity. So for all normally-distributed data (just as an example) in every empirical field, a z-score is a z-score is a z-score. This is the power of statistical reasoning. There's also a downside.
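By way of a trivial sketch (made-up numbers, nothing to do with the shroud data):

```python
# A z-score is just a measurement re-expressed in units of its
# distribution's standard deviation, whatever field the number came from.
def z_score(x, mu, sigma):
    return (x - mu) / sigma

print(z_score(120.0, 100.0, 15.0))  # 1.33..., i.e. about 1.3 sigma above the mean
```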

And it is true that I don't know what happens at one sigma, two sigma, etc. The inflection points on a normal distribution are close to the one sigma, that might not be the answer you are looking for, I don't know.
This is a partial answer to my question, and I see that you provided it before I asked it again. You're at least trying, which is much better than where we've been for the past few weeks. Thanks for that.

I'm asking what you would see in the data. I understand it's tempting to look at the graph of the idealized distribution, but the question is about what happens to the data at those standard deviation boundaries where the data can be reasonably expected to conform to whatever distribution you're using. Maybe this is where I'll recant and say that playing around with data in Excel or some other tool might help you visualize it.
 
All those values lie between 99% and 100% which means you're computing them using an integral over a probability density function (blech!)

Dude, now I am offended!

(I love the integrated normal distribution; the NORMSDIST function is probably my favorite function in Excel!)
 
I have Damon's paper open in a window right now. Please direct me to where it says, as you purport:
"The results of this test, given in Table 2, show that it is unlikely that the errors quoted by the laboratories for sample 1 fully reflect the overall scatter. The errors might still reflect the uncertainties in the three dates relative to one another,"

There you go.
 
Dude, now I am offended!

(I love the integrated normal distribution; the NORMSDIST function is probably my favorite function in Excel!)
Me, I like probability functions...

or, maybe I don't.

I do like pretty pictures of them, done by others, but doing them myself...
 
Sorry chap,

That's actually straight from the Damon paper.
Really? I have yet to find the phrase:
That dating is invalid, the AMS dating stands as invalid.
:rolleyes:


Now, let's get past your pathetic attempts at distraction, and your demonstrations of ignorance of stats, and get back to the questions I actually asked of you.
1. Where are the examples of herringbone weave fabric, as used in the Lirey cloth, that date to the first century? You seem to have changed your mind on where the cloth came from, altering your early assertions that it was from the Middle East to your current claims of Italy.
Let's see your evidence.

2. You, and some of your fellow shroudies, claim that the Pray Codex (dating from the late twelfth century) shows the Lirey cloth, more than a century before the fake shroud was created, and specifically that the marks shown in an illustration, which you claim is of a shroud, show the burns visible on the Lirey cloth.
You previously claimed that these were from the molten silver, until I pointed out to you that that fire occurred in 1532, i.e. more than three hundred years after the Pray Codex, whereupon you frantically back-pedalled (after disappearing for a while). You later claimed that the marks on the alleged shroud shown in the Pray Codex are actually burns from another, earlier fire.
When was this fire and where is your evidence?

3. You claimed that a single thread allegedly from the supposed shroud was subjected to a secret AMS test, based exclusively on an (alleged) interview of Heller by Case. Now I am still waiting for your evidence of this. Starting with which AMS facility carried out the test and why there are no records (I assume you'll spout more conspiratorial drivel).
Where is the evidence for the secret testing?
 
"The results of this test, given in Table 2, show that it is unlikely that the errors quoted by the laboratories for sample 1 fully reflect the overall scatter. The errors might still reflect the uncertainties in the three dates relative to one another,"
No, a statement of the form, "X does not explain the amount of Y," is not equivalent to saying, "There is too much Y to be valid." Nowhere does Damon report what you attribute to him regarding whether the amount of scatter means the dating is invalid.
 
No, a statement of the form, "X does not explain the amount of Y," is not equivalent to saying, "There is too much Y to be valid." Nowhere does Damon report what you attribute to him regarding whether the amount of scatter means the dating is invalid.
Yes it does; it shows Damon et al.'s error analysis was incomplete. Claiming that your data is more precise than it is, is fraud.

I used Excel to compare the standard deviations published with their data against the actual standard deviation of the published dates.

For example, the standard deviation of the 4 dates from Arizona: 591, 690, 606, and 701 is 56.5, which is a bit higher than what they published.
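For anyone who wants to check, a few lines of Python (the equivalent of Excel's STDEV, i.e. the sample standard deviation) give the same number:

```python
from statistics import mean, stdev

# The four Arizona dates quoted above (yr BP)
arizona = [591, 690, 606, 701]

print(mean(arizona))             # 647
print(round(stdev(arizona), 1))  # 56.5 (sample standard deviation, n-1 denominator)
```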

Also this, from Damon: "The spread of the measurements for sample 1 is somewhat greater than would be expected from the errors quoted."

Of course Damon did not report that; it sinks their paper like the Bismarck sank the Hood.

Perhaps now we can discuss the chi^2 test, or the data in the Damon paper.

Since radioactive decay is random, like throwing dice or flipping coins, the data should have a distribution like that.

But it does not; it's like the old American TV show Twin Peaks.
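As a rough sketch of what I mean (purely illustrative parameters, not anybody's lab counts): decay counting is a Poisson process, and for large expected counts it looks very close to a normal distribution with sigma equal to the square root of the mean.

```python
import numpy as np

# Simulated counting experiment: Poisson counts with a large mean
# behave like a normal distribution with sigma ~ sqrt(mean).
rng = np.random.default_rng(0)
counts = rng.poisson(lam=10_000, size=100_000)

print(counts.mean())   # ~10000
print(counts.std())    # ~100, i.e. roughly sqrt(10000)

within_1sd = np.mean(np.abs(counts - counts.mean()) <= counts.std())
print(within_1sd)      # ~0.68, the 68 of 68-95-99.7
```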
 
Perhaps now we can discuss the chi^2 test, or the data in the Damon paper.
Is this...
And it is true that I don't know what happens at one sigma, two sigma, etc. The inflection points on a normal distribution are close to the one sigma, that might not be the answer you are looking for, I don't know.
...your final answer to the question :—
In an ordinary, valid data set, what observable, fundamental shifts occur in the underlying data at those {68, 95, 99, ...} CI boundaries that let you say something about data that falls outside them?
* * *
So what marks a change in the underlying data corresponding to those whole-number sigma boundaries? What happens in the data that lets us say—according to where they lie with respect to those intervals—that we should reject that data? The intervals effectively categorize the data. What similar signs of discrete categorization should I be able to see in the actual data that corresponds to those elegant intervals?
Or are you still working on it?
 
Yes it does; it shows Damon et al.'s error analysis was incomplete. Claiming that your data is more precise than it is, is fraud.
Damon made no such claim.

In fact, you split Damon's sentence right in the middle. At least you were honest enough to leave the trailing comma, but it's disingenuous of you to put words in Damon's mouth that you should have known he didn't say. And in fact you outright accuse him of deception and concealment—
Of course Damon did not report that; it sinks their paper like the Bismarck sank the Hood.
—for not "admitting" that he mishandled an alleged outlier as some of his critics have claimed.

What Damon presented as a followup to what you wrongly characterize as some kind of smoking gun, to wit—
Also this, from Damon: "The spread of the measurements for sample 1 is somewhat greater than would be expected from the errors quoted."
—was the "studentization" of the data for Sample 1, using the correct and perfectly acceptable process that I alluded to in the lengthy post you agreed was on point. Damon didn't just leave the unexpected scatter hanging. What he said was that because the χ2 test revealed the unlikelihood that the reported error could account for the spread in the data, the reported error couldn't be used to derive the appropriate confidence intervals for Sample 1 according to the straightforward model of normality. Instead the confidence intervals had to be inferred from the studentized data, using the t-distribution instead of the normal distribution. This then was reported in Table 3, where the methods of obtaining the two confidence intervals was properly identified.

You may want to take issue with the studentization of the data as a method of accommodating a potential outlier. But to claim Damon somehow knew the scatter meant the dating was invalid and that he therefore chose to conceal something or deceptively claim precision that the data did not support is straight-up conspiratorial nonsense. Keep firmly in mind that Damon et al. are still considered good science while Casabianca et al. enjoy essentially zero credibility outside the small group of shroud authenticists. I'm trying to explain to you why that is, and the answer (not surprisingly) isn't that the whole world is biased against Casabianca.
 
Is this...

...your final answer to the question :—

Or are you still working on it?
Yes, do you understand what "I don't know" means?

Theoretically, it is a continuous curve with two inflection points, so I don't see any fundamental shifts.
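For what it's worth, those two inflection points sit exactly at plus and minus one sigma; setting the second derivative of the Gaussian density to zero shows it:

```latex
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\,
       \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right),
\qquad
f''(x) = \frac{1}{\sigma^3\sqrt{2\pi}}
         \left(\frac{(x-\mu)^2}{\sigma^2} - 1\right)
         \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)
       = 0
\;\Longleftrightarrow\;
x = \mu \pm \sigma .
```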
 
No, someone else offered the judgment that Damon's error analysis was incomplete and therefore invalid. You insinuated that the judgment of invalidity could be found in Damon. That's simply not true.
The numbers are there in the paper, the insinuation follows from those numbers.

Can you calculate the standard deviation of the 12 data points and tell me you get 1260 to 1390 using +/- 1.1 sd and +/- 2.6 sd?
Using those 12 data points, I get a much wider range.

Also, I get 1259 for the unweighted mean in table 2, which is below the range 1260 to 1390, how is that possible?

And it is still the chi^2 test that makes it invalid, that's still in the Damon paper.
 
Keep firmly in mind that Damon et al. are still considered good science while Casabianca et al. enjoy essentially zero credibility outside the small group of shroud authenticists. I'm trying to explain to you why that is, and the answer (not surprisingly) isn't that the whole world is biased against Casabianca.

Keep in mind that I don't keep fallacies firmly in mind.

Science is not a popularity contest.

You won't be able to explain anything to me unless you look at and discuss the data.
 
The numbers are there in the paper, the insinuation follows from those numbers.
Ah, wallpaper words.
Can you calculate the standard deviation of the 12 data points and tell me you get 1260 to 1390 using +/- 1.1 sd and +/- 2.6 sd?
Using those 12 data points, I get a much wider range.

Also, I get 1259 for the unweighted mean in table 2, which is below the range 1260 to 1390, how is that possible?

And it is still the chi^2 test that makes it invalid, that's still in the Damon paper.
:rolleyes:
 
Keep in mind that I don't keep fallacies firmly in mind.
Except the radiocarbon dating has stood up to examination.
Science is not a popularity contest.
And yet when only a tiny fringe of "scientists" (many, if not most, of whom aren't actually scientists) supports a theory, while the vast majority consider it drivel, it probably is drivel.
You won't be able to explain anything to me unless you look at and discuss the data.
Ah yes, holding fast in the face of facts.....
 
