Does the Shroud of Turin Show Expected Elongation of the Head in 2D?

Speaking as a layperson: you're right, but for the wrong reasons. People tend to be skeptical for the same reason they would be if you held up, say, a pointy piece of bone and claimed it was a unicorn horn after scientists had already analyzed it and determined it was a walrus tusk.
And for some, it matters that this particular "unicorn horn" comes from a gift shop that has a well-known reputation for passing off walrus tusks and unicorn horns. I'm always amazed and amused when people who are so deeply steeped in one side of a controversy try to pass themselves off as the clear-eyed ones.
 
You can address the arguments made in Casabianca's paper or not; just ad-homming him doesn't help.

He claims that the data lack homogeneity because they fail the chi^2 test according to the method of Ward and Wilson. That's the same method as in the Damon paper. Did you just read the Damon paper and learn what chi^2 method they used? It's the first time in the thread that Ward and Wilson have been mentioned.
All of this nonsense has been addressed.
The failure of the chi^2 test indicates a level of heterogeneity too high for valid radiocarbon dating.
Bollocks.
 
Right, all three labs detected contamination; they were able to remove some but not all of it.
Will you be able to produce evidence for this assertion? As you have failed to do for your secret radiocarbon test, your first century herringbone weave cloth, et cetera.....
Three separate textile experts examined photos of the sampled area and noted that it was from a repair.
An utter lie. Which we've covered previously.
 
It is not ad hominem to reject unfounded claims to expertise. If you believe otherwise, report the post.
Like many such, @bobdroege7 does not seem to grasp what "argumentum ad hominem" actually means.
No, you haven’t. You think the test itself is the proof. The logic of the paper tacitly asks the reader to trust the judgement of the authors in interpreting the results in context.
A very important point.
 
Yes, the contamination and shroud repairs are visible and/or documented.
A lie.
It's not magic nor is it invisible.
And yet only the magic few, most of whom have never examined the Lirey cloth, can see them....
That has been explained repeatedly to you.
No. You have asserted these claims repeatedly but failed to provide plausible supporting evidence.
Like your secret radiocarbon test claim.....
 
Speaking as a layperson: you're right, but for the wrong reasons. People tend to be skeptical for the same reason they would be if you held up, say, a pointy piece of bone and claimed it was a unicorn horn after scientists had already analyzed it and determined it was a walrus tusk. Most people don't even have a basic reason to accept the very premise of what it is purported to be, much less the claim that the radiocarbon dating was wrong because the labs were all a bunch of bungling oafs who didn't know what they were doing.
OT, we encountered a 'unicorn boy' last weekend.....
 
Like many such, @bobdroege7 does not seem to grasp what "argumentum ad hominem" actually means.
It's quite common for people to mistake voir dire of an expert witness for an ad hominem attack. It's even easier to do under the common pattern of pseudoscience where you may find 90% defensible science combined with 10% sketchy speculative or conclusory stuff—often implied or cleverly disguised. Everyone wants the rebuttals to focus on the unremarkable science or math and ignore the part that's actually broken. A statistical test typically gives you a measurement of uncertainty. Whether you can tolerate that degree of uncertainty for your purpose is a judgment call, not a math problem. It seems that those who advocate Casabianca want statements like
The failure of the chi^2 test indicates a level of heterogeneity too high for valid radiocarbon dating.
to sound like an inevitable mathematical fact, not an expert opinion in a specialized field. And if it's just mechanically derived via Ward and Wilson, with no interpretation necessary, then you don't need any relevant expertise. Anyone could do it, and if you start to question whether someone has the relevant expertise to do it then you're ignoring the "real" argument and evidence.
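For concreteness, here is a minimal Python sketch of what the Ward and Wilson test actually computes, using the commonly quoted per-lab dates for shroud sample 1 from the Damon paper (Arizona 646±31, Oxford 750±30, Zurich 676±24 radiocarbon years BP; these figures are an assumption taken from the published literature, not from this thread). Note how the math stops at a p-value, and the pass/fail verdict depends entirely on which significance level you choose:

```python
import math

# Per-lab mean dates for shroud sample 1 as commonly quoted from Damon et al.
# (1989): Arizona 646 +/- 31, Oxford 750 +/- 30, Zurich 676 +/- 24 (years BP).
dates  = [646.0, 750.0, 676.0]
sigmas = [31.0, 30.0, 24.0]

# Ward & Wilson (1978): the statistic is the error-weighted sum of squared
# deviations from the pooled (inverse-variance weighted) mean, distributed as
# chi^2 with n - 1 degrees of freedom under the hypothesis of a common date.
weights = [1.0 / s**2 for s in sigmas]
pooled = sum(w * x for w, x in zip(weights, dates)) / sum(weights)
T = sum((x - pooled)**2 / s**2 for x, s in zip(dates, sigmas))
p = math.exp(-T / 2.0)  # exact chi^2 survival function for 2 degrees of freedom

print(f"pooled mean = {pooled:.0f} BP, T = {T:.2f}, p = {p:.3f}")
# The math ends at the p-value; where to draw the line is the judgment call:
for alpha in (0.05, 0.01):
    verdict = "fails" if p < alpha else "passes"
    print(f"  at alpha = {alpha}: homogeneity test {verdict}")
```

With these inputs T comes out near 6.35 and p near 0.042, consistent with the chi^2 = 6.4 at roughly the 5% significance level quoted in the Damon paper: a fail at the conventional 5% threshold, but a pass at 1%.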
 
It's quite common for people to mistake voir dire of an expert witness for an ad hominem attack.
Absolutely. I remember my testifying experiences.
It's even easier to do under the common pattern of pseudoscience where you may find 90% defensible science combined with 10% sketchy speculative or conclusory stuff—often implied or cleverly disguised. Everyone wants the rebuttals to focus on the unremarkable science or math and ignore the part that's actually broken. A statistical test typically gives you a measurement of uncertainty. Whether you can tolerate that degree of uncertainty for your purpose is a judgment call, not a math problem. It seems that those who advocate Casabianca want statements like
The failure of the chi^2 test indicates a level of heterogeneity too high for valid radiocarbon dating.

to sound like an inevitable mathematical fact, not an expert opinion in a specialized field. And if it's just mechanically derived via Ward and Wilson, with no interpretation necessary, then you don't need any relevant expertise. Anyone could do it, and if you start to question whether someone has the relevant expertise to do it then you're ignoring the "real" argument and evidence.
Real Science is hard. Especially if you really want magic.
 
re: the chi-square test

to sound like an inevitable mathematical fact, not an expert opinion in a specialized field. And if it's just mechanically derived via Ward and Wilson, with no interpretation necessary, then you don't need any relevant expertise. Anyone could do it, and if you start to question whether someone has the relevant expertise to do it then you're ignoring the "real" argument and evidence.

I remember the old Bevington book on scientific statistics (I think it is Bevington) talking about things like failing statistical tests when looking at data sets where the data don't agree to within the stated uncertainty of the data points. In that situation, the conclusion was that their uncertainties are too small: likely there are unidentified factors contributing to a larger uncertainty than was assigned.

It is never the conclusion that "therefore the data are for different samples."

If you take the reported dates from the 14C dating and double the uncertainty, what happens to the chi-square test? All of a sudden it's fine.

What does that tell you? That you don't need 30 CE contamination.
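That doubling claim is easy to check numerically. A quick Python sketch (again assuming the commonly quoted Damon et al. per-lab values, which are not stated in this thread) shows that doubling every quoted error leaves the weighted mean unchanged, divides the chi-square statistic by exactly four, and turns the failure into a comfortable pass:

```python
import math

def ward_wilson(dates, sigmas):
    """Ward & Wilson chi^2 homogeneity statistic and its p-value (2 df here)."""
    w = [1.0 / s**2 for s in sigmas]
    mean = sum(wi * x for wi, x in zip(w, dates)) / sum(w)
    T = sum((x - mean)**2 / s**2 for x, s in zip(dates, sigmas))
    return T, math.exp(-T / 2.0)  # exp(-T/2) is exact for 2 degrees of freedom

# Commonly quoted sample-1 values from Damon et al. (1989), radiocarbon years BP.
dates, sigmas = [646.0, 750.0, 676.0], [31.0, 30.0, 24.0]

T1, p1 = ward_wilson(dates, sigmas)                   # with the reported errors
T2, p2 = ward_wilson(dates, [2 * s for s in sigmas])  # with doubled errors
print(f"reported errors: T = {T1:.2f}, p = {p1:.3f}")
print(f"doubled errors:  T = {T2:.2f}, p = {p2:.3f}")
```

T drops from about 6.35 (p ≈ 0.04, a fail at the 5% level) to about 1.59 (p ≈ 0.45): the three dates look mutually consistent as soon as the quoted errors are allowed to be modestly understated, with no first-century contamination required.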
 
re: the chi-square test



I remember the old Bevington book on scientific statistics (I think it is Bevington) talking about things like failing statistical tests when looking at data sets where the data don't agree to within the stated uncertainty of the data points. In that situation, the conclusion was that their uncertainties are too small: likely there are unidentified factors contributing to a larger uncertainty than was assigned.

It is never the conclusion that "therefore the data are for different samples."

If you take the reported dates from the 14C dating and double the uncertainty, what happens to the chi-square test? All of a sudden it's fine.

What does that tell you? That you don't need 30 CE contamination.
'Data Reduction and Error Analysis'? A classic. In fact, if @bobdroege7's claims about his background are true, he should have been exposed to it.
 
I did. It wasn’t the kind of rebuttal you were prepared for, so apparently you don’t understand it.


Irrelevant. You confidently thought it was another test and tried to instruct people to that effect. This indicates you don’t understand the test or how it’s used in the field. Sure, Casabianca et al. propose to tell you, but they’re not experts in the field either.


It is not ad hominem to reject unfounded claims to expertise. If you believe otherwise, report the post.


No, you haven’t. You think the test itself is the proof. The logic of the paper tacitly asks the reader to trust the judgement of the authors in interpreting the results in context.
Not so fast. How did you rebut it again? Maybe you didn't, or maybe I did not understand it.

No, I did not think it was another test, I have always referred to the chi^2 test as performed in the Damon paper, no other chi^2 test.

I don't think the test is proof; you know what proofs are for, I assume. It is evidence, and you should know the difference.

You are gaslighting Casabianca et al. They do not ask anyone to trust their judgement; they cite others and provide data such that one can test their claims.
 
re: the chi-square test



I remember the old Bevington book on scientific statistics (I think it is Bevington) talking about things like failing statistical tests when looking at data sets where the data don't agree to within the stated uncertainty of the data points. In that situation, the conclusion was that their uncertainties are too small: likely there are unidentified factors contributing to a larger uncertainty than was assigned.

It is never the conclusion that "therefore the data are for different samples."

If you take the reported dates from the 14C dating and double the uncertainty, what happens to the chi-square test? All of a sudden it's fine.

What does that tell you? That you don't need 30 CE contamination.
Right, and edited to add, which chi^2 test?

How close to agreement are these two statements?

"show that it is unlikely that the errors quoted by the laboratories for sample 1 fully reflect the overall scatter."

"Likely there are unidentified factors that are contributing to a larger uncertainty than was assigned."

One is from the Damon paper.

So, how do you double the uncertainty?

Not in my bag of tricks.

The contamination is from the 15th century or later, or any time the shroud was repaired.
 
'Data Reduction and Error Analysis'? A classic. In fact, if @bobdroege7's claims about his background are true, he should have been exposed to it.
From my training, one measures uncertainty instead of arbitrarily assigning it.

Even if you read it from a calibration sheet, someone still measured it.
 
The problem is in the inexpert interpretation of the results of the test. You really want the problem to be somewhere else that’s easier for you to defend. That’s how pseudoscience works.

If you want to trust their judgment, that’s your privilege. I don’t, for the reasons given. And it doesn’t seem like anyone else except shroud authenticists sees a problem.


The Wikipedia page you sent us to has absolutely nothing to do with the Ward and Wilson test. You didn’t know the difference, but you tried to play teacher and got it very wrong. I’m still not sure that you understand how different kinds of models—which can be radically different—can be predicated on the same distribution. That’s kind of a big deal in statistics.


Weasel words. You’re treating the paper as if a definitive answer just drops automatically out of the math and requires no expertise to put into context to answer questions like, “Does this mean the results can’t be trusted?”


Straw man. The logic of the paper requires you to accept their judgment regarding the results. That there are some objective elements to their argument does not preclude that.

Has Casabianca or any of the coauthors ever published anything in archaeology or radiocarbon dating for something other than the shroud? Do others in archaeology in general or radiocarbon dating in general recognize them as authorities? These are not ad hominem or otherwise improper questions.
Right,

You know that Casabianca and Ward and Wilson were published in the same journal.

And if you insist on a voir dire for Casabianca et al, the answer to your question is yes for the coauthors but no for Casabianca. I can provide a list of their publications for 50 bucks. Less than one dollar a citation.

That does not mean their conclusions are true, but we have the opportunity to check their work.

Math does not lie. Yes, that is the way I treat data. If the data meet certain conditions, you can reject the null hypothesis. Your scholarly opinion means nothing at this point.
 
You know that Casabianca and Ward and Wilson were published in the same journal.
Your point being ... ?

And if you insist on a voir dire for Casabianca et al, the answer to your question is yes for the coauthors but no for Casabianca. I can provide a list of their publications for 50 bucks. Less than one dollar a citation.
I'm supposed to pay you to do your homework?

That does not mean their conclusions are true, but we have the opportunity to check their work.
"We?" You have the responsibility to lay the foundation for the expert opinions you want people to believe.

Math does not lie.
Asked and answered. Statistics is the study of uncertainty. The mathematics of statistics will give you a number that represents the uncertainty in a measurement. But it does not tell you whether that degree of uncertainty is tolerable for your purposes. As I said, you want to pretend this is a completely cut-and-dried problem where a yes/no answer drops mechanically out of the bottom of some completely deterministic machine. That's not how it works.

Yes, that is the way I treat data. If the data meet certain conditions, you can reject the null hypothesis. Your scholarly opinion means nothing at this point.
Means nothing to whom?
 
Your point being ... ?


I'm supposed to pay you to do your homework?


"We?" You have the responsibility to lay the foundation for the expert opinions you want people to believe.


Asked and answered. Statistics is the study of uncertainty. The mathematics of statistics will give you a number that represents the uncertainty in a measurement. But it does not tell you whether that degree of uncertainty is tolerable for your purposes. As I said, you want to pretend this is a completely cut-and-dried problem where a yes/no answer drops mechanically out of the bottom of some completely deterministic machine. That's not how it works.


Means nothing to whom?

Not my homework, but as an answer to your question.

Yes, I find Torrisi and Pernagallo to have sufficient qualifications in statistics, and you do not; otherwise you would not say statistics is the study of uncertainty, because statistics is the study of data.

I would say you are pretending to understand statistics.

I am not pretending; that is what it is. You run a statistical test, and if it is out of tolerance you do something different than if it is in tolerance.

And, this is the most important, they are instruments, not machines.
 
