• Quick note - the problem with Youtube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

"Does the Shroud of Turin Show Expected Elongation of the Head in 2D?"

Oh look, a Fringe Reset. @bobdroege7 seems to think we've forgotten he posted this nonsense before.


Sigh. Except the repaired portions of the Lirey cloth are easily seen and hence weren't used for the radiocarbon sampling.

The basic premise of radiocarbon dating is that there was no input of 14C into the material being tested.

Those would be the experts who state that the area sampled doesn't contain a patch? Those experts?

This is your usual drivel which has been debunked previously. Your inability to understand that rebuttal is your problem. Educate yourself.

That's because you have a desperate need for the Lirey cloth to be real. The analytic techniques have been explained.
Some repairs are more easily seen than others, but I can see the repairs on the pictures of the samples later destroyed by the testing.

That's what sequestered means: no input of carbon into the material tested. That means, according to you, the radiocarbon testing is not valid, because the shroud has been repaired so many times that the knowledge of all the repairs has been lost.

And I don't care if it is real or not, beyond a scientific curiosity.
 
First, every measurement is inaccurate to some extent because the accurate value is unknown; that's freshman science in any discipline. It is impossible to measure accuracy; you should know that. If you can't keep accuracy and precision separated, how can you discuss scientific measurements?

Second, your "irrelevant" is not irrelevant: if a cloth has been patched with modern threads, that will throw off the carbon-14 date.

Thirdly, you never gave an adequate rebuttal to Casabianca et al., because you never discussed the Chi^2 issue adequately. Nor their exposure of data hiding and the refusal to release the raw data for almost 30 years.

Fourthly, you can check back and see that I linked to a generic wiki post on Chi^2 tests and not to the specific Pearson Chi^2 test.

Data pooling as you call it, but that is not what Damon did, they had five data points and averaged two sets of two to combine the five data points into 3 to get better results. You OK with that? It would be a simple statistical homework problem to show that what they did is wrong.
Jesus Christ, man, you seem hell bent on embarrassing yourself, but I can't tell why.
 
First, every measurement is inaccurate to some extent because the accurate value is unknown; that's freshman science in any discipline. It is impossible to measure accuracy; you should know that. If you can't keep accuracy and precision separated, how can you discuss scientific measurements?
You expressed your doubt in terms of "accuracy," not precision. I merely repudiated that doubt in like language without further comment. It seems if anyone needs a lecture on the difference between accuracy and precision, it is you. The Arizona lab certainly had a problem with precision. But there is no problem with accuracy here, as cross-laboratory results are consilient with other physical evidence. You have simply declared that results are "inaccurate" by virtue of some statistical test you barely understand, and a handwaving claim that there must have been some invisible patch from which the radiocarbon dating samples were drawn.

Second, your "irrelevant" is not irrelevant: if a cloth has been patched with modern threads, that will throw off the carbon-14 date.
But not in any of the ways you claimed.

You first claimed that the supposed lack of homogeneity in the dates for Sample 1 (the shroud) was best explained by a mix of cloth in the samples. I invited you to do some more in-depth statistical analysis to confirm that hypothesis. You were not competent to do so, so I did it for you. I showed instead that the scatter in the data was better explained by a hypothesis evidently related to the Arizona lab's methods, which produced more scatter across all samples than the other laboratories. This would not be true if the problem were samples contaminated with modern cloth.

You then claimed that the bimodal distribution of the calendar dates was the result of two different kinds of cloth being present in the sample. That is so scientifically and statistically illiterate that I'm frankly embarrassed for you for having even suggested such a thing. Even when that gross misconception was corrected, you still tried to make considerable conspiratorial hay out of quite a pedestrian statistical reality. You really aren't in a position to lecture the world on how heterogeneous specimens will bear out in calendar dates.

Thirdly, you never gave an adequate rebuttal to Casabianca et al., because you never discussed the Chi^2 issue adequately. Nor their exposure of data hiding and the refusal to release the raw data for almost 30 years.
Again I disagree. Your inability to understand the rebuttal is not the same as my not having provided it. You simply have no desire to learn what needs to be known in order to understand why the rebuttal works, and you are not entitled to demand some other rebuttal that is both correct and fits within your limited understanding. Consequently you pivot and fringe-reset the same old simplistic talking points and offer only "aw, shucks!" homespun attempts to present your willful ignorance as if it were a strength. Again, the world's scientists accept the Damon et al. dating as good science. They do not accept Casabianca's frantic mud-stirring as anything consequential. These are the facts. I'm trying to explain to you why those are the facts, and it is not for the conspiratorial reasons the authenticists complain about.

The Ward & Wilson test is one of several possible methods to identify probable outliers in pooled data. It is by no means the only method or even the best method. Damon et al. applied a method common in the late 1980s for accommodating outliers. Subsequent work using the same pooled data but more refined statistics confirms that the radiocarbon dating of the shroud was affected by surprisingly few outliers. Outliers are quite common in radiocarbon dating, according to Taylor. That Casabianca doesn't know what to do with probable outliers does not substantiate your claim that Damon et al. cheated.
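The arithmetic behind that test is simple enough to sketch. Here is a minimal Python version of the Ward & Wilson case I homogeneity statistic, run against the three laboratory means for the shroud sample as reported in Damon et al. (646 ± 31, 750 ± 30, 676 ± 24 years BP); treat this as an illustration, not a substitute for the published analysis:

```python
import math

def ward_wilson(dates, errors):
    """Ward & Wilson (1978) case I homogeneity test: standardized
    squared deviations from the inverse-variance pooled mean, which
    follow a chi-squared distribution with n-1 degrees of freedom
    if all determinations estimate the same true date."""
    w = [1.0 / s ** 2 for s in errors]
    pooled = sum(wi * x for wi, x in zip(w, dates)) / sum(w)
    T = sum(((x - pooled) / s) ** 2 for x, s in zip(dates, errors))
    return pooled, T

# Lab means for Sample 1 (the shroud), years BP, per Damon et al. 1989.
pooled, T = ward_wilson([646, 750, 676], [31, 30, 24])
# For df = 2 the chi-squared survival function is simply exp(-T/2).
p = math.exp(-T / 2)
print(round(pooled), round(T, 2), round(p, 3))  # 689 6.35 0.042
```

The statistic lands just past the 5% critical value (5.99 for two degrees of freedom), which is exactly the borderline result everyone has been arguing about: probable outliers, not proof of heterogeneous cloth.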

Casabianca's accusation of "data hiding" is explained entirely by his misunderstanding of how the data must be pooled and what is appropriate reporting in the field. Instead of attempting to understand archaeology and radiocarbon dating, he chose to spin a bunch of conspiracy theories supported by no evidence. His claim that the statistic in the Ward & Wilson test constitutes a failure of the method to produce acceptably homogeneous results is not supported by Ward and Wilson themselves, so I can't imagine why you think my pointing this (and other relevant points) out to you is somehow insufficient.

Fourthly, you can check back and see that I linked to a generic wiki post on Chi^2 tests and not to the specific Pearson Chi^2 test.
Yes, there exists a separate Wikipedia article on the Pearson test to which you did not link. However, you insinuated that reading the article you did link would give us more information on why "the chi-squared test" is supposedly a slam-dunk failure for Damon et al. That article provides no such insight because it does not discuss the kind of test that Ward and Wilson devised. You didn't know that when you linked it.

The Ward & Wilson test is a test for homogeneity that uses a χ²-distribution, but works for continuously distributed data, not the categorized data your article describes. The data pooling you fail to understand is how we go from discretely sampled distributions to the continuous distributions that we would need for Ward and Wilson. The bulk of your article discusses some of the tests that use the χ²-distribution to reason about categorical data. The Pearson goodness-of-fit test is the most common of those tests, and thus receives the bulk of the attention in the article despite its also having its own article. The other tests the article discusses are also tests of categorical data. Don't be fooled by some of it having been sampled from data we understand to be normally distributed; those data must be put into contingency tables (i.e., categorized) before any of the other tests mentioned in that article will apply. That has nothing to do with Ward and Wilson, and nothing to do with the mathematics upon which they base their model, which requires no categorization.
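To make the contrast concrete, here is the sort of test your article actually covers: a Pearson goodness-of-fit statistic over binned counts (the die-roll counts below are made up). Note the categorization step, which Ward and Wilson's method never requires:

```python
# Pearson goodness-of-fit works on *binned counts*, not on continuous
# measurements with individual errors (made-up die-roll data).
observed = [18, 22, 19, 21, 24, 16]  # counts in six categories
expected = [20] * 6                  # uniform null hypothesis
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 2))  # 2.1 on 5 degrees of freedom
```

A radiocarbon determination is a continuous measurement with its own quoted error; there is no contingency table anywhere in sight, which is why that article tells you nothing about the test Damon et al. actually used.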

Your article mentions homogeneity in one paragraph, in a very handwaving fashion with no examples and no reference to any actual homogeneity tests that use the χ²-distribution. Ward and Wilson's work doesn't even make the laundry list of other tests—again, categorical tests—that get no more mention in your article than a link.

The fact remains that when you sent us to that article, you had no clue what you were talking about. Our subsequent examination of your comprehension of statistics bore that out in sufficient detail. All you could muster by way of understanding was that the T-value didn't fit the "specifications" according to "the chi^2 test" (as if there were only one) and therefore the data had to be rejected. You had no idea where any of those values came from or what they meant, or even how the test works. The existence of separate Wikipedia articles for the various tests that use the χ²-distribution does not mean you somehow still understood what you thought you were directing us to. You Googled for "chi-squared test" and linked the first thing that popped up.

Data pooling as you call it, but that is not what Damon did, they had five data points and averaged two sets of two to combine the five data points into 3 to get better results. You OK with that?
Yes. And so is the entire rest of the archaeology community. The process you describe is data pooling. It's necessary in this case to resolve what would otherwise be incompatible factors in the degrees of freedom in the individual runs. Thus normalized, the data can be properly tested for homogeneity (as Ward and Wilson show), or further categorized to be suitable for the other—albeit irrelevant—tests that use the χ²-distribution for goodness-of-fit into categories.
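For the record, the pooling arithmetic is nothing exotic. It's an inverse-variance weighted combination; a sketch with made-up run values:

```python
def pool(x1, s1, x2, s2):
    """Error-weighted combination of two radiocarbon determinations.
    Each run is weighted by the inverse of its variance, and the
    pooled error shrinks accordingly."""
    w1, w2 = 1.0 / s1 ** 2, 1.0 / s2 ** 2
    mean = (w1 * x1 + w2 * x2) / (w1 + w2)
    err = (w1 + w2) ** -0.5
    return mean, err

# Hypothetical example: two runs on the same subsample, years BP.
m, e = pool(640, 45, 660, 55)
print(round(m, 1), round(e, 1))  # 648.0 34.8
```

Combining runs this way loses no information; it just expresses the same measurements as one determination with a properly propagated error, which is what the homogeneity test needs as input.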

It would be a simple statistical homework problem to show that what they did is wrong.
In whose opinion? Casabianca et al. tried to make such an argument and got laughed at by the people with actual qualifications in the field. You tried to invoke other pro-authenticity authors (similarly unqualified) whose self-published offerings are even more pathetically misinformed, such as someone actually trying to apply a derivative of a binomial distribution! You are certainly not in a position to judge what is correct statistical modeling for a given problem.

Keep in mind you're claiming that one of the most noteworthy applications of one of the most talked-about measurement methods in all of science is "obviously" wrong, in a paper authored by more than twenty eminent people in the field. If that's the case then why aren't eminent scholars pointing this out to avoid embarrassment for their field? Why instead are people like Taylor (who literally wrote the literal college textbook on radiocarbon dating) citing to Damon et al. favorably? Why is the supposedly best evidence of this purported scientific fraud authored by people with no qualifications in the relevant field?

You are wrong; it's not a rule, it's a guideline, and one that Damon et al. did not follow.
The first rule of statistics is to know your data and how they are supposed to behave. That informs your use of statistical norms and models and the interpretation of the results of statistical analysis. When you suggest that no one else would have any reason to use anything other than a 95% CI, real science just sort of does a facepalm in your general direction. Damon et al. reported their data according to the norms of archaeology, and specifically according to the norms of radiocarbon dating in conformance with other scholars I showed you, and in conformance with other radiocarbon dating I have had done in my professional work. You exhibit no appreciable expertise in statistics and no apparent desire to learn further. Perhaps that position of willful ignorance is not a good position from which to announce that the pioneers of a given field are doing it wrong.

Damon et al. report their findings at several stages according to both ~1σ and ~2σ confidence intervals, because both are proper in their field and both contribute to an understanding of the strength of the data. The only place they apply only the 2σ confidence interval is in the Ward and Wilson test, and then only because casting a broad net for as much data as practical is the only way to identify what might be possible outliers. You do this even when the bulk of the data might be quite closely clustered.

What neither you nor Casabianca understands is how to handle probable outliers in radiocarbon dating. You can either throw them out and perform the analysis on the better-clustered subset of data, or you can use a different statistical model that accommodates the effect of outliers in small datasets. Damon et al. did the latter, switching out a straightforward weighted combination for an unweighted combination method commonly used in chemistry for similar problems. It's not as if they just made something up. Other researchers who came afterward followed the discard approach according to Bayesian methods that don't rely on the fitness of a distribution and confirm that the data are otherwise suitably clustered. The resulting radiocarbon date did not meaningfully change as a result, showing that Casabianca's inexpert thumping of the χ²-value can't be any sort of the smoking gun he makes it out to be.
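The difference between the two combinations is easy to see. An unweighted mean with a scatter-based standard error lets the observed spread, outliers included, set the uncertainty, rather than trusting the quoted errors. A sketch using the three lab means reported in Damon et al. (646, 750, 676 years BP):

```python
import math

def unweighted_combine(dates):
    """Unweighted mean with a scatter-based standard error: the
    conservative combination used when probable outliers suggest the
    quoted errors may not capture all of the variability."""
    n = len(dates)
    mean = sum(dates) / n
    var = sum((x - mean) ** 2 for x in dates) / (n - 1)  # sample variance
    sem = math.sqrt(var / n)  # standard error of the mean from scatter
    return mean, sem

m, e = unweighted_combine([646, 750, 676])
print(round(m), round(e))  # 691 31
```

The result, roughly 691 ± 31 BP, barely moves the central date from the weighted combination but nearly doubles the quoted uncertainty: exactly the "take your lumps" behavior described above.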

When you tell people I haven't discussed the chi-squared test "adequately," that's simply your answer. You want that part of the argument to be some sort of slam-dunk that can be overcome only by a simplistic rebuttal you can understand, or by addressing Casabianca's piles of irrelevant, distracting computation and irresponsible conspiracy-mongering. The answer is simply that the χ²-value doesn't play the role in the analysis that you and your sources desperately want it to play. You don't understand what that measurement means, and that's why you're violating the most important rule in statistics: know your data.

Discarding outliers is a proven, time-honored, and acceptable method of addressing questions of homogeneity in scientific datasets. Casabianca et al.'s inexperience with that sort of thing does not make them somehow the whistleblowers they fancy themselves to be. Subsequent analysis had to throw out only 2 of 12 data pools, one of them tentative, even while the Ward & Wilson method found only 1. What matters in this kind of analysis is the number of outliers you have compared to the total, not the mere fact that an outlier exists. If you had to throw out 10 of 12 data pools in order to get the result to pass a homogeneity test to a desired degree of confidence, then your data would be truly heterogeneous and you would have a problem. Not so here.

But Damon et al. didn't throw out any data pools. In fact, because they knew their data, knew that they could only reason about probable outliers, and knew what the expectations for homogeneity were, they made the decision to perform the analysis assuming the data were valid and to take their lumps in the form of a broader distribution of dates in the final result. It beggars belief how this is somehow evidence of dishonesty on their part. The Ward & Wilson test told them it was likely that one or more of their data pools were outliers, and they accommodated that finding in the most conservative of the acceptable ways, according to methods already well-established in the field for small data sets of physical measurements.

And since you seem to chronically forget this, Ward and Wilson themselves had no problem with that. Their own previous paper reported data that they considered to be suitably clustered even though the T-value wasn't "within specifications."

Or did you not know that was in their paper?
Their paper is ignorant of how data are handled in archaeology, specifically in radiocarbon dating. I presented a number of papers from both before and after the shroud of Turin dating, written by actual qualified experts in the field, that present a more complete picture. You did not understand any of them, and instead tried only to use them day by day to formulate increasingly absurd and simplistic attempts at yet another slam-dunk argument.

No, you don't get to keep thumping your fist on Casabianca and insist that despite your unwillingness to understand the topic, he must still somehow be magically right. He is not an authority, and you are not competent to determine whether he is or should be considered one. Keep in mind that when you first invoked him, you couldn't even spell his name right. Try going out and studying with a real intent to learn, and not just to get a smattering sufficient to shore up your predetermined beliefs from day to day. Or if that's not something you want to do, have the humility to accept your place. Nobody is forcing you to believe that the shroud is a medieval forgery. But if you're going to go on the vocal offensive and attack sound science, you'll need more than you have.
 
First, every measurement is inaccurate to some extent because the accurate value is unknown; that's freshman science in any discipline. It is impossible to measure accuracy; you should know that. If you can't keep accuracy and precision separated, how can you discuss scientific measurements?
Sigh.
Second, your "irrelevant" is not irrelevant: if a cloth has been patched with modern threads, that will throw off the carbon-14 date.
Except it wasn't repaired, as we know, in the radiocarbon sample area. This is delusional nonsense.
Thirdly, you never gave an adequate rebuttal to Casabianca et al., because you never discussed the Chi^2 issue adequately. Nor their exposure of data hiding and the refusal to release the raw data for almost 30 years.
Untrue.
Fourthly, you can check back and see that I linked to a generic wiki post on Chi^2 tests and not to the specific Pearson Chi^2 test.

Data pooling as you call it, but that is not what Damon did, they had five data points and averaged two sets of two to combine the five data points into 3 to get better results. You OK with that? It would be a simple statistical homework problem to show that what they did is wrong.
You really haven't a clue, have you?
 
Some repairs are more easily seen than others, but I can see the repairs on the pictures of the samples later destroyed by the testing.
:rolleyes: :jaw-dropp:covereyes
Are you actually trying to tell us that you can see signs of repairs on the poor-quality photographs of the sampled area? Despite the actual experts, who saw the cloth first-hand and examined it under magnification, having failed to do so?
This is self-deluding nonsense.

That's what sequestered means: no input of carbon into the material tested. That means, according to you, the radiocarbon testing is not valid, because the shroud has been repaired so many times that the knowledge of all the repairs has been lost.
This is delusional strawmannery at its worst. You made the claim about a magically invisible repair, and yet you have failed to produce evidence for it.
And I don't care if it is real or not, beyond a scientific curiosity.
Of course you do. You're obsessed with the Lirey cloth. Your behaviour in this thread demonstrates your desperate need for it to be real.
 
As @bobdroege7 is back:
1. What exactly in the "Hymn of the Pearl" shows the existence of a shroud?
2. Have you asked the University of California about your claimed secret radiocarbon test?
3. Will you be addressing the size of the sample of the supposed shroud available for that secret radiocarbon test?
4. Will you be showing us evidence that cloth of a pattern similar to that of the Lirey cloth existed in the first century?
5. And what about the undocumented fire that caused the damage to the cloth that you claim appears in the Pray Codex?
 
You are wrong; it's not a rule, it's a guideline, and one that Damon et al. did not follow.

Or did you not know that was in their paper?
Sorry, I didn't realise the paper was titled "Bob Droedge doesn't understand the basic rules of statistics and why they exist, thus severely impairing his credibility vis-à-vis the Shroud of Lirey".
 
:rolleyes: :jaw-dropp:covereyes
Are you actually trying to tell us that you can see signs of repairs on the poor-quality photographs of the sampled area? Despite the actual experts, who saw the cloth first-hand and examined it under magnification, having failed to do so?

@bobdroege7's uncanny powers of perception come as no surprise. Things concealed to ordinary mortals lie plain before him, such as David Ford's professorship, the secret radiocarbon test at Berkeley, and the image of the Shroud in the Pray Codex illustrations.
 
Sorry, I didn't realise the paper was titled "Bob Droedge doesn't understand the basic rules of statistics and why they exist, thus severely impairing his credibility vis-à-vis the Shroud of Lirey".
I was referring to the Damon paper, which did not use the statistical rule; they used different multipliers for the 1-standard-deviation and the 2-standard-deviation intervals.

Try at least to spell my name right, Gulliver.
 
You expressed your doubt in terms of "accuracy," not precision. I merely repudiated that doubt in like language without further comment. It seems if anyone needs a lecture on the difference between accuracy and precision, it is you. The Arizona lab certainly had a problem with precision. But there is no problem with accuracy here, as cross-laboratory results are consilient with other physical evidence. You have simply declared that results are "inaccurate" by virtue of some statistical test you barely understand, and a handwaving claim that there must have been some invisible patch from which the radiocarbon dating samples were drawn.


But not in any of the ways you claimed.

You first claimed that the supposed lack of homogeneity in the dates for Sample 1 (the shroud) was best explained by a mix of cloth in the samples. I invited you to do some more in-depth statistical analysis to confirm that hypothesis. You were not competent to do so, so I did it for you. I showed instead that the scatter in the data was better explained by a hypothesis evidently related to the Arizona lab's methods, which produced more scatter across all samples than the other laboratories. This would not be true if the problem were samples contaminated with modern cloth.

You then claimed that the bimodal distribution of the calendar dates was the result of two different kinds of cloth being present in the sample. That is so scientifically and statistically illiterate that I'm frankly embarrassed for you for your having even suggested such a thing. Even when that gross misconception was corrected, you still tried to make considerable conspiratorial hay out of quite a pedestrian statistical reality. You really aren't in a position to lecture the world on how heterogeneous specimens will bear out in calendar dates.


Again I disagree. Your inability to understand the rebuttal is not the same as my not having provided it. You simply have no desire to learn what needs to be known in order to understand why the rebuttal works, and you are not entitled to demand some other rebuttal that is both correct and fits withing your limited understanding. Consequently you pivot and fringe-reset the same old simplistic talking points and offer only "aw, shucks!" homespun attempts to present your willful ignorance as if it were a strength. Again, the world's scientists accept the Damon et al. dating as good science. They do not accept Casabianca's frantic mud-stirring as anything consequential. These are the facts. I'm trying to explain to you why those are the facts, and it is not for the conspiratorial reasons the authenticists complain about.

The Ward & Wilson test is one of several possible methods to identify probable outliers in pooled data. It is by no means the only method or even the best method. Damon et al. applied a method common in the late 1980s for accommodating outliers. Subsequent work using the same pooled data but more refined statistics confirms that the radiocarbon dating of the shroud was affected by surprisingly few outliers. Outliers are quite common in radiocarbon dating, according to Taylor. That Casabianca doesn't know what to do with probable outliers does not substantiate your claim that Damon et al. cheated.

Casabianca's accusation of "data hiding" is explained entirely by his misunderstanding of how the data must be pooled and what is appropriate reporting in the field. Instead of attempting to understand archaeology and radiocarbon dating, he chose to spin a bunch of conspiracy theories supported by no evidence. His claim that the statistic in the Ward & Wilson test constitutes a failure of the method to produce acceptably homogeneous results is not supported by Ward and Wilson themselves, so I can't imagine why you think my pointing this (and other relevant points) out to you is somehow insufficient.


Yes, there exists a separate Wikipedia article on the Pearson test to which you did not link. However, you insinuated that reading the article you did link would give us more information on why "the chi-squared test" is supposedly a slam-dunk failure for Damon et al. That article provides no such insight because it does not discuss the kind of test that Ward and Wilson devised. You didn't know that when you linked it.

The Ward & Wilson test is a test for homogeneity that uses a χ²-distribution, but works for continuously distributed data, not the categorized data your article describes. The data pooling you fail to understand is how we go from discretely sampled distributions to the continuous distributions that we would need for Ward and Wilson. The bulk of your article discusses some of the tests that use the χ²-distribution to reason about categorical data. The Pearson goodness-of-fit test is the most common of those tests, and thus receives the bulk of the attention in the article despite its also having its own article. The other tests the article discusses are also tests of categorical data. Don't be fooled by some of it having been sampled from data we understand to be normally distributed; those data must be put into contingency tables (i.e., categorized) before any of the other tests mentioned in that article will apply. That has nothing to do with Ward and Wilson, and nothing to do with the mathematics upon which they base their model, which requires no categorization.

Your article mentions homogeneity in one paragraph, in a very handwaving fashion with no examples and no reference to any actual homogeneity tests that use the χ²-distribution. Ward and Wilson's work doesn't even make the laundry list of other tests—again, categorical tests—that get no more mention in your article than a link.

The fact remains that when you sent us to that article, you had no clue what you were talking about. Our subsequent examination of your comprehension of statistics bore that out in sufficient detail. All you could muster by way of understanding was that the T-value didn't fit the "specifications" according to "the chi^2 test" (as if there were only one) and therefore the data had to be rejected. You had no idea where any of those values came from or what they meant, or even how the test works. The existence of separate Wikipedia articles for the various tests that use the χ²-distribution does not mean you somehow still understood what you thought you were directing us to. You Googled for "chi-squared test" and linked the first thing that popped up.


Yes. And so is the entire rest of the archeology community. The process you describe is data pooling. It's necessary in this case to resolve what would otherwise be incompatible factors in the degrees of freedom in the individual runs. Thus normalized, the data can be properly tested for homogeneity (as Ward and Wilson show), or further categorized to be suitable for the other—albeit irrelevant—tests that use the χ²-distribution for goodness-of-fit into categories.


In whose opinion? Casabiana et al. tried to make such an argument and got laughed at by the people with actual qualifications in the field. You tried to invoke other pro-authenticity authors (similarly unqualified) whose self-published offerings are even more pathetically misinformed, such as someone actually trying to apply a derivative of a binomial distribution! You are certainly not in a position to judge what is correct statistical modeling for a given problem.

Keep in mind you're claiming that one of the most noteworthy applications of one of the most talked-about measurement methods in all of science is "obviously" wrong, in a paper authored by more than twenty eminent people in the field. If that's the case then why aren't eminent scholars pointing this out to avoid embarrassment for their field? Why instead are people like Taylor (who literally wrote the literal college textbook on radiocarbon dating) citing to Damon et al. favorably? Why is the supposedly best evidence of this purported scientific fraud authored by people with no qualifications in the relevant field?


The first rule of statistics is to know your data and how they are supposed to behave. That informs your use of statistical norms and models and the interpretation of the results of statistical analysis. When you suggest that no one else would have any reason to use anything other than a 95% CI, real science just sort of does a facepalm in your general direction. Damon et al. reported their data according to the norms of archaeology, and specifically according to the norms of radiocarbon dating in conformance with other scholars I showed you, and in conformance with other radiocarbon dating I have had done in my professional work. You exhibit no appreciable expertise in statistics and no apparent desire to learn further. Perhaps that position of willful ignorance is not a good position from which to announce that the pioneers of a given field are doing it wrong.

Damon et al. report their findings at several stages according to both ~1σ and ~2σ confidence intervals, because both are proper in their field and both contribute to an understanding of the strength of the data. The only figure to which they apply only the 2σ confidence interval was in the application of the Ward and Wilson test, and then only because casting a broad net for as much data as practical is the only way to identify what might be possible outliers. You do this even when the bulk of the data might be quite closely clustered.

What neither you nor Casabianca understands is how to handle probable outliers in radiocarbon dating. You can either throw them out and perform the analysis on the better-clustered subset of data, or you can use a different statistical model that accommodates the effect of outliers in small datasets. Damon et al. did the latter, switching out a straightforward weighted combination for an unweighted combination method commonly used in chemistry for similar problems. It's not as if he just made something up. Other researchers who came afterward followed the discard approach according to Bayesian methods that don't rely on the fitness of a distribution and confirm that the data are otherwise suitably clustered. The resulting radiocarbon date did not meaningfully change as a result, showing that Casabianca's inexpert thumping of the χ²-value can't be any sort of the smoking gun he makes it out to be.

When you tell people I haven't discussed the chi-squared test "adequately," that's simply your stock answer. You want that part of the argument to be some sort of slam-dunk that requires either a simplistic slam-dunk rebuttal you can understand, or a slog through Casabianca's piles of irrelevant, distracting computation and irresponsible conspiracy-mongering. The answer is simply that the χ²-value doesn't play the role in the analysis that you and your sources desperately want it to. You don't understand what that measurement means, and that's why you're violating the most important rule in statistics: know your data.

Discarding outliers is a proven, time-honored, and acceptable method of addressing questions of homogeneity in scientific datasets. Casabianca et al.'s inexperience with that sort of thing does not make them the whistleblowers they fancy themselves to be. Subsequent analysis had to throw out only 2 of 12 data pools (one of them tentative), even while the Ward & Wilson method found only 1. What matters in this kind of analysis is the number of outliers you have compared to the total, not the mere fact that an outlier exists. If you had to throw out 10 of 12 data pools in order to get the result to pass a homogeneity test to a desired degree of confidence, then your data would be truly heterogeneous and you would have a problem. Not so here.

But Damon et al. didn't throw out any data pools. In fact, because they knew their data, knew they could reason only about probable outliers, and knew what the expectations for homogeneity were, they decided to perform the analysis assuming the data were valid and to take their lumps in the form of a broader distribution of dates in the final result. It beggars belief that this could somehow be evidence of dishonesty on their part. The Ward & Wilson test told them it was likely that one or more of their data pools were outliers, and they accommodated that finding in the most conservative of the acceptable ways, according to methods already well-established in the field for small datasets of physical measurements.
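That conservative accommodation is easy to see numerically. A minimal sketch (again my own illustration) of the unweighted combination, with the uncertainty taken from the scatter of the three dates themselves rather than from the quoted laboratory errors:

```python
import math

# The same three laboratory mean dates (years BP) from Damon et al. (1989).
dates = [646.0, 750.0, 676.0]
n = len(dates)

# Unweighted mean: no single lab's quoted precision dominates the result.
mean = sum(dates) / n

# Error estimated from the observed scatter of the dates themselves,
# not from the quoted lab errors -- this is what broadens the interval
# when a possible outlier is retained rather than discarded.
scatter = math.sqrt(sum((x - mean)**2 for x in dates) / (n - 1))
std_err = scatter / math.sqrt(n)

print(f"unweighted mean = {mean:.0f} +/- {std_err:.0f} BP")
```

The date barely moves relative to the weighted result (689 ± 16 BP becomes roughly 691 ± 31 BP), but the interval roughly doubles: the penalty for keeping a possible outlier is a broader distribution, not a different answer.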

And since you seem to chronically forget this, Ward and Wilson themselves had no problem with that. Their own previous paper reported data that they considered to be suitably clustered even though the T-value wasn't "within specifications."


Their paper is ignorant of how data are handled in archaeology, specifically in radiocarbon dating. I presented a number of papers, both before and after the shroud of Turin dating, written by actual qualified experts in the field, that present a more complete picture. You did not understand any of them, and instead tried only to use them day by day to formulate increasingly absurd and simplistic attempts at yet another slam-dunk argument.

No, you don't get to keep thumping your fist on Casabianca and insist that despite your unwillingness to understand the topic, he must still somehow be magically right. He is not an authority, and you are not competent to determine whether he is or should be considered one. Keep in mind that when you first invoked him, you couldn't even spell his name right. Try going out and studying with a real intent to learn, and not just to get a smattering sufficient to shore up your predetermined beliefs from day to day. Or if that's not something you want to do, have the humility to accept your place. Nobody is forcing you to believe that the shroud is a medieval forgery. But if you're going to go on the vocal offensive and attack sound science, you'll need more than you have.

There was a mix of threads noted in all three samples; they were not all pure linen.
 
I was referring to the Damon paper, which did not use the statistical rule; they used different multipliers for 1 standard deviation and 2 standard deviations.

Try at least to spell my name right, Gulliver.
So was I, Bob. You should really learn to quit when you've had your rear end handed to you.

Though, on second thoughts, don't; your ill-informed witterings are a great source of comedy.
 
@bobdroege7's uncanny powers of perception come as no surprise. Things concealed from ordinary mortals lie plain before him, such as David Ford's professorship, the secret radiocarbon test at Berkeley, and the image of the Shroud in the Pray Codex illustrations.

Sorry, but the University of California was not the one at Berkeley; try Irvine.
 
Catsmate posted

"This is delusional strawmannery at its worst. You made the claim about a magically invisible repair, and yet you have failed to produce evidence for it."

Sorry, but I never made any claim about a magically invisible repair.

I do claim that no textile experts who could have provided evidence of a repair made any input into the selection of the area sampled.

Those who directed the location of the sample ignored any advice about where to sample, especially the advice from Harry Gove.
 
JayUtah posted

"The Ward & Wilson test is a test for homogeneity that uses a χ²-distribution, but works for continuously distributed data, not the categorized data your article describes. The data pooling you fail to understand is how we go from discretely sampled distributions to the continuous distributions that we would need for Ward and Wilson."

Quick questions

Why did Damon et al use the Ward and Wilson test?

I don't understand how taking 5 measurements and reducing them to 3 takes discretely sampled distributions to a continuous distribution.

At least you admit that the Ward and Wilson test is a test for homogeneity, which is necessary for radiocarbon dating.
 
I was referring to the Damon paper, which did not use the statistical rule; they used different multipliers for 1 standard deviation and 2 standard deviations.

Try at least to spell my name right, Gulliver.
More evasion I see. :rolleyes:
There was a mix of threads noted in all three samples; they were not all pure linen.
Citation?
Sorry, but the University of California was not the one at Berkeley; try Irvine.
We're still waiting for you to produce evidence for your secret radiocarbon test....
 
Do try and learn to use the quote function.

Catsmate posted

"This is delusional strawmannery at its worst. You made the claim about a magically invisible repair, and yet you have failed to produce evidence for it."

Sorry, but I never made any claim about a magically invisible repair.
As we have covered this before I don't plan to waste my time on repeating myself.

I do claim that no textile experts who could have provided evidence of a repair, made any input into the selection of the area sampled.
Untrue. The Lirey cloth was examined by several experts none of whom expressed any concerns regarding hidden patches.
Further, the cloth has been examined subsequently, also without evidence of your amazingly hidden repairs.

Give it up, you're wrong.
Those who directed the location of the sample ignored any advice about where to sample, especially the advice from Harry Gove.
:rolleyes:
 
Now @bobdroege7:
1. What exactly in the "Hymn of the Pearl" shows the existence of a shroud?
2. Have you asked the University of California about your claimed secret radiocarbon test?
3. Will you be addressing the size of the sample of the supposed shroud available for that secret radiocarbon test?
4. Will you be showing us evidence that cloth of a pattern similar to that of the Lirey cloth existed in the first century?
5. And what about the undocumented fire that caused the damage to the cloth that you claim appears in the Pray Codex?
 
Why did Damon et al use the Ward and Wilson test?
Because it was the prevailing model in 1988. I mentioned the other models to show two things: first, that more refined statistical methods confirm Damon et al. and refute Casabianca, and second to instill the notion that while the Ward & Wilson test is useful, it is not some magical One True Method for assessing homogeneity. You have expended considerable handwaving trying to argue that the χ²-value computed in Damon et al. is a singular slam-dunk that dooms any confidence in the result. The first rule of statistics remains to know your data. That means assessing what kind of statistical model is most likely to reveal what you want to know about it, based on how you know it will behave, and what the meaning should be of any computed statistic.

I don't understand how taking 5 measurements and reducing them to 3 takes discretely sampled distributions to a continuous distribution.
No, of course you don't.

At least you admit that the Ward and Wilson test is a test for homogeneity...
I never suggested otherwise.

...which is necessary for radiocarbon dating.
No. Again, you're trying to simplify the question down to some sort of binary slam-dunk. As I explained at length a couple of days ago, there are a number of ways to estimate the probability that one's radiocarbon date dataset contains outliers. They work in different ways and consequently do not produce exactly the same answer. This is common in statistics. Once an estimate suggests the presence of outliers, there are a number of ways to accommodate them statistically. The answer is not to reject the whole dating in slam-dunk fashion. It is certainly not to suggest that the true date can be 1,300 years off. The tests Damon et al. applied and the accommodations they provided are statistically valid, were commonly used in related fields in 1988, and have only been strengthened and confirmed by subsequent work using what we now consider to be better methods.
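As one simple illustration of an outlier screen (my own sketch; the later papers use proper Bayesian machinery, not this exact method), each date can be compared against the weighted mean of the remaining dates, in units of the combined uncertainty:

```python
import math

def weighted_mean(dates, errors):
    """Inverse-variance weighted mean and its 1-sigma error."""
    weights = [1.0 / s**2 for s in errors]
    mean = sum(w * x for w, x in zip(weights, dates)) / sum(weights)
    return mean, math.sqrt(1.0 / sum(weights))

def leave_one_out_z(dates, errors):
    """For each date, its deviation (in sigma) from the weighted
    mean of the *other* dates -- a crude outlier screen."""
    zs = []
    for i, (x, s) in enumerate(zip(dates, errors)):
        rest_d = dates[:i] + dates[i + 1:]
        rest_e = errors[:i] + errors[i + 1:]
        m, me = weighted_mean(rest_d, rest_e)
        zs.append((x - m) / math.sqrt(s**2 + me**2))
    return zs

# The three laboratory mean dates (years BP) from Damon et al. (1989).
zs = leave_one_out_z([646.0, 750.0, 676.0], [31.0, 30.0, 24.0])
for lab, z in zip(["Arizona", "Oxford", "Zurich"], zs):
    print(f"{lab}: {z:+.1f} sigma")
```

On the Damon et al. numbers the Oxford date stands out at roughly +2.4σ while the other two sit comfortably within ±2σ: the same qualitative picture the formal methods give, one probable outlier rather than wholesale heterogeneity.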
 