The "Process" of John Edward

BillHoyt said:

We can define the test any way we want, given that the null hypothesis, the data set, the distribution and the level of significance all work together. I choose "J" because it is the highest frequency initial in the population.

Cheers,

I understand that you can define the test any way you want. Of course. :)

If you are only interested in the letter J, then OK, that works. If you are interested in testing more letters, say k number of letters, doing your test k times isn't wise statistically, so you'd need some other statistical tool.
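For what it's worth, one common family of such tools simply tightens the per-test cutoff. A back-of-the-envelope sketch in Python, taking k = 26 letters purely as an example:

k, alpha = 26, 0.05                  # one test per initial, 5% overall level
print(alpha / k)                     # Bonferroni-adjusted per-test cutoff, about 0.0019
print(1 - (1 - alpha) ** (1 / k))    # Sidak-adjusted cutoff, about 0.0020

Either adjustment keeps the overall chance of a fluke "hit" across the 26 tests near 5%.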
 
T'ai Chi said:


I understand that you can define the test any way you want. Of course. :)

If you are only interested in the letter J, then OK, that works. If you are interested in testing more letters, say k number of letters, doing your test k times isn't wise statistically, so you'd need some other statistical tool.

Quite right, T'ai. Especially if we're using a significance level cut-off of .05. I think I already addressed this in a post way back when.

Cheers,
 
Lurker said:
Again you try to obfuscate with the same poor analogy. My units are PEOPLE, not inches and feet. Not black people and white people.

Interesting that you bring up "black" and "white" people. Are aborigines not people, Lurker?

Cheers,
 
BillHoyt said:


Interesting that you bring up "black" and "white" people. Are aborigines not people, Lurker?

Cheers,

Ah, ye olde divert with strawman tactic. Sorry, Bill. I've seen it done much better by others.

As expected, you did not answer a single one of my questions. Where did you learn that non-skill from?

Sorry to say, it is people like you who give us skeptics a bad name.

Lurker
 
Lurker said:


Ah, ye olde divert with strawman tactic. Sorry, Bill. I've seen it done much better by others.

As expected, you did not answer a single one of my questions. Where did you learn that non-skill from?

Sorry to say, it is people like you who give us skeptics a bad name.

Lurker

Well, I know next to nothing about statistics, but I read this exchange anyhow, and I have to agree with Lurker here. Bill's responses are all very "lawyer-like", and I found them frustrating to read, and I wasn't even involved in the conversation.

Just my unsolicited two cents worth of observation. ;) .......neo
 
BillHoyt said:


Fascinating display, Thanz.
That's it? "Fascinating display"? Not even an attempt at a substantive comment?

I see that, as you have with Lurker, you avoid any questions that you don't want to answer. I'll try asking them again:

Do you agree or disagree that there is just as much statistical support for the A-hole in Lurker's office as there is for John Edward cold reading, as presented in this thread? Why or why not?

Are you even able to answer a direct question? I notice that your buddy CFLarsen has pretty much dropped out here. Maybe he can do a LarsenList (tm) of the questions you are avoiding. Of course he won't, because you are not a "believer", but one can dream....
 
Lurker said:


Ah, ye olde divert with strawman tactic. Sorry, Bill. I've seen it done much better by others.

As expected, you did not answer a single one of my questions. Where did you learn that non-skill from?

Sorry to say, it is people like you who give us skeptics a bad name.

Lurker

Strawman, Lurker? Diversion? Did you not read the Aussie web page? Are aborigines people, Lurker?
 
Thanz said:

That's it? "Fascinating display"? Not even an attempt at a substantive comment?

I see that, as you have with Lurker, you avoid any questions that you don't want to answer. I'll try asking them again:

Do you agree or disagree that there is just as much statistical support for the A-hole in Lurker's office as there is for John Edward cold reading, as presented in this thread? Why or why not?

Are you even able to answer a direct question? I notice that your buddy CFLarsen has pretty much dropped out here. Maybe he can do a LarsenList (tm) of the questions you are avoiding. Of course he won't, because you are not a "believer", but one can dream....
Thanz,

No. Sigh. I just can't answer questions anymore! Post after post after post of non-answers. Yep, that's me.

This question is so outlandish that your insistence on an answer is a "fascinating display" of your ignorance of science and statistics. There is no evidentiary basis for the existence of your concocted and insulting construct. None. That means that Occam's razor must be invoked here, and your hypothesis is running uphill.

The cold reading hypothesis presents nothing outlandish. It presents an hypothesis that does not require the multiplication of entities. It has, therefore, more statistical support.

An experiment never stands on its own.
 
BillHoyt said:

I just can't answer questions anymore! Post after post after post of non-answers. Yep, that's me.
Truer words have never been spoken.

This question is so outlandish that your insistence on an answer is a "fascinating display" of your ignorance of science and statistics. There is no evidentiary basis for the existence of your concocted and insulting construct. None. That means that Occam's razor must be invoked here, and your hypothesis is running uphill.
Actually, I thought it was a rather amusing construct, not insulting. But that is beside the point.

I was not asking about whether my theory has any support or validity outside of the statistics. Based on the statistics alone, the A-hole is just as supported as JE cold reading. There was a hypothesis, and the numbers supported it. We cannot rule out the A-hole based solely on the statistics.

The cold reading hypothesis presents nothing outlandish. It presents an hypothesis that does not require the multiplication of entities. It has, therefore, more statistical support.
No, it does not have more statistical support. It has more logical support, but not more statistical support. Statistics, like logic, are a tool to be used in testing a hypothesis. And in these two cases, the stats themselves lend at least equal support to both hypotheses.

You have admitted in this thread that if you were to look at more than one letter, the stats tool you have used is not appropriate. You have also chided me for not looking at other letters in Lurker's office. Yet you have only looked at one letter and seem to have concluded that you have done some sort of meaningful analysis. You haven't.

You know that 78 (or 85) is too small a sample to do the sort of meaningful analysis that this problem requires, but are stubbornly sticking to your pathetic J analysis as if it proves something. Well, it doesn't prove anything more than my A-hole analysis. Which is to say, it is pretty worthless in and of itself.

You have also spectacularly failed to show why Lurker's comparison of percentages is in any way faulty, especially considering that you have accepted the same analysis done by others. You keep saying apples and oranges, but what Lurker has really done is compare one bushel of apples to the entire crop of apples (of which the bushel is a part) to see if the bushel is representative of the crop (which it wasn't). Do you understand that, or are you going to avoid the issue some more?
 
BillHoyt said:


Strawman, Lurker? Diversion? Did you not read the Aussie web page? Are aborigines people, Lurker?

Bill, when are you going to stop beating your wife?

FYI, I was not totally clear, but my black/white was an example. A truer example than inches and feet.

And you know I did read the website. When are you going to actually respond to my post where I go through it as it relates to my problem? When are you going to show me the differences between the studies I provided that used the same methodology as mine?

Oh, that's right. I am posting to Bill Hoyt, and he is the consummate avoider of answering questions.

Just my opinion.
Lurker
 
Thanz said:

Truer words have never been spoken.


Actually, I thought it was a rather amusing construct, not insulting. But that is beside the point.

I was not asking about whether my theory has any support or validity outside of the statistics. Based on the statistics alone, the A-hole is just as supported as JE cold reading. There was a hypothesis, and the numbers supported it. We cannot rule out the A-hole based solely on the statistics.


No, it does not have more statistical support. It has more logical support, but not more statistical support. Statistics, like logic, are a tool to be used in testing a hypothesis. And in these two cases, the stats themselves lend at least equal support to both hypotheses.

You have admitted in this thread that if you were to look at more than one letter, the stats tool you have used is not appropriate. You have also chided me for not looking at other letters in Lurker's office. Yet you have only looked at one letter and seem to have concluded that you have done some sort of meaningful analysis. You haven't.

You know that 78 (or 85) is too small a sample to do the sort of meaningful analysis that this problem requires, but are stubbornly sticking to your pathetic J analysis as if it proves something. Well, it doesn't prove anything more than my A-hole analysis. Which is to say, it is pretty worthless in and of itself.

You have also spectacularly failed to show why Lurker's comparison of percentages is in any way faulty, especially considering that you have accepted the same analysis done by others. You keep saying apples and oranges, but what Lurker has really done is compare one bushel of apples to the entire crop of apples (of which the bushel is a part) to see if the bushel is representative of the crop (which it wasn't). Do you understand that, or are you going to avoid the issue some more?


I am answering the questions. You are simply not understanding. The J analysis is a valid analysis. You stubbornly insist on your sweeping assertion that 78 is inadequate. I answered early on that that statement is wrong. I then elaborated to inform you that the hypothesis, distribution model, significance level and the data themselves all determine whether or not 78 is adequate.

I then demonstrated this by constructing a valid hypothesis and null hypothesis, and then choosing the most appropriate bin for my hypothesis, etc. "J" was not arbitrary. "J" is the most frequently seen initial. And it is spectacularly frequent for JE.
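For illustration only, a single-bin test along these lines can be run as an exact binomial test in Python. The counts and the assumed 10% population rate below are placeholders, not the thread's actual data, and not necessarily the exact calculation I used:

from scipy.stats import binomtest

n_guesses = 78      # placeholder: total initials offered in the sample
j_hits = 14         # placeholder: how many of them were "J"
p_j = 0.10          # assumed population frequency of "J" as a first initial

# H0: "J" appears at the population rate; H1: it appears more often than that.
print(binomtest(j_hits, n_guesses, p_j, alternative="greater").pvalue)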

You then analyzed "A"s and showed them to be significant. I objected, whereupon you constructed your laughable hypothesis about "A"s to demonstrate a posteriori significance. I then responded by choosing "B"s and showing them to be non-significant.

You don't get it. You don't just dive into data looking for your predetermined answer, yet this is what you did. There are no good grounds for it.
 
BillHoyt said:
I am answering the questions. You are simply not understanding. The J analysis is a valid analysis. You stubbornly insist on your sweeping assertion that 78 is inadequate. I answered early on that that statement is wrong. I then elaborated to inform you that the hypothesis, distribution model, significance level and the data themselves all determine whether or not 78 is adequate.

I then demonstrated this by constructing a valid hypothesis and null hypothesis, and then choosing the most appropriate bin for my hypothesis, etc. "J" was not arbitrary. "J" is the most frequently seen initial. And it is spectacularly frequent for JE.
Well, I guess here is where we disagree. I do not think that an analysis of the letter "J" in isolation tells us anything meaningful as to whether JE is cold reading. There are other factors which may help to explain why the distribution of "J" in this sample does not appear to be random. It also tells us nothing about the distribution of other common letters or any rare letters.

I looked at some other tests, in particular the one suggested by T'ai Chi. Here is a short description of that test:

Chi-square goodness-of-fit test. The goodness-of-fit test is simply a different use of Pearsonian chi-square. It is used to test if an observed distribution conforms to any other distribution, such as one based on theory (ex., if the observed distribution is not significantly different from a normal distribution) or one based on some other known distribution (ex., if the observed distribution is not significantly different from a known national distribution based on Census data).
This sounds like it does exactly what I would describe as a meaningful test of this data - it can compare the entire distribution of letters in JE's guesses to the entire population. Do you agree that such an analysis would be much more meaningful, in terms of demonstrating whether JE consistently uses more frequent letters at the expense of less frequent letters (as we would expect a cold reader to do)?

Here is a description of the adequate sample for such a test:
Random sample data are assumed. As with all significance tests, if you have population data, then any table differences are real and therefore significant. If you have non-random sample data, significance cannot be established, though significance tests are nonetheless sometimes utilized as crude "rules of thumb" anyway.

A sufficiently large sample size is assumed, as in all significance tests. Applying chi-square to small samples exposes the researcher to an unacceptable rate of Type II errors. There is no accepted cutoff. Some set the minimum sample size at 50, while others would allow as few as 20. Note chi-square must be calculated on actual count data, not substituting percentages, which would have the effect of pretending the sample size is 100.

Adequate cell sizes are also assumed. Some require 5 or more, some require more than 5, and others require 10 or more. A common rule is 5 or more in all cells of a 2-by-2 table, and 5 or more in 80% of cells in larger tables, but no cells with zero count. When this assumption is not met, Yates' correction is applied.
According to this, each cell should have at least five. Or 80% of the cells should have at least five. Do you agree that 85 is inadequate for this analysis?
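Here is a rough sketch of that cell-count check in Python, assuming for simplicity a uniform expected distribution over the 26 initials; real population letter frequencies would make the rare-letter cells even smaller, and the observed counts are just simulated stand-ins:

import numpy as np
from scipy.stats import chisquare

n = 85
expected = np.full(26, n / 26)                        # about 3.3 expected per cell, under the 5-per-cell rule of thumb
observed = np.random.multinomial(n, [1 / 26] * 26)    # simulated stand-in for the real letter counts

# With expected counts this small, the chi-square approximation itself is shaky.
stat, p = chisquare(observed, f_exp=expected)
print(expected[0], stat, p)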

You then analyzed "A"s and showed them to be significant. I objected, whereupon you constructed your laughable hypothesis about "A"s to demonstrate a posteriori significance. I then responded by choosing "B"s and showing them to be non-significant.
Hmm.... You looked at other letters to see if they were significant. Why did you not look at any other letters for YOUR hypothesis? J may be the most frequent letter, but that doesn't mean it is the only one of importance.
 
Lurker said:
FYI, I was not totally clear, but my black/white was an example. A truer example than inches and feet.
And aborigines are...?
And you know I did read the website. When are you going to actually respond to my post where I go through it as it relates to my problem? When are you going to show me the differences between the studies I provided that used the same methodology as mine?
And aborigines are...? And yet the web page says "Users should present their statistical estimates as percentages where both numerator and denominator are data from the same census." Now why do you suppose they say this? After all, the denominator in any census is "people", right? And aborigines are "people", right?

Yet somehow there is this mysterious caveat. Somehow, despite the census denominator being the same, they advise users of the data to be sure both the numerator and denominator come from the same census. Maybe it is a side effect of Aussies living upside down and all that?

Oops. Nope. It has nothing to do with living upside down. Here is the NIH giving a similar caveat for dental data!

"The user is cautioned about comparing percentages or means and concluding that differences exist without considering the confidence interval. For example, although two percentages may appear to be different, such as 43.6 % and 47.2%, if their confidence intervals overlap the difference between the two percentages are not actually statistically different. In this case, no statement can be made suggesting that one percentage is significantly different from the other."
NIH Dental, Oral and Craniofacial Data Resource Center

Oh, no! The confidence interval is yet another consideration when comparing percentages! (Or maybe that's what the Aussies were getting at? Gee, I wonder.) It must be a conspiracy from all these whacky skeptics who give skepticism a bad name and who aren't as strong in math as they think they are.

One last time: you cannot compare percentages the way you did unless the denominator units are the same. This "sameness" includes statistical considerations such as those hinted at by the Aussie site and spelled out more directly at the NIH site. You had a small sample whose representativeness of the population was unknown, and you attempted to compare it with the population at large.

Now re-read the NIH quote and think about it in terms of your n=231 sample, the population and the comparison you tried to make. Are you really willing to make the claim that the confidence intervals for the sample counts you cited do not overlap? If not, then "no statement can be made suggesting that one percentage is significantly different from the other."
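To make that concrete, here is the back-of-the-envelope 95% interval for a percentage estimated from n = 231, using the usual normal approximation. The 60% figure is only a stand-in for the actual percentage in dispute:

from math import sqrt

n, p_hat, z = 231, 0.60, 1.96                    # placeholder sample percentage; z for ~95% confidence
half_width = z * sqrt(p_hat * (1 - p_hat) / n)
print(p_hat - half_width, p_hat + half_width)    # roughly 0.54 to 0.66

A census percentage anywhere inside that band cannot be declared significantly different from the sample percentage.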
 
Thanz said:
Well, I guess here is where we disagree. I do not think that an analysis of the letter "J" in isolation tells us anything meaningful as to whether JE is cold reading. There are other factors which may help to explain why the distribution of "J" in this sample does not appear to be random. It also tells us nothing about the distribution of other common letters or any rare letters.
These other factors are other hypotheses that might also be tested. I didn't say it says anything about the rare letters. In order for the frequent ones to be used excessively, other letters must be used less frequently. We can focus on the upper tail of the frequent letters to look for the skew. We do not need to see the full histogram.
I looked at some other tests, in particular the one suggested by T'ai Chi. Here is a short description of that test:
I know the test. I commented on why it would not work for these data. Jeff Corey then suggested Fisher's exact test. Please refer to those posts.
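For reference, Fisher's exact test for a single letter is typically set up on a 2x2 table like the one below. Every count here is invented, and this is only a generic sketch of the test, not necessarily how it was framed in those posts:

from scipy.stats import fisher_exact

# Rows: JE guesses vs. a control sample of names; columns: "J" vs. not "J".
table = [[14, 64],      # placeholder JE counts
         [50, 450]]     # placeholder control counts
odds_ratio, p = fisher_exact(table, alternative="greater")
print(odds_ratio, p)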
Hmm.... You looked at other letters to see if they were significant. Why did you not look at any other letters for YOUR hypothesis? J may be the most frequent letter, but that doesn't mean it is the only one of importance.
I looked at another letter in a pedagogic response to your selection of "A". I didn't look at other letters for my test because you destroy the validity by dipping back into the same data over and over again. There are 26 letters and a 1 in 20 chance of getting a significant answer. We'd expect to stumble onto one. That is why my a priori selection of "J" is important. There was a rational basis for it, and it absolutely fit the hypothesis. If we speculate there may be skewing in favor of the most frequent initials, we'd expect to see that by looking at the single most frequent initial.
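In rough numbers, with 26 letters all checked after the fact at the 5% level:

alpha, letters = 0.05, 26
print(letters * alpha)               # expected "significant" letters by chance alone, about 1.3
print(1 - (1 - alpha) ** letters)    # chance of at least one fluke, about 0.74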
 
BillHoyt wrote:

(from several pages back)

Now I'm assuming that J is the most frequent first initial. I would actually choose that from the control data; the names database. I am also assuming I would simply look at one such datum. There are, of course, several initials I could test in this way. The problem of data mining comes into play, though, and I really must set my significance level higher if I want to do this.


You'd actually want to set your significance level, 5%, lower.
 
T'ai Chi said:
BillHoyt wrote:

(from several pages back)


You'd actually want to set your significance level, 5%, lower.

Sorry, I was speaking too loosely. By "higher" I meant "more discriminatory".

Cheers,
 
BillHoyt said:

Oh, no! The confidence interval is yet another consideration when comparing percentages! (Or maybe that's what the Aussies were getting at? Gee, I wonder.) It must be a conspiracy from all these whacky skeptics who give skepticism a bad name and who aren't as strong in math as they think they are.


Bill, your sarcasm might be more effective had I not mentioned confidence intervals before as sources of error. Please see my quotes below. Now you have commenced selective gathering of evidence.

"And if you want to make the claim that the confidence intervals created by the standard deviations are too high that is your option."

"Now Bill, if you want to argue that the confidence interval for a 5% level of significance would be too broad to make inferences on the specific frequencies for each letter I will agree with you."


One last time: you cannot compare percentages the way you did unless the denominator units are the same. This "sameness" includes statistical considerations such as those hinted at by the Aussie site and spelled out more directly at the NIH site. You had a small sample whose representativeness of the population was unknown, and you attempted to compare it with the population at large.

Again with the selective gathering of evidence. Why show this one? Why do you STILL avoid explaining why the study I cited compared a CA city homicide rate to the CA state rate? Clearly they use POP A and POP B in the denominators. Why is it all right for them but not for me? It looks like you gather the evidence that supports your theories and ignore contrary evidence. You must be a WOO-WOO!

Lurker
 
Lurker said:


Again with the selective gathering of evidence. Why show this one? Why do you STILL avoid explaining why the study I cited compared a CA city homicide rate to the CA state rate? Clearly they use POP A and POP B in the denominators. Why is it all right for them but not for me? It looks like you gather the evidence that supports your theories and ignore contrary evidence. You must be a WOO-WOO!

Lurker

A case of 'Hoyt by his own petard', methinks.

:wink8:
 
Lurker said:
Bill, your sarcasm might be more effective had I not mentioned confidence intervals before as sources of error. Please see my quotes below. Now you have commenced selective gathering of evidence.

"And if you want to make the claim that the confidence intervals created by the standard deviations are too high that is your option."

"Now Bill, if you want to argue that the confidence interval for a 5% level of significance would be too broad to make inferences on the specific frequencies for each letter I will agree with you."
I know full well you first raised confidence intervals. That makes the irony of that percentage comparison all the more dramatic. It also befuddles me that you would not be able to glean from either the Aussie site or the NIH site the full import of their caveats on percentage comparisons.
Again with the selective gathering of evidence. Why show this one? Why do you STILL avoid explaining why the study I cited compared a CA city homicide rate to the CA state rate. Clearly they use POP A and POP B in the denominators. Why is all right for them but not for me? It looks like you gather the evidence that supports your theories and ignore contrary evidence. You must be a WOO-WOO!
Lurker, I have addressed the issue so many times, I am tired. The denominators must be the same units. If they are populations, then population descriptive parameters become an issue. That was made clear by the Aussie site when it advised against mixing and matching numerators and denominators from different census data. They went on to speak to the issue of sampling biases and a bit on how to avoid getting tripped up on them. The NIH site spelled it out a bit more when it said the confidence intervals were a huge consideration when one compares percentages and attempts to identify differences as significant.

You compared the raw percentages of a sample of 231 with the population as a whole. Those denominators are not the same. The confidence interval of the 231 sample is far larger than that of the census data. Do the calculations; you will see the 60% significance wash away.

The homicide rate data does not suffer the same problem. Do you see percentage differences there declared significant even though the error bars overlap? Your harping on the fact that I haven't directly addressed this means you have totally overlooked the fact that both of the sites I gave you describe the conditions under which you can make such percentage comparisons. I cannot say where the disconnect is. Only you can say that.
 
BillHoyt said:

You compared the raw percentages of a sample of 231 with the population as a whole. Those denominators are not the same. The confidence interval of the 231 sample is far larger than that of the census data. Do the calculations; you will see the 60% significance wash away.

The homicide rate data does not suffer the same problem. Do you see percentage differences there declared significant even though the error bars overlap?

First off, provide evidence for your assertion that the homicide rate data did not "suffer from the same problem". You made the claim, you provide the evidence.

Second, where did I mention significance of my data? I merely compared means.

Third, you said there was something wrong mathematically with my formula. I have yet to see you provide evidence of this.

Lurker
 
