Cold Reading Demos at TAM2

Thanz said:
You would count 1 bird in 4 pieces as 4 birds and I am wrong mathematically.
Strawman. Again.

But he doesn't make letter guesses equally - nor would we expect him to. We expect that he makes more J guesses, as it is the most common initial. Further, what we see in the data is a letter guess with specific name examples. "J, like John or Joe" for example. There is no reason to believe that he would use the same proportion of examples for each letter. It could be that he uses more specific examples for the letter J than for others. This is different from just guessing J more often, and your method has no way of telling the two apart. In fact, it assumes the latter.
Yes, I misspoke. I did not mean "equally." I should have said "proportionately."

See above. All this means is that your total figure is wrong as well. Having both figures wrong does not enhance your data. How can you tell if he simply uses more examples for J rather than guessing J more often? Are you ever going to address the specific example I have posted?
Maybe you will start catching on after all. The numerator and denominator should be multiplied by the same factor, sir:

J / N becomes 3J / 3N. As 3/3 is 1, this becomes J / N.

The "J" proportion should not have increased dramatically. It is a simple mathematical fact. Yet it did. So, either it is an artifact created by the method or a reality of the underlying data. You think artifact. Demonstrate that this is so.
 
(with apologies to Thanz for the interruption)....

Bill,

(1) Doesn't it bother you that it would affect your count if we found out that JE uses more examples of 'J' names when he gives 'J' guesses than he does for other letters? and

(2) Suppose JE read 5 sitters. For each one he guesses an initial 'J' or a 'J' name. This gives 5 'J' guesses.

Yet, what if, instead, he reads 5 sitters and never mentions 'J' until the last person where he says, "I'm getting a 'J-O' name for your grandfather....it's like John, Johnny, Jonathan, Joe,...something like that..."

Your method would count both scenarios exactly the same, whereas Thanz's method would count 1 "J" guess for the last example and 5 "J" guesses for the first.

How can you accept a method that does what yours does as being the best choice for measuring "J's"?
 
BillHoyt said:

Maybe you will start catching on after all. The numerator and denominator should be multiplied by the same factor, sir:

J / N becomes 3J / 3N. As 3/3 is 1, this becomes J / N.

The "J" proportion should not have increased dramatically. It is a simple mathematical fact. Yet it did. So, either it is an artifact created by the method or a reality of the underlying data. You think artifact. Demonstrate that this is so.
This is where you are incorrect. There is no reason to believe that the numerator and denominator should be multiplied by the SAME factor. While they are both higher, there is no reason to believe that they are higher by the same amount.

As we have seen, the problem with your method is that JE guesses a letter or a letter and string of names or sometimes a string of names. You count each mention as equal, when in fact they are not. In one place, he may say "J, like John or Joe", and in another just say "B connection".

If those were the only two readings, just to keep it simple, you would count 3 J guesses and one B guess, for a total of 4, and a percentage of J guesses at 75%. I would count 1 guess of each, for a total of 2, and J guesses at 50%. You have increased the numerator by a factor of 3 and the denominator by a factor of 2. Your count would be the same as if there were 4 separate readings, and he said "J connection" in 3 of them and "B connection" in one reading.

If the underlying theory is that he goes to the J well too often, how are the two situations - that you count the same - actually equivalent? Isn't he using the J much more in the three readings than in the one? Doesn't he have a greater chance of a hit by saying J connection once in three readings rather than J, like John or Joe in one reading?
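The two counting conventions in this example can be sketched directly. This is a minimal sketch with deliberately naive parsing: the utterances are the hypothetical ones quoted above, and real transcripts would need a far more careful tokenizer.

```python
# Sketch of the two counting conventions, applied to the
# hypothetical utterances from this post.
utterances = ["J, like John or Joe", "B connection"]

def per_mention(utts):
    """Count every capitalized name or initial spoken (Bill's convention)."""
    counts = {}
    for u in utts:
        for word in u.replace(",", " ").split():
            if word[0].isupper():
                counts[word[0]] = counts.get(word[0], 0) + 1
    return counts

def per_statement(utts):
    """Count one guess per statement, by its leading initial (Thanz's convention)."""
    counts = {}
    for u in utts:
        counts[u[0]] = counts.get(u[0], 0) + 1
    return counts

pm = per_mention(utterances)    # {'J': 3, 'B': 1} -> J is 75% of guesses
ps = per_statement(utterances)  # {'J': 1, 'B': 1} -> J is 50% of guesses
```

The same two readings yield a 75% or a 50% 'J' rate depending only on the convention, which is the disagreement in a nutshell.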
 
Diogenes said:
Would anyone be interested in a thread about "whether the moon is made of Swiss or Cheddar"? :D
Maybe, but this isn't really the right forum for that. Shouldn't it be in the "Science" forum? :D
 
Clancie said:
(with apologies to Thanz for the interruption)....

Bill,

(1) Doesn't it bother you that it would affect your count if we found out that JE uses more examples of 'J' names when he gives 'J' guesses than he does for other letters? and

(2) Suppose JE read 5 sitters. For each one he guesses an initial 'J' or a 'J' name. This gives 5 'J' guesses.

Yet, what if, instead, he reads 5 sitters and never mentions 'J' until the last person where he says, "I'm getting a 'J-O' name for your grandfather....it's like John, Johnny, Jonathan, Joe,...something like that..."

Your method would count both scenarios exactly the same, whereas Thanz's method would count 1 "J" guess for the last example and 5 "J" guesses for the first.

How can you accept a method that does what yours does as being the best choice for measuring "J's"?

This was already addressed in the original thread. We can't impose assumptions about the mechanism. We can't assume JE is calling out names and initials to refer to a single person per single sitter. If this doesn't make theoretical sense to you, simply look at JE's own transcripts. They are replete with examples of a single guess referring to two different people. One extreme example is a single letter guess referring to an entire family. I drop all mechanism assumptions and simply count his guesses.
 
Thanz said:
This is where you are incorrect. There is no reason to believe that the numerator and denominator should be multiplied by the SAME factor. While they are both higher, there is no reason to believe that they are higher by the same amount.

It inheres in the statistics, sir. It inheres in the very methods of statistical inference. The proportions within a random population multiply by the population size, N.

There's no point in further discussions of this issue until you get some grounding in the underlying basics. I'd be happy to help you with that. But this discussion is at an end until you understand this most basic of points.
 
Ersby said:
Statistics is a pseudo-science. That much is clear.
It isn't a science at all. It is mathematics. Although, in the hands of neophytes, one gets mush.
 
BillHoyt said:


It inheres in the statistics, sir. It inheres in the very methods of statistical inference. The proportions within a random population multiply by the population size, N.

There's no point in further discussions of this issue until you get some grounding in the underlying basics. I'd be happy to help you with that. But this discussion is at an end until you understand this most basic of points.
Well, then explain it. If your count increases the number of J's by a factor of, say, 5, and increases the other letters (on average) by a factor of 3, how are the numerator and the denominator increasing by the same factor?
 
Thanz said:

Well, then explain it. If your count increases the number of J's by a factor of, say, 5, and increases the other letters (on average) by a factor of 3, how are the numerator and the denominator increasing by the same factor?

Go to a web site with an on-line stat calculator. Try out different values of N for the same distribution. Note that a rising tide lifts all boats. Do you see it?
 
BillHoyt said:


Go to a web site with an on-line stat calculator. Try out different values of N for the same distribution. Note that a rising tide lifts all boats. Do you see it?
Great idea, Mr. Hoyt. I went and used the online Poisson calculator that I used before. It can be found here

My original count was 9 "J" guesses out of a total of 43. J names are 13.36% of the population, so I used an expected value of 5.7448 and the actual count of 9. The probability of >= 9 is .128, so we cannot reject the null hypothesis.

Your counting method overcounts everything. Even if we accept your assumption that everything is overcounted the same, we get some startling results.

If I multiply everything by 3, we get 27 J guesses out of a total of 129 guesses. With an expected value of 17.2344, and an actual count of 27, we get a probability of >= 27 of .018, and we reject the null hypothesis.

Overcounting clearly has an effect on the analysis - even if everything is overcounted at the same rate (which may or may not be true).
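The two calculations above can be reproduced without an online calculator by summing the Poisson pmf directly. This is a sketch: the 13.36% base rate, the counts of 9/43, and the ×3 multiplier are the figures from this post.

```python
import math

def poisson_tail(lam, k):
    """P(X >= k) for X ~ Poisson(lam), via the complement of the pmf sum."""
    cdf = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

# Original count: 9 'J' guesses of 43 total; expected 43 * 0.1336 = 5.7448
p1 = poisson_tail(5.7448, 9)    # about 0.128 -> fail to reject at alpha = .05

# Everything multiplied by 3: 27 of 129; expected 129 * 0.1336 = 17.2344
p2 = poisson_tail(17.2344, 27)  # about 0.018 -> reject at alpha = .05
```

Scaling both counts by the same factor leaves the proportion at 9/43 but moves the p-value across the .05 threshold, which is the point being argued.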
 
BillHoyt said:

The challenge remains the same. Find a study with an alpha equal to any of the values previously given. If you wish, let it be a study that uses the Bonferroni correction to an alpha equal to any of the values previously given.


I'll repeat what I said, since you consciously or unconsciously ignored it and then proposed a strawman argument:


Yeah, alpha "can" be anything, sure. That is far different from me saying I could find a paper(s) with a specific alpha, which is what you are demanding that I do.

Surely even you can see your own strawman waving at you with his straw hand. But perhaps you have unconscious bias, so maybe not.



Games, tr'olldini. That's all we get from you. Games. And badly played at that.

And you've given us a strawman argument. Way to go.
 
Ersby said:
Statistics is a pseudo-science. That much is clear.

Oooh, that's why Statistical Science is no longer around. :rolleyes:

It is easy to lie with statistics, but it is even easier to lie without. :)
 
magicflute said:
Hmmm.... I wonder how many times JE gets an X, Y or Z connection? Most of these psychics guess names beginning with just a few select letters - J, K, and M, to name a few. It does not take a genius to figure out why.
Check this out...
http://www.ssa.gov/OACT/babynames/1999/top1000of90s.html
The J names are actually closer to 18% for the last 15 years.

It would be interesting to get a list of the frequency of names starting with the letters a through z for each year for say the last 30 years or so, or as far back as there is data.
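Once such a table of names and counts exists, the initial-letter tally is straightforward. A sketch with made-up counts - NOT real SSA figures, which would come from the pages linked above:

```python
from collections import Counter

# Illustrative only: invented (name, count) pairs standing in for a
# real year-by-year SSA frequency table.
name_counts = {
    "Michael": 500, "John": 300, "Joshua": 200,
    "Ashley": 150, "Jessica": 250,
}

by_initial = Counter()
for name, count in name_counts.items():
    by_initial[name[0]] += count

total = sum(by_initial.values())
shares = {letter: by_initial[letter] / total for letter in sorted(by_initial)}
# With these made-up counts, 'J' covers 750 of 1400 births, about 54%
```

Run over the real SSA tables for each year, the same loop would give the letter-frequency list suggested above.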
 
Originally posted by Ersby
Statistics is a pseudo-science. That much is clear.
It's not really that bad.

Statistics as done by pseudo-statisticians is a pseudo-science.
 
Thanz said:

Great idea, Mr. Hoyt. I went and used the online Poisson calculator that I used before. It can be found here

My original count was 9 "J" guesses out of a total of 43. J names are 13.36% of the population, so I used an expected value of 5.7448 and the actual count of 9. The probability of >= 9 is .128, so we cannot reject the null hypothesis.

Your counting method overcounts everything. Even if we accept your assumption that everything is overcounted the same, we get some startling results.

If I multiply everything by 3, we get 27 J guesses out of a total of 129 guesses. With an expected value of 17.2344, and an actual count of 27, we get a probability of >= 27 of .018, and we reject the null hypothesis.

Overcounting clearly has an effect on the analysis - even if everything is overcounted at the same rate (which may or may not be true).
Thanz,

You said:
"There is no reason to believe that the numerator and denominator should be multiplied by the SAME factor. While they are both higher, there is no reason to believe that they are higher by the same amount."

I suggested you do this:
"Go to a web site with an on-line stat calculator. Try out different values of N for the same distribution. Note that a rising tide lifts all boats."

You then switched from a question about expectation to a question of observation. These are different issues. I'll use your same online resource to do what I was suggesting.

Let's set mean = .5 and start with N=10. We expect 5. The probability of observing 5 or more is .55. Let's multiply the population size by 10. Now we expect 50. The probability of observing 50 or more is .51. Our expectation in Poisson is that the proportions remain the same.

Now let's move the observed away from the expected. Same mean, same N. We expect 5. We observe 8. The probability of observing 8 or more is .13. Let's multiply the population size by 10. Now we expect 50. The probability of observing 80 or more here is now .00006.

Poisson is a statistical distribution. If we sample larger population sizes, we still expect the same proportions to apply. But we also expect our observations will cluster more to the mean. That means that if the percentage difference between observed and expected remains the same, the significance of that observed result increases.

If JE's repetitions of "I'm getting a J; like Joe or John" were truly random, we would expect repetitions of "I'm getting an X; like Xanadu or Xena," etc., on a random basis as well. We would expect those fluctuations to overwhelm small random perturbations in the "J"s that we see with smaller sample sizes. We would expect the percentage difference between observed and expected "J" frequencies to head to the mean; that is, to go down. That is the meaning of the fall off in the Poisson's pdf.
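The contrast described above can be checked directly by summing the Poisson pmf. A sketch, using the same 8-of-5-expected and 80-of-50-expected pairs as the example:

```python
import math

def poisson_tail(lam, k):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k))

# The same 60% excess over expectation, at two sample sizes:
small = poisson_tail(5, 8)    # ~0.13, the figure quoted in the post
large = poisson_tail(50, 80)  # tiny (the post's calculator reports .00006)
```

At ten times the sample size, an excess of the same proportion goes from unremarkable to extreme, which is the clustering-to-the-mean point being made.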
 
BillHoyt said:

You said:
"There is no reason to believe that the numerator and denominator should be multiplied by the SAME factor. While they are both higher, there is no reason to believe that they are higher by the same amount."

I suggested you do this:
"Go to a web site with an on-line stat calculator. Try out different values of N for the same distribution. Note that a rising tide lifts all boats."
And your suggestion does not address my point above. There is no reason to think that JE uses examples at the same rate for J as for any other letter. It is consistent with cold reading to spit out J, and then the most common J name (John), in an attempt at a more impressive hit. The same cannot be said of less common letters, so it would be safer to just spit out the letter. So the J's will be overcounted in proportion to the other letters, as well as just simply overcounted. But anyway, let's move on to the next point.

You then switched from a question about expectation to a question of observation. These are different issues. I'll use your same online resource to do what I was suggesting.
Of course it is about observation. I claim that your method overcounts the guesses and is therefore inaccurate. Your reply is that it overcounts all letters the same, so both the numerator and denominator increase the same, so it doesn't matter. We have seen this is not the case.

Poisson is a statistical distribution. If we sample larger population sizes, we still expect the same proportions to apply. But we also expect our observations will cluster more to the mean. That means that if the percentage difference between observed and expected remains the same, the significance of that observed result increases.

If JE's repetitions of "I'm getting a J; like Joe or John" were truly random, we would expect repetitions of "I'm getting an X; like Xanadu or Xena," etc., on a random basis as well. We would expect those fluctuations to overwhelm small random perturbations in the "J"s that we see with smaller sample sizes. We would expect the percentage difference between observed and expected "J" frequencies to head to the mean; that is, to go down. That is the meaning of the fall off in the Poisson's pdf.
I think that the error in your method should be clear now. It artificially increases the sample size without giving the sample the chance to cluster more to the mean. In essence, your extra counts are like taking a normal count and multiplying it by some common factor. We see that you cannot do this for an accurate statistical analysis.

Let's say we count coin flips, for example. If we count 6 heads out of 10, that is not a big deal. But if we treat each flip as if it were 1000 flips, and 6000 out of 10,000 flips were heads, it would look like it is not a fair coin.

By a rough estimation, your count doubles my count. That doubling, based on the same data, takes the J count from non-significant to significant.
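The coin-flip comparison can be checked exactly for the small case. A sketch: the exact binomial tail works for 10 flips, while the 10,000-flip case is summarized by its z-score, since 0.5^10000 underflows double precision.

```python
import math

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

heads_small = binom_tail(10, 6)  # ~0.377: 6 heads in 10 flips is unremarkable

# 6000 of 10000 is the same 60%, but it sits 20 standard deviations
# above the expected 5000 (sd = sqrt(10000 * 0.5 * 0.5) = 50):
z = (6000 - 5000) / math.sqrt(10000 * 0.5 * 0.5)
```

A 60% heads rate is entirely ordinary at 10 flips and astronomically improbable at 10,000, which is exactly why multiplying counts by a common factor changes the conclusion.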
 
Thanz said:
I think that the error in your method should be clear now. It artificially increases the sample size without giving the sample the chance to cluster more to the mean. In essence, your extra counts are like taking a normal count and multiplying it by some common factor. We see that you cannot do this for an accurate statistical analysis.

Didn't understand the explanation one bit, did you?
:dl:
 
