Cold Reading Demos at TAM2

Clancie said:
Thanz,

Do you understand the point I'm making to Bill? Can you see how his method could easily be defeated by any cold reader who uses a string of initials plus name guesses for low frequency letters, but sticks to only initial guesses for the high frequency ones (like 'J')? Cold readers could do this; it wouldn't affect the hit rate at all.
I understand the point you are making, but only see someone doing this if the purported medium knew they were being analyzed in an actual test. I certainly don't see a "medium" doing this on a regular basis. But in any sort of formal test, yes it certainly is a danger.
 
BillHoyt said:
We do not have a situation of cX. We have a situation of sampling. Under the assumption of one guess - one person, the guesses are forced to one per sitter, regardless of how many guesses are made. When we drop that assumption, we count each guess. That is not cX.
It's not precisely cX, I agree. That's just an extreme example: a random variable is perfectly correlated with itself.

But neither are the guesses entirely independent. Yet you're assuming that they are.
 
69dodge said:
It's not precisely cX, I agree. That's just an extreme example: a random variable is perfectly correlated with itself.

But neither are the guesses entirely independent. Yet you're assuming that they are.
No, I'm not. I am dropping the assumption of one guess = one person. That defines the counting method differently. The discussion about increasing the sample size was pedagogic for the obviously futile purpose of trying to get certain people here to understand there is nothing in the method that creates "J" artifacts.

The method is applied consistently and uniformly. It simply counts a Poisson process; how many "J"s versus "non-J"s. It counts "J"s and "non-J"s equally. It does not double count. It does not resample.
 
Your "In other words, ..." summary does not follow from the preceding formula.

69dodge,

I see the problem now. The mistake was mine. I left out a key word here. I should have said:

"In other words, the variance of an unweighted linear sum of random variables (regardless of dependence) is always less than or equal to the sum of the variances."
 
Originally posted by BillHoyt
No, I'm not [assuming independence of guesses].
[ ... ]
The method is applied consistently and uniformly. It simply counts a Poisson process; how many "J"s versus "non-J"s.
If the guesses aren't independent, the number of J's is not Poisson. If you assume it is Poisson, you're implicitly assuming they're independent.
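A quick simulation makes this point concrete (this is an illustrative sketch, not from the thread; the 10% J rate and the "echo" factor are assumptions). When each J guess is echoed several times within an utterance, the counts are no longer independent and become overdispersed: the variance climbs well above the mean, which a Poisson count would not allow.

```python
import random

random.seed(1)

def count_js(n_guesses, p_j, cluster=1):
    """Count J's among n_guesses; with cluster > 1, each J guess is
    echoed `cluster` times, so the individual counts are dependent."""
    c = 0
    for _ in range(n_guesses):
        if random.random() < p_j:
            c += cluster
    return c

def mean_var(samples):
    m = sum(samples) / len(samples)
    v = sum((x - m) ** 2 for x in samples) / len(samples)
    return m, v

indep = [count_js(100, 0.1, cluster=1) for _ in range(20000)]
clustered = [count_js(100, 0.1, cluster=3) for _ in range(20000)]

m1, v1 = mean_var(indep)
m2, v2 = mean_var(clustered)

# For a Poisson-like count, variance/mean is near 1; clustering
# multiplies the variance without changing the per-utterance rate.
print(v1 / m1)  # near 1 - p = 0.9 (binomial, close to Poisson)
print(v2 / m2)  # near cluster * (1 - p) = 2.7, far from Poisson
```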
 
69dodge said:
If the guesses aren't independent, the number of J's is not Poisson. If you assume it is Poisson, you're implicitly assuming they're independent.

Funny how radioactive decay is a Poisson process, isn't it? The secondary, tertiary, quaternary, etc. atomic decays that can result from collisions caused by the initial decay all add to the counts per time interval. Yet they keep modelling this decay as Poisson. I wonder how that relates to this claim?
 
BillHoyt said:

We don't assume.


Please don't pretend that there aren't any assumptions just because we are doing some math. :)


It was an attempt to get people to see that all the letters were being counted in the same way. Thanz ran off with this, misunderstood what his online calculator was revealing to him and then constructed the straw coin flip example. There is no multiplying of a statistic here.

Well, we can easily test that by having you and Thanz analyze the same transcripts.
 
Originally posted by BillHoyt
I should have said:

"In other words, the variance of an unweighted linear sum of random variables (regardless of dependence) is always less than or equal to the sum of the variances."
That takes care of the k<sup>2</sup>'s in the first term. But what about the second term with the p's?

If all the p's are zero, that means the variables are uncorrelated. In that case, the variance of the sum equals the sum of the variances. That equality might also hold if some of the variables are positively correlated and others are negatively correlated. But it doesn't hold in general.

(I have been talking about independence a lot. Possibly, in some cases, I should have been talking about uncorrelatedness instead. If two variables are independent, they are definitely uncorrelated, but the converse is not always true.)
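For reference, the formula the two posts appear to be arguing over is the variance of a weighted sum of random variables (a reconstruction from the mentions of the k<sup>2</sup>'s and the correlation coefficients, which the thread writes as "p"; the original post's exact form is not shown):

$$\mathrm{Var}\Big(\sum_i k_i X_i\Big) \;=\; \sum_i k_i^2\,\mathrm{Var}(X_i) \;+\; 2\sum_{i<j} k_i k_j\,\rho_{ij}\,\sigma_i\sigma_j$$

The first term is handled by the "unweighted" correction; the second term vanishes only when all the correlations are zero (or cancel), which is the point at issue.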
 
T'ai Chi said:
Please don't pretend that there aren't any assumptions just because we are doing some math.
Already addressed.
Well, we can easily test that by having you and Thanz analyze the same transcripts.
Already been tried. You don't understand the results.
 
Mr. Hoyt -

It would appear that you are ignoring me. However, I would still like answers to some basic questions. Let's start with these:
  • If we are trying to see if there is a mediumship process, shouldn't we try to honour that process as much as possible in the counting method?
  • Do you agree that the mediumship process involves attempts to gain a connection between the sitter and whatever "spirit" that JE claims to be communicating with?
  • Do you agree that when someone says "Fruit, like an apple or a pear" they are talking about Fruit, and that apple or pear are simply examples of fruits?
  • If someone asks you for "a piece of fruit, like an apple or a pear" are they asking you for one item or three items?
  • Likewise, when JE says "A J connection, like John or Joe" is he looking for one J connection, or is he looking for three J connections?
 
T'ai Chi said:
Well, we can easily test that by having you and Thanz analyze the same transcripts.
We have done this. Mr. Hoyt and I have examined the exact same transcripts which were kindly posted by Renata.

I counted 9 J's out of 43 total guesses.

Hoyt counted 18 J's out of 85 guesses.

The proportion of J to the total remained about the same - about 21%. But because Hoyt overcounted, that proportion of J guesses appears more significant than it really is.
 
69dodge said:
That takes care of the k<sup>2</sup>'s in the first term. But what about the second term with the p's?

If all the p's are zero, that means the variables are uncorrelated. In that case, the variance of the sum equals the sum of the variances. That equality might also hold if some of the variables are positively correlated and others are negatively correlated. But it doesn't hold in general.

(I have been talking about independence a lot. Possibly, in some cases, I should have been talking about uncorrelatedness instead. If two variables are independent, they are definitely uncorrelated, but the converse is not always true.)

Sorry, I took your independence discussion literally. And you are right: if the variables are positively correlated (p > 0), then the variances can increase.
 
Thanz said:
The proportion of J to the total remained about the same - about 21%. But because Hoyt overcounted, that proportion of J guesses appears more significant than it really is.

That does not happen, sir. I increased the sample size by refining the counting method. That revealed an underlying truth about the data. The underlying truth about the data is that JE calls out Js far more frequently than would be expected from census data.
 
BillHoyt said:
That does not happen, sir. I increased the sample size by refining the counting method. That revealed an underlying truth about the data. The underlying truth about the data is that JE calls out Js far more frequently than would be expected from census data.
You did not refine the counting method, you used an illogical one. As illogical as counting "It's heads. Heads? Yeah, heads." as three coin flips of heads rather than one.

It did not reveal any underlying truth about the data. Your persistence in sticking with your illogical counting method does, however, reveal an underlying truth about your possible bias and intellectual honesty.

Now, are you able to answer the specific questions I have put to you?
 
Bill...

Let's say you're right, and mediums are just cold readers. Let's also say that, as cold readers, they -know- that they need to overrepresent the most common letter guesses in their readings, but not make it obvious.

How to do that? Well, your counting method provides a perfect answer--use initials for common letters, but also use infrequent letters, just being sure to follow them with strings of names instead of initials only.

The point, Bill, is that your method wouldn't detect cold reading when a medium did this.

And it's not just hypothetical (or zoological :rolleyes: ) either. Or something that maybe a medium might become aware of only in the future (and why would we want to -assume- that knowing cold readers don't already understand this? :confused: )

For example, Suzane Northrop guesses almost every letter of the alphabet in her readings which, to me, made her readings seem very much what I'd expect from a cold reader.

However, if she doesn't only mention all the letters, but actually strings many name guesses out for only infrequent letters...and only a few or none on the high frequency ones...she would not look like a cold reader at all, by your analysis.

I need to listen to my tape again, but it's entirely possible that she -does- do this. If so, would you be satisfied with the result--and feel it is statistically sound--that your counting method makes JE look like a cold reader, but not Suzane?
 
Clancie said:
Bill...

Let's say you're right, and mediums are just cold readers. Let's also say that, as cold readers, they -know- that they need to overrepresent the most common letter guesses in their readings, but not make it obvious.

How to do that? Well, your counting method provides a perfect answer--use initials for common letters, but also use infrequent letters, just being sure to follow them with strings of names instead of initials only.

The point, Bill, is that your method wouldn't detect cold reading when a medium did this.

So what?
 
Posted by Bill Hoyt

So what?
Lol. So, inaccurate or not, as long as your method makes JE look like a cold reader, it's a-ok, right?

It doesn't matter to you that it wouldn't be reliable for any other mediums' cold readings?

As long as it discredits JE--even if the principle is wrong and it wouldn't get the same results for other cold readers--that's fine with you? Is that what you're saying?

And, still the idea of "bias" in choosing a method of analysis to get the results you want doesn't resonate with you at all? :confused:


Oh, come on, Bill. Just be adult and admit that your counting method is a bad idea. Go on! You can do it! We'll all think better of you for it, too. :)
 
I think there need to be methods specifically designed to analyze transcripts of mediums' readings better. They need to take into account several things:

a) looking at transcripts isn't an experiment but observational

b) determine the correct null hypothesis

c) decide what one is going to analyze (letter counts used in names, # of correct facts stated, # of questions asked, etc.)

d) determine the criteria for counting the thing(s) from c). As we've seen, there is already quite a controversy about how to most fairly count letters used in names. :)

e) be very aware of the assumptions one is making to do the test(s) (independence, etc), as well as aware of the possible unconscious bias involved

f) try to predict the outcome of analyzing future transcripts, and compare these results to reality, and to the results from the past analyses

g) continually improve the model
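Point d) is where this thread's disagreement lives, and it can at least be made explicit in code. Here is a hypothetical sketch (the function name and the pre-parsed input format are mine, not from the thread) in which the counting criterion is a parameter, so "A J connection, like John or Joe" counts as either one guess or three depending on the declared policy:

```python
def count_guesses(utterance_letters, policy="collapse"):
    """Count letter guesses from one parsed utterance.

    `utterance_letters` is a list of initials extracted from a single
    utterance, e.g. ['J', 'J', 'J'] for "A J connection, like John or Joe".
    policy="collapse" counts repeated letters within an utterance once;
    policy="each" counts every mention separately.
    """
    if policy == "collapse":
        return len(set(utterance_letters))
    elif policy == "each":
        return len(utterance_letters)
    raise ValueError("unknown policy: " + policy)

# "A J connection, like John or Joe"
mentions = ["J", "J", "J"]
print(count_guesses(mentions, "collapse"))  # 1 guess
print(count_guesses(mentions, "each"))      # 3 guesses
```

Making the policy an explicit argument means the 9-of-43 versus 18-of-85 discrepancy becomes a stated modelling choice rather than an unexamined assumption.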
 
Clancie said:

Lol. So, inaccurate or not, as long as your method makes JE look like a cold reader, it's a-ok, right?

It doesn't matter to you that it wouldn't be reliable for any other mediums' cold readings?

As long as it discredits JE--even if the principle is wrong and it wouldn't get the same results for other cold readers--that's fine with you? Is that what you're saying?

And, still the idea of "bias" in choosing a method of analysis to get the results you want doesn't resonate with you at all? :confused:


Oh, come on, Bill. Just be adult and admit that your counting method is a bad idea. Go on! You can do it! We'll all think better of you for it, too. :)
It is my sincere hope that this post be maintained by JREF for a long time. It is my further sincere hope to one day see you acknowledge (at least to yourself) that, at the time you wrote it, you had something to learn about critical thinking.

Your error begins with the assumption that, because the context may change in such a way as to render a technique useless from time T<sub>1</sub>, it was, perforce, useless from time T<sub>0</sub>. This is patently absurd. I tried to save you from further embarrassment via the zoological example, but woos rush in where skeptics fear to tread.

That you can't shift your model into zoology and see the parallels and (therefore) the failure of your point is telling, Clancie. So what if an intelligent person is capable of defeating a test? Does that mean the test was invalid up until the time the person figured a way around it? How can you maintain such a foolish assertion?

Speeders drove their cars as fast as they cared and dared until radar became generally available. Then the police had the advantage, and many speeders were caught. Then radar detectors became generally available, and the speeding population seemed to drop off. Then laser speed detection became available to police. The speeders were in trouble once more. But laser detectors became available to the speeders and, once again, the balance shifted.

The points are these: people are dynamic, adaptive systems. If medium X is called out in a certain way, you can bet your sweet bippie medium Y will do what they can to avoid detection. You can also bet your sweet bippie that those trying to determine the truth will thereafter adapt detection techniques to catch the new modus operandi. But at no point along the way does an m.o. change negate prior detections.

Today we see police departments using radar, laser, planes, helicopters and pacing cars as techniques. Should we believe that those caught by radar twenty years ago need to appeal? Huh? What kind of bizarrely backward thinking is that?
 