Cold Reading Demos at TAM2

T'ai Chi said:
No, I think it is because your observed statistics (i.e., your letter-J counts) are high.

Oh. Was the method not consistently applied? Was it not equally applied to all twenty-six initials? Are those initials not random variables? How was this artifact created, then?
 
BillHoyt said:

Oh. Was the method not consistently applied?


You tell me; it is your method, not mine.


Was it not equally applied to all twenty-six initials?


Probably. However, from what I can tell your analysis focuses only on the J counts and ignores counting the rest, so it probably doesn't matter.


Are those initials not random variables?

Their counts are.
 
First of all, let me say that I'd be very surprised indeed if John Edward, or anyone else for that matter, actually did turn out to be talking to the dead. (I wouldn't want to be called a woo-woo, you understand . . .)

Okay, now that that's out of the way . . .

Originally posted by BillHoyt
Drop the assumption about a feeble spirit trying desperately to mumble his name into JE's ears.
If that's what we're trying to disprove, we can't really drop it. Otherwise, we might end up disproving something entirely unrelated. What's the use of that?

When doing a significance test, all the probabilities are calculated based on the assumption that the null hypothesis is true. Then, if we end up with a small p-value, we can take that as evidence against the null hypothesis.

So, we need to be sure that the null hypothesis we use actually is what we're trying to find evidence against. Otherwise, a small p-value tells us nothing that we care about.
Count the name guesses, and test the null hypothesis that they match the distribution from the census data. Name guesses. Name guesses.
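As a concrete sketch of that test (Python with SciPy; the counts and the census proportion below are invented for illustration, not taken from any transcript):

```python
# Goodness-of-fit test: do the name guesses match the census distribution?
# All numbers here are hypothetical.
from scipy.stats import chisquare

observed = [12, 88]            # [J-names guessed, all other names guessed]
census_props = [0.10, 0.90]    # hypothetical census share of J-names vs. the rest
expected = [p * sum(observed) for p in census_props]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")   # small p => counts don't match census
```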

Same question as given to Tr'olldini:

And just what happens to the Poisson distribution with this alleged "overcounting"?
It turns into a non-Poisson distribution.

If X and Y are independent Poisson random variables, each with mean m, then X + Y is Poisson with mean 2m. But X + X = 2X is not Poisson.

How do I know? Well, here's one way. The mean and variance of a Poisson random variable are equal. So, the variance of X is also m. However, 2X has mean 2m but variance 4m. So, it isn't Poisson.
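If you'd rather see that in a simulation than in algebra, here's a minimal sketch (Python with NumPy; the mean m = 5 and the sample size are arbitrary choices):

```python
# Compare X + Y (independent Poissons) with X + X = 2X.
import numpy as np

rng = np.random.default_rng(0)
m, n = 5.0, 200_000
x = rng.poisson(m, n)
y = rng.poisson(m, n)   # independent copy

for label, z in [("X + Y", x + y), ("2X   ", 2 * x)]:
    print(f"{label}: mean = {z.mean():.2f}, variance = {z.var():.2f}")
# X + Y: mean ~ 10, variance ~ 10  (mean equals variance, as Poisson requires)
# 2X:    mean ~ 10, variance ~ 20  (variance is twice the mean, so not Poisson)
```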

Your null hypothesis includes the assumption that all the guesses are independent. But John Edward doesn't claim that all his guesses are independent. In fact, when he says something like, "I'm getting a 'J' . . . maybe Jack, or Joe, or Jake . . .", he is indicating rather explicitly that the three names are intended to be similar--in other words, that they are not independent.

So, you've provided statistical evidence against something that no one claimed to begin with. Yippee.

In short, I am totally with Thanz on this one. :D
 
69dodge said:
If that's what we're trying to disprove, we can't really drop it. Otherwise, we might end up disproving something entirely unrelated. What's the use of that?
But that's exactly how science operates. It drops assumptions, turns hypotheses upside down and disproves those null hypotheses. These eliminations of possibilities provide the logical forward momentum toward the truth.

We drop any assumptions about what we think we're seeing with JE and ask: do the names and name initials that he spouts seem to be sampled from the population of names at large? If they aren't (that is, we disprove the null hypothesis), we have evidence that something else is going on here.

When doing a significance test, all the probabilities are calculated based on the assumption that the null hypothesis is true. Then, if we end up with a small p-value, we can take that as evidence against the null hypothesis.
I'm sorry, but this is also wrong. The calculation makes no such assumption. The calculation simply reports what is there: an observed value. In this test of significance, we only compare the observed value against the expected mean, and calculate the area under the Poisson pdf's tail. Again, no assumptions other than that the data collected are Poisson.
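For what it's worth, that tail-area calculation is a one-liner (a sketch in Python with SciPy; the observed count and expected mean here are made-up numbers, not my actual counts):

```python
# Upper-tail probability P(X >= observed) for X ~ Poisson(expected_mean).
from scipy.stats import poisson

expected_mean = 6.0   # hypothetical expected J-count under the null
observed = 13         # hypothetical observed J-count

p_value = poisson.sf(observed - 1, expected_mean)   # sf(k) = P(X > k)
print(f"p = {p_value:.4f}")
```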

So, we need to be sure that the null hypothesis we use actually is what we're trying to find evidence against. Otherwise, a small p-value tells us nothing that we care about.

It turns into a non-Poisson distribution.
A failure to correctly state and test the null hypothesis doesn't turn the statistic non-Poisson. Did you say what you intended to say here?

If X and Y are independent Poisson random variables, each with mean m, then X + Y is Poisson with mean 2m. But X + X = 2X is not Poisson.

How do I know? Well, here's one way. The mean and variance of a Poisson random variable are equal. So, the variance of X is also m. However, 2X has mean 2m but variance 4m. So, it isn't Poisson.
I think you need to take a step back here and think about this claim. This is from Hogg & Craig's Introduction to Mathematical Statistics, third edition, page 168:

"Theorem 4: Let X<sub>1</sub>,...,X<sub>n</sub> denote random variables that have means m<sub>1</sub>,...,m<sub>n</sub> and variances s<sup>2</sup><sub>1</sub>,...,s<sup>2</sup><sub>n</sub>. Let p<sub>ij</sub>, i <> j, denote the correlation coefficient of X<sub>i</sub> and X<sub>j</sub> and let k<sub>1</sub>,...,<sub>n</sub> denote real constants. The mean and the variance of the linear function
Y = E<sub>1</sub><sup>n</sup>k<sub>i</sub>X<sub>i</sub>
are, respectively,
m<sub>Y</sub> = E<sub>1</sub><sup>n</sup>K<sub>i</sub>m<sub>i</sub>
and
s<sup>2</sup><sub>y</sub>= E<sub>1</sub><sup>n</sup>k<sub>i</sub><sup>2</sup>s<sup>2</sup><sub>i</sub> + 2EE<sub>i</sub><sub><</sub><sub>></sub><sub>j</sub>k<sub>i</sub>k<sub>j</sub>p<sub>ij</sub>s<sub>i</sub>s<sub>j</sub>"

In other words, the variance of a linear sum of random variables (regardless of dependence) is always less than or equal to the sum of the variances.
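If anyone wants to check the theorem's variance formula numerically, here is a minimal sketch (Python with NumPy; the means, variances, correlation, and constants are all arbitrary):

```python
# Verify Var(k1*X1 + k2*X2) = k1^2*s1^2 + k2^2*s2^2 + 2*k1*k2*p12*s1*s2.
import numpy as np

rng = np.random.default_rng(1)
k1, k2 = 2.0, 3.0
s1, s2, p12 = 2.0, 3.0, 0.4
cov = [[s1**2, p12 * s1 * s2], [p12 * s1 * s2, s2**2]]
x1, x2 = rng.multivariate_normal([1.0, 2.0], cov, 500_000).T

y = k1 * x1 + k2 * x2
formula = k1**2 * s1**2 + k2**2 * s2**2 + 2 * k1 * k2 * p12 * s1 * s2
print(f"empirical Var(Y) = {y.var():.2f}, formula = {formula:.2f}")
# The cross term 2*k1*k2*p12*s1*s2 is exactly what dependence contributes.
```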
 
BillHoyt said:

We drop any assumptions about what we think we're seeing with JE and ask: do the names and name initials that he spouts seem to be sampled from the population of names at large? If they aren't (that is, we disprove the null hypothesis), we have evidence that something else is going on here.
But your method doesn't do this. It equates initials with names. They are not the same. You have provided no justification for counting initial letters of names said in the same manner as if he just said "Starts with a J".

I'm sorry, but this is also wrong. The calculation makes no such assumption. The calculation simply reports what is there: an observed value. In this test of significance, we only compare the observed value against the expected mean, and calculate the area under the Poisson pdf's tail.
But your "observed value" is wrong. It assumes that saying "John" is the same as saying "J connection", even if he had just said "J connection" right before it - as in "J connection, like John". While you continue to equate examples with initial guesses your count will always be wrong.

Again, no assumptions other than that the data collected are Poisson.
First, this is not true. There are other assumptions in your model. Second, you have been repeatedly asked to justify this assumption - that Poisson is applicable - and you have never done so. Again - why do you make this assumption?

I think you need to take a step back here and think about this claim. This is from Hogg & Craig's Introduction to Mathematical Statistics, third edition, page 168:
You must be so proud to quote a textbook.

Can you explain in plain English why your method makes any sense? Can you answer my question about where you would put your money? Can you back up your assumption that Poisson is appropriate? Can you actually answer any direct questions put to you?
 
Thanz said:
But your method doesn't do this. It equates initials with names. They are not the same. You have provided no justification for counting initial letters of names said in the same manner as if he just said "Starts with a J".
Statistics doesn't demand sameness; it demands categorization. Your point is specious.

But your "observed value" is wrong. It assumes that saying "John" is the same as saying "J connection", even if he had just said "J connection" right before it - as in "J connection, like John". While you continue to equate examples with initial guesses your count will always be wrong.
More specious points. You continue to be unable to substantiate your claim that the method produces an artifact.

First, this is not true. There are other assumptions in your model. Second, you have been repeatedly asked to justify this assumption - that Poisson is applicable - and you have never done so. Again - why do you make this assumption?
You enumerate no other assumptions. You simply assert. You also assert the result is an artifact and cannot demonstrate how that artifact was created.


You must be so proud to quote a textbook.

Can you explain in plain English why your method makes any sense? Can you answer my question about where you would put your money? Can you back up your assumption that Poisson is appropriate? Can you actually answer any direct questions put to you?
Can you not understand that I am answering? That you are asking statistical questions that require statistical answers? Did you not read the original thread in which I directly and clearly explained what a Poisson process is and that the definitional criteria are met by JE's guesses?

How about you start listening with an intent to understand? The answers have all been given over and over.
 
Bill,

Forget about Poisson. It's totally irrelevant.

At the bottom of the previous page I explained to you how a cold reader could not only go undetected by your counting method*, but could actually easily use it to his own advantage. (In fact, it would be quite a good tip for JE, Northrop, etc. to do so, just in case.)

This is the final nail in the coffin of your argument--i.e., you have looked at what JE did in the transcripts and tried to find a counting method that would make him look like a cold reader. But, generally applied, your method wouldn't work to detect cold reading--in fact, a cold reader should incorporate your counting premise since it would have no impact on his hits, but would successfully disguise his cold reading process to anyone who counted as you do.

No comment? :confused:




* Unlike Thanz's method.
 
BillHoyt said:

Statistics doesn't demand sameness; it demands categorization. Your point is specious.


More specious points. You continue to be unable to substantiate your claim that the method produces an artifact.
More avoidance from Mr. Hoyt. You have not backed up your method with logic. My points go to the heart of your method, and you have yet to address them.

As for how your method affects the results, I have demonstrated this at least twice. Your method counts 18 guesses at J for only 9 actual shots at a J hit. It counts 85 guesses when only 43 were made. By doubling both the observed J count and the total sample, you get a result that is not statistically accurate. Just as if you had counted 10 "heads" or 10 "tails" for every coin flip.
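To see what that doubling does to a significance test, here is a toy sketch (Python with SciPy, using a chi-square goodness-of-fit test for illustration; the 9-of-43 figures mirror the counts above, while the 10% J proportion is invented):

```python
# Doubling every count keeps the proportions identical but inflates the
# test statistic, making the same data look far more significant.
from scipy.stats import chisquare

observed = [9, 34]             # 9 J guesses out of 43 (as above)
expected_props = [0.10, 0.90]  # hypothetical expected share of J guesses

for counts in (observed, [2 * c for c in observed]):
    expected = [p * sum(counts) for p in expected_props]
    stat, p = chisquare(counts, expected)
    print(f"n = {sum(counts)}: chi-square = {stat:.2f}, p = {p:.4f}")
```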

You enumerate no other assumptions. You simply assert.
Dude, did you not see the paragraph immediately before this in which I state that your method makes assumptions in the count?
You also assert the result is an artifact and cannot demonstrate how that artifact was created.
I have. At least twice.

Can you not understand that I am answering? That you are asking statistical questions that require statistical answers? Did you not read the original thread in which I directly and clearly explained what a Poisson process is and that the definitional criteria are met by JE's guesses?

How about you start listening with an intent to understand? The answers have all been given over and over.
You never explained why JE's guesses are a Poisson process. If you have, and I have missed it, just point me to the post.

Next, I am asking you very basic questions that do not need a stat book to answer. I am asking you to back up your experimental design with simple logic. It is abundantly clear that you are unable to do this. You don't seem able to answer a direct question. Do I need to create a LarsenList for you?

How about answering my very simple question about where you would bet your money? That doesn't require a stat textbook to answer. Why don't you answer it?
 
Clancie said:
No comment? :confused:

A zoologist sets up an experimental observation method in the field that works, until the leopards catch on to his presence and change their behavior. You would have us believe this puts a nail in the coffin of the original experimental procedure?

Clancie, study some basic science. For your own sake, please.
 
Bill,

Don't be an idiot.

Don't write about zoology. Address the problem with your counting method--that it can actually be used by cold readers to effectively disguise the cold reading process.

Or doesn't that design flaw bother you at all? (And, as for leopards...blah,blah...since you believe "mediums" are cold readers anyway, who's to say that all mediums other than JE haven't already thought of this idea and are using it already?)

The point is that your method can't be generally applied, and it therefore doesn't work.

Would you like to see the example again? :confused:
 
Thanz said:
More avoidance from Mr. Hoyt. You have not backed up your method with logic. My points go to the heart of your method, and you have yet to address them.
I have. In the original thread. I am not doing your homework for you.

As for how your method affects the results, I have demonstrated this at least twice. Your method counts 18 guesses at J for only 9 actual shots at a J hit. It counts 85 guesses when only 43 were made. By doubling both the observed J count and the total sample, you get a result that is not statistically accurate. Just as if you had counted 10 "heads" or 10 "tails" for every coin flip.
And I have responded to that. I just did again with the textbook quote that somehow you thought was funny. It was not. It was direct. It directly addresses 69dodge's error and it directly addresses your claim. The variance, sir, reduces. The observed values gravitate toward the mean. The effect of increasing the sample size is random and follows this general rule. It is not as you so badly misdescribe it.

Your coin example is utterly specious. Go back to the Hogg & Craig theorem. This is a random variable we are looking at. This is not algebra.


Dude, did you not see the paragraph immediately before this in which I state that your method makes assumptions in the count?
Yeah, you jerk. Your complaint amounts to saying "You can't count fruit dropped per square yard of that orchard. I have pear trees and apple trees. How can you count fruit when some are apples and some are pears?"

I'm sure you will disingenuously fail to get that point.

You never explained why JE's guesses are a Poisson process. If you have, and I have missed it, just point me to the post.
I'm not about to waste further time doing your research for you. Search for me using the phrase "poisson process."

Next, I am asking you very basic questions that do not need a stat book to answer. I am asking you to back up your experimental design with simple logic. It is abundantly clear that you are unable to do this. You don't seem able to answer a direct question. Do I need to create a LarsenList for you?

How about answering my very simple question about where you would bet your money? That doesn't require a stat textbook to answer. Why don't you answer it?
I have done this, sir. The problem here is multifaceted, but includes the basic fact that you have no understanding of either scientific method or inferential statistics. You are also one of the most intellectually lazy posters on this forum. Go back through the threads, you dolt.

Your "bet" is an inane red herring that doesn't warrant an answer. Stick to the freaking issues.
 
Originally posted by BillHoyt
I'm sorry, but this is also wrong.
Well, I think it's right. So there. :p
The calculation makes no such assumption. The calculation simply reports what is there: an observed value. In this test of significance, we only compare the observed value against the expected mean
Yes, but how do we know what mean to expect?

We assume the null hypothesis is true, in other words, that John Edward really is doing what he claims he's doing, and then we calculate, based on that assumption, the probability that we'd observe a value at least as extreme as the value we actually did observe. If the probability (the p-value) is sufficiently small, we may take that fact as providing evidence against the null hypothesis.

If we're trying to disprove his claim, and he doesn't claim that all his guesses are independent, then we shouldn't include in our null hypothesis the assumption of independence. And without that assumption, there's no reason to suppose that the number of 'J' guesses follows (even approximately) a Poisson distribution.
A failure to correctly state and test the null hypothesis doesn't turn the statistic non-Poisson. Did you say what you intended to say here?
I did. If you read my original post instead of your quote of it, it might be clearer.
I think you need to take a step back here and think about this claim. This is from Hogg & Craig's Introduction to Mathematical Statistics, third edition, page 168:

"Theorem 4: Let X<sub>1</sub>,...,X<sub>n</sub> denote random variables that have means m<sub>1</sub>,...,m<sub>n</sub> and variances s<sup>2</sup><sub>1</sub>,...,s<sup>2</sup><sub>n</sub>. Let p<sub>ij</sub>, i <> j, denote the correlation coefficient of X<sub>i</sub> and X<sub>j</sub> and let k<sub>1</sub>,...,<sub>n</sub> denote real constants. The mean and the variance of the linear function
Y = E<sub>1</sub><sup>n</sup>k<sub>i</sub>X<sub>i</sub>
are, respectively,
m<sub>Y</sub> = E<sub>1</sub><sup>n</sup>K<sub>i</sub>m<sub>i</sub>
and
s<sup>2</sup><sub>y</sub>= E<sub>1</sub><sup>n</sup>k<sub>i</sub><sup>2</sup>s<sup>2</sup><sub>i</sub> + 2EE<sub>i</sub><sub><</sub><sub>></sub><sub>j</sub>k<sub>i</sub>k<sub>j</sub>p<sub>ij</sub>s<sub>i</sub>s<sub>j</sub>"

In other words, the variance of a linear sum of random variables (regardless of dependence) is always less than or equal to the sum of the variances.
I guess the E's are supposed to represent summation signs, yes?

Your "In other words, ..." summary does not follow from the preceding formula.

I'm sure you can find somewhere in that book a statement like: If X is a random variable and c is a constant, the variance of cX is c<sup>2</sup> times the variance of X. That's pretty much all I used in my previous post.
 
69dodge said:
Well, I think it's right. So there. :p

Yes, but how do we know what mean to expect?

We assume the null hypothesis is true, in other words, that John Edward really is doing what he claims he's doing, and then we calculate, based on that assumption, the probability that we'd observe a value at least as extreme as the value we actually did observe. If the probability (the p-value) is sufficiently small, we may take that fact as providing evidence against the null hypothesis.
We don't assume. We count and test the hypothesis. The hypothesis is not an assumption. It is an assertion to be tested. The difference is significant.

If we're trying to disprove his claim, and he doesn't claim that all his guesses are independent, then we shouldn't include in our null hypothesis the assumption of independence. And without that assumption, there's no reason to suppose that the number of 'J' guesses follows (even approximately) a Poisson distribution.

I did. If you read my original post instead of your quote of it, it might be clearer.

I guess the E's are supposed to represent summation signs, yes?
We don't need to know mechanisms, actual or alleged, to characterize a Poisson process. I gave the definition in the other thread. Nobody had to ask gamma particles whether they were emitted independently or not. We observed and tested.

Your "In other words, ..." summary does not follow from the preceding formula.

I'm sure you can find somewhere in that book a statement like: If X is a random variable and c is a constant, the variance of cX is c<sup>2</sup> times the variance of X. That's pretty much all I used in my previous post.
We do not have a situation of cX. We have a situation of sampling. Under the assumption of one guess, one person, the guesses are forced to one per sitter, regardless of how many guesses are made. When we drop that assumption, we count each guess. That is not cX.

I am partly responsible for this misunderstanding because of my pedagogic use of the 3(A,B,C) example. It was an attempt to get people to see that all the letters were being counted in the same way. Thanz ran off with this, misunderstood what his online calculator was revealing to him, and then constructed the straw-man coin-flip example. There is no multiplying of a statistic here.
 
Bill,

Are you going to address my point or not? Again, it is that your counting method is ineffective at detecting cold reading. In fact, cold readers could use your premise to actually disguise what they are doing and appear to not be cold reading at all.

Doesn't that bother you at all?

How would they do this? It's simple. They make guesses of initials only for their high-frequency letter guesses and intersperse them with initials plus a string of name guesses--but only for low-frequency letters...

By doing this, obviously, they would be inflating the overall count while also effectively skewing your data and hiding the -actual- number of times they used high frequency letter guesses. And it wouldn't change their hit rate at all.

Not only would they still be cold reading (undetected by you), but your method would come up with some impressive stats showing no indication of cold reading at all.
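To make the scheme concrete, here is a toy simulation (Python; the letters, weights, and amount of padding are all invented):

```python
# A "cold reader" whose real guesses favor J and M pads only the rare
# initials with extra example names. Counting every spoken name hides the
# skew; counting one guess per initial preserves it.
import random
from collections import Counter

random.seed(2)
guesses = random.choices(["J", "M", "Q", "X"], weights=[6, 6, 1, 1], k=140)

one_per_guess = Counter(guesses)   # one count per actual guess
per_spoken_name = Counter()
for g in guesses:
    # pad each low-frequency initial with five example names
    per_spoken_name[g] += 1 if g in ("J", "M") else 6

print("one count per guess:", dict(one_per_guess))
print("per spoken name:    ", dict(per_spoken_name))
```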

If you find that your counting method gives you the result you want from 4 JE transcripts, but find that it doesn't detect cold reading patterns in other mediums--in fact, gives a misleading result (indicating NO cold reading when actually there was)....shouldn't you, as a "scientist", have a problem with that?
 
BillHoyt said:

I have. In the original thread. I am not doing your homework for you.
Sir, you simply have not. It is not about "doing my homework for me"; it is about backing up your claims. You have not done so. If you think I am wrong - prove it. Point me to a post. I'm sure you'd love to prove me wrong here. But you can't.

The observed values gravitate toward the mean. The effect of increasing the sample size is random and follows this general rule. It is not as you so badly misdescribe it.
You have not backed up this "random" claim of yours. He is not going to say "A J connection, like Alex".

Your coin example is utterly specious. Go back to the Hogg & Craig theorem. This is a random variable we are looking at. This is not algebra.
No, it isn't specious. It is the same as you sitting and recording coin flips done by others, and instead of counting the actual flips you count the number of times the flippers say something, like "Heads? Yeah, heads. Okay, that one was heads", and then you record 3 heads for the one flip.

Yeah, you jerk. Your complaint amounts to saying "You can't count fruit dropped per square yard of that orchard. I have pear trees and apple trees. How can you count fruit when some are apples and some are pears?"
No it doesn't. It is like me saying you shouldn't be including pears in your count of apples.

Or, if we want to go back to the dead birds in the field, it is me saying that you cannot count the bird that has been partially eaten and is now in 4 parts as 4 birds.

I am sure that you will fail to get the point, or just ignore it as usual.

I'm not about to waste further time doing your research for you. Search for me using the phrase "poisson process."
Can you point me to the post or not? You only posted a general description of Poisson; you never connected that process to JE or explained why JE should be considered a Poisson process.

I have done this, sir. The problem here is multifaceted, but includes the basic fact that you have no understanding of either scientific method or inferential statistics. You are also one of the most intellectually lazy posters on this forum. Go back through the threads, you dolt.
I have looked through the threads, and you simply have not backed up your counting method.

If we are trying to see if there is a mediumship process, shouldn't we try to honour that process as much as possible in the counting method? And doesn't it make much more logical sense that when JE is saying "A J connection, like John or Joe" he is looking for one J connection, just like he says? That he is making one guess for a J hit? That he wants the sitter to make one connection to the letter J, and not three?

At least try to make a logical argument that he is trying to get 3 J connections (as your method assumes he does) or admit that you cannot.

Your "bet" is an inane red herring that doesn't warrant an answer. Stick to the freaking issues.
It is not a red herring at all, sir, and your reluctance to address it speaks volumes. Your method equates the one reading with the three combined, when clearly they are not equivalent.
 
Thanz,

Do you understand the point I'm making to Bill? Can you see how his method could easily be defeated by any cold reader who uses a string of initials plus name guesses for low-frequency letters, but sticks to only initial guesses for the high-frequency ones (like 'J')? Cold readers could do this; it wouldn't affect the hit rate at all.

And there's no way his counting method would detect that pattern at all--in fact, his counting method would, instead, show results that were inconsistent with cold reading (unlike your method).

That is a huge problem (and stubbornly sticking with this counting method because he got the results he wanted from the JE readings clearly shows his bias). But, Bill-style, he doesn't seem willing to acknowledge this.
 
Clancie said:
Bill,

Are you going to address my point or not? Again, it is that your counting method is ineffective at detecting cold reading. In fact, cold readers could use your premise to actually disguise what they are doing and appear to not be cold reading at all.

Doesn't that bother you at all?

How would they do this? It's simple. They make guesses of initials only for their high-frequency letter guesses and intersperse them with initials plus a string of name guesses--but only for low-frequency letters...

By doing this, obviously, they would be inflating the overall count while also effectively skewing your data and hiding the -actual- number of times they used high frequency letter guesses. And it wouldn't change their hit rate at all.

Not only would they still be cold reading (undetected by you), but your method would come up with some impressive stats showing no indication of cold reading at all.

If you find that your counting method gives you the result you want from 4 JE transcripts, but find that it doesn't detect cold reading patterns in other mediums--in fact, gives a misleading result (indicating NO cold reading when actually there was)....shouldn't you, as a "scientist", have a problem with that?

Follow my zoological example. That someone or something can change behavior after the fact and make further uses of that method impossible doesn't alter the fact of what went before.
 
