The "Process" of John Edward

Walter Wayne said:
Helen/Ellen and C or K both are in the "not J" bin. Spice names are more difficult, but since it was not a guess at the initial, it probably doesn't belong within our sample.

It would be more difficult if he gave a J or G reading, in which case it clearly belongs with the test but does not fit easily into the "J" bin or the "not J" bin.

Walt

Yes, they are in the "not J" bin, but they count in the denominator and, actually, dilute the significance of the "J" counts. JE enumerated 3 spice names, and I counted each one into its appropriate initial bin.

Cheers,
 
Clancie said:
I agree, but fortunately there was only one example like that in these readings, so I just counted it as a "J", since he did mention the "J or G" sound, and we're not tallying "G".
Yes, we ARE tallying "G"s, Clancie. We are either tallying them as "G"s or tallying them as "not J"s, depending on the approach. Do you not see that, by doing so, I actually diluted the significance of the "J"s?
 
Posted by Bill Hoyt

Yes, we ARE tallying "G"s, Clancie. We are either tallying them as "G"s or tallying them as "not J"s, depending on the approach. Do you not see that, by doing so, I actually diluted the significance of the "J"s?
Bill, There was one instance where JE did that (compared with many where he listed a long string of 'J' names). No big impact in how you counted the 'G', whatever you decided to do with it.

(But, just out of curiosity, you counted it as 1 "G" guess and 1 "J" guess, right? That would be consistent with your method.)

However, again, we're only talking about one example of one "J/G" guess. It's the other choices you made, Bill, that skewed your results so much.
 
BillHoyt said:

Very good, Clancie. Now what happens as that denominator increases? What happens as all those "R", "Ronnie", "Reginald" and "a D name like Danny or David" guesses get tallied? What is the number fed into the analysis, Clancie, and what happens?

The denominator goes up, lowering the "J" frequency and threatening to wash it out to insignificance. I applied the technique equally to "J"s and to "D"s and "M"s and "R"s, "B"s and all the letters. I did not distort the data as you keep trying to insinuate. I tried to resolve the problem of JE's blathering all over the place.
All this does is show that both of your numbers are incorrect - the number of J guesses AND the total number of guesses.

I don't see how incorrectly counting the total number of guesses somehow makes up for incorrectly counting the number of J guesses. If anything, it just makes your data more suspect.
 
Clancie said:

Bill, There was one instance where JE did that (compared with many where he listed a long string of 'J' names). No big impact in how you counted the 'G', whatever you decided to do with it.

(But, just out of curiosity, you counted it as 1 "G" guess and 1 "J" guess, right? That would be consistent with your method.)

However, again, we're only talking about one example of one "J/G" guess. It's the other choices you made, Bill, that skewed your results so much.
Oh, you mean like the 2 "C"s and 1 "S" where you would have done... what? Or the three "T"s? Or two "R"s? Or two "L"s? How about the four "D"s and three "M"s?

Read the transcripts closely, Clancie. There are at least THREE instances in which JE guessed "J/G".

Think about my counting method carefully. What is the skew if I am counting "J"s and "not J"s by the same rule? One count is in the numerator and the other is in the denominator. Now start adding up the multi-Bs, multi-Rs, multi-Ds, multi-Ts, multi-Ls and on and on...

[edited to correct the "J/G" count to at least 3 -bh]
 
One more thing - Kerberos' count was 14 J hits out of 78 guesses. If I pop those numbers into the ol' Poisson calculator, I get a probability for >= 14 of .168, which also means that we cannot reject the null hypothesis.

Interesting that the only count that actually rejects the null hypothesis is Bill's.
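That .168 can be double-checked in a few lines of Python. This is a sketch, not anyone's official analysis, and it assumes the 0.1336 census frequency for "J" that Walt quotes later in the thread (so the expected mean is 0.1336 × 78 ≈ 10.4):

```python
from math import exp, log, lgamma

def poisson_tail(k, m):
    """One-tailed P(X >= k) for a Poisson variable with mean m."""
    return 1.0 - sum(exp(-m + i * log(m) - lgamma(i + 1)) for i in range(k))

p, guesses, hits = 0.1336, 78, 14   # p is the assumed census "J" frequency
m = p * guesses                     # expected "J" count, about 10.4
print(round(poisson_tail(hits, m), 3))   # 0.168, matching Kerberos' numbers
```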
 
Thanz said:
One more thing - Kerberos' count was 14 J hits out of 78 guesses. If I pop those numbers into the ol' Poisson calculator, I get a probability for >= 14 of .168, which also means that we cannot reject the null hypothesis.

Interesting that the only count that actually rejects the null hypothesis is Bill's.
Thanz,

Perhaps you've forgotten that Kerberos' analysis rejected his null hypothesis? My analysis excludes data his included, and my altered analysis rejected my null hypothesis.
 
Thanz said:

All this does is show that both of your numbers are incorrect - the number of J guesses AND the total number of guesses.

I don't see how incorrectly counting the total number of guesses somehow makes up for incorrectly counting the number of J guesses. If anything, it just makes your data more suspect.

I analyzed the data, saw numerous counting problems and decided on a methodological solution. The point you are ignoring is this: the method was applied equally, regardless of the forename initial involved.
 
Lurker moved his sad saga over to another thread. Here is my post in response to his tortured attempt to understand why N has nothing to do with the Poisson distribution!

Lurker,

The Poisson is not parameterized on N. Here is the equation:

P(a) = (m^a * e^-m) / a!

a is the count value for which we want the probability. m is the mean. No "n". Poisson doesn't care about n. If we look at an acre of land and find 12 dead crows, what was the N? Who knows, who cares? Poisson is not parameterized on N.

There was no error in Poisson that you uncovered. The error was in the wrongheadedness of your analysis.

Let us take a couple of runs with Poisson here to get the point across.

Let us say the average number of dead crows per acre is known to be 5. Let us go to Montana, set up 1-acre grids, and count in each grid. We see one grid with 9 dead crows and wonder what is the one-tailed probability of that high (or higher) a count. We use a mean of 5 and look at the cdf for >= 9. And we get .068.

Now let's go to counting initials. We pick an initial that has a frequency of .5. We count 9 such initials in a field of 10. We use a mean of 5 and look at the cdf for >= 9. And we get .068.

Now let's pick an initial that has a frequency of .05. We count 9 such initials in a field of 100. We use a mean of 5 and look at the cdf for >= 9. We get .068.

In the first case, there was no N in any part of our ciphering. Same result as the second case in which we only used N to figure the expected mean. Same result as the third case in which we used a very different N to figure the expected mean.

N, sir, was incidental. Poisson is not affected by it one iota.
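The three runs above can be reproduced in a few lines; a minimal Python sketch, assuming nothing beyond the mean-of-5 setup in the examples:

```python
from math import exp, log, lgamma

def poisson_tail(k, m):
    """One-tailed P(X >= k) for a Poisson variable with mean m."""
    return 1.0 - sum(exp(-m + i * log(m) - lgamma(i + 1)) for i in range(k))

# All three scenarios reduce to the same mean, m = 5; N never enters the tail.
crows      = poisson_tail(9, 5)            # 9+ dead crows, mean 5 per acre
initials_a = poisson_tail(9, 0.5 * 10)     # freq .5 in a field of 10
initials_b = poisson_tail(9, 0.05 * 100)   # freq .05 in a field of 100
print(round(crows, 3))                     # 0.068 in every case
assert abs(crows - initials_a) < 1e-12 and abs(crows - initials_b) < 1e-12
```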

I hope you will spare yourself further embarrassment. This was very sad.
 
BillHoyt said:
I analyzed the data, saw numerous counting problems and decided on a methodological solution. The point you are ignoring is this: the method was applied equally, regardless of the forename initial involved.
I am not ignoring the fact that the method was applied equally to all initials. I am saying that the method is incorrect and leads to incorrect counts for both the J value AND the denominator. Having both numbers wrong leads to a wrong analysis; it just means that ALL of your data is incorrect. I was not accusing you of bias in the data, as if you would count only extra "J" guesses. I am saying that your methodology was flawed from the beginning.
 
BillHoyt said:

Thanz,

Perhaps you've forgotten that Kerberos' analysis rejected his null hypothesis? My analysis excludes data his included, and my altered analysis rejected my null hypothesis.
I have not forgotten that at all. Kerberos was focused on the other end of the spectrum - less common initials.

What I did was take his raw counting data and apply YOUR hypothesis to it. And what I found was that YOUR null hypothesis could not be rejected if we used his counts.

It could also not be rejected if we used my counting methods.

The only analysis which could be used to reject your null hypothesis is the one that you did based on a flawed counting methodology.

I would like to know why you think it is appropriate to count "And they're also talking about somebody who would be known as either Richard or Rich, because a big R-connection that comes up connected to you" as more than one guess. In my count, I counted this as one R guess. From your description, I gather you counted it as 3 R guesses.
 
Still exploring the topic of Poisson.

Bill, let's take a step back and try a simple example. Let's say the census shows that names starting with "J" cover 90% of the population. Now let us say that in the LKL transcript JE had a total of 85 guesses, of which 80 were "J" guesses.

Applying Poisson just as you did previously arrives at this table, a portion of which I have excerpted:

Count CDF
75 0.461976422
76 0.507613296
77 0.552953827
78 0.597422424
79 0.640483788
80 0.681661216
81 0.72055101
82 0.756832342
83 0.790272365
84 0.820726672
85 0.848135548
86 0.872516699
87 0.893955297
88 0.912592261
89 0.928611673


Am I reading this incorrectly? My interpretation has Poisson saying there is only an 87% chance of getting fewer than 86 "J" guesses in 85 tries, and that there is a 13% chance of getting MORE than 85 "J" guesses in 85 tries.

Is my interpretation correct? If not, where is it wrong? Thanks for your time.

BTW, results from EXCEL. Try it yourself. It is remarkably easy to set up.

Lurker
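Lurker's table can in fact be reproduced without Excel; a short Python sketch of the same CDF, using his mean of 0.9 × 85 = 76.5 (the equivalent of Excel's POISSON(count, 76.5, TRUE)):

```python
from math import exp, log, lgamma

def poisson_cdf(k, m):
    """P(X <= k) for a Poisson variable with mean m."""
    # Sum the pmf in log space to avoid overflow at large counts.
    return sum(exp(-m + i * log(m) - lgamma(i + 1)) for i in range(k + 1))

m = 0.9 * 85   # expected "J" count under the 90% assumption: 76.5
for count in range(75, 90):
    print(count, round(poisson_cdf(count, m), 9))
# e.g. the row for 85 comes out to 0.848135548, matching the table
```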
 
Lurker said:
Am I reading this incorrectly? My interpretation has Poisson saying there is only an 87% chance of getting fewer than 86 "J" guesses in 85 tries, and that there is a 13% chance of getting MORE than 85 "J" guesses in 85 tries.

Your error is in blue. Sadly, I have pointed this out so many times I am now blue in the face.
 
Let me put you out of your misery, Lurker. There is a 13% chance that you would count 85 "J"s. Period. No qualification on "number of tries". Are you with me yet?
 
Lurker, with prob of J at 90% it doesn't make sense, but for N=85 and prob=0.1336 we get the following for the binomial and Poisson distributions. Note the p=0.05 cutoff is at 18 for both.

Walt
 
Bill:

Thanks for the response. Recall that in our example we took an 85-person sample, just like you did for the "J" analysis. What is the probability, according to a Poisson analysis, that we would get 85 or fewer "J"s using p=0.9?

Cheers,

Lurker
 
Walter Wayne said:
Lurker, with prob of J at 90% it doesn't make sense, but for N=85 and prob=0.1336 we get the following for the binomial and Poisson distributions. Note the p=0.05 cutoff is at 18 for both.

Walt

Walt, thanks! This is what I was interested in.

BTW, why doesn't it make sense when p=0.9 in the example I provided? Could it be because Poisson is not a good approximation at that level of p? THAT is what I was trying to get through to Bill, which he refused to acknowledge.

Lurker
 
BillHoyt said:
Let me put you out of your misery, Lurker. There is a 13% chance that you would count 85 "J"s. Period. No qualification on "number of tries". Are you with me yet?
What?

For the Poisson distribution, m = pN, and thus the tail moves out as N increases. It would be a horrible tool if it didn't, because if p=0.05 occurred at, let's say, 18, then one could just count guesses until one got to 18 and declare success.

Walt
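Walt's cutoff can be checked numerically. This sketch assumes his N=85 and prob=0.1336 figures and finds, for each distribution, the smallest count whose upper-tail probability drops below 0.05:

```python
from math import comb, exp, log, lgamma

def binom_tail(k, n, p):
    """Exact binomial P(X >= k) out of n trials with success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def poisson_tail(k, m):
    """Poisson P(X >= k) with mean m = p*N."""
    return 1.0 - sum(exp(-m + i * log(m) - lgamma(i + 1)) for i in range(k))

n, p = 85, 0.1336
m = n * p   # about 11.4 expected hits

# Smallest count whose upper tail falls below 0.05, per model:
crit_binom = next(k for k in range(n + 1) if binom_tail(k, n, p) < 0.05)
crit_poisson = next(k for k in range(10 * n) if poisson_tail(k, m) < 0.05)
print(crit_binom, crit_poisson)   # 18 for both, matching Walt's cutoff
```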
 
Walter Wayne said:
What?

For the Poisson distribution, m = pN, and thus the tail moves out as N increases. It would be a horrible tool if it didn't, because if p=0.05 occurred at, let's say, 18, then one could just count guesses until one got to 18 and declare success.

Walt

Why is it that Tai Chi, Lurker, and Wayne all see that N is part of the Poisson distribution, yet the self-styled expert, Bill Hoyt, does not?

This seems fairly self-evident.

Lurker
 
Walter Wayne said:
What?

For the Poisson distribution, m = pN, and thus the tail moves out as N increases. It would be a horrible tool if it didn't, because if p=0.05 occurred at, let's say, 18, then one could just count guesses until one got to 18 and declare success.

Walt
Walt,

Ask yourself this: what was N for the crows? (See my post to Lurker.)

Cheers,
 
