The "Process" of John Edward

Posted by Bill Hoyt

Yes, she has. But her view doesn't help the problem with JE's guesses.

Actually, Bill, if you read pages 18-19, my count answers all those questions just fine. In fact, three of us used pretty much the same method and got pretty much the same results.

Your results were way off from what everyone else got--with a much more inflated number of overall "guesses" and an artificially inflated number of "J's".
Sometimes he gives multiple names, apparently identifying a single person.
This has been addressed. No problem. (And no need to count "I'm getting a 'J' like Joseph, John, Jim..." as four guesses of 'J'. It's one 'J' guess, Bill, one 'J' name matched with one person.)
Posted by Bill Hoyt

Sometimes he gives a single name and clearly states that he is identifying two people.

In which case, as someone else said, we count it as two guesses on 'J'. (It only happened once in these transcripts, btw, not a big deal--unless he says, "I've got two men with 'J' names...like Joseph, John, Jim..." and we count it as 8 separate guesses for 'J', which would be your method.) :eek:
Posted by Bill Hoyt

Sometimes he says "nickname" for a person and ends up identifying a dog's name (not "nickname").

He got "spice name" symbolically, not phonetically. So by my rules...it's out of the count. Consistent. Simple.
 
I had presumed that Bill would tally the letters and guesses fairly and objectively. I do understand his point that he IS trying to be objective by including all names as separate guesses. Thus, for a non-"J" name, the multiple guesses work against supporting his "J" analysis results.

But I don't think he needed to do that. I think he could safely assume that each string of names was for one person, not new guesses.

And according to what others have said here, this simple understanding would change the results dramatically.

Interesting.

Lurker
 
Clancie said:
He got "spice name" symbolically, not phonetically. So by my rules...it's out of the count. Consistent. Simple. [/B]

My turn to agree with Clancie on this one. If we're counting name-to-letter guesses as compared to the census, then this guess shouldn't be considered, because he didn't specify a letter.
Now if we have a census of common spice names we could compare it to that. ;)

Also note that this shouldn't be considered a hit either. It's obvious to me that JE was fishing for this one. (Who am I kidding? He fishes for all of them.:D )
 
BillHoyt said:

Yes, she has. But her view doesn't help the problem with JE's guesses. Sometimes he gives multiple names, apparently identifying a single person. Sometimes he gives a single name and clearly states that he is identifying two people. Sometimes he says "nickname" for a person and ends up identifying a dog's name (not "nickname").
All of these situations are easily accounted for in the method I have proposed.

One person, one letter = one guess, regardless of how many specific names he guesses. The net is only as wide as one letter.

Two persons, one letter = two guesses.

One person, two letters = two guesses. He is trying to widen the net to get a hit, like a cold reader does.

Nicknames - not part of our control data, therefore don't count. Treat this guess like a guess of a feather, or a calendar, or whatever.

My solution is to make the fewest assumptions about his process or intent and to simply count the separate guesses. Where he gives a combination of name and its corresponding nickname, I collapse those into a single guess.
The problem is, they are NOT separate guesses. "Bob or Bill or some B name" is one guess, with the net as wide as "B", and a couple of specifics hoping for a better hit.

Your method equates two readings where he says "I am getting a J connection" with one reading where he states "I am getting a 'John' or 'J' connection here". I do not agree that those are the same situation. My method counts them differently, yours counts them the same. And that is why your method is inaccurate.
Get your facts straight. I didn't start the ball rolling. I have defended the accuracy.
By "Get the ball rolling" I meant that you were the first to post your counting method. I suppose Kerberos did a count before you. Not that it really matters.

Also, you have not defended its accuracy. You have simply defended its ease of applicability to multiple situations. However, by equating these multiple situations as explained above, you are making your count inaccurate.

In this case, JE said "nickname" and he said "name". It turned out to be a dog, not a person. It turned out to be the dog's name, not its nickname. JE specifically said "salty", "pepper" and "cinnamon" as the name guesses.
As nickname guesses, not part of what is in our control data.

1. The name guesses do not have to be in the census.
Why not? The census is our control data. If nicknames don't appear there, they would not be counted, and we don't know what kind of distribution of letters of nicknames there might be. If you include nicknames, why not place names or object names as well?
2. "Pepper" and "Cinnamon" are both people's names. In the U.S., they are usually girl's names. Although "Pepper" has also been used as a man's nickname.
So what? He said that he was bringing through a NICKNAME. Something that would not be counted in the census, and therefore not part of the control data. The person with the spicy nickname would have a regular name as well, and that is the name that would be counted.

Also, counting a name guessed in this manner does not fit in with our overall theory. Our theory is that he will guess more popular letters at the expense of less popular letters. Here, he is not guessing a letter at all, rather, he is guessing some other sort of grouping.
3. The dog's actual name, "Ginger", was not counted as a JE guess and is also a female (human) name.
That's fine. Especially considering, as I think Claus pointed out, it was supplied by the sitter not JE.

Now, I have explained my counting methods several times, including why I think it is more accurate. If you disagree, I would like to know why. What, specifically, makes my counting method inaccurate? What makes your counting method more accurate than mine?
 
Thanz said:
All of these situations are easily accounted for in the method I have proposed.
Really?

One person, one letter = one guess, regardless of how many specific names he guesses. The net is only as wide as one letter.

Two persons, one letter = two guesses.

One person, two letters = two guesses. He is trying to widen the net to get a hit, like a cold reader does.
Nice until you realize the guesses don't always correspond to people. And then you realize he doesn't specify how many people he is calling for, except for that one time he specifically said "two". Then you look closer and see him accepting dog's names. And then last names. Read on...

Nicknames - not part of our control data, therefore don't count. Treat this guess like a guess of a feather, or a calendar, or whatever.
Out go all the "C", "K", "J", "M", "R", etc. guesses, too. None of those are in the census data.

The experiment does not require the name be in the census data. The census data simply sets up our expected mus for each letter bin.

The problem is, they are NOT separate guesses. "Bob or Bill or some B name" is one guess, with the net as wide as "B", and a couple of specifics hoping for a better hit.
Except when it isn't. When it refers to a last name. Or a dog's name. Or to two people. Or...

Your method equates two readings where he says "I am getting a J connection" with one reading where he states "I am getting a 'John' or 'J' connection here". I do not agree that those are the same situation. My method counts them differently, yours counts them the same. And that is why your method is inaccurate.
Huh?
By "Get the ball rolling" I meant that you were the first to post your counting method. I suppose Kerberos did a count before you. Not that it really matters.
Really? Read Clancie's claims. The previous results also refuted the null hypothesis.

Also, you have not defended its accuracy. You have simply defended its ease of applicability to multiple situations. However, by equating these multiple situations as explained above, you are making your count inaccurate.
I have said before "ease" has nothing to do with it. It has to do with counting consistency. I set the counting rules to match JE's actions.

As nickname guesses, not part of what is in our control data.
You need to understand the experiment is based on letter-bins. The census data sets up the expected mus. Period.
 
BillHoyt said:
Yep.
Nice until you realize the guesses don't always correspond to people. And then you realize he doesn't specify how many people he is calling for, except for that one time he specifically said "two". Then you look closer and see him accepting dog's names. And then last names. Read on...
When he makes them they do. When he says "I'm getting an "R" connection here..." he is guessing a name. It makes NO DIFFERENCE whatsoever how, or if, the sitter validates. We are counting his guesses, not his hits. So unless he specifies that he is getting a nickname, a dog, or a last name, we can safely assume that he is guessing a normal first name - as that is what he usually does. And if he specifically says "two" it is two guesses. If he doesn't, he is hoping for one hit. It is one guess.

Out go all the "C", "K", "J", "M", "R", etc. guesses, too. None of those are in the census data.
Of course they are in the census data. What are you talking about? How else do we get 13.36% for J if this information is not in the census data? They count all the names. They don't count any nicknames. Get it?

The experiment does not require the name be in the census data. The census data simply sets up our expected mus for each letter bin.
That's right, it does set up our expected mus. But we DON'T have any expected mus for NICKNAMES, as the census doesn't count them. Therefore, we shouldn't count them.
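
To spell that out, here is a toy sketch (the 13.36% figure is the census share for "J" cited above; the total and the variable names are illustrative only):

# Toy sketch: a census share converts a total guess count into an
# expected mu for that letter bin. Nicknames have no census share,
# so there is nothing to convert - no mu, no bin, no count.
CENSUS_SHARE = {"J": 0.1336}          # per-letter first-name shares
total_letter_guesses = 85             # illustrative total

mu_J = total_letter_guesses * CENSUS_SHARE["J"]   # about 11.36
nickname_share = CENSUS_SHARE.get("nickname")     # None: no basis for a mu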

Except when it isn't. When it refers to a last name. Or a dog's name. Or to two people. Or...
Again, we are counting guesses, not hits. In this analysis, the hits are not relevant. Therefore, when he says "J connection" we count a guess. We still count a guess if the sitter validates it as a last name instead of a first name. What is important is the guess, not the sitter's response.

Let's make this a concrete example:

Reading 1:
JE: I am getting a "J" connection here.
Sitter: J?
JE: Yes, a "J" - like John, or Joe
Sitter: I had an uncle Joe....

Thanz: 1 J
BillHoyt: 3? 4? J guesses?

Reading 2
JE: I am getting a "J" connection..
Sitter: My grandfather was John

Thanz: 1 J
BillHoyt: 1 J

Reading 3:
JE: I am getting a "Jim" connection here...
Sitter: Nope, I don't know any Jim
JE: What is the Canada connection?
Sitter: Blah blah

Thanz: 1 J
BillHoyt: 1 J

Reading 4
JE: I am sensing an older female
Sitter: My Mother has passed
JE: Was her name "Jennifer"?
Sitter: no, it was Roberta

Thanz: 1 J
BillHoyt: 1 J

Now, here is my problem with your counting method. In your method, reading 1 has as much weight as readings 2, 3, and 4 combined. However, in all cases, he is trying to make one J connection. Remember, we are trying to count how many times he will guess a certain letter, for cold reading purposes. If we have 3 separate readings (2, 3, 4) in which he makes a "J" guess, that is much different than the one reading with the multiple names. That distinction is lost in your method. My method counts all of them equally.
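
To put the two rules in code, here is a minimal sketch (the data layout and function names are illustrative, not anyone's actual tally script) applied to the four readings above:

# Each reading lists the separate letter/name utterances JE made
# while fishing for a single "J" person, per the examples above.
readings = {
    1: ["J", "John", "Joe"],   # "a 'J' - like John, or Joe"
    2: ["J"],                  # bare "J connection"
    3: ["Jim"],                # one specific name
    4: ["Jennifer"],           # one specific name
}

def thanz_count(utterances):
    # One person, one letter = one guess, however many names follow.
    return 1 if utterances else 0

def bill_count(utterances):
    # Each separate letter or name utterance tallied on its own
    # (my reading of Bill's rule; the thread shows "3? 4?" for Reading 1).
    return len(utterances)

for n, u in readings.items():
    print(f"Reading {n}: Thanz = {thanz_count(u)}, Bill = {bill_count(u)}")

# Reading 1 alone scores 3 under the second rule - the same weight as
# Readings 2, 3, and 4 combined, which is exactly the objection below.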

Really? Read Clancie's claims. The previous results also refuted the null hypothesis.
I am not sure what you mean by "read Clancie's claims".

Also, the previous results did not refute the null hypothesis, as I have already posted. Kerberos did a different analysis on the data. But if we take HIS raw counting numbers, and apply YOUR hypothesis to them, the null hypothesis cannot be rejected. The same thing happens with my count. The only count that rejects the null hypothesis is your flawed count.

I have said before "ease" has nothing to do with it. It has to do with counting consistency. I set the counting rules to match JE's actions.
My rules are just as consistent, and more accurate for our purposes.

You need to understand the experiment is based on letter-bins. The census data sets up the expected mus. Period.
I understand that perfectly. Nicknames, as they are not part of the census, are not part of setting up those expected mus. We have no idea what the expected mus would be for nicknames as opposed to proper names, so we should not count them.
 
Thanz said:
Also, the previous results did not refute the null hypothesis, as I have already posted. Kerberos did a different analysis on the data. But if we take HIS raw counting numbers, and apply YOUR hypothesis to them, the null hypothesis cannot be rejected. The same thing happens with my count. The only count that rejects the null hypothesis is your flawed count.

Thanz,

What? Sorry, this is so far from right, you'll need a plane ticket to get back. The null hypothesis is different from the experimental method. Do you understand that? Do you understand that the Kerberos null hypothesis and mine are the same?
 
BillHoyt said:

What? Sorry, this is so far from right, you'll need a plane ticket to get back. The null hypothesis is different from the experimental method. Do you understand that? Do you understand that the Kerberos null hypothesis and mine are the same?
Huh? Kerberos did an analysis that combined a bunch of the less frequently used letters. In order to do this, he had to count the number of guesses for each letter. His count was 14 J guesses out of a total of 78 (page 5 of this thread).

Your analysis was only of the letter J. You also had to count the number of letter guesses, but strictly speaking you did not have to keep separate totals for any letter other than J. Your count was 18 J guesses out of a total of 85. You used Poisson, and found that the probability of >= 18 was about .03, which rejected the null hypothesis.

If we apply Poisson to the numbers in Kerberos' count, with an expected count of 10.42 and an actual count of 14, the probability of >= 14 is .168, which means that the null hypothesis cannot be rejected. I already posted this on page 19.

I also posted my count: 9 of 43, p = .128 - no rejection of the null hypothesis.

I am simply applying your analysis to data collected by others, as I see your data collection as inaccurate. Is there some reason why I should NOT be able to apply your analysis to the raw data collected by Kerberos? He was counting the same thing - letter guesses. He just wanted to do a different analysis of them. That doesn't change the raw data he collected.
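
Anyone who wants to check those tail probabilities can do so in a few lines (scipy assumed; the counts are the ones posted in this thread, and each mu comes from the 13.36% census share for "J"):

from scipy.stats import poisson

P_J = 0.1336                     # census share of "J" first names

counts = {                       # (J guesses, total letter guesses)
    "BillHoyt": (18, 85),
    "Kerberos": (14, 78),
    "Thanz":    (9, 43),
}

for who, (j, total) in counts.items():
    mu = P_J * total             # expected J guesses under the null
    p = poisson.sf(j - 1, mu)    # one-tailed P(X >= j)
    verdict = "reject" if p < 0.05 else "cannot reject"
    print(f"{who}: mu = {mu:.2f}, P(X >= {j}) = {p:.3f} -> {verdict}")

# Kerberos: mu = 10.42, p = .168 -> cannot reject, as posted above
# Thanz:    mu =  5.74, p = .128 -> cannot reject
# BillHoyt: mu = 11.36, p comes out nearer .04 with this mu; the thread
# quotes about .03, so the mu Bill actually used may have differed slightly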
 
Thanz,

Take II:

The null hypothesis is different from the experimental method. Do you understand that? Do you understand that the Kerberos null hypothesis and mine are the same?
 
BillHoyt said:
Thanz,

Take II:

The null hypothesis is different from the experimental method. Do you understand that? Do you understand that the Kerberos null hypothesis and mine are the same?
Your reply was very quick, so I do not know if you have read my reply or not.

Quite simply, Kerberos did a different analysis with different control data (juninho's member distribution). I do not know what stat tools he used to come up with his analysis. My gut feeling (that I have expressed many times) is that the sample is too small to do the kind of combining that he has done.

However, I do not want to get into a debate regarding whether his method is valid or not. We got into this by me saying there was not enough data, and you insisted that there was - depending on what test you want to do. You then proposed a test that you insist has meaning to determine whether JE is cold reading. That test involves comparing the number of times that the letter J is guessed by JE to the general population proportion of J guesses. You were very specific in your formation of the hypothesis and null hypothesis:

I would frame the hypothesis that JE is cold-reading and his "J" guesses will show this by being significantly more frequent than chance alone would dictate. The next step is to turn this hypothesis on its head. This is called the "null hypothesis", and it would look like this: JE is not cold-reading and his "J" guesses will not be significantly higher than chance alone would dictate. Then I would select the significance level, probably .05, and look up the one-tailed test for significance. Why one-tailed? Because of how I framed the null hypothesis: I'm only interested in how unlikely it is to have had this high a number assuming his guesses were really random.
It seems to me that your null hypothesis is very specific, and not the same as Kerberos'.

What I have done is take YOUR hypothesis and null hypothesis, which you proposed as having real meaning for our purposes, and apply your preferred statistical tool (Poisson) to data collected by others. And we have seen the results. The only data that rejects YOUR specific null hypothesis is the flawed data you collected.
 
Thanz said:

Your reply was very quick, so I do not know if you have read my reply or not.

Thanz,

The essential hypothesis here is that JE is cold reading. The choice of Js or Ks or groups of low-frequency initials is not integral to the hypothesis. How we choose to do the comparison to corroborate or refute the null hypothesis does not alter the null hypothesis.

Kerberos' approach rejected the null hypothesis of cold reading. My approach rejected the null hypothesis of cold reading.


:rolleyes:
 
Take III:

The null hypothesis is different from the experimental method. Do you understand that? Do you understand that the Kerberos null hypothesis and mine are the same?
 
BillHoyt said:
Kerberos' approach rejected the null hypothesis of cold reading. My approach rejected the null hypothesis of cold reading.
Kerberos' approach, which hasn't really been analytically discussed here (we don't know what stats tools he used, or whether they were appropriate), rejected the null hypothesis of cold reading based on DIFFERENT CONTROL DATA.

You were very specific in your formulation of the null hypothesis. It was italicized. You kept repeating that your analysis just rejected the null hypothesis. It did not rule out any other factors. It did not tell us anything about rare letters. It just told us about the J.

Further, your analysis only rejects the null hypothesis of cold reading when your flawed counting method is used to collect the data. How do you keep missing this point? Your data is flawed, and therefore we cannot trust your result. When ANY OTHER count posted here is used, your analytical approach DOES NOT reject the null hypothesis. This is the point I am making, and you keep avoiding. It doesn't matter what Kerberos test says - we are talking about your very specific null hypothesis.
 
BillHoyt said:
Take III:

The null hypothesis is different from the experimental method. Do you understand that? Do you understand that the Kerberos null hypothesis and mine are the same?
As I have tried to point out, ad nauseam, it simply does not matter. Your question is irrelevant. It makes no difference whether Kerberos even had a null hypothesis, whether it was the same or different. I am just using his raw counting data. Get it? Understand? Will you now address the actual substantive issues here?

To recap:
1. Your counting method is inaccurate, for the reasons posted above and not addressed by you.

2. If your raw data is incorrect, your analysis cannot be trusted to be correct.

3. If we use the raw data collected by anyone else, and apply your test to it, the null hypothesis cannot be rejected.
 
Thanz said:

Kerberos' approach, which hasn't really been analytically discussed here (we don't know what stats tools he used, or whether they were appropriate), rejected the null hypothesis of cold reading based on DIFFERENT CONTROL DATA.

You were very specific in your formulation of the null hypothesis. It was italicized. You kept repeating that your analysis just rejected the null hypothesis. It did not rule out any other factors. It did not tell us anything about rare letters. It just told us about the J.

Further, your analysis only rejects the null hypothesis of cold reading when your flawed counting method is used to collect the data. How do you keep missing this point? Your data is flawed, and therefore we cannot trust your result. When ANY OTHER count posted here is used, your analytical approach DOES NOT reject the null hypothesis. This is the point I am making, and you keep avoiding. It doesn't matter what Kerberos test says - we are talking about your very specific null hypothesis.

Any Newfs in your area? They usually respond to the sounds of arms flailing on the water.

The data have nothing to do with the hypothesis. The control data have nothing to do with the hypothesis. The specific analyses have nothing to do with the hypotheses.

This is fundamental to understanding experimental design.
 
BillHoyt said:
Take III:

The null hypothesis is different from the experimental method. Do you understand that? Do you understand that the Kerberos null hypothesis and mine are the same?

Does this mean that neither you nor Kerberos were doing the "experimental method"? And if you and Kerberos are doing the same "null hypothesis", why two different answers?
Kerberos = .168
BillHoyt = .03
This seems like a big difference to me. Could it be your counting method?
:confused:
 
Thanz said:

As I have tried to point out, ad nauseam, it simply does not matter. Your question is irrelevant. It makes no difference whether Kerberos even had a null hypothesis, whether it was the same or different. I am just using his raw counting data. Get it? Understand? Will you now address the actual substantive issues here?

We cannot proceed until you get some fundamentals down. Clancie and you have both claimed that only my approach rejected the null hypothesis. You are both quite wrong.

Do you understand that we have two different approaches to the problem, both of which have refuted the null hypothesis?
 
BNiles said:


Does this mean that neither you nor Kerberos were doing the "experimental method"? And if you and Kerberos are doing the same "null hypothesis", why two different answers?
Kerberos = .168
BillHoyt = .03
This seems like a big difference to me. Could it be your counting method?
:confused:

The data sets are different.

The analytical approach is different.

The hypothesis is the same. The null hypothesis is the same.

Are we going to need two Newfs here?
 
BillHoyt said:


The data sets are different.

The analytical approach is different.

The hypothesis is the same. The null hypothesis is the same.

Are we going to need two Newfs here?

What's a Newf? :D
 
BNiles said:


What's a Newf? :D

A dog. A Newfoundland, used for water rescue. Thanz is drowning. I was hoping you weren't joining him in that activity.

Cheers,
 
