Interesting Ian
T'ai Chi said:
This from someone who failed to answer every stat. question I put forth to him.
Be productive and go win your own Tottle award again.
LOL
T'ai Chi said:
No, I think it is because your observed statistics (i.e., your letter-J counts) are high.
BillHoyt said:
Oh. Was the method not consistently applied?
Was it not equally applied to all twenty-six initials?
Are those initials not random variables?
Originally posted by BillHoyt
Drop the assumption about a feeble spirit trying desperately to mumble his name into JE's ears.

If that's what we're trying to disprove, we can't really drop it. Otherwise, we might end up disproving something entirely unrelated. What's the use of that?

Originally posted by BillHoyt
Count the name guesses, and test the null hypothesis that they match the distribution from the census data. Name guesses. Name guesses.

It turns into a non-Poisson distribution.
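For what it's worth, the census comparison being argued about can be sketched as a standard goodness-of-fit test. The sketch below is purely illustrative: the counts and census frequencies are made up, not taken from any JE transcript, and a Monte Carlo p-value stands in for a chi-square table lookup.

```python
import random

# Hypothetical data: observed counts of first-initial guesses (made up),
# and census relative frequencies for the same categories (also made up).
observed = {"J": 18, "M": 9, "S": 7, "other": 9}
census_freq = {"J": 0.10, "M": 0.12, "S": 0.09, "other": 0.69}

n = sum(observed.values())
letters = list(observed)
expected = {c: census_freq[c] * n for c in letters}

def chi_sq(counts):
    """Pearson chi-square statistic against the census-based expectations."""
    return sum((counts[c] - expected[c]) ** 2 / expected[c] for c in letters)

stat = chi_sq(observed)

# Monte Carlo p-value: how often does sampling n guesses straight from the
# census frequencies produce a statistic at least this extreme?
random.seed(0)
trials = 10_000
hits = 0
for _ in range(trials):
    draw = random.choices(letters, weights=[census_freq[c] for c in letters], k=n)
    sim = {c: draw.count(c) for c in letters}
    if chi_sq(sim) >= stat:
        hits += 1
p_value = hits / trials
print(f"chi-square = {stat:.2f}, Monte Carlo p = {p_value:.4f}")
```

With a statistic this far into the tail, essentially no simulated draw from the census frequencies matches it, which is the sense in which a small p-value counts as evidence against the null hypothesis.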
Same question as given to Tr'olldini:
And just what happens to the Poisson distribution with this alleged "overcounting"?
69dodge said:
If that's what we're trying to disprove, we can't really drop it. Otherwise, we might end up disproving something entirely unrelated. What's the use of that?

But that's exactly how science operates. It drops assumptions, turns hypotheses upside down and disproves those null hypotheses. These eliminations of possibilities provide the logical forward momentum toward the truth.
69dodge said:
When doing a significance test, all the probabilities are calculated based on the assumption that the null hypothesis is true. Then, if we end up with a small p-value, we can take that as evidence against the null hypothesis.

I'm sorry, but this is also wrong. The calculation makes no such assumption. The calculation simply reports what is there: an observed value. In this test of significance, we only compare the observed value against the expected mean, and calculate the area under the Poisson pdf's tail. Again, no assumptions other than that the data collected are Poisson.
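The tail-area calculation described here is easy to make concrete. A minimal sketch, assuming (illustratively) an expected mean of 4.3 'J' guesses and an observed count of 9 — both numbers invented for the example:

```python
from math import exp, factorial

def poisson_pmf(k, mean):
    """P(X = k) for a Poisson random variable with the given mean."""
    return exp(-mean) * mean ** k / factorial(k)

def poisson_upper_tail(k, mean):
    """P(X >= k): the area under the upper tail, i.e. the one-sided p-value."""
    return 1.0 - sum(poisson_pmf(i, mean) for i in range(k))

# Hypothetical numbers: expected mean of 4.3 'J' guesses, 9 observed.
p = poisson_upper_tail(9, 4.3)
print(f"one-sided p-value: {p:.4f}")
```

Note that, as 69dodge says, the expected mean itself comes from assuming the null hypothesis is true; the tail area is computed under that assumption.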
69dodge said:
So, we need to be sure that the null hypothesis we use actually is what we're trying to find evidence against. Otherwise, a small p-value tells us nothing that we care about. It turns into a non-Poisson distribution.

A failure to correctly state and test the null hypothesis doesn't turn the statistic non-Poisson. Did you say what you intended to say here?
69dodge said:
If X and Y are independent Poisson random variables, each with mean m, then X + Y is Poisson with mean 2m. But X + X = 2X is not Poisson.
How do I know? Well, here's one way. The mean and variance of a Poisson random variable are equal. So, the variance of X is also m. However, 2X has mean 2m but variance 4m. So, it isn't Poisson.

I think you need to take a step back here and think about this claim. This is from Hogg & Craig's Introduction to Mathematical Statistics, third edition, page 168:
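69dodge's mean/variance argument can be checked by simulation. This is an illustrative sketch (m = 5 and the sample size are arbitrary choices), using Knuth's textbook method to draw Poisson variates:

```python
import math
import random

random.seed(1)
m = 5.0
N = 100_000

def poisson_sample(mean):
    """Draw one Poisson variate (Knuth's multiplication method)."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

xs = [poisson_sample(m) for _ in range(N)]
ys = [poisson_sample(m) for _ in range(N)]

def mean_var(data):
    """Sample mean and (population-style) sample variance."""
    mu = sum(data) / len(data)
    return mu, sum((d - mu) ** 2 for d in data) / len(data)

sum_mean, sum_var = mean_var([x + y for x, y in zip(xs, ys)])  # X + Y
dbl_mean, dbl_var = mean_var([2 * x for x in xs])              # 2X
print(sum_mean, sum_var)   # both near 2m = 10: consistent with Poisson
print(dbl_mean, dbl_var)   # mean near 10 but variance near 4m = 20: not Poisson
```

The sum of two independent copies keeps mean and variance equal (as a Poisson variable must), while doubling one copy drives the variance to roughly four times the mean, so 2X cannot be Poisson.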
BillHoyt said:
We drop any assumptions about what we think we're seeing with JE and ask: do the names and name initials that he spouts seem to be sampled from the population of names at large? If they aren't (that is, we disprove the null hypothesis), we have evidence that something else is going on here.

But your method doesn't do this. It equates initials with names. They are not the same. You have provided no justification for counting initial letters of names said in the same manner as if he just said "Starts with a J".
BillHoyt said:
I'm sorry, but this is also wrong. The calculation makes no such assumption. The calculation simply reports what is there: an observed value. In this test of significance, we only compare the observed value against the expected mean, and calculate the area under the Poisson pdf's tail.

But your "observed value" is wrong. It assumes that saying "John" is the same as saying "J connection", even if he had just said "J connection" right before it, as in "J connection, like John". While you continue to equate examples with initial guesses, your count will always be wrong.
BillHoyt said:
Again, no assumptions other than that the data collected are Poisson.

First, this is not true. There are other assumptions in your model. Second, you have been repeatedly asked to justify this assumption - that Poisson is applicable - and you have never done so. Again - why do you make this assumption?
BillHoyt said:
I think you need to take a step back here and think about this claim. This is from Hogg & Craig's Introduction to Mathematical Statistics, third edition, page 168:

You must be so proud to quote a textbook.
Thanz said:
But your method doesn't do this. It equates initials with names. They are not the same. You have provided no justification for counting initial letters of names said in the same manner as if he just said "Starts with a J".

Statistics doesn't demand sameness; it demands categorization. Your point is specious.
Thanz said:
But your "observed value" is wrong. It assumes that saying "John" is the same as saying "J connection", even if he had just said "J connection" right before it - as in "J connection, like John". While you continue to equate examples with initial guesses your count will always be wrong.

More specious points. You continue to be unable to substantiate your claim that the method produces an artifact.
Thanz said:
First, this is not true. There are other assumptions in your model. Second, you have been repeatedly asked to justify this assumption - that Poisson is applicable - and you have never done so. Again - why do you make this assumption?

You enumerate no other assumptions. You simply assert. You also assert the result is an artifact and cannot demonstrate how that artifact was created.
Thanz said:
You must be so proud to quote a textbook.
Can you explain in plain English why your method makes any sense? Can you answer my question about where you would put your money? Can you back up your assumption that Poisson is appropriate? Can you actually answer any direct questions put to you?

Can you not understand that I am answering? That you are asking statistical questions that require statistical answers? Did you not read the original thread in which I directly and clearly explained what a Poisson process is and that the definitional criteria are met by JE's guesses?
BillHoyt said:
Statistics doesn't demand sameness; it demands categorization. Your point is specious.
More specious points. You continue to be unable to substantiate your claim that the method produces an artifact.

More avoidance from Mr. Hoyt. You have not backed up your method with logic. My points go to the heart of your method, and you have yet to address them.
BillHoyt said:
You enumerate no other assumptions. You simply assert.

Dude, did you not see the paragraph immediately before this in which I state that your method makes assumptions in the count?
BillHoyt said:
You also assert the result is an artifact and cannot demonstrate how that artifact was created.

I have. At least twice.
BillHoyt said:
Can you not understand that I am answering? That you are asking statistical questions that require statistical answers? Did you not read the original thread in which I directly and clearly explained what a Poisson process is and that the definitional criteria are met by JE's guesses?

You never explained why JE guesses are a Poisson process. If you have, and I have missed it, just point me to the post.
How about you start listening with an intent to understand? The answers have all been given over and over.
Clancie said:
No comment?
Thanz said:
More avoidance from Mr. Hoyt. You have not backed up your method with logic. My points go to the heart of your method, and you have yet to address them.

I have. In the original thread. I am not doing your homework for you.
Thanz said:
As for how your method affects the results, I have demonstrated this at least twice. Your method counts 18 guesses at J for only 9 actual shots at a J hit. It counts 85 guesses when only 43 were made. By doubling both the observed J count and the total sample, you get a result that is not statistically accurate. Just as if you had counted 10 "heads" or 10 "tails" for every coin flip.

And I have responded to that. I just did again with the textbook quote that somehow you thought was funny. It was not. It was direct. It directly addresses 69dodge's error and it directly addresses your claim. The variance, sir, reduces. The observed gravitate toward the mean. The effect of increasing the sample size was random and follows this general rule. It is not as you so badly misdescribe it.
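Whether doubling both the observed count and the sample actually changes the statistical verdict can be checked directly. A sketch with invented numbers — the 10% 'J' frequency is an assumption for illustration, and the 9-of-43 versus 18-of-85 counts echo the figures disputed above:

```python
from math import exp, factorial

def poisson_upper_tail(k, mean):
    """P(X >= k) for a Poisson variable with the given mean."""
    pmf = lambda i: exp(-mean) * mean ** i / factorial(i)
    return 1.0 - sum(pmf(i) for i in range(k))

j_rate = 0.10  # hypothetical census frequency of 'J' names

p_single = poisson_upper_tail(9, j_rate * 43)    # 9 J guesses out of 43
p_doubled = poisson_upper_tail(18, j_rate * 85)  # 18 J guesses out of 85
print(f"p with raw counts:     {p_single:.4f}")
print(f"p with doubled counts: {p_doubled:.4f}")
```

Under these made-up numbers the doubled counts yield a far smaller p-value than the raw ones, which is the inflation Thanz is describing: the same evidence looks more significant than it is.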
Thanz said:
Dude, did you not see the paragraph immediately before this in which I state that your method makes assumptions in the count?

Yeah, you jerk. Your complaint amounts to saying "You can't count fruit dropped per square yard of that orchard. I have pear trees and apple trees. How can you count fruit when some are apples and some are pears."
Thanz said:
You never explained why JE guesses are a Poisson process. If you have, and I have missed it, just point me to the post.

I'm not about to waste further time doing your research for you. Search for me using the phrase "poisson process."
Thanz said:
Next, I am asking you very basic questions that do not need a stat book to answer. I am asking you to back up your experimental design with simple logic. It is abundantly clear that you are unable to do this. You don't seem able to answer a direct question. Do I need to create a LarsenList for you?

I have done this, sir. The problem here is multifaceted, but includes the basic fact that you have no understanding of either scientific method or inferential statistics. You are also one of the most intellectually lazy posters on this forum. Go back through the threads, you dolt.
How about answering my very simple question about where you would bet your money? That doesn't require a stat textbook to answer. Why don't you answer it?
Clancie said:
Bill,
Don't be an idiot.
Don't write about zoology.
Originally posted by BillHoyt
I'm sorry, but this is also wrong.

Well, I think it's right. So there.
Originally posted by BillHoyt
The calculation makes no such assumption. The calculation simply reports what is there: an observed value. In this test of significance, we only compare the observed value against the expected mean.

Yes, but how do we know what mean to expect?
Originally posted by BillHoyt
A failure to correctly state and test the null hypothesis doesn't turn the statistic non-Poisson. Did you say what you intended to say here?

I did. If you read my original post instead of your quote of it, it might be clearer.
Originally posted by BillHoyt
I think you need to take a step back here and think about this claim. This is from Hogg & Craig's Introduction to Mathematical Statistics, third edition, page 168:
"Theorem 4: Let X<sub>1</sub>,...,X<sub>n</sub> denote random variables that have means m<sub>1</sub>,...,m<sub>n</sub> and variances s<sup>2</sup><sub>1</sub>,...,s<sup>2</sup><sub>n</sub>. Let p<sub>ij</sub>, i <> j, denote the correlation coefficient of X<sub>i</sub> and X<sub>j</sub>, and let k<sub>1</sub>,...,k<sub>n</sub> denote real constants. The mean and the variance of the linear function
Y = E<sub>1</sub><sup>n</sup>k<sub>i</sub>X<sub>i</sub>
are, respectively,
m<sub>Y</sub> = E<sub>1</sub><sup>n</sup>k<sub>i</sub>m<sub>i</sub>
and
s<sup>2</sup><sub>Y</sub> = E<sub>1</sub><sup>n</sup>k<sub>i</sub><sup>2</sup>s<sup>2</sup><sub>i</sub> + 2EE<sub>i<j</sub>k<sub>i</sub>k<sub>j</sub>p<sub>ij</sub>s<sub>i</sub>s<sub>j</sub>"
In other words, the variance of a linear sum of random variables (regardless of dependence) is always less than or equal to the sum of the variances.

I guess the E's are supposed to represent summation signs, yes?
69dodge said:
Well, I think it's right. So there.
Yes, but how do we know what mean to expect?

We don't assume. We count and test the hypothesis. The hypothesis is not an assumption. It is an assertion to be tested. The difference is significant.
69dodge said:
We assume the null hypothesis is true, in other words, that John Edward really is doing what he claims he's doing, and then we calculate, based on that assumption, the probability that we'd observe a value at least as extreme as the value we actually did observe. If the probability (the p-value) is sufficiently small, we may take that fact as providing evidence against the null hypothesis.
69dodge said:
If we're trying to disprove his claim, and he doesn't claim that all his guesses are independent, then we shouldn't include in our null hypothesis the assumption of independence. And without that assumption, there's no reason to suppose that the number of 'J' guesses follows (even approximately) a Poisson distribution.

We don't need to know either mechanisms or alleged mechanisms to characterize a Poisson process. I gave the definition in the other thread. Nobody had to ask gamma particles if they were expelled independently or not. We observed and tested.
69dodge said:
Your "In other words, ..." summary does not follow from the preceding formula.

We do not have a situation of cX. We have a situation of sampling. Under the assumption of one guess - one person, the guesses are forced to one per sitter, regardless of how many guesses are made. When we drop that assumption, we count each guess. That is not cX.
69dodge said:
I'm sure you can find somewhere in that book a statement like: If X is a random variable and c is a constant, the variance of cX is c<sup>2</sup> times the variance of X. That's pretty much all I used in my previous post.
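The scaling rule invoked here — that Var(cX) is c² times Var(X) — holds exactly for sample variances too, and a quick numerical sketch (arbitrary normal data, c = 3, all choices illustrative) shows it:

```python
import random

random.seed(2)
xs = [random.gauss(0, 1) for _ in range(100_000)]

def variance(data):
    """Population-style sample variance."""
    mu = sum(data) / len(data)
    return sum((d - mu) ** 2 for d in data) / len(data)

c = 3.0
v_x = variance(xs)
v_cx = variance([c * x for x in xs])
print(v_x, v_cx)  # v_cx is c**2 = 9 times v_x, up to floating-point error
```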
BillHoyt said:
I have. In the original thread. I am not doing your homework for you.

Sir, you simply have not. It is not about "doing my homework for me", it is about backing up your claims. You have not done so. If you think I am wrong, prove it. Point me to a post. I'm sure you'd love to prove me wrong here. But you can't.
BillHoyt said:
The observed gravitate toward the mean. The effect of increasing the sample size was random and follows this general rule. It is not as you so badly misdescribe it.

You have not backed up this "random" claim of yours. He is not going to say "A J connection, like Alex".
BillHoyt said:
Your coin example is utterly specious. Go back to the Hogg & Craig theorem. This is a random variable we are looking at. This is not algebra.

No, it isn't specious. It is the same as you sitting and recording coin flips done by others, and instead of counting the actual flips you count the number of times the flippers say something, like "Heads? Yeah, heads. Okay, that one was heads", and then you record 3 heads for the one flip.
BillHoyt said:
Yeah, you jerk. Your complaint amounts to saying "You can't count fruit dropped per square yard of that orchard. I have pear trees and apple trees. How can you count fruit when some are apples and some are pears."

No, it doesn't. It is like me saying you shouldn't be including pears in your count of apples.
BillHoyt said:
I'm not about to waste further time doing your research for you. Search for me using the phrase "poisson process."

Can you point me to the post or not? You only posted a general description of Poisson; you never connected that process to JE or explained why JE's guesses should be considered a Poisson process.
BillHoyt said:
I have done this, sir. The problem here is multifaceted, but includes the basic fact that you have no understanding of either scientific method or inferential statistics. You are also one of the most intellectually lazy posters on this forum. Go back through the threads, you dolt.

I have looked through the threads, and you simply have not backed up your counting method.
BillHoyt said:
Your "bet" is an inane red herring that doesn't warrant an answer. Stick to the freaking issues.

It is not a red herring at all, sir, and your reluctance to address it speaks volumes. Your method equates the one reading with the three combined, when clearly they are not equivalent.
Clancie said:
Bill,
Are you going to address my point or not? Again, it is that your counting method is ineffective at detecting cold reading. In fact, cold readers could use your premise to actually disguise what they are doing and appear to not be cold reading at all.
Doesn't that bother you at all?
How would they do this? It's simple. They make guesses of initials only for their high-frequency letter guesses and intersperse them with initials plus a string of name guesses--but only for low-frequency letters...
By doing this, obviously, they would be inflating the overall count while also effectively skewing your data and hiding the -actual- number of times they used high frequency letter guesses. And it wouldn't change their hit rate at all.
Not only would they still be cold reading (undetected by you), but your method would come up with some impressive stats showing no indication of cold reading at all.
If you find that your counting method gives you the result you want from 4 JE transcripts, but find that it doesn't detect cold reading patterns in other mediums--in fact, gives a misleading result (indicating NO cold reading when actually there was)....shouldn't you, as a "scientist", have a problem with that?
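Clancie's hypothetical strategy can be simulated. The sketch below is entirely made up — the letters, weights, and the "count every spoken name" rule are illustrative stand-ins for the counting method under dispute:

```python
import random

random.seed(3)

# A pretend cold reader: high-frequency letters are guessed often and said
# as a bare initial; low-frequency letters are guessed rarely but padded
# with three example names each.
high_freq = ["J", "M", "S"]
low_freq = ["Q", "X", "Z"]
actual_guesses = random.choices(high_freq + low_freq,
                                weights=[10, 10, 10, 1, 1, 1], k=60)

# The disputed method counts every spoken name as a separate guess.
counted = []
for g in actual_guesses:
    counted.extend([g] * 4 if g in low_freq else [g])

def share(letters, data):
    """Fraction of recorded guesses that fall on the given letters."""
    return sum(data.count(c) for c in letters) / len(data)

print("true high-frequency share:   ", round(share(high_freq, actual_guesses), 2))
print("counted high-frequency share:", round(share(high_freq, counted), 2))
```

Padding only the rare letters dilutes the high-frequency share in the recorded data, so the heavy reliance on common initials — the usual cold-reading signature — is partially hidden, exactly as Clancie suggests.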