Cold Reading Demos at TAM2

BillHoyt said:

But for those who don't get just how stupendously stupid, incredibly ignorant and massively moronic you are being here, let's run this one up the tr'oll poll and see how many woos ooh and ahh.


Sweet talk will get you nowhere. :)


We have a dataset of temperature measurements from dogs and a different one from cats. We put them on a spreadsheet for a side-by-side comparison and *poof* they are instantly one dataset!


We're talking about a single experiment, in the medical profession in this case, where one group is given a treatment (drug, etc.) and the other is not. You generally have as many datasets as you have experiments, not as many datasets as you have groups.


I better lodge a complaint with the mods that you just violated the JREF conjuror's rule by revealing yet another magician's trick.


If that is anything like your trick of making rationality disappear, we should join forces and be a magic duo!
 
BillHoyt said:

We're not talking about mathematical theory, tr'oll. Find me one research paper from a peer-reviewed journal with an alpha set at .0213.

the color is red, so I'd better do it!!!! ;)

Strawman. I never claimed to be able to find an alpha set at exactly .0213. .0213 was a p-value anyway, not an alpha.

I just mentioned that commonly alphas are not set to .05 and .01. It depends on the situation for a variety of reasons, which I will repeat here: depending on what past studies used, the nature of the phenomena at hand, the experimenters' judgement, and the severity of making a Type 1 Error, for example.

It is pretty easy to find alphas other than .05 and .01. Just do a search at PubMed, for example, for 'Bonferroni' and 'alpha'. The studies will mention that they used a Bonferroni adjustment for multiple or pairwise comparisons.

This is a pretty good explanation of Bonferroni adjustments: http://mathworld.wolfram.com/BonferroniCorrection.html, and here is a good one about the lesser known life of Bonferroni: http://www.aghmed.fsnet.co.uk/bonf/index.html.
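To make the Bonferroni idea concrete, here is a minimal sketch in Python (the numbers are made up purely for illustration, not taken from any particular study): dividing a family-wise alpha of .05 across several comparisons routinely produces per-comparison alphas that are neither .05 nor .01.

```python
# Minimal sketch of a Bonferroni adjustment (illustrative numbers only).
# With a family-wise alpha spread over m comparisons, each comparison is
# tested at alpha / m, which is rarely a "round" value like .05 or .01.

family_wise_alpha = 0.05
num_comparisons = 7  # e.g., 7 pairwise comparisons within one study

per_comparison_alpha = family_wise_alpha / num_comparisons
print(per_comparison_alpha)            # ~0.00714, neither .05 nor .01

# A p-value from one comparison is then judged against the adjusted threshold:
p_value = 0.0213
print(p_value < per_comparison_alpha)  # False under this correction
```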

Oh, and you 'overlooked' these questions. I'll put them in blue to make them extra visible!:


* Do you think your choice to analyze the J counts (counting them multiple times for the same instance), and your choice of analysis and interpretation of that analysis of the JE transcripts, are completely free of all types of bias?

* Do you claim that it is not possible for you to impart your unconscious bias into it anywhere?

* Do you claim to not have unconscious bias?

* Do you think that if you were presented with, say, 4 transcripts, 2 from a well-known medium, say 'Margey Madeline' (made-up name), and 2 from a person who doesn't claim to be a medium, say 'Skeeter Jones' (made-up name), but with their names changed from Margey Madeline and Skeeter Jones to 'Person 1' and 'Person 2', and you analyzed the transcripts and commented on them, your analysis and interpretation would be improved over what you were doing before being blinded?
 
BillHoyt said:

Anybody out there got the annual income figures for all the medium clowns running around today?


Do you? Let us know! :) Make sure in your study to include incomes from all mediums, and not just the main ones on TV.

You wouldn't want any possible bias to creep into your analysis, of course.


It should be interesting to see how not about the money it all is.


Personally I don't see any problem with people getting paid for something they are good at.
 
T'ai Chi said:
Personally I don't see any problem with people getting paid for something they are good at.

What if it's defrauding people? I'm not saying that's what mediums are doing.
 
Clancie said:

Mona, I'm so sorry about your son. Please accept my condolences. I have always felt that no loss would be worse than the loss of a child. I am so sorry that you have been through that.

Since you've read through this thread, you know that I often argue on behalf of mediums and mediumship here. So (true to form) I take some issue with the word "prey" (which many others here agree with you about).

Whether or not psychic ability is real, there is no doubt in my mind that some psychics are convinced that they -do- have mediumistic abilities. I do not think these kinds of psychics mean to "prey" on grieving people when they come to offer their services (for example, with missing children). I think they are trying to be helpful. (That said, I don't think they should do it. But, in my experience, it is often not about money...or even publicity for the psychic...it is simply someone who feels he/she has a gift and is trying to be helpful).

As for situations where parents seek out a medium, I don't think that qualifies as "preying" on someone either, unless a medium is knowingly fraudulent. And, there is no question that some parents -do- get comfort from their readings with mediums.

Just curious. Have you seen a medium yourself? And, what has led you to the apparent conviction that people who do are "ignorant"? :confused:

People get comfort from many things that aren't effective for everyone (religion, mediumship, psychotherapy, work...whatever). Personally, I admire people who seem to have worked through their grief very well in whatever way they can, but we all know the pain of a deep loss never completely goes away, regardless. Have you seen the HBO documentary "Life after Life"? There is a couple in it who receives a good reading from George Anderson. They believe he has actually put them in touch with their son, and yet afterwards, they are still grieving because the son is just not here with them anymore in their daily life, and not even mediumship can change that. At best, it can give a little hope, a little comfort, help people move on a little bit (sometimes prevent them from ending their own lives due to grief), but ultimately I don't think there's any real panacea, even with mediums.

No, I have not consulted a medium, nor will I. Until my early 20s I was a religionist of the RC variety, raised to believe in prayer, resurrection of the dead, and the healing properties of such things as Lourdes water. It became manifestly clear to me that this was a pile of nonsense, and when I discovered skeptical literature in the mid-80s I was delivered from the chains of superstition, fear of sinning and ending up in hell... the whole absurd bit.

Having immersed myself in skeptical works, from Randi to Shermer to the rest of the litany of usual suspects, my critical thinking skills became sufficiently engaged that I do not accept the truth claims for such purported phenomena as mediumship, because there is no evidence for them. The efforts of mediums to contact the dead are as efficacious as prayer to the Virgin Mary is at preventing a remote accident or healing cancer, as far as I have been able to determine.

None of which is to say I do not empathize with grieving parents who seek out mediums. Mary Todd Lincoln did this obsessively, and became rather nuts (to the point where a relative attempted to have her committed.) If they can be bamboozled into believing they are in contact with their child, their comfort is something I rather envy. But I cannot simultaneously have that, and also continue with my intellectual freedom from magical and superstitious thinking.

My son is dead; he is not waiting for me at the right hand of God, and neither is there anything left of him that can communicate with me. Not by any evidence of which I am aware. Of course, I also know that, heathen that he was, he nevertheless is not burning in hell. Nor do I think his death was some divine punishment for my sins, or otherwise willed by a god whom I would wish to flay, if he intended such events. There is also a comfort in accepting reality as demonstrated by evidence or its lack.

You object to my using the word "prey" vis-a-vis mediums and grieving parents. Well, many of them (meaning mediums), by the evidence I have seen, are or should be aware that they are frauds lacking in any actual ability to communicate with the dead. I tend to think what they do should be legal, even though it is a species of fraud, because there are implications for the religion clauses of the First Amendment if we attempted to prohibit such chicanery. But that does not mean we need withhold moral condemnation: taking money from parents in early, raw grief, to sell a fraud is immoral.
 
Clancie said:
P.S. Ever planning to address Thanz's question about your "J" analysis? Or mine about bias in general? :confused:

Questions? Hey, I got questions for ya!

"Questions Clancie does NOT want to answer"

Just a few gems:

Why do you still claim that nobody has complained about John Edward's readings, when both Dateline and O'Neill have shown serious cheating?

Please explain why the sitter would not lie.

Please show the statistical analysis that shows that no mentalists can do readings with anything like the specificity John Edward does.

Why is it a problem that George Anderson does not name the people he thinks are fakes, when it is not a problem that John Edward doesn't either?

Why do you accept the anecdotal evidence that James van Praagh's sitters have been "encouraged" to cry on camera ("always"), when you reject any anecdotal evidence that John Edward is a fraud?

Do you think it is stupid to claim to be able to talk to dead animals?

Do you really consider a sitter's experience real, just because they think it is? John Edward is real, because people think he is?

Why do you claim that there are no "cold-reading-techniques-as-mediumship" transcripts, when you yourself have commented on one?

Why do you demand to see the whole Ian Rowling cold-reading, when you accept edited John Edward-readings?

Please define what a real skeptic is.

You demand that Shermer has men among the sitters. Yet, you don't demand this of John Edward, despite the overwhelming female ratio of JE's fans. Why is this?

If you do not consider Shermer's reading similar to what JE does, and you don't have a transcript, video or audio, what do you base it on?

Oh, my...and these are but a few of them.....
 
Clancie said:
So...Bill....you're saying that you're convinced that all psychics are definitely fraudulent? Is that what you think? :confused:

P.S. Ever planning to address Thanz's question about your "J" analysis? Or mine about bias in general? :confused:
Clancie,

Where did I say that? Learn some logic.

I have addressed Thanz' questions ad nauseam. I still await his (or anybody else's) answer to my questions.

Your question about bias was so incredibly dunderheaded, I would expect you to be ashamed to have posed it, let alone to repeat it here again. I have already said, on this thread and countless others, why single, double and triple blinding exist. How about you try to understand those posts. Then you will realize what an asinine question you posed.
 
T'ai Chi said:

We're talking about a single experiment, in the medical profession in this case, where one group is given a treatment (drug, etc.) and the other is not. You generally have as many datasets as you have experiments, not as many datasets as you have groups.

Great definition! Let's combine it with your earlier Excel gaffe. So we take five papers and put them all together on a single Excel spreadsheet and *poof* the meta-analysis of a single dataset!

Oh no. Oh my. Contradiction! Contradiction! We have five datasets from the five papers. But now we have one dataset from the meta-analysis. Oh my. Oh my.
 
T'ai Chi said:
Strawman. I never claimed to be able to find an alpha set at exactly .0213. .0213 was a p-value anyway, not an alpha.

It is not a strawman, Tr'olldini, when one exactly follows another's assertions with a poignant question. Here is your assertion:
"Alpha can be set to anything in real life. Find me any mathematical theory that says that alpha is always .05 or .01 (or any value for that matter)."

That was your assertion. So, now cite for me the paper that has an alpha of .0213. If you don't like that value, then choose an alpha from the following list:

o .00747
o .0314159
o .011...
o .0271
o .07734
o .0141
o .5
o .7
o .9
o .99991

You said anything. Give us one of the above. A paper from a peer-reviewed journal.
 
BillHoyt said:

Great definition!


Thanks Clausecho! Although I didn't post a definition per se, I posted:

"We're talking about a single experiment, in the medical profession in this case, where one group is given a treatment (drug, etc.) and the other is not. You generally have as many datasets as you have experiments, not as many datasets as you have groups."


Let's combine it with your earlier Excel gaffe.


Which was what exactly?


So we take five papers and put them all together on a single Excel spreadsheet and *poof* the meta-analysis of a single dataset!


Huh?

Each experiment was separate and had 1 dataset per experiment. When you combine several datasets into one dataset, say for a meta-analysis, sure, those datasets become one dataset.


Oh no. Oh my. Contradiction! Contradiction!


Why?

A meta-analysis is not an experiment; it is the analysis of the outcomes of several experiments, hence the 'meta' part.
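As a rough sketch of that distinction (hypothetical study names and made-up effect sizes, purely for illustration): each experiment contributes one summary outcome, and the meta-analysis combines those outcomes into a single estimate rather than re-running an experiment.

```python
# Sketch: each experiment yields one summary outcome (effect size and
# variance); a meta-analysis combines those outcomes. Numbers are made up.

experiments = [
    {"name": "Study A", "effect": 0.30, "variance": 0.04},
    {"name": "Study B", "effect": 0.12, "variance": 0.02},
    {"name": "Study C", "effect": 0.25, "variance": 0.05},
]

# Fixed-effect (inverse-variance) weighting: one combined estimate built
# from several per-experiment datasets.
weights = [1.0 / e["variance"] for e in experiments]
pooled_effect = sum(w * e["effect"] for w, e in zip(weights, experiments)) / sum(weights)
print(round(pooled_effect, 3))
```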
 
BillHoyt said:

It is not a strawman, Tr'olldini, when one exactly follows another's assertions with a poignant question.


Your silly demand was anything but poignant. :)


Here is your assertion:
"Alpha can be set to anything in real life. Find me any mathematical theory that says that alpha is always .05 or .01 (or any value for that matter)."


Yeah, alpha "can" be anything, sure. That is far different from me saying I could find a paper(s) with a specific alpha, which is what you are demanding I do.

Surely even you can see your own strawman waving at you with his straw hand. But perhaps you have unconscious bias, so maybe not. ;)


So, now cite for me the paper that has an alpha of .0213. If you don't like that value, then choose an alpha from the following list:
o .00747
o .0314159
o .011...
o .0271
o .07734
o .0141
o .5
o .7
o .9
o .99991


See above.


You said anything.

See above.

Hey, did you not yet find a paper in PubMed that used a Bonferroni adjustment to get a non .05 or .01 alpha level? Hmm? :)
 
TLN said:

What if it's defrauding people? I'm not saying that's what mediums are doing.

Well, certainly, if people are frauds, and this goes for any profession, the law should be involved and there should be penalties.
 
T'ai Chi said:
Hey, did you not yet find a paper in PubMed that used a Bonferroni adjustment to get a non .05 or .01 alpha level? Hmm? :)
How about you read them, tr'olldini? How about you understand that the Bonferroni correction is a correction. That might help you understand the curious name, eh?

The challenge remains the same. Find a study with an alpha equal to any of the values previously given. If you wish, let it be a study that uses the Bonferroni correction to reach an alpha equal to any of the values previously given.

Games, tr'olldini. That's all we get from you. Games. And badly played at that.
 
BillHoyt said:

Thanz,

These claims didn't work before and they won't work now unless and until you can demonstrate the flaw. You claim it is an inaccurate count of both the J and total number of guesses. Let us assume, for the moment, that that is correct. Why, then, do the "J" counts soar so far above all the others? I posed this before; you couldn't answer then. Your situation worsened when Lurker applied my counting methods to an entirely different data set and got the same result: his "J" counts soared far above all the others.
Your question is irrelevant, Mr. Hoyt. You do not use the results to justify the method. You need to first come up with a logical experimental method, and then use the logic and soundness of your method to justify the results. I do not need to explain why your method gets certain results - the results do not matter if the method is flawed. I only need to explain the flaw, which I have done.

Again, here is the actual example I posted:
Let's make this a concrete example:

Reading 1:
JE: I am getting a "J" connection here.
Sitter: J?
JE: Yes, a "J" - like John, or Joe
Sitter: I had an uncle Joe....

My method: 1 J guess.
BillHoyt: 3? 4? J guesses?

Reading 2
JE: I am getting a "J" connection..
Sitter: My grandfather was John

Thanz: 1 J
BillHoyt: 1 J

Reading 3:
JE: I am getting a "Jim" connection here...
Sitter: Nope, I don't know any Jim
JE:What is the Canada connection?
Sitter: Blah blah

Thanz: 1 J
BillHoyt: 1 J

Reading 4
JE: I am sensing an older female
Sitter: My Mother has passed
JE: was her name "Jennifer"
Sitter: no, it was Roberta

Thanz: 1 J
BillHoyt: 1 J

Now, here is my problem with your counting method. In your method, reading 1 has as much weight as readings 2, 3, and 4 combined. However, in all cases, he is trying to make one J connection. Remember, we are trying to count how many times he will guess a certain letter, for cold reading purposes. If we have 3 separate readings (2, 3, 4) in which he makes a "J" guess, that is much different than the one reading with the multiple names. That distinction is lost in your method. My method counts all of them equally.
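A minimal sketch of the two counting rules, using toy versions of the four readings above (not the actual transcripts), shows how they diverge:

```python
# Sketch of the two counting rules on toy readings (not the real JE data).
# A reading whose single "J" guess offers several example names
# ("J, like John or Joe") is the disputed case.

readings = [
    ["J", "John", "Joe"],  # Reading 1: one J guess, plus two example names
    ["John"],              # Reading 2
    ["Jim"],               # Reading 3
    ["Jennifer"],          # Reading 4
]

# Rule A (per guess): each reading's J attempt counts once.
per_guess_count = sum(1 for reading in readings if reading)

# Rule B (per mention): every J name or initial uttered counts separately.
per_mention_count = sum(len(reading) for reading in readings)

print(per_guess_count)    # 4
print(per_mention_count)  # 6 -- Reading 1 alone contributes 3
```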

What you want us to believe is that I concocted a technique to make this so. That, somehow, my technique favors "J"s. How? Why? Why did Lurker achieve the same result with different data?

You now have two separate people making the same finding with two separate data sets. Did Lurker commit fraud? How? Walk us through it, count by count.
Your counting method is flawed, and I have taken you through it before. See above. It simply doesn't matter if someone else, using the same flawed method, got similar results. Why can't you understand that you cannot point to results to justify a method?

Let's say you were counting dead birds in a field. You come across a dead bird that has been partially eaten by other animals, and is in 4 pieces. Your counting method counts each piece as a separate bird. Mine would count one bird. And I don't need to explain whatever results you might get by counting 4 pieces as 4 birds to point out that the count is simply wrong.
 
CFLarsen said:
Hoyt,

It is imperative that Thanz point out the statistical flaws. That's what it comes down to.

If you want to complain about methods, you should be able to point out where the problems are.

Whatever other reason you want to bring into it is completely irrelevant. If you want to be taken seriously, you have to stay focused.

Filibustering is no substitute for content. Ridicule is no substitute for content.

If Thanz has a problem with your use of statistics, he should be able to pinpoint what the problem is.
Good lord, the irony of this post coming from you Mr. Larsen is simply too much to take.

"Ridicule does not substitute content"? This comming from you? Tht's rich.

In any event, I have pointed out the error, several times. You either refuse to read the posts where I have done so, or you have a serious reading comprehension problem. But, I have posted the concrete example here as well - maybe then you will look at it. But I am not holding my breath.
 
Thanz said:
And I don't need to explain whatever results you might get by counting 4 pieces as 4 birds to point out that the count is simply wrong.

There are only two alternatives, sir. The disproportionately high "J" counts are either an artifact of the method or an underlying reality of the data. If you believe they are an artifact, then explain how that is so. Explain how this method favors "J"s over non-"J"s. In post after bleating post, you simply repeat your accusations and continue to dodge the fact that you have no answer. You cannot tell us how this method favors "J"s. Tell us now.
 
BillHoyt said:

There are only two alternatives, sir. The disproportionately high "J" counts are either an artifact of the method or an underlying reality of the data. If you believe they are an artifact, then explain how that is so. Explain how this method favors "J"s over non-"J"s. In post after bleating post, you simply repeat your accusations and continue to dodge the fact that you have no answer. You cannot tell us how this method favors "J"s. Tell us now.
Considering that when the same transcripts are counted using a logical method, by any other poster, the J count - while higher than expected in all counts - is not statistically significantly higher than expected, the extra-high J counts are an artifact of your method.

We do not know if your method favours "J"s over non-"J"s, as the only analysis performed was of the letter "J". We do know that your method itself makes no logical sense. In post after bleating post you have failed to logically justify your deeply flawed counting method. Your attempts to use the results as some sort of justification for the method are laughable, and nothing more than a smokescreen for the fact that you can't actually justify your method.

I find your choice of quotes from my post quite interesting, as you don't actually address the point it makes. Are you seriously suggesting that counting the 4 pieces as 4 distinct birds could be correct in ANY circumstances? We don't even need to know what the results of whatever analysis you perform are if the counting method cannot be trusted. Your counting method cannot be trusted, therefore neither can your results.

Can you come up with any reason why the 4 parts are 4 birds? Can you actually address the concrete example I posted and explain why the first reading should be given the same weight as the other three combined? It is you who chose the method, and therefore your obligation to justify it - which you have not been able to do.
 
Thanz said:
We do not know if your method favours "J"s over non-"J"s, as the only analysis performed was of the letter "J". We do know that your method itself makes no logical sense. In post after bleating post you have failed to logically justify your deeply flawed counting method. Your attempts to use the results as some sort of justification for the method are laughable, and nothing more than a smokescreen for the fact that you can't actually justify your method.
This is simply more bleating, sir. It is also flat-out wrong mathematically. The fact is, the "J"s soared into significance with my method. The question is: is this artifact, or does this reveal something about the underlying data? You claim artifact. You must demonstrate how.

This is very simple. If I multiply a fraction A by 3, I get 3A. If I multiply a set of fractions, say (A, B, C), by 3, I get (3A, 3B, 3C). If A, B, and C make up the total population N, then the population size is also multiplied by 3. The new fractions of that population are, again, (3A, 3B, 3C)/3N, or (A, B, C)/N. If JE made letter guesses equally, the proportions of the letters would all remain the same.

But, with JE's data, this did not happen. With two different datasets, sir. The "J"s expanded disproportionately. Now, you allege this is an artifact. Yet every complaint you raise about the method is equally true of "M", "N", "D" and every letter you can think of. The fraction of "J"s should not have changed. Yet they did. Explain how that is an artifact of the method.

[edited for clarity -bh]
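A tiny numeric sketch of the claim above (made-up counts, not the actual tallies): if every letter's count were inflated by the same factor, the proportions would not move.

```python
# Sketch: uniform inflation leaves proportions unchanged (made-up counts).
counts = {"J": 10, "M": 8, "D": 6}
total = sum(counts.values())

scaled = {letter: 3 * n for letter, n in counts.items()}  # everything x3
scaled_total = sum(scaled.values())

print({k: round(v / total, 3) for k, v in counts.items()})
print({k: round(v / scaled_total, 3) for k, v in scaled.items()})  # identical
```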
 
BillHoyt said:

This is simply more bleating, sir. It is also flat-out wrong mathematically. The fact is, the "J"s soared into significance with my method. The question is: is this artifact, or does this reveal something about the underlying data? You claim artifact. You must demonstrate how.
You would count 1 bird in 4 pieces as 4 birds and I am wrong mathematically. Sure. Again, however, I don't have to demonstrate how. It is irrelevant. It is you that has to demonstrate the soundness of your method (independent of the results) which you have been unable to do.

This is very simple. If I multiply a fraction A by 3, I get 3A. If I multiply a set of fractions, say (A, B, C), by 3, I get (3A, 3B, 3C). If JE made letter guesses equally, the proportions of the letters would all remain the same.
But he doesn't make letter guesses equally - nor would we expect him to. We expect that he makes more J guesses, as it is the most common initial. Further, what we see in the data is a letter guess with specific name examples. "J, like John or Joe" for example. There is no reason to believe that he would use the same proportion of examples for each letter. It could be that he uses more specific examples for the letter J than for others. This is different from just guessing J more often, and your method has no way of telling the two apart. In fact, it assumes the latter.

But, with JE's data, this did not happen. With two different datasets, sir. The "J"s expanded disproportionately. Now, you allege this is an artifact. Yet every complaint you raise about the method is equally true of "M", "N", "D" and every letter you can think of. The fraction of "J"s should not have changed. Yet they did. Explain how that is an artifact of the method.
See above. All this means is that your total figure is wrong as well. Having both figures wrong does not enhance your data. How can you tell if he simply uses more examples for J rather than guessing J more often? Are you ever going to address the specific example I have posted?
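To illustrate the objection with made-up numbers (a sketch only, not a claim about the real transcripts): if "J" guesses tend to carry more named examples than other letters' guesses, the per-mention count inflates J's share even though the number of guesses is unchanged.

```python
# Sketch: non-uniform "examples per guess" shifts proportions under the
# per-mention counting rule. Hypothetical numbers for illustration only.

guesses = {"J": 10, "M": 8, "D": 6}            # guesses per letter
examples_per_guess = {"J": 2, "M": 1, "D": 1}  # J guesses carry extra names

mentions = {k: guesses[k] * examples_per_guess[k] for k in guesses}

def shares(d):
    total = sum(d.values())
    return {k: round(v / total, 3) for k, v in d.items()}

print(shares(guesses))   # J's share based on guesses alone
print(shares(mentions))  # J's share is larger under the per-mention rule
```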
 
