Unfair challenge?

EternalSceptic

I have been reading through most of the challenge applications and stumbled over what I feel is a slightly unfair point in a protocol:

http://www.internationalskeptics.com/forums/showthread.php?t=118952

Mrs. Putt failed to identify any of the persons in her test.
So far so good, but there is one point which might have had considerable influence on the result:

Mrs Putt made her readings, and AFTERWARDS the persons under test had to identify themselves from the written-down readings.

I think this is slightly unfair because the result depends strongly on the self-estimate of the persons she read. Wouldn't it have been better to let these persons give a characterisation of themselves before the reading (of course in a way that ensures Mrs Putt could in no way have access to this information)?

This would have reduced the chance that the persons she tried to read were emotionally biased and rejected the readings because they felt they were "better than the reading showed".

Just my $0.02
 
That's an interesting and potentially valid point. However, as is the case with every JREF challenge test, the applicant helped design the protocol, and was satisfied that it reliably reflected her ability. Your suggested amendments, while possibly better controls, were deemed unnecessary by the applicant herself, and might have added significant complication to the protocol design.
 
This kind of test is designed so that the results of chance success can be calculated, and the success criteria set accordingly. How would you calculate the likelihood of Mrs Putt guessing the pre-specified characteristic by chance? How would you determine whether her description of the pre-specified characteristic was close enough to be classified as a hit?

I guess a protocol could be devised where each subject identified a particular, different, distinguishing characteristic, and Mrs Putt had to match the subjects to the characteristics. But you're still relying on the subject's self-estimate of which is their most distinguishing characteristic.

In some protocols I've seen it's someone who knows the subject well, rather than the subject themselves, who picks out the reading that best describes them.
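
Just to sketch how the chance odds could be worked out for the matching variant suggested above (this is only my own illustration with made-up numbers, not from any actual JREF protocol): pairing n subjects with n pre-stated characteristics at random amounts to a random permutation, so the chance of getting at least k pairs right follows from derangement counts.

Code:
from math import comb, factorial

def derangements(m):
    # Number of ways to pair m items so that none ends up with its own partner.
    d = [1, 0]  # D(0), D(1)
    for i in range(2, m + 1):
        d.append((i - 1) * (d[i - 1] + d[i - 2]))
    return d[m]

def prob_at_least_k_matches(n, k):
    # Chance that a purely random pairing of n subjects to n characteristics
    # gets k or more pairs right.
    favourable = sum(comb(n, j) * derangements(n - j) for j in range(k, n + 1))
    return favourable / factorial(n)

# Illustration only: 10 subjects, 5 or more correct pairings by luck alone.
print(prob_at_least_k_matches(10, 5))  # about 0.0037, i.e. roughly 1 in 270

So the pass mark in such a variant could still be set so that luck alone clears it only a fraction of a percent of the time; the harder problem remains whether the "most distinguishing characteristic" is well defined in the first place.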
 
To determine if it's fair or not we'd have to know what characteristics she was expected to write down. "This person is a dynamic fool" sorts of things would likely be subject to a bias effect, but, "This subject is a 36 year old dockworker from Iowa" would not be.
 
I understand the point, and I think it would apply if this were an objective investigation of such claims, but I do not think it applies here at all.

From the original claim (I haven't read the actual protocol--just the descriptions in the thread), it seems Mrs. Putt's claim is that the subjects will agree with their readings. Just read the OP in the appropriate thread; Mrs. Putt wanted the subjects to tell her how accurate she was.

So it isn't really a claim of objective accuracy; it is a claim of subjective agreement.
 
To determine if it's fair or not we'd have to know what characteristics she was expected to write down. "This person is a dynamic fool" sorts of things would likely be subject to a bias effect, but, "This subject is a 36 year old dockworker from Iowa" would not be.

I think that's the most important point.

If there was a definition of what characteristics Mrs. Putt had to read, and if this definition excluded things which could be emotionally important for the test persons, then I agree that the test was fair.

Profession, age, hobbies etc. should be OK. But in this case it would be very important that Mrs. Putt could not get any hint from the appearance or outfit of the persons.

And we all know that very subtle hints, which go unnoticed by average attention, can give an experienced person a lot of information. All in all, this case looks to me like one of the most difficult to design if it must be fair and safe at the same time.

Btw, sorry for my English, I am not a native English speaker.
 
This reminds me of a favorite point of astrologers too: if a subject doesn't fall for the Forer Effect, and says that a chart doesn't uniquely identify himself, the astrologist can claim the subject just isn't very self-aware.

Funny though that this sort of quibbling only comes up in test situations. In real life, they'll take money for their services as if readings were absolutely 100% accurate and reliable. Indeed, part of the "confidence" game they're playing is to assert knowledge and not just reasonable guesses. (Medical professionals, for example, will often offer diagnoses as hypotheses that should be tested with various measures or sometimes with treatment-as-diagnostic-test.)
 
That's an interesting and potentially valid point. However, as is the case with every JREF challenge test, the applicant helped design the protocol, and was satisfied that it reliably reflected her ability. Your suggested amendments, while possibly better controls, were deemed unnecessary by the applicant herself, and might have added significant complication to the protocol design.

It's actually a reasonable design assuming no cheating.

One of Randi's most famous demos was handing out individual readings to a classroom and asking who thought their reading was accurate. Almost everyone raised their hand. He then had them swap readings and asked again. Everyone had a good laugh as the readings were identical.

Here, if they present every subject with all the readings privately, away from the others, and the subjects mostly pick out their own readings, that is statistical evidence.
 
She and I went over this part of it when creating the protocol. I pointed out to her that unless she was able to write down very specific details about the subjects (like their profession, for instance) there was a chance they wouldn't see themselves in the readings. Putt said that she was confident she would be able to identify immutable characteristics.
 
I suppose you did. Maybe I should not have used the word "unfair", because I didn't think it was either the JREF or Putt who was not fair. I think that both parties did their best to establish a protocol for a test which is extremely difficult to design.

There are IMO too many personal feelings and subjective judgements involved, on the part of both Mrs Putt and - no, not the testers - the persons she was supposed to read. Subjective judgements in both directions were possible.

For example, a person she was trying to read could have been "merciful" or biased and therefore (not cheating, but also not really objective) interpreted the reading as positively as possible. Or vice versa. This could have happened even if there had never been any contact whatsoever between Mrs Putt and the person.

I hope you get what I want to say (my English...)
 
"Here, if they present all with all readings secretly, and away from each other, and if the correct people mostly pick out the correct one, that is statistical evidence."

Yes, that sounds like the best solution provided the readings are not too vague. But this problem can be handled :)

However, I'd not call it "statistical evidence" unless the result can be repeated in several runs with different persons. You know... statistics can be a bitch, and even a single 100% score is not evidence if it is not repeatable. Remember Dr. Rhine and the Zener cards?

The more I think about this particular test, the more difficulties and pitfalls raise their ugly heads.
 
However, I'd not call it "statistical evidence" unless the result can be repeated in several runs with different persons. You know... statistics can be a bitch, and even a single 100% score is not evidence if it is not repeatable.
That's why there are two tests, the preliminary and the final. For the preliminary test the applicant typically has to beat odds of chance success of 1:1000. As no one has ever done so, the success criteria for the final test have never been stated, but a simple repeat of the first test would give combined odds of chance success of 1:1,000,000.
 
That's why there are two tests, the preliminary and the final. For the preliminary test the applicant typically has to beat odds of chance success of 1:1000. As no one has ever done so, the success criteria for the final test have never been stated, but a simple repeat of the first test would give combined odds of chance success of 1:1,000,000.

Point taken. But a million is a million is a million. That's one point.
The second: probability does not say anything about what will happen next. If the candidate gets a score of 100% in the first run, the chance of a similar result in the second run is still 1:1000.

That said - I doubt that James Randi (happy birthday, and we all hope that you will stay among us until a three-digit birthday, preferably in hexadecimal; the world needs you badly :) ) is just interested in keeping the million dollars. I am pretty sure that he is, like me, very much interested in finding, for once, evidence beyond any doubt that there are people with some kind of paranormal abilities, or the opposite, and has started the challenge mainly to motivate people to undergo rigorous tests.

Wouldn't it be very sad if in the end he were fooled - not by a cheating candidate but simply by one of the tricks probability and statistics can play even on scientifically thinking and totally unbiased people - into taking paranormal abilities for granted?
 
Point taken. But a million is a million is a million. That's one point.
That's a point? I'm afraid I don't get it.

The second: probability does not say anything about what will happen next. If the candidate gets a score of 100% in the first run, the chance of a similar result in the second run is still 1:1000.
The test protocol is usually designed so that the candidate does not need to score 100% to meet the success criteria. For example, in the case of Patricia Putt only five of the ten subjects had to correctly identify their readings for her results to be considered sufficiently better than chance to pass the test, i.e. a score of 50%. Her actual score was 0%.

And yes, the odds of reaching the success criteria in the second test by chance would also be 1:1000, same as the first. But the odds of reaching the success criteria in both tests by chance is 1:1,000,000.
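
For concreteness, here is how that kind of chance calculation can be done under one simplified model: each of the ten subjects independently picks one of the ten readings at random, so the number of correct picks is binomial with p = 1/10. The model is my own assumption for illustration; whatever model the JREF actually used may differ, but the way two tests combine is the same.

Code:
from math import comb

def binomial_tail(n, k, p):
    # Chance of k or more successes in n independent tries,
    # each succeeding with probability p.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Simplified model of the preliminary test: 10 subjects, each picks one of
# 10 readings at random, and 5 or more correct picks counts as a pass.
p_one_test = binomial_tail(10, 5, 1 / 10)
print(p_one_test)        # about 0.0016, i.e. roughly 1 in 610

# Passing two independent tests by luck multiplies the probabilities.
print(p_one_test ** 2)   # about 2.7e-06, i.e. roughly 1 in 370,000

Under this simplified model the per-test chance comes out nearer 1 in 600 than exactly 1 in 1000 (the quoted figure is presumably rounded or based on a slightly different model), but the multiplication step is the point: whatever the per-test odds, two independent passes by luck are that number squared.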

I am pretty sure that he is, like me, very much interested in finding, for once, evidence beyond any doubt that there are people with some kind of paranormal abilities, or the opposite
Only the existence of paranormal abilities can be proved beyond any doubt; the opposite becomes increasingly likely the more people are tested and fail, but there's always the possibility that the very next person you test will prove to be the first to possess one. Randi's motivation in creating the challenge, AIUI, was to expose scam artists like Uri Geller who pretend to have paranormal abilities when they are actually just magicians like him.

Wouldn't it be very sad if in the end he were fooled - not by a cheating candidate but simply by one of the tricks probability and statistics can play even on scientifically thinking and totally unbiased people - into taking paranormal abilities for granted?
The more people he tests the more likely it becomes that that 1:1,000,000 chance result will eventually happen, if that's what you mean, but he'd need to test hundreds of thousands of people before that would become a serious concern. I don't understand the bolded comment.
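
To put a rough number on that (my own back-of-the-envelope figures, using the 1:1,000,000 combined odds quoted above): the chance that at least one of N independent applicants passes both tests by pure luck is 1 - (1 - 10^-6)^N.

Code:
# Chance that at least one of n applicants passes both tests by luck alone,
# assuming an independent 1-in-1,000,000 chance per applicant (illustrative).
p = 1e-6
for n in (1000, 100000, 700000):
    print(n, 1 - (1 - p) ** n)
# Prints roughly 0.001, 0.095 and 0.50 - the risk only passes 50%
# somewhere around 700,000 applicants.

Which is why a lucky winner is not a serious worry unless the number of applicants really does get into the hundreds of thousands.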
 
That's a point? I'm afraid I don't get it.

Sorry, I could not resist - odds of 1:1,000,000 and a million dollars :) Please don't take that too seriously.

The test protocol is usually designed so that the candidate does not need to score 100% to meet the success criteria. For example, in the case of Patricia Putt only five of the ten subjects had to correctly identify their readings for her results to be considered sufficiently better than chance to pass the test, i.e. a score of 50%. Her actual score was 0%.

My fault. By "100%" I meant reaching her goal (5 out of 10).

And yes, the odds of reaching the success criteria in the second test by chance would also be 1:1000, same as the first. But the odds of reaching the success criteria in both tests by chance is 1:1,000,000.

True.

Only the existence of paranormal abilities can be proved beyond any doubt; the opposite becomes increasingly likely the more people are tested and fail, but there's always the possibility that the very next person you test will prove to be the first to possess one. Randi's motivation in creating the challenge, AIUI, was to expose scam artists like Uri Geller who pretend to have paranormal abilities when they are actually just magicians like him.


The more people he tests the more likely it becomes that that 1:1,000,000 chance result will eventually happen, if that's what you mean, but he'd need to test hundreds of thousands of people before that would become a serious concern. I don't understand the bolded comment.

Maybe I misunderstand the purpose of the challenge completely.

I always thought that, besides debunking all these sorts of cheating and fraud (which, as you stated, does not mean "demonstrating or proving the non-existence of paranormal phenomena"), the purpose of the challenge was to demonstrate how many wrong conclusions and misinterpretations can happen in the research of such phenomena, and of course among the believers. And therefore, if at any time some sort of coincidence causes a candidate to win the challenge, we have a situation where Randi must accept that such a phenomenon exists. With all implications.

I do not even want to imagine the reaction of the woos :)

OTOH, if a candidate wins because he/she really has paranormal abilities, we would have to make several changes to our world-view, and we would have a lot of research in front of us. But I doubt that this will ever happen. At least it is several orders of magnitude less likely than success by chance.

Once again - I am not a native English speaker, and therefore my control over the strength (?)* of an expression is very limited, so please take what I write with a grain of salt.

*For example here: "in the research of such" - I am pretty sure that "research" does not exactly mean what I want to say. It is too strong a word. But I can't find a better one.
 
This reminds me of a favorite point of astrologers too: if a subject doesn't fall for the Forer Effect, and says that a chart doesn't uniquely identify himself, the astrologist can claim the subject just isn't very self-aware.

I like it.

If I ever cross to the dark side, I will say things such as "you are not aware of your selfishness." If the person admits to being selfish, I can count it as a hit and if the person denies it, I can still count it as a hit.
 
I think you're missing the point of the challenge. If the person can actually do what they claim they can do, there shouldn't be any doubt of whose result is whose. Especially if the person claimed and agreed that they could perform the task without including personally definable information. (Like a job, profession, or hobby.)

In fact, the person being tested wants it that way. The less direct they are, the better their chances the person reading their "reading" will be able to think, "Oh, sure, that's me."
 