
Pigasus Awards & Sheldrake

Again you make a powerful case! I am unable to counter such bold assertions which require no substantiation.
 
This is a nitpick, but a p=0.05 gives odds of 1 to 19 (p/(1-p)).

That is an important point. When it is very unlikely that the null hypothesis is false (let's say a 1-in-100 chance or less), you are much more likely to find p = 0.05 when the null hypothesis is true than when it is false.
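To put rough numbers on this (a minimal sketch; the 80% power and the 1-in-100 prior are illustrative assumptions, not figures from any particular study):

```python
# Illustrative sketch: how often is a "significant" result a false
# positive when true effects are rare? Assumed numbers, not real data.
alpha = 0.05   # significance threshold
power = 0.80   # assumed probability of detecting a real effect
prior = 0.01   # assumed prior: 1 in 100 tested effects is real

false_pos = alpha * (1 - prior)  # null true, yet p < .05
true_pos = power * prior         # effect real and detected

print(f"significant results that are false positives: "
      f"{false_pos / (false_pos + true_pos):.0%}")  # ~86% here
```

In other words, with a prior that low, the large majority of "significant" results are flukes.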

Linda

I accept this, though, for a preliminary test. I think the point of a preliminary test is to prevent Type II error - accidentally dismissing a hypothesis that is in fact true (a false negative).

I don't accept that this paper shows a positive result, but if it did, the next step would be to move toward higher statistical power. The main reason for needing higher statistical power is the sheer number of experiments that are done in this field. The more you do, the more likely you are to trip over a Type I error by chance alone. Which is the point of statistical analysis.
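A quick back-of-the-envelope sketch of that point (assuming independent experiments with no real effect, each tested at p < .05):

```python
# Probability of at least one false positive across n independent
# experiments when every null hypothesis is true (alpha = 0.05).
alpha = 0.05
for n in (1, 5, 10, 20, 50):
    print(f"{n:3d} experiments -> P(spurious 'hit') = "
          f"{1 - (1 - alpha) ** n:.2f}")
# Ten experiments already carry a ~40% chance of one false positive.
```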




Hence the need for collaboration between both sides on joint experiments - such as Chris French & Rupert Sheldrake. Then neither side can complain about the way the experiment was conducted.

This always leads to impasse: the paranormalists refuse to run experiments with tight protocols, and refuse to do proper statistical analysis. Consulting with them usually means nothing gets done. Look at the JREF Paranormal Challenge. Or the Natasha Demkina fiasco. Good God, they're still kvetching, and it's been two years.

Whichever side doesn't get the results they want will complain like gangbusters. Ad hoc rationalizations are the modus operandi of paranormal experiments.

I don't see any reason to consult with Sheldrake. Nothing personal against him, but this is experience talking. Read a few of Susan Blackmore's editorials.





I heard Sheldrake a few months ago in a radio debate with a chemist (a chemist!)

Good God, no! The next thing we'll hear is that some biologist will start dabbling in paranormal research. Where will it end?




I don't believe Sheldrake has claimed to have conclusively demonstrated anything here. Just that so far he has found an apparent telepathy effect which deserves further research. (Though I don't doubt he believes that given further research the effect will persist.)

Why would he call it telepathy?



Well that confirms it - you don't understand P-values.

I think the poster made sense, and I agree with the statement that the primary focus should be on protocol. This paper was a dog's breakfast of protocols, which makes it really hard even to agree on how to calculate a p-value.




That's hardly a complaint against Sheldrake. Very few published papers in any field have negative results. It's tough to get a paper with negative results accepted for publication.

This is half-true. Assuming the protocol is acceptable (i.e., if we're just talking about publication bias), a published experiment with p < .05 and a positive result could very well represent ten unpublished (file-drawer) experiments with p > .05 and negative results.

This possibility diminishes noticeably if we choose to go with p < .005 or p < .001. There simply can't be hundreds of unpublished experiments out there with negative results.
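To make that concrete, here's a trivial sketch: under a true null, a threshold of alpha produces a false positive roughly once per 1/alpha experiments, so the required size of the file drawer grows fast as the threshold tightens.

```python
# Average number of null experiments the file drawer would need to
# hold to produce ONE false positive at each threshold.
for alpha in (0.05, 0.005, 0.001):
    print(f"p < {alpha}: ~{round(1 / alpha)} null experiments "
          f"per false positive")
# p < .05 -> ~20, p < .005 -> ~200, p < .001 -> ~1000
```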

However, my concerns are about the protocols right now. The p value is just a distraction. Especially the way Sheldrake calculates it. See below.




So your complaint against Sheldrake is that he isn't being honest? I've seen no evidence of that. I gather he honestly believes that something is going on and is doing his best to scientifically test for it. What evidence (other than positive results) do you have that he is cheating to get the results he wants?

Sheldrake is indeed a legitimate scientist in the field of biology. This is why I think he should know better than to combine runs that are performed with different protocols into a summary statistical analysis. This is a huge no-no in the professional scientific world, and seeing him do this is evidence of dishonesty in my opinion.

He further botches the p-value calculations. He did not address the fact that he was doing multiple runs. Multiple runs call for a corrected p-value, since each run is another chance at a false positive; instead he simply pooled the results. Pooling would only be valid for one big pre-planned run, but that's not what he actually did. It artificially inflates the apparent significance of the experiment. Again: he should know that, so seeing him do this suggests deception rather than incompetence.
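A toy illustration of the pooling problem (hypothetical numbers, not Sheldrake's actual data): suppose some runs used two possible callers (50% chance) and others used four (25% chance), and every run came out exactly at chance. Pool the hits and test against the 25% baseline, and pure guessing looks "significant":

```python
# Toy example (hypothetical numbers, not Sheldrake's actual data):
# pooling runs with different chance baselines and testing against a
# single baseline manufactures "significance" out of pure guessing.
from scipy.stats import binomtest

hits_2way, n_2way = 100, 200  # 2-caller runs: chance = 1/2, exactly at chance
hits_4way, n_4way = 200, 800  # 4-caller runs: chance = 1/4, exactly at chance

pooled = binomtest(hits_2way + hits_4way, n_2way + n_4way,
                   p=0.25, alternative="greater")
print(f"pooled: 300/1000 vs 25% baseline, p = {pooled.pvalue:.2g}")  # ~2e-4

# Analyzed per protocol, against the right baseline, nothing is there:
print(binomtest(hits_2way, n_2way, p=0.50, alternative="greater").pvalue)  # ~0.53
print(binomtest(hits_4way, n_4way, p=0.25, alternative="greater").pvalue)  # ~0.51
```

The pooled number is real arithmetic but the wrong test: the "extra" 50 hits come entirely from the runs with the easier baseline, not from any above-chance performance.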
 
I was referring to his book 'Dogs That Know When Their Owners Are Coming Home', though I assume the experiments are also published in papers (sorry, can't recall). No doubt on Sheldrake's web site.

Oh, the book. I've read his papers on the subject, and didn't recall anything about asking the owner to stop coming home.
 
Again you make a powerful case! I am unable to counter such bold assertions which require no substantiation.

Alright, let me spell it out for you:

Sheldrake claimed the dog went to the window when the owner was coming home, and therefore knew she was on her way.

When this experiment was repeated by another team, the dog did indeed go to the window when the owner was coming home. And at regular intervals in between. The dog went to the window all the damn time. Therefore it was not proved that the dog knew anything about the owner's movements, and no paranormal ability was recognised.

Of course Sheldrake is going to defend his position. He has books to sell. But I believe that the scientific community has accepted the second set of results, his defence wasn't deemed significant, and the matter is at rest. No paranormal ability was proved, nothing to see, move along.

If you think you have proof of something which violates the known laws of the universe, you'd better be two things:

1) sure
2) thorough.

Sheldrake consistently fails on the second, which makes me wonder if he even cares about the first.
 
Can You Feel a Difference Between a Live TV Show and a Recording?
I want to find out if some people can feel a difference between a live TV show and a recording. For example, if you're watching a football match broadcast live, at the same time that you're seeing it, millions of other people may be watching and experiencing similar emotions as the game progresses. By contrast, if you watch the same match on a DVD or video recording when almost no one else is watching it, there will be very few people feeling the same emotions at the same time as you. I'm trying to find out if people can feel a difference between live and recorded events while they are watching them. Of course, it is hard to separate your conscious knowledge of whether it is live or recorded from your feelings when watching it. I'm thinking of carrying out experiments in which these effects could be teased apart. But meanwhile I would like to hear from anyone who's noticed a difference between watching live and recorded events, and I would be interested in any observations you may be able to share.

Diving back to the first page, I was especially taken by this advert that Teek posted. I often find that watching a football match live gives a very different feeling than watching it later on video / dvd / sky+.

But there are so many reasons why this would be the case (for me, at least). I would either know the result, or would have taken care not to see the result. Watching football is very routinised - I know the time of kickoff: a weekday evening for a European game, a Sunday afternoon for a Premiership game, etc. If I'm watching a game on a Friday morning, it's not going to be live (unless it's the World Cup in Japan, or something).

So if you're going to ask for volunteers, why choose an example where you're likely to get a majority of false positives? Why would you want to start your experiment with an example that already has a number of mundane explanations, without having to posit some kind of emotional feedback?

Of course, this says nothing about the validity of the experiment when (if) it takes place. It just seems a shoddy way to start.
 
I believe that the scientific community has accepted the second set of results, his defence wasn't deemed significant, and the matter is at rest.

Er, who exactly is this unnamed 'scientific community' that has been closely following this particular research & publishing papers about it? How many scientists are we talking about - thousands? Or are you talking about Richard Wiseman? (He whose Word is Truth;))

If you think you have proof of something which violates the known laws of the universe

Er, does it violate the known laws of the universe? You don't know. You don't know the mechanisms involved (if any), because you haven't done the research and moreover you don't want the research to be done.
 
Beth said:
This is my take on it: He has a hypothesis regarding something he terms "morphic resonance" (I'm not sure if that's the right terminology). Assuming this hypothesis is correct, he's making a prediction that people will be able to distinguish between live and pre-recorded TV. Now he's trying to test that prediction. If it tests positive, it would lend credence to his hypothesis. If it tests negative, it would imply his hypothesis is NOT correct. I think this is how science is supposed to work.
This can only be the case if the hypothesis that he tests in the experiment is drawn from a theory of morphic resonance. In other words, he has to postulate a mechanism and operation of morphic resonance, and then draw a hypothesis from this theory to test in his experiment. Otherwise it's just another experiment about protocol and statistics.

He may be doing this; I don't know.

~~ Paul
 
This can only be the case if the hypothesis that he tests in the experiment is drawn from a theory of morphic resonance. In other words, he has to postulate a mechanism and operation of morphic resonance, and then draw a hypothesis from this theory to test in his experiment. Otherwise it's just another experiment about protocol and statistics.

Not sure this is true. Surely you can conduct a useful experiment to falsify a theory T without putting up an alternative theory. E.g. let's suppose a positive outcome on this particular telepathy experiment (if well-controlled etc.) would be incompatible with some of the known laws of physics T. Then the experiment, if the outcome is positive, does something very important - it falsifies T (or rather provides evidence that T is false). I don't see that you have to provide an alternative T' that the experiment is supposed to be supporting in order to get useful results from it. That could be something for later research.

In other words, he has to postulate a mechanism and operation
But plenty of things in science can be experimentally demonstrated where the underlying mechanism & operation is unknown. E.g. you can demonstrate that the sun shines, but until nuclear physics was understood no-one knew how (it was known to be incompatible with the age of the earth as demonstrated by geology, because if the sun was burning conventional fuel it would have burnt out long ago).

(Incidentally people on this forum seem to be assuming for some unstated reason that telepathy must violate the known laws of physics, which I think is why they are so vehemently opposed to it.)
 
Bfinn said:
Not sure this is true. Surely you can conduct a useful experiment to falsify a theory T without putting up an alternative theory. E.g. let's suppose a positive outcome on this particular telepathy experiment (if well-controlled etc.) would be incompatible with some of the known laws of physics T. Then the experiment, if the outcome is positive, does something very important - it falsifies T (or rather provides evidence that T is false). I don't see that you have to provide an alternative T' that the experiment is supposed to be supporting in order to get useful results from it. That could be something for later research.
Yes, this could be an interesting experiment. However, I have never read about a psi experiment whose null hypothesis was an existing law of physics. The null hypothesis is usually just "the results are due to chance," even if that's not what the researcher thinks it is.
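For what it's worth, the standard analysis in these experiments is just a binomial test of the hit rate against the chance baseline. A minimal sketch (assuming a 4-caller design, so 25% chance; the counts are hypothetical):

```python
# Minimal sketch of the usual psi-experiment null test: observed hit
# rate vs. chance. Assumes a 4-caller design (chance = 1/4); the
# counts are hypothetical. Note that no law of physics appears
# anywhere in this null hypothesis.
from scipy.stats import binomtest

hits, trials = 45, 120
result = binomtest(hits, trials, p=0.25, alternative="greater")
print(f"hit rate = {hits / trials:.2f}, p = {result.pvalue:.4f}")
```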

But plenty of things in science can be experimentally demonstrated where the underlying mechanism & operation is unknown. E.g. you can demonstrate that the sun shines, but until nuclear physics was understood no-one knew how (it was known to be incompatible with the age of the earth as demonstrated by geology, because if the sun was burning conventional fuel it would have burnt out long ago).
But once you demonstrate conclusively that the sun shines, you move on to develop a theory of starlight. If some psi researcher thinks he's demonstrated the equivalent of the sun shining in parapsychology, he should move on to the theory stage.

(Incidentally people on this forum seem to be assuming for some unstated reason that telepathy must violate the known laws of physics, which I think is why they are so vehemently opposed to it.)
I don't think it has to, which is possibly why psi experiments don't have physical laws as their null hypotheses.

~~ Paul
 
I can usually tell who is phoning me at any time, before I answer the phone. That's because the set of people who phone me is limited to less than half a dozen people, and they usually call at particular times of the day, or day of the week, etc.

Which raises two issues that Sheldrake needs to address:

1) The person doing the guessing is almost invariably given a known set of people who will call. However, that is not "reality". In reality, the potential set of callers is the world, with a higher probability that it will be specific people. So the set of potential callers in the test needs to be considerably larger than the number of people who actually make the calls, with the actual callers chosen from that set. Let's say a total set of 100, with 5 chosen at random for one series of tests (a quick simulation of the chance level under such a design is sketched below).

2) As mentioned before in this thread, timing is a factor that Sheldrake overlooked. And it seems to me that this is such an obvious factor that it does need to be controlled. But that Sheldrake missed it is significant to this discussion - it raises major concerns about what other obvious controls were not put in place.
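Here is a rough simulation of the chance level under the design proposed in (1) (my reading of it; the parameters are hypothetical): a pool of 100 possible callers is agreed in advance, 5 of them are secretly chosen for each series, and under the null hypothesis the recipient's guesses are uniform over the pool.

```python
# Rough simulation of the chance baseline under the proposed design
# (one reading of it; parameters are hypothetical).
import random

random.seed(1)
POOL, CHOSEN, CALLS, SERIES = 100, 5, 20, 10_000

hits = 0
for _ in range(SERIES):
    callers = random.sample(range(POOL), CHOSEN)  # the 5 actual callers
    for _ in range(CALLS):
        caller = random.choice(callers)   # one of the 5 places the call
        guess = random.randrange(POOL)    # chance-level guess from the pool
        hits += (guess == caller)

print(f"chance hit rate ~ {hits / (SERIES * CALLS):.3f}")  # ~= 1/100
```

Against a 1% baseline, even a modest real effect would stand out, which is the point of widening the candidate set.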

What this means is that Sheldrake's results are basically number salad. We can't rely on them at all for any results, positive or negative.

I tend to agree with Teek - Sheldrake seems to have cottoned on to taking a dippy idea, making a half-baked theory about it, trying to data-mine for supporting evidence by gradually refining his "experiments", then lather, rinse, and repeat.

This is precisely how PEAR operated, for 25+ years. And I think Sheldrake has a similar motive in mind: while people and institutes continue to pay him to study it, it pays him NOT to get conclusive results but to keep on doing research! Which is precisely what he is doing!

Cynical, aren't I.
 
1) The person doing the guessing is almost invariably given a known set of people who will call. However, that is not "reality". In reality, the potential set of callers is the world, with a higher probability that it will be specific people. So the set of potential callers in the test needs to be considerably larger than the number of people who actually make the calls, with the actual callers chosen from that set. Let's say a total set of 100, with 5 chosen at random for one series of tests.

Er, why? You seem to be arguing that in order to test for telepathy the experiment has to be structured the same way that normal phone calls are. Non sequitur. If there isn't telepathy (and the experiment is well-controlled), the results will be indistinguishable from chance, regardless of how many people are calling and whether they are friends or not.

2) As mentioned before in this thread, timing is a factor that Sheldrake overlooked. And it seems to me that this is such an obvious factor that it does need to be controlled. But that Sheldrake missed it is significant to this discussion - it raises major concerns about what other obvious controls were not put in place.

As mentioned before in this thread, (a) Sheldrake says he looked into it and the clock sync wasn't a factor (at least in the videoed experiments), presumably because the recipient didn't look at a clock; (b) he discussed the clock sync issue in the Nolan Sisters paper.
 
Sheldrake is indeed a legitimate scientist in the field of biology. This is why I think he should know better than to combine runs that are performed with different protocols into a summary statistical analysis. This is a huge no-no in the professional scientific world,

Isn't that what meta-analysis is all about, which is widely accepted in conventional scientific journals?

He further botches the p-value calculations. He did not address the fact that he was doing multiple runs. Multiple runs call for a corrected p-value, since each run is another chance at a false positive; instead he simply pooled the results. Pooling would only be valid for one big pre-planned run, but that's not what he actually did. It artificially inflates the apparent significance of the experiment.

Could you give a simple mathematical example of how this works?
 
davidsmith73,

Are you going to pay for the two experiments you wanted done (post #20)?
 
Er, why? You seem to be arguing that in order to test for telepathy the experiment has to be structured the same way that normal phone calls are. Non sequitur. If there isn't telepathy (and the experiment is well-controlled), the results will be indistinguishable from chance, regardless of how many people are calling and whether they are friends or not.
Then it's not an accurate model of the "reality" situation which initially prompted Sheldrake to start this whole business. Namely that some people believe they can tell who is calling before they pick up the phone.

To limit the callers to people whom the receiver knows necessarily changes one of the prime factors in the experiment. It is also the basis for most of the problems that beset the experiments. Most of the effort you see is intended precisely to overcome the problems associated with the callers being known to the recipient - timing, expectation, collusion, etc.



As mentioned before in this thread, (a) Sheldrake says he looked into it and the clock sync wasn't a factor (at least in the videoed experiments), presumably because the recipient didn't look at a clock; (b) he discussed the clock sync issue in the Nolan Sisters paper.
They don't need a clock to be able to count to 100 in their heads reasonably accurately. I can time periods of up to 5 minutes or so to within a few seconds' accuracy if I concentrate on the task. So the timing issue still exists.

Not to mention that in the experiments so far, the recipient KNEW they were going to be called...at some point! Again, that's precisely what the experimental model should be trying to eradicate!
 
Of course, this says nothing about the validity of the experiment when (if) it takes place. It just seems a shoddy way to start.

It's not the only one to start that way, hence I posted it. But, he might as well start as he will invariably continue.
 
Then it's not an accurate model of the "reality" situation which initially prompted Sheldrake to start this whole business. Namely that some people believe they can tell who is calling before they pick up the phone.

This doesn't make a protocol using only 4 callers known to the recipient an invalid test of telepathy.

To limit the callers to people whom the receiver knows necessarily changes one of the prime factors in the experiment.

If someone claims they know telepathically when a friend or family member is about to call them, then test them using 4 known people, or test them using 100 unknown and 4 known. It won't make a difference to the validity of the test.
It is also the basis for most of the problems that beset the experiments. Most of the effort you see is intended precisely to overcome the problems associated with the callers being known to the recipient - timing, expectation, collusion, etc.

Since Sheldrake states that this apparent phenomenon usually occurs when the caller is known to the recipient it would make good sense to test using known callers. The problems you mention could be controlled for. Testing with known callers really isn't a problem, which is why Chris French is attempting a collaborative replication with tighter protocols also using known callers.
 
Since Sheldrake states that this apparent phenomenon usually occurs when the caller is known to the recipient it would make good sense to test using known callers.

That's exactly why I say the phenomenon doesn't exist.

Why would people be more receptive to those they know, and not guess e.g. that it was some telemarketer, wrong number, or whoever unknown might be calling?

It would be much more impressive if people could predict whoever was calling, regardless of whether they knew the person or not.

(phone rings)
(Ah, that is Auntie Em)

(phone rings)
(Oh, shucks, that's a bloody telemarketer)

That doesn't happen. No, people report that they think of someone they know, and whoopsie, they call.

Which is what we call "confirmation bias".
 
His response has no value. His work was thoroughly discredited.

Not really. Have you actually read his response to their criticisms?

Alright, let me spell it out for you:

Sheldrake claimed the dog went to the window when the owner was coming home, and therefore knew she was on her way.

When this experiment was repeated by another team, the dog did indeed go to the window when the owner was coming home. And at regular intervals in between. The dog went to the window all the damn time. Therefore it was not proved that the dog knew anything about the owner's movements, and no paranormal ability was recognised.

According to Sheldrake, when the experimental data collected by that team were analyzed the same way that Sheldrake analyzed his own experimental data, Wiseman's data show the same pattern as Sheldrake's original results. From the link provided above:
In the three experiments Wiseman did in Pam's parents' flat, Jaytee was at the window an average of 4% of the time during the main period of Pam's absence, and 78% of the time when she was on the way home. This difference was statistically significant. When Wiseman's data were plotted on graphs, they showed essentially the same pattern as my own. In other words Wiseman replicated my own results.
Another complaint about his analysis was that the dog was at the window more frequently the longer the owner was gone, and the owner was always away at least one hour. However, closer examination of the data showed that this problem did not affect the results.
I have reanalyzed the data from all 12 experiments excluding the first hour. The percentage of time that Jaytee spent by the window in the main period of Pam's absence was actually lower when the first hour was excluded (3.1%) than when it was included (3.7%).

Refuted? Not exactly. He has responded to the criticisms leveled at his experimental results with data and shown them to be ill-founded. At one point, a few years ago, I read Sheldrake's detailed response to these criticisms, which contained graphs of both his experimental data and Wiseman's. What he claims about both data sets showing the same pattern was true. I haven't independently verified his statistical analysis regarding the analysis with and without the first hour's data, but I see no reason to assume he's presenting a bald-faced lie about those results.

Do you have any evidence that he's actually being dishonest about the data, the analysis or the results?


Sheldrake is indeed a legitimate scientist in the field of biology. This is why I think he should know better than to combine runs that are performed with different protocols into a summary statistical analysis. This is a huge no-no in the professional scientific world, and seeing him do this is evidence of dishonesty in my opinion.

This is not necessarily a problem. It depends on how it's done. It's the basic idea behind meta-analysis. And when doing a DOE (design of experiments), protocol changes may be incorporated into the experimental design. I don't find this criticism to be evidence of dishonesty.
He further botches the p-value calculations. He did not address the fact that he was doing multiple runs. Multiple runs call for a corrected p-value, since each run is another chance at a false positive; instead he simply pooled the results. Pooling would only be valid for one big pre-planned run, but that's not what he actually did. It artificially inflates the apparent significance of the experiment. Again: he should know that, so seeing him do this suggests deception rather than incompetence.

I'm not sure which paper you are talking about here, but if this was published in a peer-reviewed journal and the problem is so easily spotted that it suggests deception rather than incompetence, how did such mistakes pass peer review?

My opinion is that these types of analysis procedures can be tricky to run correctly. Hence, they can slip past peer review. I can't say whether or not the results are valid, or whether you are correct in your claim of botched p-values, without a fairly detailed and extensive review of the analysis. Again, I don't find this to be evidence of dishonesty. Even if you are correct about the problem, incompetence is a reasonable and adequate explanation for such problems.
 
