Moderated Is the Telekinesis Real?

No, I'm not going to follow him on the new subject he's frantically trying to change to. He desperately wants to leave significance testing far in the dust and bring up something new to distract the audience from his colossal failure and his inauspicious damage-control effort.

I find any "evidence" of telekinesis that requires this much discussion of statistics to be underwhelming.

All Buddha has done in this thread is prove that the best evidence he can offer for telekinesis lies within the margin of error of poor test design and implementation.

Does anyone have any better evidence? The study Buddha is babbling about is a dead end.
 
I find any "evidence" of telekinesis that requires this much discussion of statistics to be underwhelming.

This thread isn't about psychokinesis any more than his other threads were about what the titles suggested. This thread isn't even really about statistics either.

The view from 10,000 feet is more revealing. Buddha is frantically trying to find a combination of forum and subject that lets him pretend to be highly accomplished at others' expense. He's found the forum, and now he's running the gamut of stuff he has some cursory knowledge of, hoping he can parlay enough of it to gain self-esteem.

Philosophy, in proving God? Nope, others knew too much about philosophy. Biology, for evolution? Nope, he couldn't make that stick either. Empirical methods, for reincarnation? Nope, too many experienced empiricists. Engineering? Nope, he had to admit his critics there were at least his equals. Now we're into statistics, where he's frantically trying to portray all his critics as ignorant while, out of the other side of his mouth, he regurgitates what they say as if he thought of it.

All his threads are about trying (and generally failing) to make everyone else look bad, and himself look good in comparison, according to the same old deny-then-repeat strategy -- coupled with goalposts on wheels -- that all fringe claimants who use it seem to think is so very clever and undetectable.
 
"Buddha" said:
...

For more information on t-tests consult Crow, Statistical Manual, page 57 (this is a very simple book, everyone should be able to understand it).


...

This Wikipedia article gives some information about optimal stopping
https://en.wikipedia.org/wiki/Optimal_stopping

...

So you are educating yourself. Good for you. I would suggest sources other than Wikipedia, though.

Let's probe whether you have learnt something about it:

  • What steps did Jahn and Dunne take to set the size of the group in the experiment?
  • What should those steps have been, and in what range should that size have fallen?
Don't hesitate; it's all out there. For instance, you might read Operator-Related Anomalies in a Random Mechanical Cascade by Dunne, Nelson & Jahn. It might make clearer what you're dealing with.
 
Sorry, but I'm not just going to let the lies stand. Obviously Buddha's been frantically doing make-up studying, during which he has apparently discovered that I was right all along. Now he's in Pontiff mode trying to belatedly lecture on the subjects I've already described and that he's already dismissed as "irrelevant."

That's OK. So I suggest we all switch to teaching mode so that "Buddha" -- and the few real lurkers horrified by "Buddha"'s posts -- may benefit from our collective knowledge.
 
I find any "evidence" of telekinesis that requires this much discussion of statistics to be underwhelming.

All Buddha has done in this thread is prove that the best evidence he can offer for telekinesis lies within the margin of error of poor test design and implementation.

Does anyone have any better evidence? The study Buddha is babbling about is a dead end.

And that pretty much summarizes it all.

However, this thread continues to be a good opportunity to further study epistemological hedonism in its natural environment.
 
This thread isn't about psychokinesis any more than his other threads were about what the titles suggested. This thread isn't even really about statistics either.

The view from 10,000 feet is more revealing. Buddha is frantically trying to find a combination of forum and subject that lets him pretend to be highly accomplished at others' expense. He's found the forum, and now he's running the gamut of stuff he has some cursory knowledge of, hoping he can parlay enough of it to gain self-esteem.

Philosophy, in proving God? Nope, others knew too much about philosophy. Biology, for evolution? Nope, he couldn't make that stick either. Empirical methods, for reincarnation? Nope, too many experienced empiricists. Engineering? Nope, he had to admit his critics there were at least his equals. Now we're into statistics, where he's frantically trying to portray all his critics as ignorant while, out of the other side of his mouth, he regurgitates what they say as if he thought of it.

All his threads are about trying (and generally failing) to make everyone else look bad, and himself look good in comparison, according to the same old deny-then-repeat strategy -- coupled with goalposts on wheels -- that all fringe claimants who use it seem to think is so very clever and undetectable.

Yes, he's basically a photocopy of Donald Trump, trying to get control of a thread in a web forum that is as ridden with such characters as the city drains are with rats. He's not trying to get control of the Cheget, as the real Don was. If you think about it, it's no biggie.

Replying to every lie "Buddha" may drop here because of ignorance may prove to be overkill.
 
Yes, he's basically a photocopy of Donald Trump, trying to get control of a thread in a web forum...

Correct. He ignores all the critics of PEAR except the one he thinks he can throw enough mud at to make himself look good. And he dismisses stuff he can't answer as "irrelevant" (although it somehow seems to regain relevance once he's crammed enough to write a pontifical paragraph or two on it), and then moves on to new subjects to distract from his inability to address his opponents' rebuttal.

This is not Dunning and Kruger. Their paper was "Unskilled and unaware of it." This is "Knowingly unskilled and trying desperately to conceal it." Foisting the agenda for discussion is just one of his social-engineering tactics for doing that. Pretty much the management consultant phenomenon others have already noted.

Replying to every lie "Buddha" may drop here because of ignorance may prove to be overkill.

Hopefully. He needs to know that we've noticed them, otherwise he'll think he can just keep trying to get away with straw-manning all his opponents. So for that purpose we need to point out at least a few of them in detail to show that we see them and that we can prove them with facts, if necessary. But no, once we've shown his propensity to lie in order to save face, it doesn't have to be a matter of ongoing proof.
 
Hopefully. He needs to know that we've noticed them, otherwise he'll think he can just keep trying to get away with straw-manning all his opponents. So for that purpose we need to point out at least a few of them in detail to show that we see them and that we can prove them with facts, if necessary. But no, once we've shown his propensity to lie in order to save face, it doesn't have to be a matter of ongoing proof.
Precisely. And to make it clear, others have noticed them -- at least this one has. I cannot comment on the details of the statistical methodology and the critique themselves, but I am astute enough to know when one person (JayUtah) puts forth a specific criticism which another person (Buddha) ignorantly dismisses before changing tack and putting forth discussion of that criticism as if it were his own.
 
This is not Dunning and Kruger. Their paper was "Unskilled and unaware of it." This is "Knowingly unskilled and trying desperately to conceal it." Foisting the agenda for discussion is just one of his social-engineering tactics for doing that. Pretty much the management consultant phenomenon others have already noted.

What D&K described is at the root of his behaviour and others'. But you're right that when he is confronted with massive evidence of his blunders, he goes beyond the narcissistic "I must be right" -- expressed as a refusal to acknowledge his faults -- and falls into a total charade.

He needs to know that we've noticed them, otherwise he'll think he can just keep trying to get away with straw-manning all his opponents. So for that purpose we need to point out at least a few of them in detail to show that we see them and that we can prove them with facts, if necessary. But no, once we've shown his propensity to lie in order to save face, it doesn't have to be a matter of ongoing proof.

And hopefully, judging from the isolated "pre-recorded" post of his today, we're entering the final stage of this thread, when he'll claim his goals accomplished, proclaim victory, and declare he has no time because he has to go to Copenhagen to collect the "Novel Price" in Biology from the hands of the Czar (as that bound-by-Grammarly usual suspect would put it).

To explore the thread's subject seriously, I recommend reading the Dunne, Nelson & Jahn paper I linked earlier today. It's a jewel.
 
What D&K described is at the root of his behaviour and others'.

Sure. They realized at the time that they had stumbled onto a nest of related behaviors.

And hopefully, judging from the isolated "pre-recorded" post of his today...

Today's post was truly a desperate Hail Mary. It nonsensically tries to connect one of Palmer's minor quibbles to Buddha's lingering misunderstanding about significance testing. It really bears no further analysis than that; it's literally that frantic and misconstrued.

Obviously for most of us this thread was over several pages ago. Buddha will probably keep thinking he's actually fooling someone.

To explore the thread's subject seriously, I recommend reading the Dunne, Nelson & Jahn paper I linked earlier today. It's a jewel.

Indeed, and the first few pages of the thread contain a plethora of links to the criticism and commentary that Buddha won't acknowledge. It's all good reading.
 
I am astute enough to know when one person (JayUtah) puts forth a specific criticism which another person (Buddha) ignorantly dismisses before changing tack and putting forth discussion of that criticism as if it were his own.

We all know "that guy" in the meeting, the one who masks his own incompetence and inaction by glomming quickly onto what others say, me-too fashion. (Not #MeToo fashion, hopefully.) He's falling all over himself trying belatedly to come up with references to the literature -- which, he assures us, is far over our heads -- on things like homogenization and significance testing, quoting pointless but impressive-sounding passages from them. He honestly seems to think we'll be impressed by this. But even more amusingly, he honestly seems to think we won't recognize these as subjects I already brought up -- subjects he tried to avoid by dismissing them as "irrelevant."

Heavens, he's still trying to foist the notion that no data can ever be removed from a study once it's started. He just makes up his own personal rules for science that bear no resemblance to what's actually done. Today he's horribly torturing an elementary discussion of the t-test from a free book written for the Navy in 1960. He offers "...remains constant throughout the process" (his interpretation, not an actual quote from the book) as supporting this "rule," when the author is saying nothing of the sort -- only that the N-values for the two populations must be the same.

Why is he doing this? Because he's desperate to keep Operator 010 in the running. As the only subject in four experiments (three of them entirely independent) who managed to show any results at all that support his claim in this thread, she's all he's got. And he's fully willing to misrepresent the entire field of statistics to keep her there.

Remember when he said all the baseline runs have to be done before any of the trial runs? Here's what his source has to say about that.
Crow said:
Using the common normality assumption, we must decide whether to sample the two populations independently, or to pair observations so that conditions which usually fluctuate are the same for each pair, though changing from pair to pair.

In other words, exactly the way I said it should be done, and exactly the way Jahn did it. This is how closely Buddha reads his sources.
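
To make Crow's distinction concrete, here is a minimal sketch in Python (synthetic data; the drift term and effect size are invented for illustration) of why pairing each trial with an adjacent baseline, the way the passage above describes, beats sampling the two populations independently when conditions fluctuate:

Code:
# Synthetic illustration of Crow's paired-vs-independent choice.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# A slowly fluctuating condition (e.g. machine drift) shared by each
# baseline/trial pair, plus a modest hypothetical effect in the trials.
drift = rng.normal(0.0, 2.0, size=30)
baseline = 100.0 + drift + rng.normal(0.0, 1.0, size=30)
trial = 100.7 + drift + rng.normal(0.0, 1.0, size=30)

# Independent-samples t-test: the shared drift inflates both variances
# and swamps the modest effect.
t_ind, p_ind = stats.ttest_ind(baseline, trial)

# Paired t-test: differencing within each pair cancels the drift.
t_rel, p_rel = stats.ttest_rel(baseline, trial)

print(f"independent: t = {t_ind:+.2f}, p = {p_ind:.3f}")
print(f"paired:      t = {t_rel:+.2f}, p = {p_rel:.3f}")

With the drift cancelled pair by pair, the paired test recovers a shift that the independent test buries in variance.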

I wrote above that the statement regarding sample size is nowhere in the book. Not even the sentiment is there. The relevant passage expresses the relationship between sample size and the parameters of the t-test, which I already covered in depth, but it doesn't in any way support the contention that studies must proceed with a fixed-size sample at all costs. True, there would be a minimum sample size for some desired set of parameters, but that doesn't mean that samples that far exceed the minimum N-value cannot be re-reckoned to account for unusable data.
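
To put numbers on that relationship, here is a hedged sketch using statsmodels' power solver (the effect size, alpha, and power values are arbitrary illustrative choices, not anything from Crow or PEAR):

Code:
# The t-test's parameters imply a *minimum* N, nothing more.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()

# Arbitrary illustrative design parameters.
effect_size, alpha, power = 0.5, 0.05, 0.80

# Solve for the minimum per-group sample size.
n_min = solver.solve_power(effect_size=effect_size, alpha=alpha, power=power)
print(f"minimum N per group: {n_min:.1f}")  # about 64

# A study that enrolled 100 per group and later excluded 10 unusable
# records simply re-reckons: the achieved power is still comfortably
# above the design target.
p_left = solver.solve_power(effect_size=effect_size, alpha=alpha, nobs1=90)
print(f"power with N = 90 per group: {p_left:.3f}")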

We noted in the proof-for-God thread how ready Buddha is to flagrantly lie about what his sources say and cherry-pick what he needs. Or, charitably at best, to completely misunderstand them. He's still up to his same old tricks.
 
Why is he doing this? Because he's desperate to keep Operator 010 in the running. As the only subject in four experiments (three of them entirely independent) who managed to show any results at all that support his claim in this thread, she's all he's got. And he's fully willing to misrepresent the entire field of statistics to keep her there.

That's epistemological hedonism on steroids on "Buddha"'s part.

Not to mention the obvious path to follow: preparing a set of experiments to be performed by Operator 010 alone. If she were a "gifted" person -- and the rest of the pack just average subjects -- she could have repeated her favourable results in some of the many suitable experiments.

It seems that in "Buddha"'s mind Operator 010 serves the purpose of badly grouped data, but she is uninteresting as an individual psychokinesist to be tested. Bizarre, bizarre!
 
It seems that in "Buddha"'s mind Operator 010 serves the purpose of badly grouped data, but she is uninteresting as an individual psychokinesist to be tested. Bizarre, bizarre!

That's the interpretation that aligns with the nominal subject of the thread. What it really comes down to, I think, is that she's the excuse for Buddha's crusade against Palmer. Palmer showed what the data looked like without her contribution, and it told a story Buddha didn't want told. But the mere act of drawing back the curtain on the actual subject composition seems to be, in Buddha's mind, some inexcusable statistical anathema that he can rail about incessantly and pretend to be all professionally indignant about. As long as he has only a lay audience, Buddha can continue to insist there was some impropriety in raising the question of an anomalous outlier. But it really carries no more weight than, "But her emails...!"

Often a fringe claimant will search for and settle on something that he can keep coming back to, something that to him will always be wrong no matter how ignorant its foundation or how cogent the previous discussion explaining it. It appears Operator 010 will always be Palmer's Waterloo in Buddha's mind.
 
Often a fringe claimant will search for and settle on something that he can keep coming back to, something that to him will always be wrong no matter how ignorant its foundation or how cogent the previous discussion explaining it. It appears Operator 010 will always be Palmer's Waterloo in Buddha's mind.

There are a couple of things that bother me most. Firstly, the honour system by which every operator registered their intention. When Operator 010 got the results she did, I don't recall anybody saying "she was retested, and this time she had to declare her intention to a witness -- and it was hard-recorded as well." Secondly, if Operator 010 was so extraordinary, why wasn't she specially tested? How is a group of people gathered for an experiment without being asked whether they believed -- or had proof -- that they had psychokinetic abilities, then tested as if the only expectation were that the human race has such abilities by default? Didn't the researchers have other hypotheses to test about how these hypothetical special abilities are distributed?

The whole situation is so lame that it makes "Buddha"'s speculation look even more wanting.
 
Firstly, the honour system by which every operator registered their intention.

Correct. That's a point of empirical control that was missed. Virtually all the commentators noted the lack of description of empirical controls in the scant descriptions of the protocol. Because they weren't described, there is no reason to suppose they were implemented. This is what we mean by reproducibility. You have to describe your experiment protocol in enough detail that another group can do it exactly the same way. Jahn notes that the reproduction attempts used "substantially similar" protocols, which obviously could have added the appropriate controls.

This is especially important when researching a controversial subject, and even more especially important after Jahn reported that the field had previously been tainted by fraudulent work. That is a situation in which more empirical controls are indicated, not fewer, and where it is vitally important not to simply trust the subjects to report honestly.

Secondly, if Operator 010 was so extraordinary, why wasn't she specially tested?

I gather it's because the effect of inter-subject distribution wasn't studied until PEAR's critics did it. I don't recall seeing any such tests done as part of the REG research. The authors just relied upon the aggregate for most of the papers they published.

Didn't the researchers have other hypotheses to test about how these hypothetical special abilities are distributed?

None that they stated in enough detail ahead of time, although there was plenty of conjecture subsequent to viewing the data, among both proponents and critics. This is what one of our other esteemed contributors calls HARKing -- Hypothesizing After the Results are Known.

The whole situation is so lame that it makes "Buddha"'s speculation look even more wanting.

Despite what Buddha desperately needs his audience to believe, the criticism against PEAR is well founded in scientific methodology and statistical methods, is not biased or incompetent, is confirmed by attempts to replicate, and engenders enough doubt in PEAR's original findings that we cannot take them as probative of psychokinesis. Even Jahn himself admitted that while he found some interesting things in the data, the final analysis failed to prove PK.

I don't understand why Buddha can't accept this. His own witness ended up agreeing substantially with his critics. Heck, Jeffers even had the cooperation of PEAR researchers when he wrote his critique. Buddha is trying to spin this into a story that the facts demonstrate is simply not what happened.
 
Precisely. And to make it clear, others have noticed them -- at least this one has. I cannot comment on the details of the statistical methodology and the critique themselves, but I am astute enough to know when one person (JayUtah) puts forth a specific criticism which another person (Buddha) ignorantly dismisses before changing tack and putting forth discussion of that criticism as if it were his own.

This. I don't pretend to be an expert in statistics (unlike a certain thread starter) but I can smell bull **** when it's posted in front of me and I can tell who is posting honestly and who is dodging.
 
Bwahahaha! You must have been sick the day they taught multivariate analysis in statistics. With a wave of your hand, you simply disregard an entire field of statistics, and one of the most important techniques in experimental research that guarantees the independence of the one variable you were given data for.

What's even worse is that I described in detail how they are used in research of the type PEAR conducted, and incidentally in medical research. They are invaluable in ensuring the integrity and applicability of the study, and I took great effort to explain why. Do you address my detailed description at all? No, you simply announce that I'm wrong, give a highly simplified description of the one part of some clinical trial you may have been incidentally involved with, and then wrongly conclude that's all the statistical work that was ever done or needed to be done.

You keep proving time and again that you are unwilling to expand your knowledge to fit the problem. You keep trying to pare away the parts of problems that don't fit your existing understanding. And hubristically you try to call other people stupid when they point out your oversimplifications and errors. What kind of a "consultant" does that?



But you're not an authority on what is "usually" done in experiments. In the thread where you gave us empirical anecdotal evidence for reincarnation, you both illustrated and admitted that you didn't know what empirical controls were. Do you really expect us to forget about these things from day to day?



A subject pool is first established, usually from volunteers out of the general human population. This is true both for physiology research and for psychology research. A common source of psychology research subjects is undergraduate students, who must participate in order to get credit in the psychology classes they take. From this subject pool are drawn the subjects for some particular study based on their demographic conformance to the general population, or to the parameters of the study -- sometimes both. Usually the results are hoped to generalize to the population at large, so the sample is desired to differ as little as possible from what is known about the general population. Often the study is limited to subjects of a certain category, such as to women who have had at least one child. Obviously the men and childless women must be filtered out in that case. Sex is a category. Number of children is a category. The latter especially may be one of the proposed correlates.

Drawing at random often works because subjects tend to fall naturally into a representative (or desired) demographic distribution. A random sample of a representative pool would also be representative, to within a certain amount. Another method is to score each member of the subject pool according to how closely he fits the general population (or the study parameters) and then take only the N top scorers.

Randomness here is not the end goal. The end goal is conformance to the parameters of the study. Randomness is only a proxy for that, and not even always the best proxy.
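
As a toy sketch of that scoring approach (a synthetic pool; the two demographic variables and the census-style targets are invented, and a real implementation would standardize the variables before measuring distance):

Code:
# Score pool members by closeness to a target demographic profile,
# then keep the N best -- an alternative to drawing purely at random.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pool: (age, years of education) for 500 volunteers.
pool = np.column_stack([rng.normal(38.0, 12.0, 500),
                        rng.normal(14.0, 3.0, 500)])
target = np.array([40.0, 13.5])  # invented census-style profile

# Naive score: negated Euclidean distance from the target profile.
scores = -np.linalg.norm(pool - target, axis=1)
sample = pool[np.argsort(scores)[-60:]]  # the 60 closest fits

print("sample means:", sample.mean(axis=0))  # lands near the target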

So with an appropriately shaped subject sample in hand, what happens next? Well, as I explained previously, it's separated into the control and variable groups. In a clinical trial, the control group gets the placebo. In a psychology experiment there may be no control group, the sample having been homogenized against the general population. Where the bifurcation occurs, it is best done randomly to avoid experimenter bias. However it's done, the goal is to ensure that all the pertinent categorical variables are equally distributed on both sides of the divide. This is to ensure that the two groups are otherwise independent, besides the thing you're going to do to one group but not the other.

This is measured by looking at the pertinent categorical variables and determining whether the values those variables take on are related to which group the subjects were randomly placed in. Here too, randomness per se is not the end goal. The end goal is homogeneity between the two experiment groups. Randomization is merely one way to achieve it.

Regardless of how it's achieved, it will never be perfect. No randomization process produces a sample that's perfectly aligned with the population from which it's drawn, nor a pair of groups exactly equally divided among potential confounds. Randomization just avoids a particular sort of bias. The groups will always differ by a measurable amount in all the ways that matter for the experiment, and the sample as a whole will always differ by a measurable amount from the population it is meant to represent.

The point here is that we can measure it. This is what statistics, at its heart, is all about. We'll come back to this.
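
For instance, here is a minimal sketch of that measurement (synthetic data; the three-level covariate is invented): a chi-square test of independence between a categorical variable and group assignment.

Code:
# Measure homogeneity after a random split: is the covariate's
# distribution related to which group a subject landed in?
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# A categorical covariate for 60 subjects, then a random 30/30 split.
covariate = rng.choice(["low", "mid", "high"], size=60)
group = rng.permutation(np.repeat(["A", "B"], 30))

# Contingency table: covariate level by group assignment.
table = [[np.sum((covariate == lv) & (group == g)) for g in ("A", "B")]
         for lv in ("low", "mid", "high")]

# A large p-value here means the covariate is distributed similarly in
# both groups -- homogeneous within measurable limits, not "identical."
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")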



Total hogwash.

Data are retrospectively excluded from studies all the time for reasons such as measurement error, subjects' failure to follow the protocol, experimenter error, the death or withdrawal of the subject, and so forth. In any sufficiently large study it is practically unheard of for all the subject data to make it to the end of the study. Now perhaps in the data you were given, those inappropriate data had already been culled. Once again you continue to misunderstand why your particular suggestion for removing data was invalid. Because you were challenged (correctly) on that particular reason, you seem to have evolved this childlike belief that no data may ever be excluded from analysis. If your concept of science were true, think of how many studies would have to be terminated, after years of work and funding, because one data point became unusable for whatever reason. No science would ever get done.



Emphatically no, and it illustrates just how little you really understand what statistics are for. Randomness is not itself the desired end. It is the means to other ends, and those ends are neither disserved nor irreparably harmed by the departure of one subject's data after the randomization process has done its thing. Statistics is about confident reasoning in the face of uncertainty. Your view is that a proper scientific study is a perfectly balanced house of cards that can't tolerate the merest jiggle of the table without crashing down in its entirety. That's as anti-statistical a sentiment as there ever could be. Statistics is about what you can do with what you know, even if it's not perfect.

Those subjects were never perfectly aligned anyway, randomness notwithstanding. There was always a need to measure how aligned they were to the population and to each other and to develop a confidence interval to describe how much those groups' behavior could be considered useful. When one subject departs, the confidence interval changes. But it doesn't just go up in smoke. That's what it means to reason statistically. Now if your N is too small, then one subject gone may change the confidence interval to the point where significance of effect is hard to measure. But that's not even remotely the same thing as the naive all-or-nothing fantasy you've painted.
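
To illustrate that last point (a minimal sketch with synthetic scores; the group size and spread are invented), dropping one subject nudges the confidence interval; it does not vaporize it:

Code:
# One subject's departure changes the confidence interval; it does
# not destroy the analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
scores = rng.normal(50.0, 10.0, size=40)  # hypothetical group scores

def mean_ci(data, level=0.95):
    """t-based confidence interval for the mean of `data`."""
    return stats.t.interval(level, df=len(data) - 1,
                            loc=np.mean(data), scale=stats.sem(data))

print("all 40 subjects:    ", mean_ci(scores))
print("one subject dropped:", mean_ci(scores[:-1]))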

You told me I hadn't provided an example of these things. That's a lie, because I've referred twice now to Dr. Philip Zimbardo and one of the most famous psychology experiments of all time, the Stanford prison experiment. You may recall this was formulated as an experiment to determine the effect of imposed roles on cruel behavior, and is most infamous for having developed astonishing levels of brutality before it was belatedly terminated. In his most recent book, The Lucifer Effect, he gives a retrospective of the experiment interleaved with copious mea culpa for not realizing what was going on. Naturally it's all in the original paper, which is easily available. But in Lucifer he takes you through it in detail, assuming you aren't a research psychologist.

So the first thing Zimbardo had to do was get reasonably suitable subjects. He described his recruiting efforts; most ended up being undergraduates at Stanford University, for obvious reasons. They weren't randomly sampled from the entire U.S. population, but they reasonably varied in demographics as much as college students can be expected to vary.

Then he administered a set of standard psychometric instruments to them to determine where they fell along various variables he knew or suspected would lead to brutal behavior. He describes the variables and shows how his sample scored as a group along all these dimensions. He also compared their scores with distributions of scores from the wider population. It's obvious that Zimbardo's sample was not perfectly representative of the U.S. population in all respects. For starters there were no women, owing to the practical limits of human-subjects testing in this particular experiment. But it doesn't largely matter, because he merely needed to establish by psychometry that they were representative enough in the ways that applied to his study.

Then he randomly divided the sample into Guards and Prisoners and showed how the psychometric evaluations were distributed between the two groups. Specifically he showed that in each of those categories the distribution of the psychometric variables was so similar between the two groups that they could be considered equivalent in terms of their predisposition to violence and brutality. Not identical, of course. There was variance. But one of the things the t-test sort of analysis can give you is a comparison between two population means, knowing the confidence interval that applies. While the groups were not identical, the confidence interval was broad enough that it could contain the (unknown) theoretical mean score for all the subjects. This is what we mean by homogenization. Unless you do it, you can't independently attribute effects to the imposed cause.

Zimbardo planned to impose different roles on the two groups, and he needed them to be otherwise as similar as possible, so that any differences he observed could be properly attributed to the imposed role, not to the emergence of some latent property that was skewed in one group versus the other. Hypothetically, if all the people with measured aggressive tendencies had somehow ended up in the Guard group, it would be hard to chalk up a beat-down to the imposed role rather than to predisposition. Or, in contrast, if all the non-white people were in the Prisoner group, you couldn't separate the Guard-Prisoner dichotomy from the race category and say that everyone's behavior would have been the same even if all the prisoners had been white.

All the things I've been talking about are in the book.

The rest of the experiment is history. Zimbardo let it run far too long, up to and past the point of inflicting severe emotional distress on the participants. He spent quite a lot of time cleaning up the mess. But what has emerged over time is that he has maintained contact -- and even professional relationships (some of them became psychologists themselves) -- with the former subjects. And it emerged that one member of the Guard group developed his own agenda that was not necessarily compatible with the goals and protocols of the experiment. For the quantitative part of the experiment Zimbardo had to decide whether to include that data.

A simpler example suffices. Suppose there is a drug trial and subjects are prohibited from drinking alcohol for the duration of the study. Now let's suppose the experimenter discovers that one subject is having his daily glass of wine at dinner, in contravention of the protocol. According to you, we would have to include his data in the final result even though he broke the experimental protocol and confounded whatever results he may have had from the drug with the result of drinking alcohol. That's what would affect the study, not booting him out. If we boot him out of the study, whichever group he was in -- placebo or drug -- has its N-value decremented by one, and his contribution to the homogenization check is removed. And for all we know, his departure might make the groups more homogeneous. This retains the integrity of the study because the remaining subjects (who are presumed to follow the protocol) are exhibiting the behavior it was desired to study.
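
A minimal sketch of that wine-drinker scenario (synthetic data; the group sizes and treatment effect are invented): boot the violator, decrement that group's N, and re-run the test, which tolerates unequal group sizes without complaint.

Code:
# Boot the protocol violator, decrement N, re-run the analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
placebo = rng.normal(0.0, 1.0, size=50)
drug = rng.normal(0.6, 1.0, size=50)  # hypothetical treatment effect

# Subject 17 in the drug group had his nightly glass of wine: out he goes.
drug_clean = np.delete(drug, 17)  # N drops from 50 to 49

# Welch's t-test handles the now-unequal Ns without any special pleading.
t_stat, p_val = stats.ttest_ind(placebo, drug_clean, equal_var=False)
print(f"N = {len(placebo)} vs {len(drug_clean)}: "
      f"t = {t_stat:.2f}, p = {p_val:.4f}")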



No, that's not even remotely close to what Palmer suggests.



No. As explained above, randomization is the means we use to achieve various ends which are then not affected by culling bad data in a way that statistical analysis cannot overcome. Randomization is not an end unto itself. This is your pidgin-level knowledge of statistics and experimentation, against which you're trying to measure the behavior of the published experts. You're literally trying to tell career scientists "they should have known better," when you have already both demonstrated and admitted you don't know their science or its methods.

"Buddha" said:
As always, plenty of unrelated data and a complete lack of knowledge of the concept of randomness. Just stop pretending that you know something about statistics and follow this link:
https://en.wikipedia.org/wiki/Random_sequence
 
