Is the Telekinesis Real?

Seriously dude, what are you trying to achieve here?

We have enough samples of Buddha's lines of reasoning that the answer to this should be fairly apparent.

Every thread he's started has been an attempt to prove the hypothesis that Buddha is the smartest guy in the room, no matter what the room or the topic. It is becoming increasingly pointless to think of it as anything but that. He really has no argument beyond calling everyone else biased or stupid, pontificating his own poorly-understood knowledge against them, then following up with ham-fisted social engineering to evade criticism.

The study has been proven to be garbage.

Oddly enough that wasn't Palmer's conclusion. While he noted the PEAR studies lacked the methodological rigor to be scientifically probative, he praised Jahn et al. for improving significantly over the prior state of the art and for producing some interesting things to follow up on. Keep in mind, Palmer is a parapsychologist himself. He is not approaching PEAR from the point of view of mainstream science kicking the fridge. His approach is quite literally that of a peer reviewer trying to elevate the scientific standards of the field. Just like you would certainly call me out on something I got wrong, even though we are in reasonable ideological alignment, Palmer is calling out PEAR on the things they got wrong, with the intent of improving the state of the art.

Buddha gets none of this. In his mind, Palmer is The Enemy and thus gets painted with all the preconceived epithets you'd expect to be applied in that situation.

Even the study's original author conceded the baseline was crap and blamed it on telekinetic powers.

Buddha has yet to comment on this, which I find strange. For the t-test for significance to have any predictive value, the populations must be independent but for the tested effect. If PK was a factor in both the baseline and experimental runs, then the samples cannot be considered independent with respect to PK. That's a show-stopper. If the researchers cannot redeem the integrity of their baseline without violating the statistical basis of their intended test, then they really have no case.
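
For anyone who wants to see the mechanics, here is a rough simulation sketch of that point. Everything in it is made up for illustration -- the trial mean and spread are only loosely modeled on a 200-bit REG trial, and the "PK shift" is an arbitrary placeholder exaggerated for visibility -- but it shows why a contaminated baseline leaves a two-sample t-test with nothing to detect.

[code]
# Illustrative sketch only: invented numbers, not PEAR's data or protocol.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1000          # trials per condition (arbitrary)
pk_shift = 2.0    # hypothetical "PK" effect, exaggerated for visibility

# Case 1: clean baseline, effect present only in the intention runs.
baseline_clean = rng.normal(100.0, 7.07, n)
intention = rng.normal(100.0 + pk_shift, 7.07, n)
print(stats.ttest_ind(intention, baseline_clean))   # shift shows up clearly

# Case 2: the same shift leaks into the baseline as well.
baseline_contaminated = rng.normal(100.0 + pk_shift, 7.07, n)
print(stats.ttest_ind(intention, baseline_contaminated))  # nothing left to detect
[/code]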

Do you have anything better than this study or are you going to keep beating that dead horse?

Of course he is, over and above the objections of his critics.

He has no affirmative argument. All he has is the notion that all of PEAR's critics are somehow biased or incompetent, even though he wasn't sure at the time who they even were. His opponents selected the bibliography of criticism they wanted him to address. He set that aside entirely and cherry-picked his own victim. All he's done since then is cast ignorant aspersions at that victim and call all his opponents here stupid for not accepting his pidgin lectures as gospel.
 
That statement you quoted is just common sense - how is it a "bald-faced lie"?

Buddha's "analysis" of Palmer is a Gish gallop of increasingly pointless and misapprehended aspersions. It's like he's not even trying to understand the text as long as he can get in a cheap shot.

But here it looks like a specific misconception. Buddha has gotten it into his head that you need to set the N-value for a study ahead of time, and that once it's set you can't add or subtract data points without invalidating the whole experiment. He's wrong, of course, and the source he cites in support of it merely notes the relationship between the N-value and the other parameters of the significance test. Buddha misinterpreted that as a hard-and-fast proscription on the parameters which, if violated, invalidates the whole study. The significance test mandates that the N-value for the two populations to be compared must be the same, but Buddha thinks that means something else. He's doing all this because he objects to disregarding the Operator 010 data (the only data to achieve significance) as anomalous. He's scrounging around for something in statistics that says you can't do that, and misinterpreting things left and right to try to make them seem to say it.
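
For concreteness, here is a small sketch (invented numbers, equal group sizes to keep the arithmetic simple) of where N actually enters a two-sample t-test: through the standard error and the degrees of freedom. It is one parameter among several in the formula, not a value the formula demands you lock in before collecting data.

[code]
# Illustrative sketch only: made-up data, equal group sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 500                                  # series length (arbitrary)
a = rng.normal(100.0, 7.07, n)           # "baseline" series
b = rng.normal(100.5, 7.07, n)           # "intention" series

# N enters the test through the standard error and the degrees of freedom.
v_a, v_b = a.var(ddof=1), b.var(ddof=1)
t_manual = (b.mean() - a.mean()) / np.sqrt((v_a + v_b) / n)
df = 2 * n - 2
p_manual = 2 * stats.t.sf(abs(t_manual), df)

print(t_manual, p_manual)
print(stats.ttest_ind(b, a))             # agrees with the hand calculation
[/code]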

It appears here he's trying to use that misconception to argue that Palmer's approach to the series sizes is fundamentally flawed, even if Palmer's actual point is obvious and salient. I think Buddha is trying to say you wouldn't be able to close off the series early anyway because this would violate your precomputed N-value which, in his foggy understanding of how the test works, would make the test automatically invalid. The larger aim seems to be to portray John Palmer as incompetent and/or biased -- something along the lines of, "See, this is just one of the elementary errors he's made, so we can't trust him as a competent critic of PEAR."

As I've mentioned a few times, novice errors often fall into well-known patterns. People get the same wrong impression a lot. However, this is one of the reciprocal cases where you just have to read and re-read to try to figure out what combination of errors is leading the student to draw the bizarre conclusion he has. Since fringe claimants often can't seem to distinguish their assumptions from fact, those assumptions often go unstated and require us to guess at them.
 
He's holding onto this one with a lot more tenacity than he did previous threads. I suspect your successful exposure of his ignorance about statistics hit a bit too close to home for his taste. He may very well make this the hill he dies on, morphing this into a true Jabba thread.

He never claimed philosophy as a core competency, nor empirical testing for psi effects. He can essentially back out of those with limited loss of face. That is, he can refine his purpose at the end to say he "found out what he wanted to know" without ever having to admit he was wrong.

But he's already put cards on the table that say he's an expert statistician, so there's no backing out or redefinition possible here without having to admit he lied outright on his resume. And as soon as that happens, that will be mentioned in every new thread he starts. No one wants to debate people who just lie.
 
At last, "Buddha" has admitted his defeat in the field of statistics (in a grumpy way, to save face)

Now we can devote our time to "exorcize" this thread of its unscientific content.
This may look like an admission of a defeat to you. Congrats!
 
Let's dissect one of the latest train wrecks from "Buddha" so you get a notion of how his mind works.



Yes, categorical variables, such as blood type or having previously received a complete course of chemotherapy, an incomplete one or none, are not used in clinical trials.:rolleyes:


ETA: "Buddha", I got an idea. Why don't you include the threads you started here as part of your resume/CV? Your potential employers/clients will be shocked by the proficiency you show.
"For ease in statistical processing, categorical variables may be assigned numeric indices, e.g. 1 through K for a K-way categorical variable (i.e. a variable that can express exactly K possible values). In general, however, the numbers are arbitrary, and have no significance beyond simply providing a convenient label for a particular value. In other words, the values in a categorical variable exist on a nominal scale: they each represent a logically separate concept, cannot necessarily be meaningfully ordered, and cannot be otherwise manipulated as numbers could be. Instead, valid operations are equivalence, set membership, and other set-related operations.

As a result, the central tendency of a set of categorical variables is given by its mode; neither the mean nor the median can be defined. As an example, given a set of people, we can consider the set of categorical variables corresponding to their last names. We can consider operations such as equivalence (whether two people have the same last name), set membership (whether a person has a name in a given list), counting (how many people have a given last name), or finding the mode (which name occurs most often). However, we cannot meaningfully compute the "sum" of Smith + Johnson, or ask whether Smith is "less than" or "greater than" Johnson. As a result, we cannot meaningfully ask what the "average name" (the mean) or the "middle-most name" (the median) is in a set of names".
https://en.wikipedia.org/wiki/Categorical_variable

In a clinical trial you measure mean and median to determine whether the test results are statistically significant or not. Blood type of a set of samples doesn't have a mean value, so it is not used in clinical studies. Before jumping to conclusions I suggest you educate yourself on the topic of clinical trials. Apparently, you have no knowledge of this topic. However, I will continue debating with you because your mistake is relatively small; it doesn't show the level of incompetence that my other opponent had demonstrated.
 
"Buddha", before you bid farewell, can you list the hypothesis to be tested in Jahn's paper?
I already listed it in one of my posts. I do not want to be accused of being repetitive, so you would have to find that post.
I am planning to run this thread longer than the previous ones, so do not expect me to bid farewell any time soon.
 
An interesting bluff, this. The idea, I suppose, is to hope that either nobody follows the link or that nobody who follows it understands it, leaving the impression that Buddha's scored a telling point.



And this is Trump level stuff, a precise inversion of reality that commits the exact wrongs it accuses others of.

Dave
Actually, I was hoping that a particular opponent of mine would follow this link because this article is easy to absorb; it doesn't contain a complex description of the simple concept of randomness. I think that almost all my opponents including you, except for him/her, have a good understanding of this concept, so they do not have to use this link.
 
"For ease..."

Yes, we all know where Wikipedia is. As usual, you're belatedly Googling for answers to questions whose answers you should already know.

In a clinical trial you measure mean and median to determine whether the test results are statistically significant or not. Blood type of a set of samples doesn't have a mean value, so it is not used in clinical studies.

But it does have a categorical membership, which can be used in multivariate analysis and tests of discrete independence such as the chi-square. It's amazing you don't know about that. You seem familiar with only one narrow manner of treating data statistically, and you're trying to make everything fit that. And if it doesn't fit, you declare it outside the realm of statistics.

Before jumping to conclusions I suggest you educate yourself on the topic of clinical trials.

I explained at length how categorical variables are used in clinical trials. You ignored it. Further, you seem to be completely unaware that an entire branch of statistics exists that tests for significance and independence among categorical variables treated as categories, not their sometimes-numerical encoding, and that this forms the basis of homogenization in the subject pool.

Blood type is an excellent example. It exists as a set of discretes, not a continuum of values. We can certainly treat its value as a predictor of placement in either the control or the variable group without having to encode the value as something that a novice might mistake for a quantity. What we hope for, after randomization, is that none of these categorical variables are predictors. Otherwise they would be confounded with the variable we hope to manipulate.
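
To make that concrete, here's a minimal sketch of the kind of test involved -- a chi-square test of independence on a contingency table of blood type versus trial arm. The counts are entirely made up; after a sound randomization, a non-significant result is exactly what you hope to see.

[code]
# Illustrative sketch with invented counts, not data from any real trial.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: blood types O, A, B, AB. Columns: control arm, treatment arm.
table = np.array([
    [45, 43],   # O
    [38, 41],   # A
    [12, 10],   # B
    [ 5,  6],   # AB
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A large p-value is the desired outcome: no evidence that blood type
# predicts group assignment. No mean or median of "blood type" is needed.
[/code]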

...it doesn't show the level of incompetence that my other opponent had demonstrated.

Assiduously avoiding my posts doesn't prove I'm incompetent. You need a better argument than simply quoting elementary coverage from Wikipedia and calling everyone else stupid.
 
Actually, I was hoping that a particular opponent of mine would follow this link because this article is easy to absorb; it doesn't contain a complex description of the simple concept of randomness. I think that almost all my opponents including you, except for him/her, have a good understanding of this concept, so they do not have to use this link.

The problem is that it's too easy to absorb and doesn't cover all the mathematical properties of randomness. Nor does it go into very much depth about why random sequences are used in the empirical sciences.

That's why I followed up and gave the parts that your cursory Googling missed, and explained how the parts that were left out of your discussion affect your claims. You're clearly unwilling or unable to address those, so you're left with your standard practice of simply hurling insults.
 
I am planning to run this thread longer than the previous ones, so do not expect me to bid farewell any time soon.

When will you be addressing the critics of PEAR that your opponents here asked you to address on Page 1? Will you ignore them forever in favor of your cherry-picked victim?
 
??????

That statement you quoted is just common sense - how is it a "bald-faced lie"? It points out that in some of the series the numbers are identical and also nice and round, and therefore they likely didn't do anything shady like deliberately stopping at a good point.

So if you want to test whether they did that elsewhere, you can compare against these tests and see if there's a difference in favor of tests where they stopped at a suspicious point. If they're way better than these ones, that's something that might imply there are some shenanigans going on.
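
As an aside, a quick simulation makes the underlying worry vivid: if you peek at the data after every block and stop the moment the p-value dips below 0.05, you rack up false positives far in excess of the nominal 5%, even when there is no effect at all. (The block sizes and counts below are arbitrary.)

[code]
# Illustrative simulation of optional stopping under the null (no effect).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, block, max_blocks = 2000, 50, 20     # arbitrary sizes

def stops_early(rng):
    data = np.empty(0)
    for _ in range(max_blocks):
        data = np.append(data, rng.normal(0.0, 1.0, block))
        if stats.ttest_1samp(data, 0.0).pvalue < 0.05:
            return True                      # quit at a "good point"
    return False

optional = sum(stops_early(rng) for _ in range(n_sims))
fixed = sum(
    stats.ttest_1samp(rng.normal(0.0, 1.0, block * max_blocks), 0.0).pvalue < 0.05
    for _ in range(n_sims)
)
print(f"false positives with optional stopping: {optional / n_sims:.1%}")
print(f"false positives with a fixed N:         {fixed / n_sims:.1%}")
[/code]
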
My point is that t-tests and optimal stopping tests do not mix, although they are both correct. Palmer agreed that t-tests could be used to evaluate the results of the Princeton research, but at the same time added to them the requirements that apply only to optimal stopping tests, which is absurd. Apparently, he thought that his intended audience (the US Army mathematicians) was not smart enough to see his deception.

Nevertheless, I should thank you for this post because it gave me the opportunity to make my criticism of Palmer's article perfectly clear.
 
My point is that t-tests and optimal stopping tests do not mix, although they are both correct. Palmer agreed that t-tests could be used to evaluate the results of the Princeton research, but at the same time added to them the requirements that apply only to optimal stopping tests, which is absurd. Apparently, he thought that his intended audience (the US Army mathematicians) was not smart enough to see his deception.

No, the highlighted part is your interpretation, which you're now trying to foist back on Palmer so that you can call it absurd and try to discredit him on that basis. You've done this twice now. You don't get to hold Palmer responsible for your inability to discern what he's talking about, or for the errant conclusions you draw. You're the one who got caught trying to connect two things that don't have anything to do with each other.
 
I am trying to entice professional mathematicians who reject telekinesis to join this discussion. I cannot guarantee success, but I really want to respond to their posts defending the Palmer article. This might take time, but I am planning to run this thread for a while.

Now, about my immediate plans -- I will end my critique of Palmer's article on Monday and turn my attention to something else. It appears that I found a reference to an article whose author claims that he was not able to reproduce the Princeton research at his lab. I hope that his article is available on the Internet, but so far I haven't had time to verify that. I will provide more info about his article on Monday.
 
"For ease in statistical processing, categorical variables may be assigned numeric indices, e.g. 1 through K for a K-way categorical variable (i.e. a variable that can express exactly K possible values). In general, however, the numbers are arbitrary, and have no significance beyond simply providing a convenient label for a particular value. In other words, the values in a categorical variable exist on a nominal scale: they each represent a logically separate concept, cannot necessarily be meaningfully ordered, and cannot be otherwise manipulated as numbers could be. Instead, valid operations are equivalence, set membership, and other set-related operations.

As a result, the central tendency of a set of categorical variables is given by its mode; neither the mean nor the median can be defined. As an example, given a set of people, we can consider the set of categorical variables corresponding to their last names. We can consider operations such as equivalence (whether two people have the same last name), set membership (whether a person has a name in a given list), counting (how many people have a given last name), or finding the mode (which name occurs most often). However, we cannot meaningfully compute the "sum" of Smith + Johnson, or ask whether Smith is "less than" or "greater than" Johnson. As a result, we cannot meaningfully ask what the "average name" (the mean) or the "middle-most name" (the median) is in a set of names".
https://en.wikipedia.org/wiki/Categorical_variable

In a clinical trial you measure mean and median to determine whether the test results are statistically significant or not. Blood type of a set of samples doesn't have a mean value, so it is not used in clinical studies. Before jumping to conclusions I suggest you educate yourself on the topic of clinical trials. Apparently, you have no knowledge of this topic. However, I will continue debating with you because your mistake is relatively small; it doesn't show the level of incompetence that my other opponent had demonstrated.

I'm flattered you quoted my post as a piece to be sewn together with the desperate-to-be-related text you dropped below it. An abundant text, I must add, intended to dispel the doodoo you previously did.

As you have comprehension problems, I'll repeat it for you:

aleCcowaN said:
This post shows that you have no idea how clinical trials are conducted. Well, you put your ignorance on full display. To start with, categorical variables are not used in clinical trials. The collected data consists of the analysis results of the subjects' blood, as it was in the leukemia clinical trials that I described.

Yes, categorical variables, such as blood type or having previously received a complete course of chemotherapy, an incomplete one or none, are not used in clinical trials.:rolleyes:

I commend your efforts to go to Wikipedia to get some information on these variables, so I hope you won't make mistakes like that again. Simply, don't talk about things you don't really know.
 
I am trying to entice professional mathematicians who reject telekinesis to join this discussion.

No, don't shop around for a different set of opponents. You've already been given accurate and correct mathematical rebuttals to your claims by people who are patently qualified, which you refuse to address. You then further insult their authors by calling them stupid while lacking the gumption to engage them. It's the height of insult then to announce that you need someone else to talk to.

...but I really want to respond to their posts defending the Palmer article.

No, you really don't. You want easy opponents whom you can gaslight and browbeat into compliance with your self-indulgence.

Further, you don't get to shift the burden of proof. Your thesis in the opening post was that all the critics of PEAR -- whom you didn't even know at the time -- were incompetent and biased. That is your burden to prove, and all you've done so far is prove that you don't know what you're talking about. No one has the onus to step up and defend the victims of your theatrical performance.

This might take time, but I am planning to run this thread for a while.

This is certainly a different flounce strategy, I'll give you that. We're supposed to sit here patiently and wait for you to go find opponents you like better?

It appears that I found a reference to an article whose author claims that he was not able to reproduce the Princeton research at his lab. I hope that his article is available on the Internet, but so far I haven't had time to verify that. I will provide more info about his article on Monday.

So once again, instead of addressing the critics your opponents here have asked you to address, you're going to choose your own opposition. You're simply, as usual, trying to script both sides of the debate.

Do you have the guts to go up against a critic you don't choose? Or can you succeed only when you stack the deck? Address Dr. Steven Jeffers, please. He was mentioned on Page 1 as a critic your opponents here feel has a good case against PEAR. And since he received PEAR's cooperation in making it, it will be hard for you to dismiss him as biased.
 
