Moderated: Is Telekinesis Real?

Yes, this cannot be emphasized strongly enough, so the fact that Alec and I risked ninja-ing each other to point it out is significant. What Buddha is insinuating about the t-test for significance is exactly the opposite of what the t-test is all about. He has no clue what it's for or how it works.

Nor does he comprehend the reasons Jeffers chose such an experimental design.

""""Buddha"""" said:
Double-slit diffraction doesn’t produce a Poisson process, instead it produces a diffraction pattern (wave interference pattern). Since this is not a Poisson process, the t-tests are not applicable to it. But Jeffers used a t-test to draw the conclusion that the experiment debunks the Princeton research.

jaw-dropping.gif


Saying I was flabbergasted when I read this paragraph is not strong enough. """Buddha""" doesn't understand the physical background of the whole thing or why Jeffers found it fitting. Claiming there's no "Poisson process" (one of those terms """Buddha""" loves because he thinks he doesn't look silly when he uses them) shows he understands neither the physical background, nor the equipment and its mechanics, nor the experiment, nor what a Poisson process is.

When I reread his phrase "Since this is not a Poisson process, the t-tests are not applicable to it," I laugh like a madman and even want to scream if I imagine a corporeal "him" pronouncing it with pomposity. I've never seen such a density of nonsense per square inch.
 
You complained before that I neglect your posts, so I am going to reply to this one. First and foremost, you should follow this link, instead of laughing uncontrollably, to educate yourself about t-tests:

https://en.wikipedia.org/wiki/Student's_t-test

"A t-test is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known" Wikipedia

As you can see, there are the words "normal distribution" in the first sentence.

Now I am going to reply to several opponents including you.

I am not the only one who criticized Jeffers for the use of double-slit setup for his experiment. Earlier I posted a link to the article whose author also noted that Jeffers failed to reproduce the conditions of the Princeton experiment because he used a double-slit arrangement; there are other critics of the Jeffers experiment as well.

Jeffers could have easily avoided this critique if he had chosen a single-slit setup, which leads to a Poisson distribution of particles. He already had the equipment; all he had to do was use a screen with one slit instead of one with two slits. But he chose to distort the conditions of the Princeton experiment to “demonstrate” that it is irreproducible. What kind of a scientist would do that?
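As an editorial aside: the claim that independent particle detection yields Poisson-distributed counts is easy to sanity-check numerically. A minimal sketch (the rate is arbitrary and has nothing to do with Jeffers' actual apparatus; numpy assumed):

```python
import numpy as np

# Simulate photon counts in fixed time bins at an arbitrary mean rate.
# The signature of Poisson-distributed counts: variance equals the mean.
rng = np.random.default_rng(42)
rate = 4.0                       # hypothetical mean counts per bin
counts = rng.poisson(rate, size=100_000)

sample_mean = counts.mean()
sample_var = counts.var()
```

For true Poisson data the two estimates agree; whether a given apparatus actually produces such counts is a separate empirical question.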
 
You still don't have a clue what the t-test is or how it works. The t-test always uses the t-distribution, not a normal distribution. It is specifically the test you use when the data are not expected to conform to any of the normal distributions. Once again you manifest your abject ignorance of descriptive statistics and significance testing.
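A quick way to see the relationship between the two distributions at issue (a sketch using scipy.stats; the degrees-of-freedom values are arbitrary, chosen only for illustration):

```python
from scipy.stats import norm, t

# Two-sided 95% critical values: the t value exceeds the normal z value
# for any finite degrees of freedom, and converges to it as df grows.
z_crit = norm.ppf(0.975)              # roughly 1.96
t_crit_small = t.ppf(0.975, df=5)     # noticeably larger (heavier tails)
t_crit_large = t.ppf(0.975, df=1000)  # nearly the normal value
```

The practical point: with small samples and unknown variance, the t-distribution, not the normal, supplies the critical values.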

What's worse is that the two sources you previously cited to describe the t-test amply explain this. It's one thing initially not to understand the t-test; that's forgivable. It's another thing to cite sources that explain it, insinuate that only you can understand them, and then spectacularly demonstrate your failure to do so. That takes a special commitment to gaslighting.



Jeffers goes to great lengths (Jeffers, op. cit., pp. 548ff) to explain how his experiments differ in method from the PEAR experiments and why that's a good thing. The problem with PEAR's results was exactly in its methodology. Jeffers was not trying to duplicate PEAR's results using their methodology. He was trying to see if an improved methodology would duplicate the results PEAR obtained (i.e., with a vastly improved methodology that was hoped would eliminate the ways in which critics of PK research could find fault with it). If he achieved a significant result with a similar (but importantly different) methodology, then PEAR's results would have been somewhat vindicated.

The shift in methodology was crucial to Jeffers' stated goal. "[T]he major motivation for this effort was to improve our understanding of the dependencies and invariants of the process, rather than simply to provide more evidence of such anomalies." (Ibid., p. 547) In this particular paper, Jeffers is helping PEAR and others find the possible confounds to inform further research. This is different from the articles he wrote for Skeptical Inquirer in which he more directly criticizes PEAR. That you think Jeffers had, or should have had, some different goal in this research is irrelevant. The German scientists attempted to duplicate PEAR's research using their protocol and failed. Jeffers developed his own protocol -- with PEAR's assistance, in some cases -- attempting to study the same PK effect on the same sorts of phenomena (a protocol that was supposed to correct for PEAR's protocol issues) and achieved only marginal significance that correlated to which test site was used.

As usual, in a broader sense you fail to put this in the context that Alcock provides. Jeffers wasn't a mainstream scientist out to discredit psi research at all costs. He was a famous physicist welcomed by PEAR and their colleagues -- initially -- with open arms and an ear to how his insight would improve their plight. You insist on painting him as a mustache-twirling villain and, as a result, as someone not competent even to worship the water you walk on.



Uh, what you write has absolutely nothing to do with what Jeffers is trying to illustrate in that passage. Once again it's as if you don't understand what's being said at all, but you think that if you just say something that sounds vaguely statisticky you can fool people into thinking you're leveling valid criticism and that anyone who objects "obviously" isn't privy to your brilliance.



No, using a different methodology and protocol than PEAR is not an attempt to "set his experiment to fail" to confirm PEAR, nor does it qualify Jeffers' work as "bs." You're simply trying to shoehorn Jeffers' actual work in this paper into what you preconceived it should have been, and faulting him for little more than failing to validate your preconception. And no, you clearly don't understand what Jeffers actually did to vet the initial PEAR studies in Skeptical Inquirer. And no, you're still as ignorant as can be over what statistics properly apply to this sort of research.

You declared Jeffers to be biased and "irrelevant" before you even looked at his work, so now you're just cherry-picking stuff from his research that you can spin to make it seem like that's still true.

Also, some bookkeeping. We initially asked you to comment on Jeffers' interpretation of the baseline-bind situation reported by Jahn. Instead, you've decided you're going to attack Jeffers' own original research. Although you're finally addressing the critic we wanted you to focus on, you're not addressing the specific criticism we asked you to look at. You hastily posted a link to an Internet post by Williams, claiming that it answers Jeffers, but as I pointed out Williams clearly offers nothing but thinly-veiled prevarication on that point. It is unsuitable as an answer. I'm asking you to address that specific criticism -- the baseline bind -- because it requires you to demonstrate an actual understanding of sample variance in statistical analysis, a topic you've repeatedly demonstrated yourself deficient in. I want to see if you're able and willing to correct your misunderstanding and misattribution of the underlying statistical phenomenon to margin of error.

And we're still waiting for you to clarify whether you believe Jeffers has done any original research. Earlier you claimed he hadn't, because you claimed Alcock hadn't mentioned any. Today you're clearly looking at, and cherry-picking from, what is obviously original research done by Jeffers in psychokinesis, informed and assisted by PEAR with the aim of following it up. We're wondering when you're going to get around to admitting and taking responsibility for your original error in claiming nothing of the sort existed. If you're never going to acknowledge errors, then it is fruitless for a thinking person to attempt to engage you and correct them.
Once again, you didn't produce a link to a single article showing that t-tests are not used to draw conclusions about normal distributions. I quoted Statistics Manual in one of my previous posts to show how the t-tests work. Would you, please, provide a reference to a book or an article supporting your point of view. Otherwise I do not see any reason why I should have a discussion with you about topics that are out of your grasp.
 
...you should follow this link instead of laughing uncontrollably to educate yourself about the t-tests...

Frantic appeals to Wikipedia don't cure the problem. You have no idea what the t-test is, how it works, or what it's used for.

As you can see, there are the words "normal distribution" in the first sentence.

Yes, and in Jeffers' experiment the scaling factor is unknown. But the real problem is that you don't know what variable in Jeffers' experiment the t-test was applied to. You naively think it was the deposition of particles through the slit. That's not the variable Jeffers applied the t-test to. You fundamentally don't understand Jeffers' experiment design.

I am not the only one who criticized Jeffers for the use of double-slit setup...

You simply copied what other people said without delving any deeper. Yes, people criticized Jeffers after the fact, nit-picking away at this or that. None of that criticism is valid, and it all came after he got the "wrong" answer, and, more importantly, after he had vetted his methods and apparatus with would-be critics.

Jeffers could have easily avoided this critique...

Avoiding criticism was not his goal. I pointed out what he stated his goal to be. You're trying to shoehorn it into something else.

But he chose to distort the conditions of the Princeton experiment to “demonstrate” that it is irreproducible. What kind of a scientist would do that?

The kind of scientist who carefully states his goals and carefully explores alternative protocols to determine the potential confounds.
 
Once again, you didn't produce a link to a single article showing that t-tests are not used to draw conclusions about normal distributions...

Excuses, excuses. You quote my thorough explanations in their entirety and add one line of gaslighting in response. You don't get to demand that I produce more. When you can answer one of my posts to the same level of thoroughness as I applied in writing it, that's when you can complain.

Now quit whining and answer my post.
 
The whole "categorical variables in clinical trials" thing is a good example of that.


He reminded me of the times, decades ago, when I was an associate professor of Operational Research and started each course by teaching Dantzig's simplex method. I would state a problem dealing with quantities of different models of trainers/sneakers to be manufactured and then ask the students what the variables were. Inevitably some of them replied "the trainers," and I would ask: "What? Reeboks? Adidas? Not really; it's the number of each model of sneaker. You are confusing the names of the variables with the variable values themselves, like mixing up the fixed boxes and their variable contents."
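A toy version of that classroom problem, with made-up profits and capacities (scipy assumed), where the decision variables are the quantities of each model, not the sneakers themselves:

```python
from scipy.optimize import linprog

# Decision variables: x[0], x[1] = units of model A and model B to make.
# Maximize profit 3*x0 + 5*x1 (invented numbers) subject to two
# invented resource constraints. linprog minimizes, so negate the
# objective coefficients.
res = linprog(
    c=[-3, -5],
    A_ub=[[1, 2],   # machine hours: x0 + 2*x1 <= 14
          [3, 1]],  # labor hours:   3*x0 + x1  <= 18
    b_ub=[14, 18],
    bounds=[(0, None), (0, None)],
)
best_profit = -res.fun  # optimum lies at a vertex of the feasible region
```

Nobody would say the solution is "the Reeboks"; it's the pair of numbers in `res.x`.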


I have no problem imagining "Buddha" as one of those students, and I'm sure one or two of them would be spiteful enough to repeat the same style of explanation I gave, in a recriminatory way, if it served to solve their problems in a debate.


You showed him he had no idea what he was talking about, and he tried to deflect that with a forced "gotcha moment" in the style of what I've just described.
You said that you were an associate professor of operations research. Very interesting. It appears that your posts do not reflect the depth of your knowledge. How unfortunate! Out of curiosity, I would like to know how the whole problem of trainers/sneakers was stated. Could you name the other variables as well along with the objective function?
 
It appears that your posts do not reflect the depth of your knowledge.

You need a more convincing argument than calling everyone else stupid.

Could you name the other variables as well along with the objective function?

Don't change the subject. You haven't dealt with the Jeffers baseline commentary, which is what we've been trying for days to get you to address. Now that you concede that Jeffers exists and that he actually did research you claim he didn't do, when may we expect you to actually address the criticism your opponents here put before you?
 
Before some of my opponents express their strange views on t-tests, I would like them to read this:

"The t test is one type of inferential statistics. It is used to determine whether there is a significant difference between the means of two groups. With all inferential statistics, we assume the dependent variable fits a normal distribution. When we assume a normal distribution exists, we can identify the probability of a particular outcome. We specify the level of probability (alpha level, level of significance, p) we are willing to accept before we collect data (p < .05 is a common value that is used). After we collect data we calculate a test statistic with a formula. We compare our test statistic with a critical value found on a table to see if our results fall within the acceptable level of probability. Modern computer programs calculate the test statistic for us and also provide the exact probability of obtaining that test statistic with the number of subjects we have. "

https://researchbasics.education.uconn.edu/t-test/#
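The mechanical procedure that quoted paragraph describes takes only a few lines to run on synthetic data (scipy assumed; the numbers are invented for illustration):

```python
from scipy.stats import ttest_ind

# Two invented groups with clearly different means.
group_a = [10.1, 10.2, 9.9, 10.0, 9.8]
group_b = [12.1, 12.0, 11.9, 12.2, 11.8]

# The statistic is referred to the t-distribution with
# n_a + n_b - 2 = 8 degrees of freedom; p_value is the probability of a
# statistic at least this extreme if the population means were equal.
stat, p_value = ttest_ind(group_a, group_b)
```

With a gap this large relative to the spread, the test rejects equality of means at p < .05, exactly the comparison-to-critical-value routine the quote outlines.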
 
""""Buddha"""" said:
You complained before that I neglect your posts, so I am going to reply to this one. First and foremost, you should follow this link instead of laughing uncontrollable to educate yourself about the t-tests

https://en.wikipedia.org/wiki/Student's_t-test

"A t-test is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known" Wikipedia

As you can see, there are the words "normal distribution" in the first sentence.

Good try at gaslighting, but it's obvious you couldn't make heads or tails of your own link.

You should have started with the t-distribution instead. That would've given you an opportunity to get it from the beginning. Too late for that now, you are forced to continue with your fiction that you got it from the beginning and "the rest" didn't. Very sad of you.

If you give me an address I will send you fifty bucks for you to buy a book on basic Statistics. With the change you could buy yourself a clue.
 
Before some of my opponents express their strange views on t-tests, I would like them to read this...

Ooo, you discovered google.
 
"With all inferential statistics, we assume the dependent variable fits a normal distribution."

And what about the independent variables?

This is why the t-test is not fundamentally built upon a normal distribution, but upon the t-distribution. The dependent variable in this case is PK ability. I agree that in a normal human population, properly sampled, it should form a normal distribution. But in this case it is confounded with other factors, not the least of which, in Jeffers' case, is the specific expectation of behavior in the double-slit apparatus. You yourself admitted early on that some processes, all told, do not result in a normal distribution. When they do not, or when they do but cannot be properly parameterized by synthesis, then we must use the t-test.

It's interesting that you now admit the PK-ability variable should be normally distributed. That rather flies in the face of your prior protests against excluding Operator 010 data because it did not fit an expected distribution. You told us we couldn't predict its expected distribution due to speculative factors. So which is it?
 
Ooo, you discovered google.

Today is especially cargo-culty. "Oh, look, this article mentions 'normal distribution,' therefore I must be right." It's what you'd expect from Googling "t-test and normal distribution" but not understanding a single thing that's said at the other end. One can certainly Google for facts, but not for knowledge.
 
You should had started by the t-distribution instead. That would've given you an opportunity to get it in the beginning.

Indeed, he seems to think we can't see the pattern of someone initially denying a thing, and then slowly trying to come to terms with it and educate himself. Today, however, he seems to just be cherry-picking whatever passages he can Google that mention "normal distribution" in the same breath as "t-test."

For those of you playing at home: whatever underlying physical process is producing the output the subject is supposed to be manipulating with his mind, that's an independent variable in the experiment. It can follow whatever distribution it wants, but it's generally a good idea if it follows a normal distribution (i.e., is a random variable). What matters, however, is that it defies the subject's ability to predict by ordinary means. The dependent variable in the experiment is the degree to which the subjects can actually change the behavior of that underlying process. That's supposed to follow a normal distribution.
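A toy rendering of that design under the null hypothesis (no PK effect), with a fair random bit source standing in for the physical process; everything here is invented for illustration (numpy and scipy assumed):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)

# Independent variable: the underlying physical process (fair random bits,
# unpredictable to the subject by ordinary means).
# Dependent variable: whether the subject's "intention" shifts the hit rate.
baseline = rng.integers(0, 2, size=10_000)   # no intention
intention = rng.integers(0, 2, size=10_000)  # subject "trying" (null: no effect)

stat, p_value = ttest_ind(intention, baseline)
```

Under the null both hit rates hover near 0.5; a genuine PK effect would show up as a shifted mean in the intention block, which is the comparison the significance test actually addresses.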
 
You said that you were an associate professor of operations research. Very interesting. It appears that your posts do not reflect the depth of your knowledge. How unfortunate! Out of curiosity, I would like to know how the whole problem of trainers/sneakers was stated. Could you name the other variables as well along with the objective function?

The names of the "other variables"? I'd say, rather, that you're a person very restricted in knowledge. :D:D:D:D:D:D:D You don't even know the simplex method, and you allow yourself the buffoonery of voir-diring me?

How deeply hurt you are, caught in the open in the full horror of your ignorance! :rolleyes:

Neeners, neeners and I-dare-yous won't hide what you've already written: your failure to address all these subjects.

For instance:

""""Buddha"""" said:
Jeffers could have easily avoided this critique if he had chosen a single-slit setup, which leads to a Poisson distribution of particles. He already had the equipment, all he had to do is to use a screen with one slit instead of one with two slits.

You don't have the slightest idea about the physics behind the equipment either. You're basically suggesting we need a Poisson distribution to examine the problem of the total number of clients served by all tellers in all branches of all banks in all countries during a whole century.

You don't even know the slit width or the times involved in Jeffers' experiment, or in "your" modified version of it (all clearly specified by Jeffers from the beginning); otherwise you would have refrained from writing down such statistical tomfoolery.
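The bank-teller point can be made numerically: at large mean counts the Poisson distribution is practically indistinguishable from a normal with the same mean and variance, so insisting on Poisson machinery for huge aggregate counts buys nothing. A sketch (scipy assumed; the rate is arbitrary):

```python
from scipy.stats import norm, poisson

lam = 1000.0  # arbitrary large mean count

# Compare the two CDFs at a few points around the mean: the Poisson
# probabilities track the matching normal closely once lam is large.
points = [950, 1000, 1050]
gaps = [abs(poisson.cdf(k, lam) - norm.cdf(k, loc=lam, scale=lam ** 0.5))
        for k in points]
max_gap = max(gaps)
```

At small rates the two differ markedly (Poisson is discrete and skewed); at rates like this the discrepancy is down in the third decimal place.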

""""Buddha"""" said:
But he chose to distort the conditions of the Princeton experiment to “demonstrate” that it is irreproducible.

That's just another piece of the paranoid thinking we're so accustomed to reading in your posts and in the vanity-press booklet you claim authorship of.
 
Before some of my opponents express their strange views on t-tests, I would like them to read this...


Oh, you learnt how to cut and paste! Good for you!!! If you were in Excel you could use F4 to repeat, repeat, repeat, repeat ....
 
Indeed, he seems to think we can't see the pattern of someone initially denying a thing, and then slowly trying to come to terms with it and educate himself. Today, however, he seems to just be cherry-picking whatever passages he can Google that mention "normal distribution" in the same breath as "t-test."

Yes, he's not even the name dropper he used to be days ago. He's now falling way below jabba's category (B-) and entering crude Malcolm Fitzpatrick/Robinson zone, with a mix of pure copypasta and resentful hostility.


This thread just turned into another failed post on "Buddha"'s blogspot, where he couldn't disable the comment section and its laughs.
 
I’m not saying that Buddha is trying to mock and smear the author of the anti-science book he earlier claimed to have written. I will however say that if somebody wanted to make a writer look like a blithering idiot, even more so than the actual anti-science book that they had penned, the way Buddha is humiliating himself repeatedly on this forum by demonstrating a profound lack of knowledge and a chronic incapacity to learn would be a very good strategy for doing so.

If his goal is to play a character to make the person he claims to be look foolish then he’s very successful.
 
Yes, he's not even the name dropper he used to be days ago.

He's still obviously trying to pretend he's in good company. He keeps referring to all the other people who have also criticized Jeffers, and believes that he would be endorsed and vindicated by them. Curious thing, though. He can't really speak to what those critics actually say. Or make his criticism consistent with theirs.

If Buddha is to be believed, Jeffers is making elementary errors in statistics. So, apparently, was Palmer. Buddha expressed his disbelief that these people could be making such glaring mistakes. So you'd expect Buddha's other critics to pick up on that, right? Especially York Dobyns, PEAR's resident statistician who thought enough about it to write it up and publish it. Does Dobyns note all the same egregious statistical offenses Buddha tries to lay at Jeffers' feet? In fact he does not. You'd think that someone who found it advisable to publicly defend his group's work wouldn't mess around with the statistical esoterica that we read in Dobyns' published commentary. You'd think that he'd come right out and say stuff like, "He can't use the t-test for significance here," just as Buddha has said. If that were the right answer, that is. But Dobyns seems to miss all the basic "blunders" that Buddha has found. Williams too. Also Alcock, who had plenty of chances to comment on the method before the study was even run. And somehow Jeffers' peer reviewers missed it all too. How did so many well-equipped, well-motivated people miss the very simple errors Buddha says he sees in Jeffers' original research?

Well, there's another possibility. They aren't really errors -- at least not on Jeffers' part. The myriad other reviewers and critics miss Buddha's "obvious" accusations because they're misunderstandings on Buddha's part.
 
He's still obviously trying to pretend he's in good company. He keeps referring to all the other people who have also criticized Jeffers...


You're way too generous, patiently trying to inform "Buddha" and make him reason and understand his mistaken ways. "Buddha" sees elementary errors in statistics just because of basic projection of his own incompetence, which is now recorded here, on web.archive.org and in many other places, for everybody to see, for all eternity. "Buddha" also sees ill intentions and duplicity in those bona fide authors as a projection of his own mala fides, trying to pose here (and probably elsewhere) as a knowledgeable person, when he's already shown (planning, language, skills, etc.) that he's below average in his likely professional real life.
 
Oh, you learnt how to cut and paste!

From something probably intended for a seventh-grader, no less. The funny thing about cherry-picking is that there's nothing in that paragraph that is not correct. But it doesn't salvage Buddha's argument because he doesn't pay attention to context.

Yes, in this case (and every similar case) the dependent variable is assumed to be normally distributed. If some such distribution weren't involved, you'd have to resort to generic correlations. But in some empirical cases the dependent variable is convolved in the observation with several independent variables whose data may be distributed in any old way. The outcome is a combination of them, and is not at all guaranteed to be normally distributed. That's why it's parameterized into a different kind of distribution, the t-distribution.

In clinical testing the independent variables are typically normally distributed, but only because the subject pool has been homogenized to achieve this. In other cases one may have no idea how the independent variables vary, and they could do so according to any distribution -- or no distribution at all.

In the PK tests the physical phenomenon that's supposed to be manipulated psychically by the subject is an independent variable. What varies, as the dependent variable, is whether the subject is attempting to manipulate it. While the apparatus PEAR used to produce that value has, at its heart, a physical phenomenon known to be normally distributed, it is also subject to other effects, only some of which can be controlled for.

But even that's not the real problem. Buddha, fixated for a dozen pages on the workings of the various apparatuses, does not realize that the underlying physical process in these apparatuses is not the variable that's being addressed in the significance tests. He's still trying to fit these real-world experiments into his one-dimensional understanding of statistics. Yes, some people criticized Jeffers for using a double-slit apparatus. But it's important to know when and why. It's not because such an apparatus is somehow statistically invalid. The objection was because the double-slit phenomenon was theorized to be too difficult for PK-able subjects to manipulate in a noticeable way. And that objection arose only after Jeffers got the "wrong" answer. No one objected to the double-slit apparatus until after it failed to be manipulated by the subjects. The critics aren't complaining because the apparatus is not statistically modeled by a normal distribution. They're looking for reasons to suppose the PK effect still exists even though the test failed.
 
