Is the Telekinesis Real?

Just who are these lurkers that follow the ISF yet never seem to jump into a thread and proclaim their agreement with the woo du jour?


Well, I for one have profound telekinetic powers. My abilities are such that they are a terror to behold. “The Great Turtle” from the Wild Cards books was inspired by my burgeoning capabilities when I was younger. (The character in the books was toned down considerably to make him more believable.)

I am not coming to the defense of any of the “proofs” being offered here because, quite frankly, they’re embarrassing. I’m insulted by this garbage being passed off as genuine telekinetic power. Massaging the number that comes up in a flawed “random” number generator is hardly a display of telekinesis. It’s like pointing to a healed paper cut as proof that you can re-grow somebody’s arm. It’s pathetic. It’s a joke. If you’re going to prove you have telekinetic power, you go all out. This half-assed, namby-pamby, wishy-washy nonsense is just embarrassing to watch.

Posers. Posers all the way down.
 
Massaging the number that comes up in a flawed “random” number generator is hardly a display of telekinesis. It’s like pointing to a healed paper cut as proof that you can re-grow somebody’s arm. It’s pathetic. It’s a joke.

"Ray, the sponges migrated about a foot and a half."

This half-assed, namby-pamby, wishy-washy nonsense is just embarrassing to watch.

Especially since this thread devolved from the start into the standard exercise of correcting Buddha's effluent ignorance. We haven't actually discussed a single proof for psychokinesis since the top half of the first page.
 
Well, yes and no. There were two adaptations made in 1940 and 1944. I remember that the better one was whichever one Angela Lansbury was in. Don't ask me why I don't remember that the better version is the Ingrid Bergman one, but it probably has something to do with it being Lansbury's introductory film role.
And my one shot at glory fizzles on the launch pad. Don't mind me. I'll just be slinking away. And pouting.

Slinking and pouting. My life.
 
Because you brought one up as a comparison to PEAR and Palmer.



No, that's not how it works. Anyone can claim to be anything as long as they don't have to demonstrate it. That's what you do. Here on this forum and elsewhere you've claimed all kinds of expertise you can't ultimately back up. Instead, I demonstrate correct understanding. That way people can draw their own conclusions about whether I know what I'm talking about. They don't have to take my word for it.



Nonsense. You brought up a clinical trial as a direct comparison to an error you are claiming Dr. Palmer made. The discussion of categorical variables is necessary to show how subject pools are homogenized in such trials and how your error would have violated that homogenization. Then I showed how, under the same model, Palmer's actions actually had the opposite effect from your mistake, and served to achieve a homogenization that would be necessary for the aggregate statistics in PEAR's findings to have the meaning they intended. If you're claiming that tests for variable independence such as the chi-square are irrelevant to experimental psychology, that's just about as ignorant a statement as can be.

I don't care about the details of how to test drugs for cancer. You have a history of rambling on about irrelevant subjects instead of sticking to the point. You also have a history of insinuating that you have expertise, but declining to demonstrate it. Since you can't understand my argument or rebut it, you're desperately trying to gaslight the audience into thinking it is irrelevant.
This post shows that you have no idea how clinical trials are conducted. Well, you put your ignorance on full display. To start with, categorical variables are not used in clinical trials. The collected data consists of the analysis results of the subjects' blood, as it was in the leukemia clinical trials that I described. The conclusion that a patient was cancer-free was a part of his medical record that had no bearing on the conclusion presented to the FDA. My mistaken suggestion was to exclude all of this patient's data from the study, which is similar to the mistake that Palmer made.

It seems to me that you are trying to impress your supporters with your erudition by bringing tons of irrelevant data into the discussion. But you do not have to work that hard; they are already on your side. As for me, this tactic is totally unimpressive because it shows a complete lack of originality.

Besides, you are unable to understand my request to provide psychological study data; I didn't say that a t-test or any other test cannot be used for the data evaluation, but I asked you to provide at least one example of a psychological study which contains a rejected outlier. You do not have to give me a link to that report; all you have to do is describe it in your own words. But I am sure you are unable to do that because such a report doesn't exist.
 
It seems to me that you are trying to impress...

Wow, project much?

As for me, this tactic is totally unimpressive because it shows a complete lack of originality.

That's some pretty desperate well-poisoning there, Buddha. How about you deal with what I actually wrote in all its detail instead of whining about what you think I haven't produced or what you think I don't know.

You do not have to give me a link to that report; all you have to do is describe it in your own words.

I did. You said it was "irrelevant." But then again I also gave you the reference to Zimbardo's book, which describes how it was done in one of the most famous psychology experiments ever. You're not a very good authority on what your critics have or have not provided.

It's not a very good time for you to be haughty considering you've been ignoring Jeffers for weeks.
 
But today you reversed that and claimed there was no "theory of telekinesis" that was being tested, therefore no way Dr. Palmer could have known what the inter-subject data would look like, and therefore no basis for him to determine that Operator 010's performance was anomalous.

Vague handwaving references to "statistical methods" don't address the problem that even under the theory you state there still should have been a normal distribution, and that anomalous data would still stand out against it. Nor, when I thoroughly provide the background that illustrates just how such integrity tests and homogenization procedures would be accomplished using inter-subject data, do you simply get to handwave it all away as "irrelevant." You made it relevant in posts such as these. Since you are unwilling to discuss the "statistical methods" beyond broad-strokes handwaving and content-free appeals to your own non-existent authority, we can dismiss your attempt to undermine PEAR's critics as shallow and uninformed.

But it's also worth pointing out that you're changing your story in order to enable reasons for you to sidestep challenges you can't meet. This is not honest debate.
Your suggestion that I changed my story shows that you have very little knowledge of the making of a theory. Take, for example, election surveys. There is no theory behind them; their purpose is to provide statistical data about candidates' standings in the polls. Similarly, the Princeton study researchers didn't try to "prove their theory of telekinesis" because they had none; their goal was to collect the experimental data and evaluate it.

Either you are unable to understand my argument or you distort it on purpose. Lack of understanding and a deliberate distortion are equally bad, as you already know.
 
Your suggestion that I changed my story shows that you have very little knowledge of the making of a theory.

That's right, just keep hurling those insults. You have no argument in any of these threads except how supposedly stupid your critics must be.

Take, for example, election surveys.

Irrelevant. We're talking about whether research into psychokinesis proceeds according to a theory about how it must work. Both Jahn in his principal research and Palmer in his criticism advanced various theories that guided their interpretation of the data. One theory was that the effect came in bursts and was therefore possibly short-lived. Another was that it proceeded involuntarily (i.e., to explain the anomalous calibration results).

...and evaluate it.

The evaluation included speculation about how it might work. That's what points to theory. You abandoned the notion of theory in these studies only when it became apparent that it could be used to disregard outlying data -- which the researchers all eventually did, including Jahn. You didn't have these philosophical concerns about it until you saw how it could be used to undermine your beliefs.

Either you are unable to understand my argument or you distort it on purpose. Lack of understanding and a deliberate distortion are equally bad, as you already know.

You need a better argument than, "My critics are such terrible people."
 
This post shows that you have no idea how clinical trials are conducted. Well, you put your ignorance on full display. To start with,



You've been caught lying and exaggerating too many times for anything you write to be taken at face value. You need to provide citations for your claims. Nobody takes your pathetic attempts at insulting people seriously. You're not credible.

You need a better argument than, "My critics are such terrible people."

If he stopped using that all he'd be left with was his Dunning-Kruger based diatribes.
 
You need to provide citations for your claims.

Except that they're irrelevant claims. He can't talk about the PEAR research directly with anything approaching comprehension, so he tries to say, "It must be like clinical trials," which he claims to know about. Or, "It must be like elections," which he also might know something about. He's desperately trying to make this fit whatever little knowledge he might have about how to analyze other kinds of collected data using statistics. I don't care about references to election data or cancer drugs. Those are distractions from his inability to handle the actual subject he raised.
 

tl;dr -- Buddha doesn't understand the statistics PEAR used to analyze their findings. He has conflated the incidental means of validating the baseline with the actual tests for significance against the baseline. Based on that misunderstanding, he accuses Palmer of erring when he recomputed the significance. I explain how the t-test for significance works and provide an example to illustrate what Dr. Jeffers found suspicious about PEAR's use of it.



Yes it is, contrary to what you think.



Yes he does, in the context of PEAR's research which intended -- correctly so -- to use the t-test for significance. Dr. Palmer explicitly notes that PEAR's decision to use the t-test instead of the Z-test improves over Jahn's predecessor Schmidt in studying the PK effect. I've mentioned this several times, but you never commented on it. I'm going to explain why it's an improvement, why PEAR was right to use it, why Palmer was right to endorse its use, and why you don't know what you're talking about.



No.

If your intent is to use the t-test for significance, then baseline data must be collected empirically. It can't be inferred from theory. It can only be done after the project design, apparatus, and protocols are in place. Now where the baseline calibration factors are all static given the above, all the empirical baseline data may be collected prior to any experimental trials -- or even afterwards, as long as the baseline collection is independent of experiment trials. But if instead the calibration factors include environmental factors that cannot be controlled for except at the moment of trial, then it would be a mistake to compare experimental data collected in one environment at the beginning of the project to baseline data collected in a different environment as the project proceeds.

It is up to the judgment of the experimenter to know which factors apply. In this case Dr. Jahn, an experienced engineer, properly understood that the REG apparatus was sensitive to several environmental factors, only some of which he could control for explicitly. Hence the protocol properly required calibration runs at the time of trial. This is so that the collected data sets could be reasonably assured to differ in only one variable.



No.

There is no magical rule that says that all baseline data must be collected prior to any experimental data, and absolutely no rule that says calibration runs may not interleave with experimental runs. You're imagining rules for the experimental sciences that simply aren't true. We know from your prior threads that you have no expertise or experience in the experimental sciences, so you are not a very good authority on how experiments are actually carried out. We further know that you will pretend to have expertise you don't have, and that your prior arguments depart from "rules" you invent from that pretended expertise and then try to hold the real world to.



Yes, in theory. The REG design is based on a physical phenomenon known to be governed principally by a Poisson distribution. That does not mean the underlying expectation transfers unaffected through the apparatus from theoretical basis to observable outcome. In the ideal configuration of the apparatus, and under ideal conditions, the outcome is intended to conform to a Poisson distribution to an acceptable amount of error.



Not just possible, known to confound. We'll come back to this.



The results will never "form" a Poisson distribution. The results will only ever approximate a Poisson distribution to within a certain error.
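
To make that concrete, here's a minimal simulation sketch; the rate parameter, sample size, and seed are invented for illustration and have nothing to do with PEAR's apparatus or data. Even counts drawn from an exact Poisson process only approximate the theoretical distribution in any finite sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

lam = 100      # assumed mean count per calibration sample (illustrative only)
n_runs = 500   # number of simulated samples

counts = rng.poisson(lam, size=n_runs)

# The sample statistics approach, but never exactly equal, the theoretical ones:
print("theoretical mean:", lam, "  observed mean:", round(counts.mean(), 3))
print("theoretical var: ", lam, "  observed var: ", round(counts.var(ddof=1), 3))

# Empirical vs. theoretical probability of a few counts near the mode:
for k in (98, 100, 102):
    print(k, round((counts == k).mean(), 4), round(stats.poisson.pmf(k, lam), 4))
```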



Specifically, if the machine is operating properly, a Z-test applied to a calibration run will produce a p-value above 0.05. If the p-value falls below that threshold, it means some confound in the REG is producing a statistically significant deviation from the expected behavior.

But you misunderstand why this is of concern. You wrongly think it's because it is the goal of the experimenters to compare the experimental results to the Poisson distribution. Instead, an errant result in an apparatus carefully designed and adjusted to approximate a Poisson distribution as closely as possible indicates an apparatus that is clearly out of order. This in turn indicates an unplanned condition within the experiment, one that cannot later be assumed not to have confounded the experimental data in some unknown, qualitative way. The Z-test for conformance to the Poisson distribution merely confirms that the machine is working as intended, not that the machine is working so well that the Poisson distribution can be substituted as a suitable baseline.
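
For illustration only, here is one way such a conformance check could look, assuming a simple normal-approximation Z-test on the run mean. The rate and sample size are invented, and this is not PEAR's actual calibration procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

lam = 100                                  # assumed theoretical mean count
calibration = rng.poisson(lam, size=200)   # one simulated calibration run

# Normal-approximation Z-test on the mean: does the calibration run deviate
# from the theoretical Poisson expectation by more than sampling error allows?
z = (calibration.mean() - lam) / np.sqrt(lam / calibration.size)
p = 2 * stats.norm.sf(abs(z))              # two-sided p-value

print(f"z = {z:.3f}, p = {p:.3f}")
# p above the chosen threshold (say 0.05): no detectable confound.
# p below it: a statistically significant deviation the team must chase down.
```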



Well-known, but known not to be exhaustive. This is the part you're missing. Yes, the operator of an REG (or any other apparatus) will have a predetermined checklist to regulate the known confounds with the hope of reducing measured calibration error to below significance. That doesn't guarantee he will succeed in removing all error such that he can set aside measurement in favor of theory.



No.

Certainly a conscientious team will look for sources of error. But they know they cannot do so exhaustively, and that they will never drive the residual error all the way to zero. Nor is it possible to. They will only reduce the error until the calibration Z-test reports no statistically significant deviation for their purposes. "No significant deviation" does not mean zero error. It merely means they have confidence that the machine is working as expected. They know from the start that they are dealing with a Poisson process. The calibration merely ensures that the Poisson effect dominates the machine's operation.

If they were to use the Z-test to compare the experimental data to the idealized Poisson distribution, the error remaining in the calibration Z-test would still be a factor. And it would have been set aside in that method.

Let's say the calibration runs produce a Z-test p-value of about 0.06. That's just enough to say the machine is operating within tolerance, since the deviation falls short of statistical significance. But the error still exists as a non-zero quantity. Using the Z-test on the experimental data combines that error with any variance attributable to the test variable such that they cannot be separated. When the expected variance in your experimental results is very small, this becomes a concern.

Hence the t-test for significance, which relaxes the constraint that the expected data conform to any theoretical formulation of central tendency. The data may incidentally conform, but that's not a factor in the significance test.
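
A minimal sketch of that kind of comparison, with invented numbers: the experimental run means are tested against an empirically collected baseline rather than against a theoretical distribution. The sample sizes, means, and the use of SciPy's ttest_ind are illustrative assumptions, not PEAR's data or code:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Invented per-run mean counts. Both sets come from the same (imperfect)
# apparatus, so any residual machine error is present in both; the only
# intended difference is whether the operator applied intention.
baseline_runs     = rng.normal(loc=100.02, scale=0.05, size=23)
experimental_runs = rng.normal(loc=100.06, scale=0.05, size=23)

# Two-sample t-test for significance: the baseline is the empirical
# calibration data, not an idealized theoretical distribution.
t_stat, p_value = stats.ttest_ind(experimental_runs, baseline_runs,
                                  equal_var=False)   # Welch's variant
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```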



Yes, which is why the other tests for significance besides the Z-test exist. The rule for choosing a baseline in the t-test is that the baseline is determined, to an acceptable degrees-of-freedom extent, by some number of empirical runs in which the test variable is not varied. The protocol for running the calibration is determined by known factors of the test, and looks at how the expected or known confounds are thought to vary. Jahn et al. expected the confounds to vary mostly by factors that would exhibit themselves only at the time of trial, and could only be partially controlled for by machine adjustment. Hence calibration runs interleaved with trial runs.

"Better than chance" in these contexts doesn't mean varying from the Poisson distribution. It means varying from the behavior that would be expected were not some influence applied. Whatever that behavior would have been is completely up for grabs. You don't have to be able to fit it to a classic distribution. You only have to be able to measure it reliably.



Which is why a test was developed that determines a baseline empirically, without the presumption that it would conform to some theoretical distribution. If you aren't sure which theoretical distribution is supposed to fit, or you know that no theoretical distribution will fit because of the nature of the process, then descriptive statistics provides a method for determining whether some variable in the process has a significant effect by comparing it against an empirically determined baseline. The limitations of empirically determined baselines translate to degrees of freedom in the comparison, but do not invalidate it entirely. This is Descriptive Stats 101, Buddha. The fact that you can't grasp this simple, well-known fact in the field says volumes about your pretense to expertise.



The t-test requires no choice -- it always uses the t-distribution. You fundamentally don't understand what it is, why it's used, or how it achieves its results.



No.

This is comically naive, Buddha. You're basically arguing that the t-test itself is invalid, when it is actually one of the best-known standard measurements of significance.

No, the t-test does not require an "infinite number of runs" to establish a usable baseline. The confidence in the baseline is determined by the distribution of means in the calibration runs. The standard deviation of that metric determines the degrees of freedom, which is the major parameter of the t-distribution. The degrees-of-freedom flexibility in the t-distribution is meant to compensate for uncertainty in the standard deviation of the distribution of means in the calibration runs.

You don't compare the calibration runs to some idealized distribution. You compare them to each other. The spread of that distribution of means measures the consistency of the calibration runs from trial to trial. If the calibration runs are very consistent, only a few of them are needed. If they are not consistent -- i.e., the standard deviation of the distribution of means is large -- then many more runs will be required to establish a true central tendency.

But once you know the degrees of freedom that govern how much the t-distribution can morph to accommodate a different distribution, you know whether you have a suitably tight baseline.
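
A small sketch of that point, assuming nothing beyond the standard t-distribution: the degrees-of-freedom value is the parameter that governs how far the tails reach, and therefore how large a deviation must be before it registers as significant. The alpha level and df values below are arbitrary:

```python
from scipy import stats

alpha = 0.05   # two-sided significance threshold (arbitrary choice)

# The critical |t| value (how far outside the baseline an observation must
# fall before it counts as significant) is set by the degrees of freedom:
for df in (3, 10, 22, 100, 1000):
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    print(f"df = {df:5d}   critical |t| = {t_crit:.3f}")
# At low df the tails are heavy and a larger deviation is required; as df
# grows the distribution tightens toward the normal's 1.96.
```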

Let's say you do twenty calibration runs, and for all of them the Z-test against the Poisson distribution produces a p-value in the range 0.055-0.06. That's approaching significance, but it stays above the p < 0.05 threshold that may be sacrosanct in your field. So you're good to go. Your confounds are just below the level of significance when compared to Poisson.

But instead we might find that the distribution of means in the calibration runs is extremely narrow. That is, the machine might be on the hairy edge of accurately approximating the Poisson distribution, but it could be very solidly within the realm of repeating its performance accurately every time. This is why the t-test is suitable for small data sets (in Jahn's case, N=23) where such behavior might be revealed in only a small number of calibration runs.

A small standard deviation in the baseline means translates to fewer degrees of freedom in the ability of the baseline to "stretch" to accommodate values in the comparison distribution of means. That means any data that stands too far outside the properly-parameterized t-distribution will be seen as significantly variant. That is, it is the consistency among the baseline runs, not their conformance to one of the other classic distributions, that makes the comparison work.

But what's more important is that any concern about those near-threshold p-values in the calibration Z-test becomes irrelevant. Whatever was causing the machine to only-just-barely produce suitably random numbers was shown in the t-test baseline computation not to vary a whole lot from run to run. Whatever the confounds are, they're well-behaved and can be confidently counted on not to suddenly become a spurious independent variable. If the subject then comes in and produces a trial that varies at p < 0.05 in the t-test from the t-distribution parameterized from those very-consistent prior runs, that's statistically significant. If that subject's performance had been measured instead against the theoretical Poisson distribution, then the effect hoped to be statistically significant would still be confounded with whatever lingering effect was pushing the calibration p-values toward the significance threshold.

In your rush to play teacher, you've really shot yourself in the foot today.

First, I covered all this previously. It was in one of those lengthy posts you quoted and then dismissed with a single line of rebuttal. You constantly attempt to handwave away my posts as "irrelevant" or somehow misinformed, but here you are again trying to say what I've already said as if you're now the one teaching the class. The way the Poisoning-the-Well argument technique works is that you're not supposed to drink from the same well. I explained how the t-test and its parameters worked, but now it's suddenly relevant when you decide to do it...

...and get it wrong. That's our second point. You fundamentally don't understand how tests for significance work. It's clear you've only ever worked with the basic, classic distributions and -- in your particular mode -- think that's all there could ever be. As I wrote yesterday, you're trying to make the problem fit your limited understanding instead of expanding your understanding to fit the problem. And in your typically arrogant way, you have assumed that your little knowledge of the problem, gleaned from wherever, "must" be correct, and that someone with a demonstrably better understanding of the subject than you -- the eminent psi researcher John Palmer -- "must" have conceived the problem wrong.

These are questions intended entirely seriously: Do you ever consider that there are things about some subject you do not know? Do you ever consider that others may have a better grasp of the subject than you? Have you ever admitted a consequential error?

Third, now it's abundantly clear why you're so terrified to address Dr. Steven Jeffers. Your ignorance of how the t-test for significance works and achieves its results reveals that you don't have the faintest clue what Jeffers actually did. You're ignoring him because you don't have any idea how to even begin. It's so far over your head.

So the t-test for significance compares two data sets that are categorically independent according to some variable of interest (in PEAR's case, whether PK influence was consciously applied). All the potential confounds are expected to be homogeneous across the two sets. One data set is the calibration runs, represented by its mean and standard deviation. The other set is the experimental runs, similarly represented. The N-value (23, for PEAR) and the standard deviation in one distribution determine the degrees of freedom that the corresponding t-distribution can use to "stretch" or "bend" to accommodate the other distribution.

What Jeffers discovered was that PEAR's t-distribution for the calibration runs was too tightly constrained. Working backwards, this translates into not enough degrees of freedom, then into not a lot of variance in the calibration means for a sample size of 23. In fact, an absurdly small amount of variance. Too small to be possible from PEAR's protocol. Why? Because while the process underlying the REG operation is theoretically Poisson, the process variable gets discretized along the way. Discretizing a variable changes the amount by which it can vary, and consequently the ways in which statistical descriptions of such variance can appear.

Let's say you ask 10 people to name a number between 1 and 10. We take the mean. Can that mean have a value of 3.14? No. Why not? Because our divisor is 10, and dividing a sum of integers by 10 can never produce more than one digit past the decimal point. It could be 3.1 or 3.2, but not 3.14. Do that 20 times, for a total of 20 means computed from groups of ten. If we aggregate the means, they can't vary from group to group by anything finer than 0.1. Data points will be either coincident or some multiple of 0.1 apart. If we look at the distribution of those means, there is a limit to how closely they can approximate a classic distribution because they are constrained by where they can fall in the histogram. They can fall only on 0.1-unit boundaries, regardless of how close or far away from the idealized distribution that is. All our descriptive statistics are hobbled in this case by the coarse discretization of the data.

All that occurs because the customary response to "pick a number between 1 and 10" is an integer. If we re-run the test and let people pick decimal numbers to arbitrary precision, then the group means can take on any real value, the aggregate of means can take on any value, and the distribution of those means across all groups has more flexibility to get close to a classical distribution. More importantly, the standard deviation of that distribution has more places to go.
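
Here's a tiny simulation of the pick-a-number illustration above (group size, range, and number of groups match the example; the seed is arbitrary), showing the granularity constraint on the means directly:

```python
import numpy as np

rng = np.random.default_rng(5)

# 20 groups of 10 integer picks between 1 and 10, as in the example above:
integer_means = [rng.integers(1, 11, size=10).mean() for _ in range(20)]
print(sorted(set(integer_means)))
# Every mean is a multiple of 0.1; a value like 3.14 is unreachable, so the
# distribution of the means (and its standard deviation) is correspondingly
# coarse.

# Let the picks be real numbers of arbitrary precision and the constraint
# disappears: the group means can land anywhere.
real_means = [rng.uniform(1, 10, size=10).mean() for _ in range(20)]
print(real_means[:3])
```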

What Jeffers found was that the purported distribution of means in the calibration runs is not likely to have actually been produced by the REGs because it offered a standard deviation not achievable through the discrete outputs the REG offered, just like there exists no set of integers such that their sum divided by 10 can be 3.14.

I would like you to address Jeffers, the critic of PEAR you've been avoiding for weeks. I would like to see you demonstrate enough correct knowledge of the t-test for significance to be able to discuss his results intelligently, and at the same time realize that John Palmer is not misinformed as you claim. At this point you seriously don't know what you're talking about.
If you do not understand how a baseline is determined, I cannot explain it to you for the reason that is unknown to me. (this is a joke, do not take it seriously!) However, you wrote that a baseline cannot be based on theoretical considerations. I gave one example already of how this could be done; I can give more if you ask me to.
Once again, you brought plenty of irrelevant data into the discussion. For example, the goal of my post was not to discuss significance tests but to concentrate on theoretical considerations that lead to a baseline.

I didn't say that t-tests are invalid, for a simple reason -- I use them occasionally in my work. Your statement is a complete misrepresentation of my argument which you are unable to grasp, so you put the words into my mouth without realizing how ridiculous this makes you look.

Your example of calculating average values has nothing to do with my presentation; once again you put your erudition on full display without realizing that this tactic doesn't work on every opponent; I guess you used it successfully in the past, but you should know that its success rate is less than 100%.

Do I ever accept that I am wrong? I already did so by discussing my mistake in this thread. Do I concede defeat in some arguments? Yes I do, as I recently did in an argument with my cousin. Usually I win, but this time her presentation was irrefutable. But I can also detect my opponent's weaknesses, as I did in your case. So far you produced nothing but hot air.
 
If you do not understand how a baseline is determined, I cannot explain it to you for the reason that is unknown to me. (this is a joke, do not take it seriously!) However, you wrote that a baseline cannot be based on theoretical considerations. I gave one example already of how this could be done; I can give more if you ask me to.
Once again, you brought plenty of irrelevant data into the discussion. For example, the goal of my post was not to discuss significance tests but to concentrate on theoretical considerations that lead to a baseline.

I didn't say that t-tests are invalid, for a simple reason -- I use them occasionally in my work. Your statement is a complete misrepresentation of my argument which you are unable to grasp, so you put the words into my mouth without realizing how ridiculous this makes you look.

Your example of calculating average values has nothing to do with my presentation; once again you put your erudition on full display without realizing that this tactic doesn't work on every opponent; I guess you used it successfully in the past, but you should know that its success rate is less than 100%.

Do I ever accept that I am wrong? I already did so by discussing my mistake in this thread. Do I concede defeat in some arguments? Yes I do, as I recently did in an argument with my cousin. Usually I win, but this time her presentation was irrefutable. But I can also detect my opponent's weaknesses, as I did in your case. So far you produced nothing but hot air.

Oh, this is some fine vintage horse****.
 
I am going to address the topic of outliers because I forgot to mention one important thing.

The subjects for a test are usually chosen at random with the help of a table of random numbers. Once the choice is made, the subject data remains no matter what; otherwise a test becomes nonrandom and its results are no longer valid.

Palmer suggested that the Princeton study's test results for one subject should be discarded to show that the remaining results do not significantly deviate from the ones based on the Poisson distribution. He should have known better: then all the test results become meaningless, because the requirement for randomization is no longer met.
 
I still have time to respond to this:

"You don't compare the calibration runs to some idealized distribution. You compare them to each other. The central tendency of that distribution of means measures the consistency of the calibration runs from trial to trial. If the calibration runs are very consistent, only a few of them are needed. If they are not consistent -- i.e., the standard deviation of the distribution of means is large -- then many more runs will be required to establish a true central tendency.

But once you know the degrees of freedom that govern how much the t-distribution can morph to accommodate a different distribution, you know whether you have a suitably tight baseline."

In this case the distribution is not idealized; it was predicted that electron fluctuations comply with a Poisson process, and this was proven to be true in experiments that have nothing to do with the Princeton experiment. For now I suggest my opponent read an elementary textbook on simple statistical tests.

Now I have to return to my work. I'll be back tomorrow.

This is not how the t-test works. I will discuss this topic tomorrow because it is relevant to the Princeton experiment.
 
If you do not understand how a baseline is determined...

Except that I do. And unlike you, I can demonstrate my understanding.

I cannot explain it to you for the reason that is unknown to me. (this is a joke, do not take it seriously!)

Except that you can't explain it. You don't know what the t-test for significance is, despite your claim to be a statistician. You think Palmer was wrong to use it, but you don't seem to realize that he used it because Jahn used it, and both researchers did so because it was the appropriate test in this case. You haven't even read the PEAR research, have you?

This is a serious comment. Do not attempt to dismiss it as a joke.

However, you wrote that a baseline cannot be based on theoretical considerations.

No. I wrote that if no purely theoretical model is expected to govern the data, there are other ways to establish a baseline for comparison. One of them is the t-test that PEAR used. It is indicated especially when a study's N-value is small, as it was in the PEAR research.
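
To put a number on that, here's a minimal sketch comparing the two-sided critical values of the Z-test and the t-test at a PEAR-sized N; the alpha level and the df choice are illustrative assumptions:

```python
from scipy import stats

alpha = 0.05
n = 23   # a PEAR-sized sample, per the discussion above

# Two-sided critical values: how large the test statistic must be for p < 0.05.
z_crit = stats.norm.ppf(1 - alpha / 2)
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)

print(f"Z critical value:           {z_crit:.3f}")
print(f"t critical value (df = {n - 1}): {t_crit:.3f}")
# With only a couple dozen runs the t-test demands a noticeably larger
# deviation before declaring significance, which is the conservative choice
# when the baseline itself is estimated from a small number of empirical runs.
```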

Once again, you brought plenty of irrelevant data into the discussion. For example, the goal of my post was not to discuss significance tests but to concentrate on theoretical considerations that lead to a baseline.

You flat-out said that the method Palmer used to re-evaluate the data in the absence of Operator 010 was improper because it didn't use an appropriate baseline for comparison. The paragraph you quoted gave the results of his own significance testing, following Jahn's method. How then is a discussion of significance testing suddenly "irrelevant?" The "theoretical considerations that lead to a baseline" in this case are exactly the proper parameterization of the t-distribution to produce the proper degrees of freedom in it. If this is something you think is "irrelevant" to statistical analysis, then I really don't know what to say. You might want to consider revising your claim of what you do for a living.

I didn't say that t-tests are invalid, for a simple reason -- I use them occasionally in my work.

Utter nonsense. You clearly didn't know a thing about it until I mentioned it. You tried to argue that the data had to fit one of the previously-mentioned distributions, not the t-distribution. And you wrongly claimed it would take "infinite" trials to determine which, if any, of those simple distributions might apply. You tried to undermine the very basis of the t-test, so you don't get to suddenly assure us you know what it is and how to use it. But if now you're changing your story -- once again -- and telling us you recognize it as a valid test, then you have to explain why Palmer's use of it was so wrong.

Your statement is a complete misrepresentation of my argument which you are unable to grasp, so you put the words into my mouth without realizing how ridiculous this makes you look.

I'll leave it to the readers to decide which one of us looks ridiculous.

...you put your erudition on full display without realizing that this tactic doesn't work on every opponent...

Pointing out your errors certainly doesn't seem to be working on you. You keep telling us how criticism has no effect on you. You don't seem to realize that's not something you should be boasting about. Yes, I know what I'm talking about. That should be apparent by now. You won't be able to just bluster or backpedal around me, so kindly stop trying. If you are simply honest about what you know, and about what you might realize during this debate you didn't know, things will go better.

But I can also detect my opponent's weaknesses, as I did in your case. So far you produced nothing but hot air.

That's right, just keep gaslighting. You have no argument in any of your threads that rises above accusing all your critics of being stupid. Good luck with that.
 
In this case the distribution is not idealized; it was predicted that electron fluctuations comply with a Poisson process, and this was proven to be true in experiments that have nothing to do with the Princeton experiment.

I already discussed that. You didn't address what I discussed.

For now I suggest my opponent read an elementary textbook on simple statistical tests.

It should be clear from my posts that I know what I'm talking about. You really need a better argument than constantly accusing your critics of being stupid.

This is not how the t-test works. I will discuss this topic tomorrow because it is relevant to the Princeton experiment.

Twenty minutes ago you said it wasn't. I guess in the meantime you must have Googled the t-test for significance and discovered that you couldn't bluster or gaslight your way around your error this time.
 
In this case the distribution is not idealized; it was predicted that electron fluctuations comply with a Poisson process, and this was proven to be true in experiments that have nothing to do with the Princeton experiment.

That's right ladies and gentlemen. Buddha is seriously claiming that the random number generator from one research project was completely reliable because of electron fluctuation testing done in a COMPLETELY DIFFERENT PROJECT. This claim is made despite multiple references to how the actual baseline derived from the actual equipment was flawed.

Buddha literally cannot differentiate between a theory and an implementation. This is not an insult or a personal attack, but a cold hard fact learned from reading his actual posts. He literally cannot understand how a flawed technology can result in data that does not match what one expects from the underlying theory.

For now I suggest my opponent read an elementary textbook on simple statistical tests.

Based upon the profound and general ignorance Buddha exhibits, I suggest Buddha check out this Wikipedia page.

https://en.wikipedia.org/wiki/Dunning–Kruger_effect
 
The subjects for a test are usually chosen at random with the help of a table of random numbers. Once the choice is made, the subject data remains no matter what; otherwise a test becomes nonrandom and its results are no longer valid.

Nope. The test doesn't become "non-random" just because N shrinks by one. The key thing about random numbers is that they are independent of each other, so N doesn't matter.

This is nonsense. You're claiming investigators can never reject data that may become obviously unusable as the experiment proceeds without invalidating the whole study. As I described several days ago (which you once again dismissed as "irrelevant"), the subjects are typically divided randomly into the control group and the variable group, over which it is hoped that other possible confounds will be evenly distributed. Random selection is the most effective way to do that. If, for reasons that the experimenters determine, one subject's data in either group is unusable, then obviously it is set aside and N becomes slightly smaller. Depending on N, the homogenization may become looser, but with large trials that's not a problem.

You're pulling out all the stops to come up with a reason why Operator 010 should remain in the data. You're completely disregarding that three other versions of the study, including one by Jahn himself, failed to replicate the findings and that Jahn accepted this.

He should have known better: then all the test results become meaningless, because the requirement for randomization is no longer met.

No, this is just another of your attempts to take the professionals to task based on your misconception. This is why I asked whether you admit error. One of the first things you should be asking in this debate is whether one of the world's most well-known psychologists experimenting in psi phenomena messed up basic experiment design, or whether an anonymous internet poster who constantly boasts about expertise he doesn't have might just be making a predictable mistake.
 
Hey, pals, why do you waste your time replying to the recycled BS that "Buddha" drops here? He failed utterly pages ago.

The things he's dealing with here are so old. It's evident he can only read old posts in forums and rehash them here to make them look original. Take a walk down memory lane (the old threads here about this BS) and you'll see what I mean. That's why he can't address Jeffers: he has nothing useful to copy or rehash about that paper that would favour his argument, hence the fallacy of controlling the playing field by not talking about it.

"Buddha". Yawn!
 
