Is the Telekinesis Real?

That's why he can't address Jeffers': he has nothing useful to copy or rehash about this paper that he could use to favour his argumentation...

Responding to Jeffers requires understanding the mathematics upon which he based his conclusion. Buddha's erratic performance over the past several posts indicates he does not understand it, but is just now coming to terms with what it means for Jahn's research and for Palmer's criticism. That is, he has just realized he cannot set aside significance testing, although he's not yet to the point of understanding Jeffers.

...hence the fallacy of controlling the game field by not talking about it.

Sure. Note the change of goalposts that happened between just two of his posts today. I thoroughly described his error in assuming the significance test proper to Jahn's research had to proceed a certain way. Buddha was convinced Palmer was wrong to use an empirical baseline, which he said was invalid because, among other reasons, it could never collect enough baseline data to achieve perfect conformance to the Poisson distribution. He didn't know how the t-test works. But more importantly, he didn't realize that Palmer applied the t-test to the subjects minus Operator 010 because that's what Jahn did in the original research. He can't claim that Palmer misunderstood how baselines work without also claiming that Jahn misunderstood it, and that undermines his star witness.

He probably went out and Googled the t-test for significance and realized belatedly that it exists and that it works substantially as I described. So now he's trying to dump the argument that Palmer's method was naive. True to pattern, he's promising now to pontificate on the t-test in a way that suggests he knew about it all along, without addressing his prior point that Palmer -- in using it -- apparently didn't know what he was doing. He pontificates in order to distract from his previous errors. He realizes his audience sees these significant blunders as hits to his credibility, and that sort of "I'm an accomplished person" attention seems to be what he craves.

But of course that only works when accompanied by vigorous and vociferous gaslighting about the supposed irrelevance and ignorance of his opponents. So today he's going all out showing how evil and stupid I am, and tomorrow we can expect to be favored with a hodge-podge summary of the stuff he's frantically studying today. He has no argument in favor of psychokinesis. All he has is a badly-formed crusade against the critics of one contributor to PK research, and a similarly badly-formed crusade against his own critics here. I fail to see any part of his entire approach to knowledge that isn't browbeating and gaslighting people into believing his claim that he must be so much smarter than they.
 
When you find yourself stuck in a hole of your own making, it is time to stop digging.

Indeed, when someone has obviously caught you in a lie or a material mistake, it's generally pretty silly to double down on it and make the same mistake again, only harder. This is why I asked whether Buddha had ever admitted a consequential error, the answer to which seems to be practically never. That's why I specifically pointed out where you had misread his post and gallantly admitted it. This gives you far more credibility than he has, even though you erred. You will admit your mistakes and move on. Buddha hasn't yet figured out that this is the intellectually stronger path.

I've found this to be somewhat true in cases besides this one. Claimants underestimate the degree to which their critics can see them going off into errant paths. This is an observation I made while teaching. There are often common ways of misunderstanding and common errors that students make while learning something. Claimants trying to impress beyond their means, and blustering their way through the exercise, assume their bluff is working and that their critics can't see through the excuse that the claimant's obvious error must still be "somehow" correct. "No, you don't understand my argument..." No, no. I understand your argument. Which is to say, you're making the same error I've seen novices make dozens of times before.
 
Your example of calculation of average values has nothing to do with my presentation...

Statisticians use "mean," not "average." Again, you might want to consider that your critics are wising up to your increasingly evident overstatement of your expertise in statistics.

But you are slightly correct in that my example of the effect of discretization on the behavior of the mean doesn't have much to do with "your presentation," if by that you mean Palmer's allegedly improper baseline. It does, however, have something to do with what your critics here have been harping on from Page 1. That is, you are ignoring the reviewer they've asked you to address from the beginning, even after you asked for references to such reviews to be posted.

Again I'm speaking of Dr. Stanley Jeffers, the physicist whom you've been avoiding like the plague. And yes, I even pointed out in my post that I was shifting topics to talk about Jeffers. Your whining that it's "irrelevant" doesn't change the fact that we, your critics here, want you to address what he wrote, and we're not going to be satisfied with your doggedly head-in-the-sand approach to a debate of PEAR's findings. If you are unwilling to address material criticism simply because you didn't get to choose it, then you are not capable of defending PEAR to the extent required in this forum.

You are the one who brought up baselines and how they figure into significance testing by trying to say Palmer didn't know what he was doing. So while you continue to try to shove Jeffers aside as "irrelevant," you keep bringing up exactly the features of PEAR's research that Jeffers addressed. Arguing that Palmer doesn't know about proper baselines makes the issue of proper baselines relevant. And if we're going to talk about what makes baselines proper or not, Jeffers becomes relevant whether you like it or not. His whole series of articles discussed how it could be determined whether PEAR used a credible baseline.

No, your judgment of relevance has nothing to do with what is actually relevant. Instead your application of the relevance test seems to follow whether the subject is one you know how to talk about or can bluster your way through to your satisfaction. Your behavior is much more consistent with ham-fisted techniques to avoid having your ignorance exposed than it is with keeping the discussion on track.
 
That's right ladies and gentlemen. Buddha is seriously claiming that the random number generator from one research project was completely reliable because of electron fluctuation testing done in a COMPLETELY DIFFERENT PROJECT.

This is where Schmidt comes in, the other researcher that Palmer reviewed. To be sure, Jahn also compared the behavior of his REGs to an idealized Poisson distribution and used that to govern at-trial adjustment of the REG's operational parameters. That's in the IEEE paper. But the difference between Schmidt and Jahn is that Schmidt, having shown that his REG produced a distribution close enough to a Poisson distribution to eliminate significant confounds, then went ahead and measured the subjects' performance against Poisson using the Z-test for significance. In other words, he measured his subjects against the ideal performance of the machine instead of its actual performance.
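For anyone following along at home, here's a toy sketch in Python of that kind of Z-test against a purely theoretical baseline. Every number in it is invented for illustration; it is not Schmidt's actual data or procedure.

# Sketch of a Z-test against a theoretical Poisson baseline: theory fixes
# both the mean and the variance (lambda), so no empirical calibration
# data enters the test at all. All values here are hypothetical.
from math import sqrt
from scipy.stats import norm

lam = 100.0            # theoretical Poisson parameter (hypothetical)
n = 2500               # number of experimental runs (hypothetical)
observed_mean = 100.9  # measured mean count over those runs (hypothetical)

z = (observed_mean - lam) / sqrt(lam / n)  # standard error comes from theory alone
p_value = 2 * norm.sf(abs(z))              # two-tailed significance

print(f"z = {z:.3f}, p = {p_value:.4f}")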

This is not per se wrong. If the machine's performance is measured not to depart significantly from theory, then you are arguably justified in using the theoretical distribution as the baseline, and within those limits your results will be acceptable to a corresponding degree. But when the effect you're trying to study is theorized to be very small, as both Schmidt and Jahn held, then you need to consider the second- and third-order effects in your experiment, and p < 0.05 may not be a suitably strict threshold when comparing the REG to Poisson. This is what Palmer says in reviewing Schmidt.

The Poisson distribution works from a description dictated by mathematical theory. The distribution is formally defined, and must be in order to satisfy the constraints of a probability density function. As such its descriptive statistics are derived as part of the theory, as functions of its parameters. The big assumption of the theory is simply that the events it describes are independent, and that's what lets the descriptive statistics proceed from abstract knowledge. Any actual distribution of events theorized to be independent will only ever approximate the Poisson distribution. The measured descriptive statistics will not be spot-on with those from theory. This is due to the effects of the N-value, to be sure, and we can say that an infinite number of such independent events should converge to Poisson. But it can also be due to the effects of confounds that, while they may not rise to significance under one p-value threshold, may be significant at another. This is one of the ways we find confounding variables.
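To make that concrete, here's a quick simulation sketch; the parameter and sample sizes are arbitrary. The measured mean and variance approach the theoretical value (mean = variance = lambda) as N grows, but never land exactly on it.

# Sketch: theoretical vs. measured descriptive statistics for a Poisson
# distribution, at several sample sizes. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
lam = 4.0  # theory says mean = variance = lam

for n in (100, 10_000, 1_000_000):
    sample = rng.poisson(lam, size=n)
    print(f"N={n:>9}: mean={sample.mean():.4f}, var={sample.var():.4f} (theory: {lam})")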

What this means, then, is that the actual expected distribution may not reflect purely independent events, in which case the descriptive statistics may not be expected to conform to theoretical values. That is, we can't say for sure that REGs, if run unaffected an infinite number of times, will converge to a Poisson distribution to some arbitrarily small p-value. If, for example, the machine were affected by the fluorescent lights in the room and the experimenters did not know this, then the samples taken when the lights were off would differ from those taken when they were on, and you'd have the convolution of two time-dependent distributions, assuming the last researcher turned out the lights on his way out of the office for the day.

The t-distribution resembles a normal distribution, but it isn't one. It has heavier tails, and its shape changes with its degrees-of-freedom parameter. What it does is allow us to build a confidence interval from sparse data regarding how far the measured mean can differ from the (unknown) true mean. It's what you use when the population's mean and standard deviation aren't known, and it achieves that effect by assigning more probability to values far from the mean than a normal distribution would, thus amplifying variance at small N-values. The population in this case is the behavior of all similarly-designed REGs, over all time. Jahn (and Palmer, who agrees with Jahn) effectively said,
"While we expect the REGs used in these experiments to approximate very closely a Poisson distribution, their actual mean and standard deviation can't be known from theory, due to effects we can never hope to control down to a fine enough degree to measure very small variances against.

"But we can measure the performance of these machines with all the lingering confounds in place, after precise and careful adjustment of the things we know about and can control. And if, given N=23, we come up with a confidence interval that's small enough, we can assure ourselves that our measured descriptive statistics can vary from the (unknown) theoretical distribution by a small enough amount that measured variance in the subjects can be thought significant. And it accounts for intervening comfounds with a very small, but possibly noteworthy effect."​
In this case "theoretical distribution" means the theoretical distribution that would occur if the REGs were measured an infinite number of times -- what Buddha suggests would have to be done in order to know how the results should be distributed without the PK effect.
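For the curious, here is a minimal sketch of what building such a t-based confidence interval at N=23 looks like, with simulated stand-in data rather than PEAR's actual baseline runs.

# Sketch: a t-based confidence interval for the (unknown) true mean from
# sparse data, N=23 as discussed above. The data are simulated stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
baseline = rng.normal(loc=100.0, scale=7.1, size=23)  # hypothetical baseline runs

n = len(baseline)
sem = stats.sem(baseline)  # standard error of the mean
lo, hi = stats.t.interval(0.95, df=n - 1, loc=baseline.mean(), scale=sem)

print(f"mean = {baseline.mean():.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")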

This is why it's so comical to hear Buddha say that and then say, "Oh, right, the t-test for significance; yeah, I use that sometimes in my work." No. Just, no. If Buddha is saying he knows about the t-distribution and uses it, then it's silly for him to say (before it's explained to him) that we need infinite empirical trials to create the baseline. That's explicitly what the t-test is meant to avoid having to do. It's not just a refinement of his previous claim. It's a total reversal of it. He seems to want us not to see this.

Based upon the profound and general ignorance Buddha exhibits, I suggest Buddha check out this Wikipedia page.

https://en.wikipedia.org/wiki/Dunning–Kruger_effect

Sure, there is an element here of not knowing what one doesn't know. But I detect also in his approach a bunch of fairly common social-engineering techniques that reveal a more conscious understanding of his own deficiency. The Dunning-Kruger effect seems to have more to do with unawareness than with deliberate attempts to create the illusion of erudition at others' expense. Buddha is more about the latter.
 
To start with, categorical variables are not used in clinical trials.

Bwahahaha! You must have been sick the day they taught multivariate analysis in statistics. With a wave of your hand, you simply disregard an entire field of statistics, including one of the most important techniques in experimental research, the one that guarantees the independence of the single variable you were given data for.

What's even worse is that I described in detail how they are used in research of the type PEAR conducted, and incidentally in medical research. They are invaluable in ensuring the integrity and applicability of the study, and I took great effort to explain why. Do you address my detailed description at all? No, you simply announce that I'm wrong, give a highly simplified description of the one part of some clinical trial you may have been incidentally involved with, and then wrongly conclude that's all the statistical work that was ever done or needed to be done.

You keep proving time and again that you are unwilling to expand your knowledge to fit the problem. You keep trying to pare away the parts of problems that don't fit your existing understanding. And hubristically you try to call other people stupid when they point out your oversimplifications and errors. What kind of a "consultant" does that?

The subjects for a test are usually...

But you're not an authority on what is "usually" done in experiments. In the thread where you offered anecdotes as empirical evidence for reincarnation, you both illustrated and admitted that you didn't know what empirical controls were. Do you really expect us to forget about these things from day to day?

chosen at random...

A subject pool is first established, usually from volunteers out of the general human population. This is true both for physiology research and for psychology research. A common source of psychology research subjects is undergraduate students, who must participate in order to get credit in the psychology classes they take. From this subject pool are drawn the subjects for some particular study based on their demographic conformance to the general population, or to the parameters of the study -- sometimes both. Usually the results are hoped to generalize to the population at large, so the sample is desired to differ as little as possible from what is known about the general population. Often the study is limited to subjects of a certain category, such as to women who have had at least one child. Obviously the men and childless women must be filtered out in that case. Sex is a category. Number of children is a category. The latter especially may be one of the proposed correlates.

Drawing at random often works because subjects tend to fall naturally into a representative (or desired) demographic distribution. A random sample of a representative pool will also be representative, to within a measurable amount. Another method is to score each member of the subject pool according to how closely he fits the general population (or the study parameters) and then take only the N top scorers.

Randomness here is not the end goal. The end goal is conformance to the parameters of the study. Randomness is only a proxy for that, and not even always the best proxy.

So with an appropriately shaped subject sample in hand, what happens next? Well, as I explained previously, it's separated into the control and variable groups. In a clinical trial, the control group gets the placebo. In a psychology experiment there may be no control group, the sample having been homogenized against the general population. Where the bifurcation occurs, it is best done randomly to avoid experimenter bias. However it's done, the goal is to ensure that all the pertinent categorical variables are equally distributed on both sides of the divide. This is to ensure that the two groups are otherwise independent, besides the thing you're going to do to one group but not the other.

This is measured by looking at the pertinent categorical variables and determining whether the values those variables take on are related to which group the subjects were randomly placed in. Here too, randomness per se is not the end goal. The end goal is homogeneity between the two experiment groups. Randomization is merely one way to achieve it.
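Here's a toy illustration of one such check; the contingency table is made up, and a chi-square test is one common choice among several.

# Sketch: testing whether a categorical variable is distributed similarly
# across the two experiment groups. The counts below are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

#                 category A  category B  category C
table = np.array([[12,         9,         10],   # control group
                  [11,        10,          9]])  # variable group

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")
# A large p-value gives no evidence that group assignment is related to
# this variable, i.e., the split left the groups homogeneous in this respect.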

Regardless of how it's achieved, it will never be perfect. No randomization process produces a sample that's perfectly aligned with the population from which it's drawn, nor a pair of groups exactly equally divided among potential confounds. It just avoids a particular sort of bias. The groups will always differ by a measurable amount in all the ways that matter for the experiment, and the sample as a whole will always differ by a measurable amount from the population it is meant to represent.

The point here is that we can measure it. This is what statistics, at its heart, is all about. We'll come back to this.

Once the choice is made, the subject data remains no matter what;

Total hogwash.

Data are retrospectively excluded from studies all the time for reasons such as measurement error, subjects' failure to follow the protocol, experimenter error, the death or withdrawal of the subject, and so forth. In any sufficiently large study it is practically unheard of for all the subject data to make it to the end of the study. Now perhaps in the data you were given, those inappropriate data had already been culled. Once again you continue to misunderstand why your particular suggestion for removing data was invalid. Because you were challenged (correctly) on that particular reason, you seem to have evolved this childlike belief that no data may ever be excluded from analysis. If your concept of science were true, think of how many studies would have to be terminated, after years of work and funding, because one data point became unusable for whatever reason. No science would ever get done.

...otherwise a test becomes nonrandom and its results are no longer valid.

Emphatically no, and this illustrates just how little you really understand what statistics are for. Randomness is not itself the desired end. It is the means to other ends, and those ends are neither disserved nor rendered unrecoverable by the departure of one subject's data after the randomization process has done its thing. Statistics is about confident reasoning in the face of uncertainty. Your view is that a proper scientific study is a perfectly balanced house of cards that can't tolerate the merest jiggle of the table without crashing down in its entirety. That's as anti-statistical a sentiment as there ever could be. Statistics is about what you can do with what you know, even if it's not perfect.

Those subjects were never perfectly aligned anyway, randomness notwithstanding. There was always a need to measure how closely they aligned with the population and with each other, and to develop a confidence interval describing how far the groups' measured behavior could be trusted. When one subject departs, the confidence interval changes. But it doesn't just go up in smoke. That's what it means to reason statistically. Now if your N is too small, then one subject's departure may change the confidence interval to the point where significance of effect is hard to measure. But that's not even remotely the same thing as the naive all-or-nothing fantasy you've painted.
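A quick simulated example (invented numbers, nothing more) shows what actually happens when one subject departs: the interval moves and widens a little; it does not evaporate.

# Sketch: dropping one subject changes the confidence interval measurably;
# it does not invalidate the analysis. Data are simulated stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
scores = rng.normal(50.0, 8.0, size=24)  # hypothetical subject scores

def ci95(x):
    return stats.t.interval(0.95, df=len(x) - 1, loc=x.mean(), scale=stats.sem(x))

print("all 24 subjects:    ", tuple(round(v, 2) for v in ci95(scores)))
print("one subject removed:", tuple(round(v, 2) for v in ci95(scores[:-1])))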

You told me I hadn't provided an example of these things. That's a lie, because I've referred twice now to Dr. Philip Zimbardo and one of the most famous psychology experiments of all time, the Stanford prison experiment. You may recall this was formulated as an experiment to determine the effect of imposed roles on cruel behavior, and is most infamous for having developed astonishing levels of brutality before it was belatedly terminated. In his most recent book, The Lucifer Effect, he gives a retrospective of the experiment interleaved with copious mea culpa for not realizing what was going on. Naturally it's all in the original paper, which is easily available. But in Lucifer he takes you through it in detail, assuming you aren't a research psychologist.

So the first thing Zimbardo had to do was get reasonably suitable subjects. He described his recruiting efforts; most ended up being undergraduates at Stanford University, for obvious reasons. They weren't randomly sampled from the entire U.S. population, but they reasonably varied in demographics as much as college students can be expected to vary.

Then he administered a set of standard psychometric instruments to them to determine where they fell according to various variables he knew or suspected would lead to brutal behavior. He describes the variables and shows how his sample scored as a group along all these dimensions. He also compared their scores with distributions of scores from the wider population. It's obvious that Zimbardo's sample was not perfectly representative of the U.S. population in all respects. For starters there were no women, owing to the practical limits of human-subjects testing in this particular experiment. But that largely doesn't matter, because he merely needed to establish by psychometry that the subjects were representative enough in the ways that applied to his study.

Then he randomly divided the sample into Guards and Prisoners and showed how the psychometric evaluations were distributed between the two groups. Specifically he showed that in each of those categories the distribution of the psychometric variables was so similar between the two groups that they could be considered equivalent in terms of their predisposition to violence and brutality. Not identical, of course. There was variance. But one of the things the t-test sort of analysis can give you is a comparison between two population means, knowing the confidence interval that applies. While the groups were not identical, the confidence interval was broad enough that it could contain the (unknown) theoretical mean score for all the subjects. This is what we mean by homogenization. Unless you do it, you can't independently attribute effects to the imposed cause.
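A sketch of that kind of between-group comparison, using invented scores rather than Zimbardo's data:

# Sketch: comparing two groups' psychometric scores with a two-sample
# t-test (Welch's variant), in the spirit of the homogeneity check
# described above. The scores are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
guards    = rng.normal(35.0, 6.0, size=12)  # hypothetical aggression scores
prisoners = rng.normal(34.0, 6.5, size=12)

t, p = stats.ttest_ind(guards, prisoners, equal_var=False)
print(f"t = {t:.3f}, p = {p:.3f}")
# A large p-value means the group means can't be distinguished at this N,
# which is what "equivalent in predisposition" amounts to statistically.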

Zimbardo planned to impose different roles on the two groups, and he needed them to be otherwise as similar as possible, so that any differences he observed could be properly attributed to the imposed role, not to the emergence of some latent property that was skewed in one group versus the other. Hypothetically, if all the people with measured aggressive tendencies had somehow landed in the Guard group, it would be hard to chalk up a beat-down to the imposed role and not to predisposition. Or in contrast, if all the non-white people were in the Prisoner group, you couldn't separate the Guard-Prisoner dichotomy from the race category and say that everyone's behavior would have been the same even if all the prisoners had been white.

All the things I've been talking about are in the book.

The rest of the experiment is history. Zimbardo let it run far too long, up to and past the point of inflicting severe emotional distress on the participants. He spent quite a lot of time cleaning up the mess. But what has emerged over time is that he has maintained contact -- and even professional relationships (some of them became psychologists themselves) -- with the former subjects. And it emerged that one member of the Guard group developed his own agenda that was not necessarily compatible with the goals and protocols of the experiment. For the quantitative part of the experiment Zimbardo had to decide whether to include that data.

A simpler example suffices. Suppose there is a drug trial and subjects are prohibited from drinking alcohol for the duration of the study. Now let's suppose the experimenter discovers that one subject is having his daily glass of wine at dinner, in contravention of the protocol. According to you, we would have to include his data in the final result even though he broke the experimental protocol and confounded whatever results he may have had from the drug with the result of drinking alcohol. That's what would affect the study, not booting him out. If we boot him out of the study, whichever group he was in -- placebo or drug -- has its N-value decremented by one, and his contribution to the homogenization check is removed. And for all we know, his departure might make the groups more homogeneous. This retains the integrity of the study because the remaining subjects (who are presumed to follow the protocol) are exhibiting the behavior it was desired to study.

Palmer suggested that test results of the Princeton study regarding one subject should be discarded to show that they do not significantly deviate from the ones based on Poisson distribution.

No, that's not even remotely close to what Palmer suggests.

He should have known better that then all test results become meaningless because the requirement for randomization is no longer met.

No. As explained above, randomization is the means we use to achieve various ends which are then not affected by culling bad data in a way that statistical analysis cannot overcome. Randomization is not an end unto itself. This is your pidgin-level knowledge of statistics and experimentation, against which you're trying to measure the behavior of the published experts. You're literally trying to tell career scientists "they should have known better," when you have already both demonstrated and admitted you don't know their science or its methods.
 
Responding to Jeffers requires understanding the mathematics upon which he based his conclusion. Buddha's erratic performance over the past several posts indicates he does not understand it, but is just now coming to terms with what it means for Jahn's research and for Palmer's criticism. That is, he has just realized he cannot set aside significance testing, although he's not yet to the point of understanding Jeffers.

That's why I suggested Jeffers' very early in this thread: to show not only that the "king" is naked but that he is indeed the court jester.

Buddha was convinced Palmer was wrong to use an empirical baseline, which he said was invalid because, among other reasons, it could never collect enough baseline data to achieve perfect conformance to the Poisson distribution. He didn't know how the t-test works. But more importantly, he didn't realize that Palmer applied the t-test to the subjects minus Operator 010 because that's what Jahn did in the original research. He can't claim that Palmer misunderstood how baselines work without also claiming that Jahn misunderstood it, and that undermines his star witness.

He probably went out and Googled the t-test for significance and realized belatedly that it exists and that it works substantially as I described. So now he's trying to dump the argument that Palmer's method was naive. True to pattern, he's promising now to pontificate on the t-test in a way that suggests he knew about it all along, without addressing his prior point that Palmer -- in using it -- apparently didn't know what he was doing.

That's why I lost my patience with him very early in this thread (and in previous threads perpetrated by him), when he started treating a group of a couple dozen subjects as if it were an infinite set. I am sick of forum participants who passed Statistics 101 many years ago with, depending on the grading system, a C+, a 5, a 12, or a 2.6, and who, precisely because they struggled so much for that grade, now declare they "know everything about it": the eternally growing Dunning-Kruger crowd and their cultural Brownian motion.

He pontificates in order to distract from his previous errors. He realizes his audience sees these significant blunders as hits to his credibility, and that sort of "I'm an accomplished person" attention seems to be what he craves.

Well, be wary yourself of using his chosen terms like "opponents" and "critics" when it's really much more accurate to say "the people who feel sorry for him" or "the people who treat him like a chewtoy in forums".

He's just following a model that is a mix of Goebbels and the Road Runner. "Opponents" and "critics" is part of what he can sell here, along with claiming authorship of a pathetic book when he shows he's not able to write even such a bad piece. He has also claimed professional backgrounds that don't check out in reality. That's the Goebbelsian part of it. The Road Runner part is his "I'm an accomplexed accomplished person" and "you, JayUtah, are an interesting pal ... the rest are pretty stupid" hidden among patronising drivel that presents his posts as if they carried the Acme brand.

And that way he stirs up enough irritation among the participants that the thread can go on fulfilling its void destiny, while he feels he's getting the attention he doesn't get in real life and fantasizes that a myriad of invisible lurkers are silently cheering for him.

Kids have imaginary friends. Adults become the central characters in forum threads. In brief, a déjàbba vu.
 
That's why I lost my patience with him very early in this thread (and in previous threads perpetrated by him)

All his other former interlocutors seem to have given up on him too. Frankly it's just not that interesting -- and in fact a little disturbing -- to watch someone so tenaciously stroke his own ego.
 
All his other former interlocutors seem to have given up on him too. Frankly it's just not that interesting -- and in fact a little disturbing -- to watch someone so tenaciously stroke his own ego.

Egos may be fine, and some degree of rudeness too, but the constant zero-content in his posts is intolerable. And by zero-content I mean incessant self-referential drivel scarcely seasoned with rehashed bits that have been repeated until exhaustion (including student mistakes and a general clumsiness in dealing with logic and hard sciences, aggravated by a limited ability to use proper language).

I wonder if, in this thread, he has ever tried to answer his own question, "Is 'the' telekinesis real?". I also wonder if he ever realized this thread has been cast out of the "Science..." forum and thrown into "...the paranormal" (with general **** like bigfoot), where it rightly belongs.
 
Yeah, you guys with your facts and science. All those non-existent lurkers out there can see exactly what you're doing. Big words and proper protocols, pfft.
 
I also wonder if he ever realized this thread has been cast out of the "Science..." forum and thrown into "...the paranormal" (with general **** like bigfoot), where it rightly belongs.


Shhh. If he notices he’ll use that to fuel his persecution complex.
 
And by zero-content I mean incessant self-referential drivel scarcely seasoned with rehashed bits that have been repeated until exhaustion...

All the other posters are clearly exhausted. I'm just about ready to throw in the towel too. Buddha is fundamentally ineducable.

...including student mistakes...

Indeed when I was a student I always marveled at how my teachers sometimes knew the mistakes I was going to make even before I made them, and knew exactly how to correct them. "How did they know?" And not until I had students of my own did I realize that my former errors -- and theirs -- were not as individualized and uncommon as I believed when I was a youngster. Buddha simply can't fathom that other people can fully understand his attempts to argue, fully see the errors he's making, and fully correct them. He urged his critics to read an elementary book on statistical analysis. He doesn't realize that his problem is that he's working from nothing but an elementary level understanding and trying to make expert analysis fit it. He's obviously never been within a mile of an actual scientific study, from conception to completion. He may have tangentially played some small role in one, commensurate with his demonstrable knowledge. But he has absolutely no clue how actual research is done.

...a general clumsiness in dealing with logic and hard sciences...

It's been a riot both in this thread and in his prior reincarnation thread watching him lurch from one major conceptual error to the other when talking about empirical research. He translates past corrections into cargo-cult "rules" for research that have nothing to do with a reasoned approach or a knowledge of the actual tools and techniques. And then when the real world doesn't fit this pidgin understanding, it's somehow the world's fault.

His proof for God was so riddled with logical errors it was almost painful to watch, and his evolution book is obviously a travesty. Here is someone clearly wanting to be seen as erudite, but simply not being willing to be taught.

I wonder if, in this thread, he has ever tried to answer his own question, "Is 'the' telekinesis real?".

Nope. He started from the very beginning doing nothing more than going after selected critics of PEAR, which very quickly transformed into going after his own critics here. He's never made an argument in favor of psychokinesis, except for declaring that since he had declared Jahn's critics all to be ignorant -- in his expert opinion -- then PEAR's conclusions should stand. And this, apparently, is to be the basis of the masterful paper he's about to write defending PEAR.

He's not even spending much time talking about psychokinesis anymore. He's on a full-blown rampage to explain to the world how I don't understand how to do clinical drug trials. For some reason that's relevant to the PEAR studies, but a discussion of significance-testing methods is not. It's quite obvious he's just trying to make himself look good by bashing his critics on whatever subject he can think of.
 
And then when the real world doesn't fit this pidgin understanding, it's somehow the world's fault.

"pidgin understanding" ... that's a very apt description

His proof for God was so riddled with logical errors it was almost painful to watch, and his evolution book is obviously a travesty. Here is someone clearly wanting to be seen as erudite, but simply not being willing to be taught.

Why face all the trouble of getting there when one can simply pretend to be there already?

Nope. He started from the very beginning doing nothing more than going after selected critics of PEAR, which very quickly transformed into going after his own critics here. He's never made an argument in favor of psychokinesis, except for declaring that since he had declared Jahn's critics all to be ignorant -- in his expert opinion -- then PEAR's conclusions should stand. And this, apparently, is to be the basis of the masterful paper he's about to write defending PEAR.

Yes, psychokinesis is real by default, the same way medicine considers snake oil an all-disease cure by default.

"A censure motion regarding those aliens who kidnap people in the night has been presented. Any opposition? The motion passes!"

He's not even spending much time talking about psychokinesis anymore. He's on a full-blown rampage to explain to the world how I don't understand how to do clinical drug trials. For some reason that's relevant to the PEAR studies, but a discussion of significance-testing methods is not. It's quite obvious he's just trying to make himself look good by bashing his critics on whatever subject he can think of.

Or the age-old "escape to success", one of the psyche's many coping mechanisms.

Indeed when I was a student I always marveled at how my teachers sometimes knew the mistakes I was going to make even before I made them, and knew exactly how to correct them. "How did they know?" And not until I had students of my own did I realize that my former errors -- and theirs -- were not as individualized and uncommon as I believed when I was a youngster. Buddha simply can't fathom that other people can fully understand his attempts to argue, fully see the errors he's making, and fully correct them.

So he might not even be searching out and rehashing old bits related to this discussion, but thinking them up from scratch and believing he's being original. Darwiness me!
 
You keep proving time and again that you are unwilling to expand your knowledge to fit the problem. You keep trying to pare away the parts of problems that don't fit your existing understanding. And hubristically you try to call other people stupid when they point out your oversimplifications and errors. What kind of a "consultant" does that?

Every management consultant I've ever met?

Dave
 
Every management consultant I've ever met?

Dave

Amen!

 
“Optional stopping. It is not stated whether the total number of trials and the number of trials completed by each subject were specified in advance. In principle, this leaves the Jahn research open to the criticism that a series may have been terminated at times favorable to the support of the experimental hypotheses.” Palmer, page 119

Usually it is implied that the number of trials is chosen in advance; if the number is not chosen in advance, the author states this explicitly. This is nitpicking on Palmer’s part.

“An opportunity to test the optional-stopping hypothesis is suggested by the fact that the degree to which optional stopping was potentially operative seems to have varied from series to series. In 24 of the series, the number of runs in the PK+ and PK- modes was identical. In 23 of these, the number of runs per type was either 2500 or 5000; in the remaining series it was 3000 runs. It is very unlikely that optional stopping was a factor in these series. Thus, if the optional-stopping hypothesis were correct, one would expect lower scoring in these series than in the other 37.” Palmer, page 120

This is a bald-faced lie; Palmer deliberately misinterpreted the research by suggesting that the researchers should have used an optimal stopping rule, which comes from an entirely different branch of mathematics.

I’ll start with a general discussion. Suppose you have the results of control runs and experimental runs pertaining to a research program. Let’s say you do not know what kind of statistical distribution the control runs form. In this case you make an educated guess about the nature of the distribution, and after that perform a homogeneity test. Homogeneity tests are done to make sure that all samples come from the same population, which could be a Poisson distribution, for instance.

“Suppose we have made s successive sequences of observations consisting of n, m, … q observations respectively, where their numbers are not determined by chance, but are simply to be regarded as given numbers” H. Cramer, Mathematical Methods of Statistics, page 445. (this is famous graduate-level textbook, not everyone is capable of understating it)

It would be a huge mistake to say that the numbers of observations are not chosen in advance but are determined during the evaluation process. But one of my opponents made this mistake, which shows that he has no knowledge of homogeneity tests.

Obviously, if a distribution is of a known type, no homogeneity test is required.

One of my opponents wrote that I am so naïve that I assume that all distributions are of Poisson type. This is not what I wrote. I mentioned non-Poisson distributions as well. Distortion of the opponent’s words is a lowly tactic, to say the least.

It has been proven theoretically and experimentally that fluctuations in the number of electrons during an electron emission from the surface of a metal form a Poisson distribution, so in this case a homogeneity test is not needed to confirm that.

For a t-test the number of samples is determined in advance, and it remains constant throughout the process. This number depends on the set’s mean, variance, and an arbitrarily chosen significance level (usually 5%; if you want to decrease the significance level and make the test more precise, you have to increase the number of runs). For more information on t-tests consult Crow, Statistical Manual, page 57 (this is a very simple book; everyone should be able to understand it).

Again, one of my opponents (the same one) wrote that the number of samples for a t-test is determined during the test. This is a pathetic and gross mistake that proves he has absolutely no knowledge of t-tests.

In his article Palmer made reference to the optimal stopping theory. This theory is very complex, and I have only perfunctory understanding of it along with all my colleagues. Data analysts do not use this theory because of its complexity; very few colleges offer optimal stopping courses, and even Columbia University is not one of them.

This Wikipedia article gives some information about optimal stopping
https://en.wikipedia.org/wiki/Optimal_stopping

As far as I know, this theory is often used in quality control. Quality control folk have a very basic knowledge of it, which is enough for their purpose; they use certain formulas without knowing how to derive them.

As a control systems engineer, I indirectly encountered this theory when the quality control engineers were telling us that we needed to adjust parameters in our control models. The formulas that they use allow a company to save plenty of money by testing a limited number of defective samples and stopping production before the defects become more numerous. Basically, they test a small number of samples and, based on the analysis, decide how many samples they need to test before deciding whether to stop production or go on.

The optimal stopping theory has nothing to do with t-tests; Palmer knows that, but he raised this issue on purpose. My opponent believes there is a connection between the optimal stopping theory and t-tests, but this is just his imagination going wild.



 
I’ll start with a general discussion.

No. Why don't you correct the errors in your previous presentations before pontificating further?

Homogeneity tests are done to make sure that all samples come from the same population, which could be a Poisson distribution, for instance.

Oh, this is hilarious. I talk about homogenization and you dismiss it as "irrelevant." Now here you are regurgitating what I said earlier and pretending it's you who's teaching it. What a fraud.

...this is famous graduate-level textbook, not everyone is capable of understating it...

You haven't shown that your understanding of statistical analysis rises above what you might read in a software instruction manual. Sorry, you need to stop suggesting you're so much smarter than everyone else. You just aren't.

One of my opponents wrote that I am so naïve that I assume that all distributions are of Poisson type.

No, that's not what I wrote. I wrote that there are significance tests that use an empirically-determined baseline, and I specifically identified it as one of the non-Poisson types you referred to. You rejected the entire idea as impossible, because you did not then know about the t-test and believed that every such analysis must be against an ideal distribution. Your whole point was that no number of empirical runs could confirm which distribution should be used. This is entirely ignorant of the t-test, which always uses the t-distribution.
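Since we're putting things on the record, here is a toy sketch of exactly that kind of test against an empirically-determined baseline. Both data sets are invented; the point is only that no ideal distribution enters anywhere.

# Sketch: significance testing against an empirically determined baseline.
# The baseline runs themselves set the reference; no theoretical
# distribution is assumed. All numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
baseline_runs = rng.normal(100.0, 7.0, size=23)  # hypothetical calibration runs
subject_runs  = rng.normal(101.5, 7.0, size=23)  # hypothetical intention runs

t, p = stats.ttest_ind(subject_runs, baseline_runs, equal_var=False)
print(f"t = {t:.3f}, p = {p:.3f}")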

Distortion of the opponent’s words is a lowly tactic, to say the least.

Yeah, so quit doing it. You can't seem to come up with an argument that's not just accusing your critics of one thing or another. How tedious.

It has been proven theoretically and experimentally that fluctuations in the number of electrons during an electron emission from the surface of a metal form a Poisson distribution, so in this case a homogeneity test is not needed to confirm that.

I already addressed this multiple times.

For a t-test...

Oh, look, here you are again trying to "teach" stuff I already taught, after you already showed your ignorance of it. This constant face-saving is ridiculous.

Again, one of my opponents (the same one) wrote that the number of samples for a t-test is determined during the test.

No, that's not what I wrote.

This is a pathetic and gross mistake that proves he has absolutely no knowledge of t-tests.

Oh, calm yourself. 48 hours ago you had no idea what a t-test for significance was. Now you're frantically doing damage control.

I have only perfunctory understanding of it along with all my colleagues.

No. Not everyone else is ignorant just because you are.

My opponent believes there is a connection between the optimal stopping theory and t-tests, but this is just his imagination going wild.

No, that would be your imagination going wild. I made no such statement.
 

Sorry, but I'm not just going to let the lies stand. Obviously Buddha's been frantically doing make-up studying, during which he has apparently discovered that I was right all along. Now he's in Pontiff mode trying to belatedly lecture on the subjects I've already described and that he's already dismissed as "irrelevant." But in order to make it look like he was somehow right all along, he has to lie about what I wrote so that he can manufacture a contrast between it and what he's now regurgitating. He's also doubling down on his lies by trying to gaslight everyone into believing he is the one who has been maliciously misrepresented.

No, I'm not going to follow him on the new subject he's frantically trying to change to. He desperately wants to leave significance testing far in the dust and bring up a red herring to distract the audience from his colossal failure and his inauspicious damage-control effort.
 
All his other former interlocutors seem to have given up on him too.
They might also recognise that you don't need any help demolishing his posts ;)

Frankly it's just not that interesting -- and in fact a little disturbing -- to watch someone so tenaciously stroke his own ego.
Indeed, but it's quite entertaining to watch you handing him his arse on a plate.
 
