Moderated: Is the Telekinesis Real?

"Buddha" trying to get attention again...


As always, plenty of unrelated data and complete lack of knowledge of the concept of randomness. Just stop pretending that you know something about statistics and follow this link
https://en.wikipedia.org/wiki/Random_sequence




From what kind of CliffsNotes do you get your vocabulary and "knowledge"?
 
Let's dissect one of the latest train wrecks from "Buddha" so you get a notion of how his mind works.

This post shows that you have no idea how clinical trials are conducted. Well, you put your ignorance on full display. To start with, categorical variables are not used in clinical trials. The collected data consists of the analysis results of the subjects' blood, as it was in the leukemia clinical trials that I described.

Yes, categorical variables, such as blood type or having previously received a complete course of chemotherapy, an incomplete one, or none at all, are not used in clinical trials. :rolleyes:


ETA: "Buddha", I got an idea. Why don't you include the threads you started here as part of your resume/CV? Your potential employers/clients will be shocked by the proficiency you show.
 
Bwahahaha! You must have been sick the day they taught multivariate analysis in statistics. With a wave of your hand, you simply disregard an entire field of statistics, and one of the most important techniques in experimental research that guarantees the independence of the one variable you were given data for.

What's even worse is that I described in detail how they are used in research of the type PEAR conducted, and incidentally in medical research. They are invaluable in ensuring the integrity and applicability of the study, and I took great effort to explain why. Do you address my detailed description at all? No, you simply announce that I'm wrong, give a highly simplified description of the one part of some clinical trial you may have been incidentally involved with, and then wrongly conclude that's all the statistical work that was ever done or needed to be done.

You keep proving time and again that you are unwilling to expand your knowledge to fit the problem. You keep trying to pare away the parts of problems that don't fit your existing understanding. And hubristically you try to call other people stupid when they point out your oversimplifications and errors. What kind of a "consultant" does that?



But you're not an authority on what is "usually" done in experiments. In the thread where you gave us empirical anecdotal evidence for reincarnation, you both illustrated and admitted that you didn't know what empirical controls were. Do you really expect us to forget about these things from day to day?



A subject pool is first established, usually from volunteers out of the general human population. This is true both for physiology research and for psychology research. A common source of psychology research subjects is undergraduate students, who must participate in order to get credit in the psychology classes they take. From this subject pool are drawn the subjects for some particular study based on their demographic conformance to the general population, or to the parameters of the study -- sometimes both. Usually the results are hoped to generalize to the population at large, so the sample is desired to differ as little as possible from what is known about the general population. Often the study is limited to subjects of a certain category, such as to women who have had at least one child. Obviously the men and childless women must be filtered out in that case. Sex is a category. Number of children is a category. The latter especially may be one of the proposed correlates.

Drawing at random often works because subjects tend to fall naturally into a representative (or desired) demographic distribution. A random sample of a representative pool would also be representative, to within a certain amount. Another method is to score each member of the subject pool according to how closely he fits the general population (or study parameters) and then take only the top N scorers.
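
To make that concrete, here's a toy Python sketch of the score-and-take-the-top-N approach. Everything in it (the pool, the demographic variables, the population parameters) is invented purely for illustration, not taken from any real study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical volunteer pool: each row is (age, years of education).
pool = np.column_stack([rng.normal(21, 3, 500), rng.normal(14, 2, 500)])

# Assumed population parameters the sample should resemble.
pop_mean = np.array([30.0, 13.0])
pop_std = np.array([12.0, 3.0])

# Score each volunteer by standardized distance from the population mean
# (smaller = better fit), then keep only the N best-fitting volunteers.
z = (pool - pop_mean) / pop_std
fit = np.sqrt((z ** 2).sum(axis=1))
N = 40
sample = pool[np.argsort(fit)[:N]]
print(sample.mean(axis=0))  # typically closer to pop_mean than the raw pool average
```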

Randomness here is not the end goal. The end goal is conformance to the parameters of the study. Randomness is only a proxy for that, and not even always the best proxy.

So with an appropriately shaped subject sample in hand, what happens next? Well, as I explained previously, it's separated into the control and variable groups. In a clinical trial, the control group gets the placebo. In a psychology experiment there may be no control group, the sample having been homogenized against the general population. Where the bifurcation occurs, it is best done randomly to avoid experimenter bias. However it's done, the goal is to ensure that all the pertinent categorical variables are equally distributed on both sides of the divide. This is to ensure that the two groups are otherwise independent, besides the thing you're going to do to one group but not the other.

This is measured by looking at the pertinent categorical variables and determining whether the values those variables take on are related to which group the subjects were randomly placed in. Here too, randomness per se is not the end goal. The end goal is homogeneity between the two experiment groups. Randomization is merely one way to achieve it.
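
A minimal sketch of that homogeneity check, in Python with entirely made-up data (the category labels, group sizes, and variable names are all hypothetical): a chi-square test of independence between group assignment and a categorical variable. A large p-value is consistent with the variable being equally distributed on both sides of the divide.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)

# Hypothetical categorical variable (e.g., prior chemotherapy: none/partial/full)
# for 60 subjects, randomly split into two groups of 30.
prior_chemo = rng.choice(["none", "partial", "full"], size=60)
group = rng.permutation(["control"] * 30 + ["treatment"] * 30)

# Contingency table: rows = group, columns = category.
cats = ["none", "partial", "full"]
table = np.array([[np.sum((group == g) & (prior_chemo == c)) for c in cats]
                  for g in ["control", "treatment"]])

# Test whether category membership is related to group assignment.
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"p = {p:.2f}")  # a large p-value is what randomization is supposed to buy us
```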

Regardless of how it's achieved, it will never be perfect. No randomization process produces a sample that's perfectly aligned with the population from which it's drawn, nor a pair of groups exactly equally divided among potential confounds. It just avoids a particular sort of bias. The groups will always differ by a measurable amount in all the ways that matter for the experiment, and the sample as a whole will always differ by a measurable amount from the population it is meant to represent.

The point here is that we can measure it. This is what statistics, at its heart, is all about. We'll come back to this.



Total hogwash.

Data are retrospectively excluded from studies all the time for reasons such as measurement error, subjects' failure to follow the protocol, experimenter error, the death or withdrawal of the subject, and so forth. In any sufficiently large study it is practically unheard of for all the subject data to make it to the end of the study. Now perhaps in the data you were given, those inappropriate data had already been culled. Once again you continue to misunderstand why your particular suggestion for removing data was invalid. Because you were challenged (correctly) on that particular reason, you seem to have evolved this childlike belief that no data may ever be excluded from analysis. If your concept of science were true, think of how many studies would have to be terminated, after years of work and funding, because one data point became unusable for whatever reason. No science would ever get done.



Emphatically no, and it illustrates just how little you really understand what statistics are for. Randomness is not itself the desired end. It is the means to other ends, and those ends are neither disserved nor put beyond repair by the departure of one subject's data after the randomization process has done its thing. Statistics is about confident reasoning in the face of uncertainty. Your view is that a proper scientific study is a perfectly balanced house of cards that can't tolerate the merest jiggle of the table without crashing down in its entirety. That's as anti-statistical a sentiment as there ever could be. Statistics is about what you can do with what you know, even if it's not perfect.

Those subjects were never perfectly aligned anyway, randomness notwithstanding. There was always a need to measure how closely they were aligned with the population and with each other, and to develop a confidence interval describing how much weight those groups' behavior could be given. When one subject departs, the confidence interval changes. But it doesn't just go up in smoke. That's what it means to reason statistically. Now if your N is too small, then one subject gone may change the confidence interval to the point where significance of effect is hard to measure. But that's not even remotely the same thing as the naive all-or-nothing fantasy you've painted.
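
For anyone who wants to see what "the confidence interval changes but doesn't go up in smoke" looks like in practice, here's a small Python sketch. The data are simulated and the group sizes arbitrary; it illustrates the principle, it is not a reconstruction of any particular study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical outcome scores for two experiment groups.
control = rng.normal(50, 10, 25)
treated = rng.normal(55, 10, 25)

def diff_ci(a, b, alpha=0.05):
    """Welch 95% confidence interval for the difference in group means."""
    diff = b.mean() - a.mean()
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    se = np.sqrt(va + vb)
    df = se**4 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))  # Welch-Satterthwaite
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return diff - t_crit * se, diff + t_crit * se

print("all subjects:    ", diff_ci(control, treated))
print("one subject gone:", diff_ci(control, treated[1:]))  # the interval shifts a bit; it does not vanish
```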

You told me I hadn't provided an example of these things. That's a lie, because I've referred twice now to Dr. Philip Zimbardo and one of the most famous psychology experiments of all time, the Stanford prison experiment. You may recall this was formulated as an experiment to determine the effect of imposed roles on cruel behavior, and is most infamous for having developed astonishing levels of brutality before it was belatedly terminated. In his most recent book, The Lucifer Effect, he gives a retrospective of the experiment interleaved with copious mea culpa for not realizing what was going on. Naturally it's all in the original paper, which is easily available. But in Lucifer he takes you through it in detail, assuming you aren't a research psychologist.

So the first thing Zimbardo had to do was get reasonably suitable subjects. He described his recruiting efforts; most ended up being undergraduates at Stanford University, for obvious reasons. They weren't randomly sampled from the entire U.S. population, but they reasonably varied in demographics as much as college students can be expected to vary.

Then he administered a set of standard psychometric instruments to them to determine where they fell according to various variables he knew or suspected would lead to brutal behavior. He describes the variables and shows how his sample scored as a group along all these dimensions. He also compared their scores with distributions of scores from the wider population. It's obvious that Zimbardo's sample was not perfectly representative of the U.S. population in all respects. For starters there were no girls, owing to the practical limits of human-subjects testing on this particular experiment. But it doesn't largely matter because he merely needed to establish by psychometry that they were representative enough in the ways that applied to his study.

Then he randomly divided the sample into Guards and Prisoners and showed how the psychometric evaluations were distributed between the two groups. Specifically he showed that in each of those categories the distribution of the psychometric variables was so similar between the two groups that they could be considered equivalent in terms of their predisposition to violence and brutality. Not identical, of course. There was variance. But one of the things the t-test sort of analysis can give you is a comparison between two population means, knowing the confidence interval that applies. While the groups were not identical, the confidence interval was broad enough that it could contain the (unknown) theoretical mean score for all the subjects. This is what we mean by homogenization. Unless you do it, you can't independently attribute effects to the imposed cause.
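
A hedged sketch of that kind of equivalence check, again with invented numbers (the trait, the scale, and the group sizes are placeholders): assign subjects by coin toss, then ask a two-sample t-test whether the two groups could plausibly share the same mean on the measured trait.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical scores on some aggression-related psychometric scale, 24 subjects.
scores = rng.normal(20, 5, 24)

# Random assignment: a fair coin decides Guard vs. Prisoner for each subject.
coin = rng.permutation([True] * 12 + [False] * 12)
guards, prisoners = scores[coin], scores[~coin]

# Two-sample (Welch) t-test: could these two groups share the same mean?
t_stat, p = stats.ttest_ind(guards, prisoners, equal_var=False)
print(f"guard mean={guards.mean():.1f}, prisoner mean={prisoners.mean():.1f}, p={p:.2f}")
# A non-significant result supports treating the groups as equivalent on this trait.
```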

Zimbardo planned to impose different roles on the two groups, and he needed them to be otherwise as similar as possible, so that any differences he observed could be properly attributed to the imposed role, not to the emergence of some latent property that was skewed in one group versus the other. Hypothetically, if all the people with measured aggressive tendencies were somehow in the Guard group, it would be hard to chalk up a beat-down to the imposed role rather than to that pre-existing disposition. Or in contrast, if all the non-white people were in the Prisoner group, you couldn't separate the Guard-Prisoner dichotomy from the race category and say that everyone's behavior would be the same even if all the prisoners had been white.

All the things I've been talking about are in the book.

The rest of the experiment is history. Zimbardo let it run far too long, up to and past the point of inflicting severe emotional distress on the participants. He spent quite a lot of time cleaning up the mess. But what has emerged over time is that he has maintained contact -- and even professional relationships (some of them became psychologists themselves) -- with the former subjects. And it emerged that one member of the Guard group developed his own agenda that was not necessarily compatible with the goals and protocols of the experiment. For the quantitative part of the experiment Zimbardo had to decide whether to include that data.

A simpler example suffices. Suppose there is a drug trial and subjects are prohibited from drinking alcohol for the duration of the study. Now let's suppose the experimenter discovers that one subject is having his daily glass of wine at dinner, in contravention of the protocol. According to you, we would have to include his data in the final result even though he broke the experimental protocol and confounded whatever results he may have had from the drug with the result of drinking alcohol. That's what would affect the study, not booting him out. If we boot him out of the study, whichever group he was in -- placebo or drug -- has its N-value decremented by one, and his contribution to the homogenization check is removed. And for all we know, his departure might make the groups more homogeneous. This retains the integrity of the study because the remaining subjects (who are presumed to follow the protocol) are exhibiting the behavior it was desired to study.



No, that's not even remotely close to what Palmer suggests.



No. As explained above, randomization is the means we use to achieve various ends which are then not affected by culling bad data in a way that statistical analysis cannot overcome. Randomization is not an end unto itself. This is your pidgin-level knowledge of statistics and experimentation, against which you're trying to measure the behavior of the published experts. You're literally trying to tell career scientists "they should have known better," when you have already both demonstrated and admitted you don't know their science or its methods.
This Zimbardo experiment has nothing to do with my request to provide an example of a psychological test with a rejected outlier. Apparently, there was no such rejection during the Zimbardo experiment the way you described it. On the other hand, it is possible that your description of his experiment is incorrect, and there was an outlier. Is this the case?
 
No; why don't you correct the errors in your previous presentations before pontificating further?



Oh, this is hilarious. I talk about homogenization and you dismiss it as "irrelevant." Now here you are regurgitating what I said earlier and pretending it's you who's teaching it. What a fraud.



You haven't shown that your understanding of statistical analysis rises above what you might read in a software instruction manual. Sorry, you need to stop suggesting you're so much smarter than everyone else. You just aren't.



No, that's not what I wrote. I wrote that there are significance tests that use an empirically-determined baseline, and I specifically identified it as one of the non-Poisson types you referred to. You rejected the entire idea as impossible, because you did not then know about the t-test and believed that every such analysis must be against an ideal distribution. Your whole point was that no number of empirical runs could confirm which distribution should be used. This is entirely ignorant of the t-test, which always uses the t-distribution.
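
For illustration only, here's what a significance test against an empirically determined baseline can look like. All the numbers below are fabricated stand-ins, not PEAR data; the point is simply that the comparison runs against measured calibration data, and the reference distribution for the statistic is Student's t:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical REG-style data: per-run mean counts.
baseline = rng.normal(100.0, 7.1, 200)    # empirically collected calibration runs
intention = rng.normal(100.3, 7.1, 200)   # runs under the experimental condition

# Two-sample t-test: the intention runs are compared against the *measured*
# baseline, not against an assumed ideal (Poisson or otherwise). The statistic
# is referred to the t-distribution.
t_stat, p = stats.ttest_ind(intention, baseline, equal_var=False)
print(t_stat, p)
```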



Yeah, so quit doing it. You can't seem to come up with an argument that's not just accusing your critics of one thing or another. How tedious.



I already addressed this multiple times.



Oh, look, here you are again trying to "teach" stuff I already taught, after you already showed your ignorance of it. This constant face-saving is ridiculous.



No, that's not what I wrote.



Oh, calm yourself. 48 hours ago you had no idea what a t-test for significance was. Now you're frantically doing damage control.



No. Not everyone else is ignorant just because you are.



No, that would be your imagination going wild. I made no such statement.

You don’t even know the purpose of homogeneity tests, but you expect me to continue a serious discussion with you! Your misinterpretations of the t-tests and the theory of optimal stopping are so obvious that anyone with a basic knowledge of mathematical statistics would laugh at them.

The difference in our debate tactics is clear – I have backed up my presentation with the references to the textbooks and articles, but you have not provided any printed material supporting your homegrown views of statistics that have nothing to do with reality. The Zimbardo experiment doesn't count because it doesn't have an outlier; you didn't fulfill my request to provide one. I consider further debates with you under current conditions a waste of time.

If you want to continue debates with me you would have to accept my conditions. Actually, there is only one condition – you would have to back up your outlandish ideas about mathematical statistics with references to books and articles. You might not have links to some books that you want to quote, but this is fine with me as long as you provide a book’s title and a quotation from it.

If you fail to meet my condition, I will ignore your posts and will continue responding to the posts of the other opponents. I am in no mood to spend my time responding to the posts of a person who has no understanding of the basics of probability theory and mathematical statistics.

Keep in mind that debates with me are much more important for you than for me. Judging by the volume of your posts, you have a reputation here as a sophisticated and intelligent debater, while I am a stranger with a negative reputation who doesn’t have to defend it. Now you are under tremendous pressure to defend your reputation. No matter what you say, your supporters will applaud you, and you will score many points with them. But I am pretty sure that I am not your first opponent; you had many in the past. Now they see you as an intellectual weakling, and will gladly take you on in the near future. I think it would be wise on your part to accept my condition.

I still have time before my next client interview, but I want to spend it wisely by studying Palmer’s article.
 
As always, plenty of unrelated data and complete lack of knowledge of the concept of randomness. Just stop pretending that you know something about statistics and follow this link
https://en.wikipedia.org/wiki/Random_sequence

Wow.

He schooled you repeatedly and in great detail, and your response is a slightly more age-appropriate version of sticking your fingers in your ears and chanting, "LALALA! I can't hear you! You're a poopie head!"

It's like responding to a copy of a modern biology textbook about evolution with "You're just too stupid to realize GODDIDIT!" Come to think of it, that's pretty much what was done in the abysmal and ignorant anti-science book you claim to have written.
 
This Zimbardo experiment has nothing to do with my request to provide an example of a psychological test with a rejected outlier. Apparently, there was no such rejection during the Zimbardo experiment the way you described it. On the other hand, it is possible that your description of his experiment is incorrect, and there was an outlier. Is this the case?


"Buddha", before you bid farewell, can you list the hypothesis to be tested in Jahn's paper?
 
In which Buddha's Wikipedia-inspired statistical onanism is questioned

You don’t even know the purpose of homogeneity tests, but you expect me to continue a serious discussion with you!

What's your point in all of this prattle?

In the end you have one test subject in the employ of the tester who displayed what appears to be a feeble ghost of a possible ability that, due to the experimental design and the questionable baseline collection, falls well within the study's margin of error.

Endless wankery about statistics does nothing to deal with the fact that the study was crap and found nothing to suggest telekinetic powers were real.

What, exactly, are you hoping to accomplish?
 
As always. plenty of unrelated data and complete lack of knowledge of the concept of randomness. Just stop pretending that you know something about statistics and follow this link
https://en.wikipedia.org/wiki/Random_sequence

An interesting bluff, this. The idea, I suppose, is to hope that either nobody follows the link or that nobody who follows it understands it, leaving the impression that Buddha's scored a telling point.

You don’t even know the purpose of homogeneity tests, but you expect me to continue a serious discussion with you! Your misinterpretations of the t-tests and the theory of optimal stopping are so obvious that anyone with a basic knowledge of mathematical statistics would laugh at them.

The difference in our debate tactics is clear – I have backed up my presentation with the references to the textbooks and articles, but you have not provided any printed material supporting your homegrown views of statistics that have nothing to do with reality. The Zimbardo experiment doesn't count because it doesn't have an outlier; you didn't fulfill my request to provide one. I consider further debates with you under current conditions a waste of time.

If you want to continue debates with me you would have to accept my conditions. Actually, there is only one condition – you would have to back up your outlandish ideas about mathematical statistics with references to books and articles. You might not have links to some books that you want to quote, but this is fine with me as long as you provide a book’s title and a quotation from it.

If you fail to meet my condition, I will ignore your posts and will continue responding to the posts of the other opponents. I am in no mood to spend my time responding to the posts of a person who has no understanding of the basics of probability theory and mathematical statistics.

Keep in mind that debates with me are much more important for you than for me. Judging by the volume of your posts, you have a reputation here as a sophisticated and intelligent debater, while I am a stranger with a negative reputation who doesn’t have to defend it. Now you are under tremendous pressure to defend your reputation. No matter what you say, your supporters will applaud you, and you will score many points with them. But I am pretty sure that I am not your first opponent; you had many in the past. Now they see you as an intellectual weakling, and will gladly take you on in the near future. I think it would be wise on your part to accept my condition.

I still have time before my next client interview, but I want to spend it wisely by studying Palmer’s article.

And this is Trump-level stuff, a precise inversion of reality that commits the exact wrongs it accuses others of.

Dave
 
"Buddha" said:
You don’t even know the purpose of homogeneity tests, but you expect me to continue a serious discussion with you! Your misinterpretations of the t-tests and the theory of optimal stopping are so obvious that anyone with a basic knowledge of mathematical statistics would laugh at them.


"Buddha", guess what those of us having more than a basic knowledge and having (or having had) endowed professorships are laughing about?
 
"Buddha" said:
blah blah


The difference in our debate tactics is clear – I have backed up my presentation with the references to the textbooks and articles, but you have not provided any printed material supporting your homegrown views of statistics that have nothing to do with reality. The Zimbardo experiment doesn't count because it doesn't have an outlier; you didn't fulfill my request to provide one. I consider further debates with you under current conditions a waste of time.


blah blah


"Buddha", before you say your definitive goodbye, what's your "expert" (:D) opinion on this


Detection of Outliers Due to Participants’ Non-Adherence to Protocol in a Longitudinal Study of Cognitive Decline
 
Wow.

He schooled you repeatedly and in great detail, and your response is a slightly more age-appropriate version of sticking your fingers in your ears and chanting, "LALALA! I can't hear you! You're a poopie head!"

It's like responding to a copy of a modern biology textbook about evolution with "You're just too stupid to realize GODDIDIT!" Come to think of it, that's pretty much what was done in the abysmal and ignorant anti-science book you claim to have written.


and pretty much



this is the way this thread ends, not with a whimper but with a bang
 
You don’t even know the purpose of homogeneity tests...

False. I explained their purpose at length. You tried to declare them "irrelevant," then backpedaled and tried to suggest you're the one who introduced them to the discussion once you belatedly discovered their relevance.

The difference in our debate tactics is clear – I have backed up my presentation with the references to the textbooks and articles...

No, you threw out a couple of cited passages from books it's obvious you haven't read and don't understand, hoping to convince the audience that you weren't trying to play catch-up. I pointed out how your own cited sources contradict claims you've made in this debate. That's how I proved you didn't read them and didn't know them.

The Zimbardo experiment doesn't count because it doesn't have an outlier...

The Zimbardo experiment had an outlier, and all the other features of a psychology experiment I've been talking about.

If you want to continue debates with me you would have to accept my conditions.

No. You are the claimant. You do not get to dictate the terms by which your claims are tested.

Actually, there is only one condition – you would have to back up your outlandish ideas about mathematical statistics with references to books and articles.

Seeing as how I've backed them up from time to time with references to your cited sources, maybe you should just accept the fact that what you're trying to talk about in this thread is something that I and others know from deep and long experience, not something we're frantically Googling for on a daily basis.

No, here's my counter proposal. You started off this thread with the blanket assertion that no one who criticized PEAR could possibly know what they were talking about, and that you were expert enough to address all that criticism yourself. So we're going to keep testing those claims in the manner we've used so far, which seems to be quite effective.

If you fail to meet my condition, I will ignore your posts...

But that won't stop me from writing them and exposing your ignorance to all who might read this forum. By all means run away, if that's what you feel you must do. Your audience is already convinced you can't answer my posts, so they won't accept your excuses for avoiding them.

...the posts of a person who has no understanding of the basics of probability theory and mathematical statistics.

I'll leave it to the audience to determine who has the appropriate knowledge. And I'm not responding to the whining and self-important drivel that forms the rest of your post.
 
Apparently, there was no such rejection during the Zimbardo experiment the way you described it.

My, how carefully worded your answer is. You mean you're claiming to be an expert in how to design experiments and evaluate data in psychological research, and you don't know how the Stanford prison experiment turned out? It's only one of the most famous psychology experiments of all time.
 
As always, plenty of unrelated data....

Except that's not true. Twice now you've dismissed my thorough coverage of a topic with these one-liner deflections; then you come back a day or so later and try to regurgitate that same coverage, which is now somehow relevant again. Only you try to pretend you're the one teaching it, as if it were something you knew all along. Do you really think people don't see through these obvious stunts?

...and complete lack of knowledge of the concept of randomness. Just stop pretending that you know something about statistics and follow this link
https://en.wikipedia.org/wiki/Random_sequence

Yes, any beginning student is familiar with what a random sequence is. What I displayed, and what you obviously cannot understand, is how the constructs of randomness are actually used to achieve the desired ends in the statistical control of data collection and analysis, and how the randomization itself becomes moot once the ends are met. Further, in your rush to present an elementary concept as if it were some great cosmic truth, you have ignored what I said about the nature of randomness and how it applies to your assertion that "randomness" prevents any data from ever being removed from a dataset.

The sine qua non of randomness is independence. What makes a sequence truly random is the property that the value of any number in the sequence is wholly independent of any other value. Wikipedia doesn't really get into that, and that seems to have limited your understanding of the subject. If we say that no two elements in a random sequence can depend in any way upon each other, then this severely limits what we can say about relationships within and among sets suggested by any rule we invent as functions of such a sequence. That we sometimes assign meaning to those sequence values for one purpose or another does not suggest that the properties which attach to those purposes somehow trickle back to the numbers themselves and must remain faithful.

Random variables that generate such sequences stand in for quantities we do not know, or cannot observe, but about which we know some things generally. A sequence of values generated by a fair die, for example, is expected to be a random sequence governed by the rule that each of the six possible values can occur with equal probability at each roll, independent of any prior or subsequent roll. That's something we know generally about a fair die, and stands in for the complex Newtonian dynamics behavior we know actually determines the outcome. That we play various games with dice in which we assign different meaning to those outcomes does not violate the randomness of the underlying process. The random sequence serves the process of game play. That's not to say that all other elements of game play must also adhere to randomness. The rules of game play are entirely separate from the properties of Newtonian dynamics, and from the random sequence that represents them.
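
If anyone wants to see those two properties in one place, a few lines of Python will do it. This simulates nothing more than a fair die; the sample size is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)

# A fair die: each face equally likely on every roll, independent of every other roll.
rolls = rng.integers(1, 7, size=60_000)

# Each face occurs with roughly equal frequency (~1/6 each)...
print(np.bincount(rolls)[1:] / rolls.size)

# ...and knowing one roll tells you nothing about the next: the empirical
# correlation between consecutive rolls sits near zero.
print(np.corrcoef(rolls[:-1], rolls[1:])[0, 1])
```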

In the empirical sciences, a sequence produced by a random variable stands in for quantities the experimenter should not know, because if he knew them he might apply an unconscious bias and skew the results. Zimbardo tossed a coin to determine whether any given subject should be a Guard or a Prisoner. That process stood in for what he or the other experimenters might have thought about the propriety of such an assignment. All the variables that could have contributed, consciously or unconsciously, to the assignment of role were deliberately set aside. Zimbardo didn't get to say things like, "Hm, I think Harry would be a good guard," or "I think it would be interesting to see how the big Hispanic guy fares as a prisoner."

But then of course once the random sequence had done its job, the rest of the experiment was governed by the rules invented to apply to Guards and Prisoners, one of which was that Guards had to honestly pretend to be guards. Similarly, once the dice are rolled, the rules of craps determine what happens next. If the player breaks one of those rules and is dismissed from the game, it doesn't change the random process that provided the antecedent to that rule. This is where it's very important to understand independence. The assignment of role -- Guard or Prisoner -- doesn't create any sort of new dependence within the group, or new independence between the groups, that the prior randomness cares about. Tom and Dick may both have been assigned as Guards, but that doesn't mean that Tom and Dick's coin tosses are now somehow dependent. They are related only by the contrived rule that was driven by a random variable, not any property of the variable itself. Nor, if Harry is a prisoner, does this mean that any sort of dependent disjunction now exists between him and either Tom or Dick. Indeed, even if Dick gets hit by a bus and killed, his consequential withdrawal from the Guard group has bugger-all to do with the independence of the coin toss that put him there in the first place. Indeed the independence property says we can remove any arbitrary element from a random sequence and the sequence will remain random. It's vitally important that this property hold. Contrary to your assertion, randomness says we must be able to remove any arbitrary element of the sequence without the sequence collapsing into non-randomness.

The key concept there being any arbitrary element. We can't remove a number based on its value or a relationship of its value to any other value. "Remove every third element" is okay. "Remove all the even numbers" is not, because a random sequence must be able to produce an even number at equal probability with an odd one. "Remove each number that is greater in value than its predecessor" isn't okay, because that would constrain the value of every item according to its place in the sequence. "Remove the first N elements" is valid. In your rush to pontificate, you forgot that Jahn's REG used a shift register that happily discarded some of the random bits, not based on their value but upon where each bit occurred in the sequence, whatever its value. Commensurately, Zimbardo classified one of his subjects as an outlier based not upon which group he belonged to -- not upon whether the coin came up heads or tails in his case -- but upon violation of the rules of the experiment. That in no way affected the randomness of the remaining Guard group. The coin tosses that drove the rule that placed them there were still a random sequence of coin tosses, because their tosses were completely independent of the tosses that put the outlier in that group.
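
Here's the same point as a toy Python sketch, with a made-up digit sequence: removal by position leaves the distribution of the survivors alone; removal by value does not.

```python
import numpy as np

rng = np.random.default_rng(6)
seq = rng.integers(0, 10, size=30_000)  # a uniform random digit sequence

# Removing elements by *position* (every third one, the first N, the subject
# who dropped out) leaves the remaining sequence's distribution intact.
by_position = np.delete(seq, np.arange(0, seq.size, 3))
print(np.bincount(by_position, minlength=10) / by_position.size)  # still ~0.1 per digit

# Removing elements by *value* (all the even digits) does not: the survivors
# can no longer take every value with the original probabilities.
by_value = seq[seq % 2 == 1]
print(np.bincount(by_value, minlength=10) / by_value.size)  # even digits now impossible
```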

Similarly Palmer chose to disregard Operator 010 not based on whatever process selected her as a subject from the pool, but upon the conformance of her subsequent data to the expected distribution of results and upon suspicion of violation of a control protocol (i.e., the volitional variable, which seemed to defeat even Operator 010 every time). The chastisement you received for wanting to withdraw data was not because it violated randomness but because it violated homogeneity. I explained this previously. Initially, random sequences are used to achieve homogeneity. Thereafter, homogeneity is the property the experimenters endeavor to preserve.

Your knowledge of statistics is limited to elementary concepts you frantically Google for the day of, combined with your arrogant-yet-simplistic theorization for how things "must" work. And as such you can't possibly deal with an actual informed discussion of the topic, so now you're propping up all the standard excuses for why you don't have to deal with your most competent critics. Your act is nothing but bluff-and-bluster and gaslighting. Good luck finding an audience who will endure that, or even be fooled by it for very long.
 
