In my opinion Jay’s posts are irrelevant to the discussion so I ignore them for most part, but you have a different opinion.
Your judgment of irrelevance is not supported by the evidence. I have described in detail here and here how my posts are relevant, and I have supported that description with references to the literature and to your specific posts. You have not responded to those arguments. In contrast, you have provided nothing but your say-so to justify ignoring my analysis of your argument. Have you considered the possibility that the audience is also weighing the hypothesis that you are making excuses to avoid addressing challenges -- say, because you cannot understand them, or cannot rehabilitate your criticism in light of them? What evidence do you offer to persuade them otherwise?
Let me remind you of the basis of your argument against PEAR's critics:
There are several objections to this research; I am going to go over them:
1. Incorrect statistical methods were used to analyze the data.
2. The methods of analysis are correct, but the results were interpreted incorrectly
[...]
The first two objections are nonsensical, people who raised them do not know what they are talking about. As a data analyst, I use similar, although not exactly the same, methods to analyze stock market data, manufacturing data, advertisement campaigns data, etc., (I work for a consulting company)
The highlighted portion is a claim to special expertise. You are claiming to be professionally competent in the field of data analysis. You do not specifically define what that means, but you say you are confident that the methods you use are similar enough to the methods used by the authors and critics that you can claim expert understanding. We stipulate in any case that you are making a claim to expertise in both descriptive and inferential statistics.
Earlier you insinuated that your expertise was in "mathematical statistics" and that your critics at various web forums could be categorically dismissed as incompetent in that field. But then you also suggested that one had to be a "mathematician" in order to understand how your arguments refuted Palmer -- insinuating that none of your opponents could be expected to achieve that understanding, because none of them was a mathematician. This is problematic: while you claim a B.S. in applied mathematics, you concede this does not qualify you as a mathematician. Further, you claim academic credentials as an engineer, but you also claim that engineers do not learn the kind of statistics needed to understand your critique of Palmer. It is unclear, then, upon what foundation you expect your claim of expertise to rest. It remains a collection of contradictory and ill-defined categorical assertions.
This examination is critical because your argument is of the form, "I am an expert in statistics, and as an expert in statistics it is my judgment that Palmer's statistical analysis is incorrect." You have made your claim to expertise a premise in the argument. You have even gone so far as simply to declare "Palmer is not a scientist," or "Palmer is not an expert in statistics," as the sum total of your response to him on any given point. You have treated Jeffers with similar categorical denials. You will note, by the way, that the categorical claims that Palmer is unqualified have been refuted. When your argument consists of little more than your purported expert judgment, the foundation of expertise becomes practically the only available basis of refutation. Challenging it in that case is not an ad hominem argument or a personal attack. If you want the premise to stand, you must substantiate your expertise.
It is especially important in this case because we have examples of your previous arguments, which loosely followed the same form: "I am an expert in _____________, and as an expert I can say that my critics do not know what they are talking about." Those threads ended abruptly with your departure, after your critics in them demonstrated not only that they knew the subjects you were discussing, but also that they knew them better than you. Given that history, it seems prudent not to take any more of your claims to expertise at face value when they are made the premise of your argument. Hence we are testing your expertise. I can only imagine how an audience might regard such reluctance to have a premise tested.
Now you may say that your argument is not pure ipse dixit, that it is not based on simple declarations of intellectual superiority, and that it has been documented all along the way with external references -- which, by the way, you demand that your opponents also provide before you will listen to them. We covered this already; your external references fall into a number of impotent categories. First, in some cases you provide such documented presentations only after one of your opponents raises a topic, and then your presentation is little more than undirected and unneeded didactics. What meaning should an audience take away from the timing? Second, in some cases your reference merely defines or mentions a concept that appears in your argument and does not in the least support the argument you have made from it. It is as if your argument is, "See, this is a real concept, therefore what I say about it must be true." This is what happened when you tried to defend your home-grown concept of randomness (i.e., that randomness precludes excising points from a data set). You merely linked to the Wikipedia article and ignored the argument your opponents drew from that same information to show that data must be excisable by the very nature of randomness (see the sketch below). In short, you cannot show that you understand the references you link to. That leads us to a third category: in some cases you cherry-pick items from a source while ignoring that the context firmly refutes what you plan to use that reference for. This was the case when you tried to document the proper use of the t-test. And finally, when you run across something like your discovery of the power transform, you try to interject it into aspects of the discussion where it clearly does not belong.
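To make the randomness point concrete, here is a minimal sketch -- my own illustration, not anything drawn from the papers under discussion -- showing that excising points from an i.i.d. sample, so long as the selection is blind to the values, leaves data that are statistically indistinguishable from the original process. That is the sense in which excisability follows from the very nature of randomness:

```python
import random

# Illustration only: excising points from an i.i.d. random sample,
# with the selection made WITHOUT reference to the values, leaves
# the remaining data statistically identical to the original process.

random.seed(42)
sample = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Excise roughly 20% of the points, chosen blindly at random.
kept = [x for x in sample if random.random() > 0.20]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

print(f"full sample:     n={len(sample):6d}  mean={mean(sample):+.4f}  var={var(sample):.4f}")
print(f"after excision:  n={len(kept):6d}  mean={mean(kept):+.4f}  var={var(kept):.4f}")
# Both lines report mean ~ 0 and variance ~ 1. (Removing points
# BECAUSE of their values -- outlier treatment -- is a separate
# question, and one the cited literature addresses directly.)
```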
One of the questions an audience will want to answer is whether this is the way a real expert uses external citations. And that in turn bears on the propriety of a demand that others use them in the same way. As I mentioned before, pseudo-science writing often adds copious references to persuade the reader -- who typically never follows them -- that the work has been meticulously researched and documented. When we do follow up on the references, we find (as we have here) that they are not what the principal work purports them to be. The insinuation behind your request -- i.e., that an argument is not valid or rigorous unless it relies on external documentation -- is simplistic. You, in turn, seem to be committing the converse error: creating the impression that the apparatus of external documentation by itself conveys rigor.
In connection with the above, you might also say that you've provided plenty of deductive reasoning and mathematical proofs to support your position, and that these should stand on their own regardless of whether your claims to expertise are well-founded. But as we discovered in the proof-of-God thread, you are not especially proficient in the propositional logic required to construct cogent proofs. You confused inductive proof with deductive proof, for example. You committed several other logical errors there, and you are committing a few here too. Here you seem to consider it deductively conclusive that Jeffers' double-slit experiment is invalid because a Fraunhofer diffraction pattern is observed not to be bell-shaped. This neglects your misconception of the premise: the dependent variable in the double-slit experiment is not the diffraction pattern itself. Along the same lines you've tried to dismissively refute the mathematics of others. The hidden premise in those arguments is that you are competent enough in your interpretation of the math to defensibly correct others. We've shown in important cases here that you aren't -- that you lack knowledge of relevant facts that would be appropriate to making the specific judgment. Thus in form, such a claim is merely a further claim to expertise with extra steps.
Given this overall state of affairs, you need to consider that your audience is also weighing the hypothesis that you refuse to discuss my posts with me because doing so might reveal that statistics is yet another topic on which you have claimed expertise you could not demonstrate. By refusing, on some pretext, to engage in meaningful criticism, a claimant can continue to enjoy some benefit of the doubt regarding his actual expertise, if doubt is what he needs in order to support that point. But he cannot insist that the audience believe his pretext.
...this is a matter of personal preference.
No. Regarding the dependent variables in the two Jeffers papers, you are simply factually wrong.
You wrongly believed that a single-slit Fraunhofer diffraction pattern formed a bell-shaped normal probability distribution that was then used as the dependent variable for the analysis in the papers. You did not know that a single-slit Fraunhofer diffraction pattern is multi-nodal. (A multi-nodal distribution cannot be modeled or approximated with Gaussian or Poisson processes.) You seem to have wrongly assumed this from a connection you made back when you were explaining Jahn. You correctly noted that the Poisson model governs the number of flowing electrons that pass a point in unit time. And you correctly noted that a Bernoulli process can, in the limit, be approximated by a Poisson distribution where λ = np. But you seem to have ignored everything that passed between those two invocations of Poisson in the actual study, and therefore wrongly connected them in a way that now precludes you from understanding Jeffers. This is what I explain in parts 1 and 4 of my series, which -- as you can see -- is clearly not irrelevant to your arguments.
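For the record, the limit you invoked is the standard one, and it is easy to verify. Here is a minimal sketch -- my own, for illustration only -- comparing a binomial process with its Poisson approximation at λ = np:

```python
from math import comb, exp, factorial

# Illustration only: for large n and small p, Binomial(n, p) is well
# approximated by Poisson(lam) with lam = n * p.

n, p = 10_000, 0.0005   # many trials, rare successes
lam = n * p             # lam = np = 5.0

def binom_pmf(k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k):
    return exp(-lam) * lam**k / factorial(k)

print(" k   binomial    poisson")
for k in range(11):
    print(f"{k:2d}   {binom_pmf(k):.6f}   {poisson_pmf(k):.6f}")
# The columns agree to about four decimal places, which is the whole
# content of the lam = np limit. Note that nothing here says the
# Poisson model applies to any particular stage of the Jeffers
# analysis -- that depends on what is actually being counted.
```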
This is the second time you have tried to sweep egregious errors of fact under the carpet and ask that we simply agree to disagree. This is not one of those cases where there is no clearly right answer. It has been shown, using quotations from the papers themselves, that you have misidentified the dependent variables in the Jeffers studies. The author clearly states what they are and clearly states how he arrived at them, and those statements bear absolutely no resemblance to your representations. When your opponents are clearly in the right and you refuse to discuss the matter, there is no impetus for them to adopt a softer or more middling position.
The audience is smart, and I think vast majority of the members understand my posts very well, and see that I provide the data relevant to the discussion, and reject the one that has nothing to do with it.
These challenges do not become irrelevant simply because you say so. They do not go away simply because you wish them to. The audience indeed sees you ignore posts, and they can see the reasons you give for doing it. But the four-part series you're trying so hard to make go away was written with the audience in mind, to illustrate in terms they would understand exactly how you have erred. The one thing you cannot do is stop an audience from reading something that is put before them, understanding it, and seeing you assiduously avoid it.
The audience has seen you make such assertions before. They've seen you claim my posts are irrelevant, only to raise the same topic later when it suited you. They've seen you claim things were not done for which there was ample evidence, and seen you reluctantly retract those claims. In other words, you've given the audience plenty of reasons not to believe these edicts of yours. How much evidentiary merit do you think an audience will give repeated declarations that posts specifically tailored to refute your arguments are somehow irrelevant to them?
This doesn’t mean that everyone agrees with me, but any intelligent member sees that I am not asking them to waste their time on evaluation of extraneous and useless information.
This verges on saying that if someone disagrees with your opinion, they are therefore not intelligent.
Let's keep in mind what you are asking your audience to do. After previous, fully rebutted claims to expertise, you're asking them -- once more -- to accept you as an expert in a field, and to accept your judgment -- according to that expertise -- as evidence. When the basis of your judgment is challenged, you are asking them to set aside those challenges and keep following you on a journey of personal appraisal as if nothing happened. How does that achieve the goal of testing whether psychokinesis is a real phenomenon?
The smart ones always win!
But it's also possible that those who convince themselves they won further convince themselves they did so because they're smart, or beautiful, or well-liked, or possessed of any number of other a priori qualities. Winning, in the skeptical sense, consists of arriving at the most parsimonious explanation of the facts as they are constituted from time to time. That is what we are ostensibly doing in this thread -- determining whether the facts as presented in scientific research support a belief in the reality of psychokinesis. A victory along those lines doesn't care about intellectual superiority or prodigious debate skills, or any of the other motives that seem to be buzzing about this thread like a cloud of angry insects. If a claimant's goal is merely to win, then the sure loser will be the truth.
However, I will respond to a remark by Jay – he wrote something about establishing a baseline in Jeffers’ study. This is not what I meant – I meant that the knowledge of a test’s purpose affects the test results in an undesirable way.
Please link to the statements I make rather than vaguely recalling them or paraphrasing them to your liking. I won't respond to vague claims that I "wrote something about establishing a baseline." I've written many things about many subjects in this thread, with as much precision as my circumstances afford. I would consider it a kindness if you responded to those instead of what might be construed as straw men.
Here's what you wrote:
This is an incorrect calibration procedure because the subjects were told the purpose of the experiment before the calibration, if I understood the procedure correctly (Jeffers didn’t say exactly what the subjects were told about the experiment before it started). It is incorrect because the knowledge affects the subject’s mental state and introduces a bias.
Keep in mind this is in reference to the single-slit experiment. First, the calibration runs are not the same as the baseline runs. So no, you do not understand the procedure correctly. "The equipment is allowed to run for long periods unattended; typically 10 hours overnight, generating 40,000 data sets. The calibration data recorded from these long runs has been analyzed in the following way in order to estimate realistic values for the smallest offset we can unambiguously recover from our experimental data. These data are not used to decide whether our human operators have influenced the equipment." [Jeffers and Sloan, op. cit. 1992, p. 345]
The baseline ("inactive") data were collected interleaved with test data, as I reported earlier. (I will note that you once claimed this would invalidate the results. Have you relaxed your objection to that?) There is no indication the subjects were told a baseline data set was being taken, or even what a baseline data set is. The paper clearly states that the display does not indicate what is happening aside from giving the subject instructions; the subject sees the machine work only when it is collecting active data. "Before the start of each run, 5 data sets are taken while a prompt is displayed stating the direction of effort desired for the upcoming bin in order to give the subject 5 seconds to get ready." [Ibid., p. 343] A schematic sketch of this interleaving appears below.
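To make the structure concrete, here is a schematic sketch of an interleaved collection loop, reconstructed loosely from the quoted passage. The names, counts, and control flow are my own hypothetical illustration, not Jeffers' code or his exact protocol; only the idea that inactive data are taken silently between active bins comes from the paper:

```python
import random

# Hypothetical reconstruction, for illustration only. Baseline
# ("inactive") sets are recorded while the prompt is displayed, with
# no indication to the subject; the subject sees the machine working
# only during active data collection.

def collect_data_set():
    # Stand-in for whatever quantity the apparatus records per set.
    return random.gauss(0.0, 1.0)

def run(direction):
    baseline = [collect_data_set() for _ in range(5)]  # during prompt
    active = [collect_data_set() for _ in range(5)]    # subject's effort
    return direction, baseline, active

# Baseline data end up interleaved with test data across the session.
session = [run(d) for d in ("high", "low", "high", "low")]
for direction, baseline, active in session:
    print(direction, len(baseline), len(active))
```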
You happily admit that you have no idea exactly what the subjects were told. Yet somehow you are able to determine -- without any study or analysis -- that it biases the result. In science, bias is measured, not guessed at. You are attempting to draw a scientific conclusion without having done the science. The effect of volition on PK ability has been speculated about, but not studied.
The first hitch in your plan to discredit Jeffers is that informed consent is a requirement of human subjects research in the United States. 45 CFR §46.116: "(a) Basic elements of informed consent. Except as provided in paragraph (c) or (d) of this section, in seeking informed consent the following information shall be provided to each subject: (1) A statement that the study involves research, *an explanation of the purposes of the research* and the expected duration of the subject's participation, a description of the procedures to be followed, and identification of any procedures which are experimental;" (emphasis added)
Further, Jahn followed essentially the same procedure. Which is to say, he was certainly beholden to the same federal regulations as Jeffers regarding what he had to tell his subjects about the research. And he at least had to tell the subjects what he wanted them to do -- try to affect the machine in a given way without touching it. And he took his baseline data interleaved with experimental data in very much the same way as Jeffers. Jeffers is no worse off in this respect than Jahn.
More importantly, none of these concerns has the slightest thing to do with "baselines" or calibration. As I said before, it is as if you are groping for some nefarious connection between disparate elements of the paper, which you now choose to discuss only in vague terms. Perhaps word salad offered under color of expertise has the power to persuade a lay audience, but it is not objectively valid criticism here. It is valid to wonder about potential causes of bias. But it is not valid to assume that bias exists and has the effect you desire, and it is not valid to connect it to calibration or baselines simply because those concepts are mentioned in the paper.
(Incidentally, I neglected to respond to this when it was posted several days ago:
Rather than analyzing conditions of all experiments done by his scientific adversaries, I choose the conditions of Jahn’s experiment to show that Jeffers did not reproduce them correctly.
This is not the first time you've tried to misrepresent Jeffers in this way, and not the first time your opponents have corrected you. Jeffers stated plainly to Alcock (as your source Psi Wars indicates) that he did not intend to reproduce Jahn's experiment exactly. He said as much also in the summary section of the double-slit paper. And if the contention was that the Jahn experimental protocol was poorly designed, what was to be gained by simply repeating an unprobative protocol? Jeffers -- with assistance from Alcock, and from Dobyns and Ibison of PEAR -- strove to create more defensible protocols, and PEAR was happy to have it.)
Again, Jay’s references to Palmer’s works are irrelevant because they do not provide any data of the treatment of outliers, which was my request.
No. The obvious straw-man argument aside, nothing in the four-part series that challenges the foundation of your argument has anything to do with Palmer's treatment of outliers. Instead the series deals with basic statistical modeling, which -- aside from the Operator 010 issue -- has been the bulwark of your criticism of Palmer and Jeffers. The last two parts especially deal with your error in attributing the dependent variable in Jeffers' two papers.
Regarding the treatment of outliers, your request to produce citations from the literature was satisfied. Contrary to your promise, you did not read them; you merely cast admittedly uninformed aspersions against them and against me for referring to them.
One more thing – Jay wrote that single- and double-slit distributions in Jeffers’s experiments are not statistical variables, as he calls them.
First, I never called them that.
Second, I never called them "distributions" either. While they are distributions in the strictest sense, in which any ad hoc mapping of outcome to frequency is a distribution, the temptation is to conflate them with the parametrically defined distributions we've been considering, such as the Gaussian, binomial, or Poisson. More to the point, you are the one who wrongly thinks the single-slit diffraction pattern -- which you wrongly claim to be simply bell-shaped -- represents one of our parameterizable bell-shaped probability or frequency distributions. This is just as wrong as it can be, not merely a matter of differing opinion. Therefore, to avoid any such confusion, I have scrupulously referred to the product of the Fraunhofer diffraction model as either a "diffraction pattern" or an "interference pattern," the latter chiefly in regard to the double-slit model.
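For anyone who wants to check the shape claim directly, here is a minimal sketch -- my own illustration, with hypothetical slit geometry -- of the standard single-slit Fraunhofer intensity, I(θ) = I₀(sin β / β)² with β = (πa/λ) sin θ. The pattern is multi-nodal: a central lobe flanked by secondary lobes separated by exact zeros, not a single bell curve:

```python
import math

# Illustration only: single-slit Fraunhofer intensity
#   I(theta) = I0 * (sin(beta) / beta) ** 2
#   beta     = (pi * a / lam) * sin(theta)
# Slit width and wavelength are assumed round numbers, not values
# taken from either Jeffers paper.

a = 10e-6      # slit width: 10 micrometers (assumed)
lam = 500e-9   # wavelength: 500 nm (assumed)

def intensity(theta):
    beta = math.pi * a / lam * math.sin(theta)
    if beta == 0.0:
        return 1.0  # limit of (sin x / x)**2 as x -> 0
    return (math.sin(beta) / beta) ** 2

# Crude text plot over +/- 0.2 rad: the zeros and the secondary
# maxima flanking the central lobe make the multi-nodal shape plain.
for i in range(-40, 41):
    theta = i * 0.005
    print(f"{theta:+.3f} {'#' * round(60 * intensity(theta))}")
```

A bell curve has no zeros and no secondary maxima, which is why fitting the full pattern with a Gaussian (or Poisson) shape fails on its face.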
Another thing they aren't is the dependent variable in either of the Jeffers papers. Your argument suggests (and still does, below) that you believe they were. This was okay with you in the single-slit case where you paid attention only to the central node of the diffraction pattern and you could compare pictures to "confirm" that it was bell-shaped. Your consternation arose only when you couldn't make the double-slit diffraction pattern fit your incorrect assumption of Jeffers' statistical model, and then tried to make that Jeffers' fault.
If this is true, Jeffers did hell of a lot of useless work that is not needed for his experiments.
Once again you're in the position of trying to figure out why a qualified practitioner would behave in the inexplicable way you think he did. You don't seem seriously to consider that the answer to the dilemma might be that what you think he did, and why, is not what he really did, or why. You can't or won't correctly name the dependent variable in either of these experiments, despite your claim to be an expert in data analysis. Either way, this makes it hard for you to argue that you correctly understand Jeffers' papers. Understanding the material is, obviously, a necessary prerequisite to criticizing it meaningfully.
If you had followed the third and fourth installments in my series, it would have explained what work Jeffers did and why, and what the dependent variable ended up being in his single-slit and double-slit experiments. Others have already figured it out just by reading the papers themselves. Your wonderment here is bumping up hard against your claims that what I wrote is irrelevant. Not only is it relevant, it clears up the dilemma you're trying to pose today.