Is the Telekinesis Real? (Moderated)

Buddha, this is an example of how to accept and respond to criticism in a civil debate. Please follow Crossbow's example when your errors are pointed out to you.

Some chance! This is the guy who thinks he knows better than every accepted theologian, biologist, physicist, etc. in history; why would he admit to error?

You cast pearls before swine JayUtah, but please accept a short but heartfelt round of applause from the cheap seats for your patience and dedication to reality.
 
It is becoming increasingly apparent that this is because he either fails to understand it, or simply ignores it.

There's no question he doesn't understand it. Whether we're talking about control systems, basic descriptive statistics, or the methodologies of psychological experimentation, he stumbles over foundational concepts.

The broad phenomenon in fringe argumentation is that claimants will frequently try to dumb down the problem to fit the understanding they already have. This can lead to easily-discerned oversimplifications. But it can also lead to baffling and amusing exercises of the form, "Well, X is just like Y and I know Y, so because of these reasons pertaining to Y, I can make the following claims regarding X." Buddha is desperately trying to make psychology research look like one of the things he knows about -- or more accurately, one of the things he thinks he can bluff about without detection.

I brought up some of the ways experimenters employ methods in descriptive statistics to control for factors from the "messy" real-world way human subjects are obtained and the "messy" real-world ways things about them are measured. Buddha doesn't understand them, can't refute them, and so therefore they are "irrelevant." They don't fit what he already knows about the subject, so he changes the real world to fit his preconception. This is what he did with Dr. Jeffers, who addressed the suspicious inter-subject phenomena in the baselines. In order to pick apart Jeffers' analysis one has to know about the underlying statistics and how experimental scientists use them to confirm the integrity of their data. Again, Buddha doesn't understand them, can't refute them, and so pretends for a while that the Jeffers analysis doesn't exist before finally declaring it too to be "irrelevant."

As I mentioned above, Buddha is trying to foist the notion that psychology research and control-systems design must follow the same rules. That's not even a tenuous connection. But it serves his purpose by changing the subject to one he may feel more comfortable discussing. As we've seen several times, Buddha recovers from error by pontificating on some irrelevant subject, ostensibly to assure us readers that he really is as intelligent and well-informed as he needs the world to believe he is. When he is compelled to discuss psychology research, he can't seem to escape his own wild fantasies -- empirical control must be some kind of machine, impossible to detect deception except with some kind of truth serum, etc. If he can't conceive of how it's done, then it must be impossible. Fitting the problem to his knowledge rather than expanding his knowledge to accommodate the problem.

And then incidentally, we see how he misrepresented control systems. If we argue that control systems have nothing to do with psychology research and the subsequent statistical analysis, then by rights we should ignore what we say is irrelevant. But like a good lawyer would do when writing a brief, I've offered a few lines of reasoning to apply in the alternative. If, hypothetically, one wants to argue that psychology is indeed similar to control systems, then one still has to get the control-system part of the argument right.

Buddha doesn't. Or rather, he misrepresents the field to describe only those examples that contradict how he believes Dr. Palmer is looking at the PEAR data. I provided other examples and described how they work as congruently to Palmer as can be expected from such an inapt comparison. At best it shows the depth of Buddha's errors in his defense of PEAR. There is very little of anything he talks about on any subject that he manages to get right. But what I find most amusing is the tortured saving of face he attempts when confronted with information from his own professed field that contradicts him. He doesn't even discuss whether the information is right, or how it affects his argument. He turns immediately to casting aspersions on the person who provided it, insinuating that I'm not qualified to have that knowledge and that it must have been such an arduous chore for me to put it together. He isn't concerned at all about the argument; he only cares that his status as Alpha Brain remains intact, approachable only by extreme effort from his critics.

That's what cements this and all the other threads he's started here as a fairly predictable exercise in ego reinforcement.
 
The broad phenomenon in fringe argumentation is that claimants will frequently try to dumb down the problem to fit the understanding they already have. This can lead to easily-discerned oversimplifications. But it can also lead to baffling and amusing exercises of the form, "Well, X is just like Y and I know Y, so because of these reasons pertaining to Y, I can make the following claims regarding X."

Is it just me, or does this approach invariably demonstrate that the claimant doesn't even know about Y?
 
“A more uniform distribution of scoring across subjects is suggested by analyses using the subject as the unit. A mean run score on the experimental runs was computed for each subject by reversing the direction of the PK- scores and taking the average of the PK+ and PK- scores, weighted by the number of runs in each condition. The mean of these scores was 100.03 which is significantly above chance, although barely so (t[21] = 1.74, p < .05, one-tailed). However, when the experimental scores are contrasted to the baseline scores using a dependent t-test, the result falls just short of significance (t[21] = 1.67).” -- Palmer, page 119

This is not a baseline, contrary to what Palmer thinks; he doesn’t have a clear idea of what the baseline is. A baseline is determined before the start of a project, not during it. For example, see https://ec.europa.eu/eurostat/statistics-explained/index.php/Glossary:Baseline_study

When it is possible, the scientists use theoretical considerations to establish a baseline; there is a good reason for that, which I will explain later. When it is not possible, a baseline is based on the data available before the beginning of a project.

Fluctuations of electrons from the surface of a metal form a Poisson distribution, as the theory shows. In the Princeton study the baseline is 0.5 (actually, this is a Bernoulli trials process, with the Poisson distribution as the limiting case).

Except for the electron emission part of the equipment, it is possible that other equipment parts introduce bias, which may result in a non-Poisson process. To rule out this possibility, the researchers run the device without the subjects being tested, collect the results and use certain statistical methods to determine if results form a Poisson distribution.

Let’s say the results do not form a Poisson distribution. In this case the team follow well-known guidelines:

1. They check if the equipment is assembled according to the manufacturer’s instructions.
2. They make sure that there are no unwanted feedback loops between the equipment and external devices (in this case the external device is the recorder).
3. They make sure that the pressure, temperature, electric current, etc., are within the allowable limits.
4. They shield the equipment from external electromagnetic fields, solar radiation, etc.

If a theory is correct, these measures guarantee that the scientists are dealing with a Poisson process.

When it is possible, the scientists do not base the baseline on empirical data, for several reasons:

1. If there are no theoretical considerations, scientists would not know what kind of process they are dealing with, which makes establishing a baseline extremely difficult. I think I should elaborate a bit on this topic. Let’s say that the results of an experiment do not form a Poisson distribution. There are non-Poisson processes as well with their own rules of choosing a baseline. Without knowing which one of them is present in a particular case, you won’t be able to choose a baseline. Of course, you have the option of running all the available goodness-of-fit tests to identify a distribution, but they could all come up empty. At another extreme, your empirical data might fit more than one distribution, which would make the choice impossible.

2. It would require an infinite number of runs to determine the nature of a process. Take, for example, the coin-toss process. The percentage of tails fluctuates around 50% but it is very seldom exactly 50%. You have to postulate that, if the coin is unbiased, the probability of either outcome is 0.5. You take certain precautions to make sure that the coin is not biased, but this is the best you could do.
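To illustrate the point about fluctuation, here is a throwaway Python sketch; the run lengths and the seed are arbitrary and chosen purely for illustration:

```python
import random

random.seed(1)  # arbitrary seed so the illustration is repeatable

# Toss a fair coin in progressively longer runs and report the observed
# fraction of tails: it hovers around 0.5 but is almost never exactly 0.5.
for n in (10, 100, 1_000, 10_000, 100_000):
    tails = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>7} tosses: {tails / n:.4f} tails")
```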

This post is not finished, but I have to go back to work. I’ll be back on Tuesday. Happy Labor Day!
 

tl;dr -- Buddha doesn't understand the statistics PEAR used to analyze their findings. He has conflated the incidental means of validating the baseline with the actual tests for significance against the baseline. Based on that misunderstanding, he accuses Palmer of erring when he recomputed the significance. I explain how the t-test for significance works and provide an example to illustrate what Dr. Jeffers found suspicious about PEAR's use of it.

This is not a baseline, contrary to what Palmer thinks...

Yes it is, contrary to what you think.

[H]e doesn’t have a clear idea of what the baseline is.

Yes he does, in the context of PEAR's research which intended -- correctly so -- to use the t-test for significance. Dr. Palmer explicitly notes that PEAR's decision to use the t-test instead of the Z-test improves over Jahn's predecessor Schmidt in studying the PK effect. I've mentioned this several times, but you never commented on it. I'm going to explain why it's an improvement, why PEAR was right to use it, why Palmer was right to endorse its use, and why you don't know what you're talking about.

A baseline is determined before the start of a project, not during it.

No.

If your intent is to use the t-test for significance, then baseline data must be collected empirically. It can't be inferred from theory. It can only be done after the project design, apparatus, and protocols are in place. Where the baseline calibration factors are all static given the above, all the empirical baseline data may be collected prior to any experimental trials -- or even afterwards, as long as the baseline collection is independent of the experimental trials. But if instead the calibration factors include environmental factors that cannot be controlled for except at the moment of trial, then it would be a mistake to compare baseline data collected in one environment at the beginning of the project to experimental data collected in a different environment as the project proceeds.

It is up to the judgment of the experimenter to know which factors apply. In this case Dr. Jahn, an experienced engineer, properly understood that the REG apparatus was sensitive to several environmental factors, only some of which he could control for explicitly. Hence the protocol properly required calibration runs at the time of trial. This is so that the collected data sets would be reasonably assured to be independent in only one variable.

When it is not possible, a baseline is based on the data available before the beginning of a project.

No.

There is no magical rule that says that all baseline data must be collected prior to any experimental data, and absolutely no rule that says calibration runs may not interleave with experimental runs. You're imagining rules for the experimental sciences that simply aren't true. We know from your prior threads that you have no expertise or experience in the experimental sciences, so you are not a very good authority on how experiments are actually carried out. We further know that you will pretend to have expertise you don't have, and that your prior arguments depart from "rules" you invent from that pretended expertise and then try to hold the real world to.

Fluctuations of electrons from the surface of a metal form a Poisson distribution, as the theory shows.

Yes, in theory. The REG design is based on a physical phenomenon known to be governed principally by a Poisson distribution. That does not mean the underlying expectation transfers unaffected through the apparatus from theoretical basis to observable outcome. In the ideal configuration of the apparatus, and under ideal conditions, the outcome is intended to conform to a Poisson distribution to an acceptable amount of error.

Except for the electron emission part of the equipment, it is possible that other equipment parts introduce bias...

Not just possible, known to confound. We'll come back to this.

To rule out this possibility, the researchers run the device without the subjects being tested, collect the results and use certain statistical methods to determine if results form a Poisson distribution.

The results will never "form" a Poisson distribution. The results will only ever approximate a Poisson distribution to within a certain error.

Let’s say the results do not form a Poisson distribution.

Specifically, if the machine is operating properly, a Z-test for significance applied to the calibration run will produce a p-value comfortably above 0.05. If the p-value falls below that threshold, it means some confound in the REG is producing a statistically significant deviation from the expected distribution.

But you misunderstand why this is of concern. You wrongly think it's because it is the goal of the experimenters to compare the experimental results to the Poisson distribution. Instead, an errant result in an apparatus carefully designed and adjusted to approximate a Poisson distribution as closely as possible indicates an apparatus that is clearly out of order. This in turn indicates an unplanned condition within the experiment, one that cannot later be assumed to have left the experimental data unconfounded in some unknown qualitative way. The Z-test for conformance to the Poisson distribution merely confirms that the machine is working as intended, not that the machine is working so well that the Poisson distribution can be substituted as a suitable baseline.
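To make that concrete, here is a minimal sketch of such a calibration check (Python; the trial counts, scoring, and seed are invented for illustration and are not PEAR's protocol). The test asks only whether a calibration run is consistent with the expected binomial behaviour: a large p-value means no detectable departure, a small one flags a confound worth chasing down.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical calibration run: 200 trials of 200 binary samples each,
# with theoretical hit probability p = 0.5 (all values illustrative).
p0, trials, samples = 0.5, 200, 200
run = rng.binomial(samples, p0, size=trials)   # hits per trial

# Z-test of the observed mean hit count against the theoretical mean,
# using the theoretical binomial standard deviation.
mean_theory = samples * p0
sd_theory = np.sqrt(samples * p0 * (1 - p0))
z = (run.mean() - mean_theory) / (sd_theory / np.sqrt(trials))
p_value = 2 * stats.norm.sf(abs(z))            # two-tailed

print(f"z = {z:+.2f}, p = {p_value:.3f}")
print("calibration consistent with expectation" if p_value > 0.05
      else "possible confound: investigate the apparatus")
```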

In this case the team follow well-known guidelines...

Well-known, but known not to be exhaustive. This is the part you're missing. Yes, the operator of an REG (or any other apparatus) will have a predetermined checklist to regulate the known confounds with the hope of reducing measured calibration error to below significance. That doesn't guarantee he will succeed in removing all error such that he can set aside measurement in favor of theory.

If a theory is correct, these measures guarantee that the scientists are dealing with a Poisson process.

No.

Certainly a conscientious team will look for sources of error. But they know they cannot do so exhaustively, and that they will never drive the residual calibration error to zero. Nor is it possible to. They will only reduce it to a level the calibration Z-test shows is acceptably small for their purposes. "Acceptably small" does not mean zero. It merely means they have confidence that the machine is working as expected. They know from the start that they are dealing with a Poisson process. The calibration merely ensures that the Poisson effect dominates the machine's operation.

If they were to use the Z-test to compare the experimental data to the idealized Poisson distribution, the error remaining after calibration would still be a factor, yet that method would have quietly set it aside.

Let's say the calibration runs produce a p-value in the Z-test of 0.055. That's enough to say the machine is operating within tolerance. But the error still exists as a non-zero quantity. Using the Z-test combines that error with any variance in the experimental results such that they cannot be separated. When the expected variance in your experimental results is very small, this becomes a concern.

Hence the t-test for significance, which relaxes the constraint that the expected data conform to any theoretical formulation of central tendency. The data may incidentally conform, but that's not a factor in the significance test.
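As a purely illustrative sketch of that difference (Python; the means, spreads, and seed below are made up and are not PEAR's data), the Z-test scores the experimental runs against the theoretical mean of 100, so any residual machine bias shared with the baseline gets folded into the apparent "effect." The t-test scores them against an empirically collected baseline, so a bias common to both sets tends to cancel.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated mean run scores; theory says 100, but the machine carries a
# small residual bias (~+0.25) shared by baseline and experimental runs.
baseline = rng.normal(loc=100.25, scale=0.3, size=23)      # calibration runs
experimental = rng.normal(loc=100.32, scale=0.3, size=23)  # subject runs

# Z-test against the theoretical expectation of 100: the shared bias
# contributes to the apparent effect.
z = (experimental.mean() - 100.0) / (experimental.std(ddof=1) / np.sqrt(len(experimental)))
p_z = 2 * stats.norm.sf(abs(z))

# t-test against the empirical baseline: the shared bias largely cancels.
t_stat, p_t = stats.ttest_ind(experimental, baseline, equal_var=False)

print(f"Z vs theory:    z = {z:+.2f}, p = {p_z:.4f}")
print(f"t vs baseline:  t = {t_stat:+.2f}, p = {p_t:.4f}")
```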

There are non-Poisson processes as well with their own rules of choosing a baseline.

Yes, which is why the other tests for significance besides the Z-test exist. In the t-test, the baseline is determined, to an acceptable degrees-of-freedom extent, by some number of empirical runs in which the test variable is not varied. The protocol for running the calibration is determined by known factors of the test, and looks at how the expected or known confounds are thought to vary. Jahn et al. expected the confounds to vary mostly by factors that would exhibit themselves only at the time of trial, and could only be partially controlled for by machine adjustment. Hence calibration runs interleaved with trial runs.

"Better than chance" in these contexts doesn't mean varying from the Poisson distribution. It means varying from the behavior that would be expected were not some influence applied. Whatever that behavior would have been is completely up for grabs. You don't have to be able to fit it to a classic distribution. You only have to be able to measure it reliably.

Without knowing which one of them is present in a particular case, you won’t be able to choose a baseline.

Which is why a test was developed that determines a baseline empirically, without the presumption that it would conform to some theoretical distribution. If you aren't sure which theoretical distribution is supposed to fit, or you know that no theoretical distribution will fit because of the nature of the process, then descriptive statistics provides a method for determining whether some variable in the process has a significant effect by comparing it against an empirically-determined baseline. The limitations of baselines determined empirically translate to degrees of freedom in the comparison, but do not invalidate it entirely. This is Descriptive Stats 101, Buddha. The fact that you can't grasp this simple, well-known fact in the field says volumes about your pretense to expertise.

At another extreme, your empirical data might fit more than one distribution, which would make the choice impossible.

The t-test requires no choice -- it always uses the t-distribution. You fundamentally don't understand what it is, why it's used, or how it achieves its results.

It would require an infinite number of runs to determine the nature of a process.

No.

This is comically naive, Buddha. You're basically arguing that the t-test itself is invalid, when it is actually one of the best-known standard measurements of significance.

No, the t-test does not require an "infinite number of runs" to establish a usable baseline. The confidence in the baseline is determined by the distribution of means in the calibration runs. The standard deviation of that metric determines the degrees of freedom, which is the major parameter to the t-distribution. The degrees-of-freedom flexibility in the t-distribution is meant to compensate for uncertainty in the standard deviation of the distribution of means in the calibration runs.
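One concrete way the spread of the calibration means can feed into the degrees of freedom is Welch's version of the two-sample t-test, where the effective degrees of freedom come from the Welch-Satterthwaite approximation. To be clear, the choice of Welch's formula here is my own assumption for illustration; nothing above says which t-test variant PEAR or Palmer actually used, and the standard deviations below are invented.

```python
def welch_satterthwaite_df(s1, n1, s2, n2):
    """Effective degrees of freedom for Welch's two-sample t-test."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    return (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

n = 23  # sample size mentioned in this thread
# A tight, consistent baseline versus a noisier one, compared against an
# experimental set with unit spread (all numbers purely illustrative).
for s_baseline in (0.1, 0.5, 1.0):
    df = welch_satterthwaite_df(s_baseline, n, 1.0, n)
    print(f"baseline sd = {s_baseline:4.1f}  ->  effective df = {df:5.1f}")
```

With a very tight baseline the effective degrees of freedom drop toward n - 1; with comparable spreads they approach 2(n - 1).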

You don't compare the calibration runs to some idealized distribution. You compare them to each other. The central tendency of that distribution of means measures the consistency of the calibration runs from trial to trial. If the calibration runs are very consistent, only a few of them are needed. If they are not consistent -- i.e., the standard deviation of the distribution of means is large -- then many more runs will be required to establish a true central tendency.

But once you know the degrees of freedom that govern how much the t-distribution can morph to accommodate a different distribution, you know whether you have a suitably tight baseline.

Let's say you do twenty calibration runs, and for all of them the Z-test against the Poisson distribution produces a p-value in the range 0.055 to 0.060. That's approaching significance, but the p < 0.05 threshold may be sacrosanct in your field. So you're good to go. Your confounds are just below the level of significance when compared to Poisson.

But instead we might find that the distribution of means in the calibration runs is extremely narrow. That is, the machine might be on the hairy edge of accurately approximating the Poisson distribution, but it could be very solidly within the realm of repeating its performance accurately every time. This is why the t-test is suitable for small data sets (in Jahn's case, N=23) where such behavior might be revealed in only a small number of calibration runs.

A small standard deviation in the baseline means translates to fewer degrees of freedom in the ability of the baseline to "stretch" to accommodate values in the comparison distribution of means. That means any data that stands too far outside the properly-parameterized t-distribution will be seen as significantly variant. That is, it is the consistency among the baseline runs, not their conformance to one of the other classic distributions, that makes the comparison work.

But what's more important is that any lingering concern over the p-values of the Z-test on the calibration is irrelevant. Whatever was causing the machine to only-just-barely produce suitably random numbers was shown in the t-test baseline computation not to vary a whole lot from run to run. Whatever the confounds are, they're well-behaved and can be confidently counted on not to suddenly become a spurious independent variable. If the subject then comes in and produces a trial that varies at p < 0.05 in the t-test from the t-distribution parameterized from those very-consistent prior runs, that's statistically significant. If that subject's performance had been measured instead against the Poisson distribution, then the effect hoped to be statistically significant would still be confounded with whatever lingering effect was producing the borderline p-values in the calibration.

In your rush to play teacher, you've really shot yourself in the foot today.

First, I covered all this previously. It was in one of those lengthy posts you quoted and then dismissed with a single line of rebuttal. You constantly attempt to handwave away my posts as "irrelevant" or somehow misinformed, but here you are again trying to say what I've already said, as if you're now the one teaching the class. The way the poisoning-the-well technique works is that you're not supposed to then drink from the same well. I explained how the t-test and its parameters worked, but now it's suddenly relevant when you decide to do it...

...and get it wrong. That's our second point. You fundamentally don't understand how tests for significance work. It's clear you've only ever worked with the basic, classic distributions and -- in your particular mode -- think that's all there could ever be. As I wrote yesterday, you're trying to make the problem fit your limited understanding instead of expanding your understanding to fit the problem. And in your typically arrogant way, you have assumed that your little knowledge of the problem, gleaned from wherever, "must" be correct, and that someone with a demonstrably better understanding of the subject than you -- the eminent psi researcher John Palmer -- "must" have conceived the problem wrong.

These are questions intended entirely seriously: Do you ever consider that there are things about some subject you do not know? Do you ever consider that others may have a better grasp of the subject than you? Have you ever admitted a consequential error?

Third, now it's abundantly clear why you're so terrified to address Dr. Steven Jeffers. Your ignorance of how the t-test for significance works and achieves its results reveals that you don't have the faintest clue what Jeffers actually did. You're ignoring him because you don't have any idea how to even begin. It's so far over your head.

So the t-test for significance compares two data sets that are categorically independent according to some variable of interest (in PEAR's case, whether PK influence was consciously applied). All the potential confounds are expected to be homogeneous across the two sets. One data set is the calibration runs, represented by its mean and standard deviation. The other set is the experimental runs, similarly represented. The N-value (23, for PEAR) and the standard deviation in one distribution determine the degrees of freedom that the corresponding t-distribution can use to "stretch" or "bend" to accommodate the other distribution.

What Jeffers discovered was that PEAR's t-distribution for the calibration runs was too tightly constrained. Working backwards, this translates into not enough degrees of freedom, then into not a lot of variance in the calibration means for a sample size of 23. In fact, an absurdly small amount of variance. Too small to be possible from PEAR's protocol. Why? Because while the process underlying the REG operation is theoretically Poisson, the process variable gets discretized along the way. Discretizing a variable changes the amount by which it can vary, and consequently the ways in which statistical descriptions of such variance can appear.

Let's say you ask 10 people to name a number between 1 and 10. We take the mean. Can that mean have a value of 3.14? No. Why not? Because our divisor is 10, and can never produce more than one digit past the decimal. It could be 3.1 or 3.2, but not 3.14. Do that 20 times, for a total of 20 means computed from groups of ten. If we aggregate the means, they can't vary from group to group by anything finer than 0.1. Data points will be either coincident or some multiple of 0.1 apart. If we look at the distribution of those means, there is a limit to how closely they can approximate a classic distribution because they are constrained by where they can fall in the histogram. They can fall only on 0.1-unit boundaries, regardless of how close or far away from the idealized distribution that is. All our descriptive statistics are hobbled in this case by the coarse discretization of the data.

All that occurs because the customary response to "pick a number between 1 and 10" is an integer. If we re-run the test and let people pick decimal numbers to arbitrary precision, then the group means can take on any real value, the aggregate of means can take on any value, and the distribution of those means across all groups has more flexibility to get close to a classical distribution. More importantly, the standard deviation of that distribution has more places to go.
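A tiny brute-force check makes the discretization point concrete (plain Python, self-contained; the example group means at the end are invented purely for contrast). With ten integer responses the mean is always a multiple of 0.1, so a value like 3.14 is unreachable, and the spread of a collection of such means is correspondingly constrained; real-valued responses remove the constraint.

```python
from itertools import combinations_with_replacement
from statistics import pstdev

# All achievable means of ten integer responses drawn from 1..10.
integer_means = {sum(c) / 10 for c in
                 combinations_with_replacement(range(1, 11), 10)}
print(3.14 in integer_means)        # False: 3.14 cannot be produced
print(sorted(integer_means)[:5])    # 1.0, 1.1, 1.2, ... in 0.1 steps

# Group means locked to a 0.1 grid versus unconstrained real-valued means:
# the grid limits how the standard deviation of the means can come out.
coarse_means = [3.1, 3.2, 3.1, 3.3, 3.2]        # illustrative, grid-locked
fine_means = [3.14, 3.22, 3.08, 3.31, 3.17]     # illustrative, unconstrained
print(pstdev(coarse_means), pstdev(fine_means))
```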

What Jeffers found was that the purported distribution of means in the calibration runs is not likely to have actually been produced by the REGs because it offered a standard deviation not achievable through the discrete outputs the REG offered, just like there exists no set of integers such that their sum divided by 10 can be 3.14.

I would like you to address Jeffers, the critic of PEAR you've been avoiding for weeks. I would like to see you demonstrate enough correct knowledge of the t-test for significance to be able to discuss his results intelligently, and at the same time realize that John Palmer is not misinformed as you claim. At this point you seriously don't know what you're talking about.
 
After all, if telekinesis were real, then there would be people making millions of dollars per year simply by going to casinos and using their powers to rig games like roulette and craps to their benefit. Or these people would periodically win big on Powerball Games and other such things which involve random chance objects.

Didn't you know that they are hired by the casinos to make you lose money? Additionally, winning big in Powerball games and the like is how the illuminati amassed their fortune :tinfoil: :tinfoil: :tinfoil: :D

Summary: telekinesis is as real as the illuminati.
 
Jeffers' !!! Jeffers' !!!

For whatever reason you do not understand some, although not all, of my responses. As for being repetitive, I agree with you. I try to respond to some posts that I find interesting, although they may contain similar data.

You forgot to mention "his replies have little to do with the posts his quoting", which is exactly what you're doing here :D

You might be a retiree, but I am not, so I do not have time for everything I would like to do. Yes, I know, I have repeated some stuff that I wrote before, but I find your post interesting so I respond to it on a personal basis.

Oh, you have the clinical eye! :rolleyes::rolleyes::rolleyes::D:D:D

Whatever made you say that? Everyone knows I'm here to learn English (not from you, of course). But whatever my reasons are, everyone knows I'm very busy, even when I'm here. That's why I'm barely reading your posts now. Reading JayUtah's and others' here gives me all I expect (and it's worth it).

"Buddha", I'm looking forward to your next zero-content post so I will be able to enjoy their replies and learn.

[You should have read it for real before even responding to it. By the way: Jeffers' !!! Jeffers' !!! -- the constant reminder that you have definitively lost this debate. Any person with a forehead taller than one inch will read Jeffers and will find that, as of the date of the post I'm replying to, your avoidance of it in this thread, not to mention your puerile attempts to claim nobody had provided that link, defines you as a hedonistic believer and not the "thinker" you pretend to be.]
 
You forgot to mention "his replies have little to do with the posts his quoting", which is exactly what you're doing here :D

You can see that he's devolved into quoting my entire posts -- especially the long ones that go into great detail -- and simply writing one or two dismissive sentences. They might have something to do with what I wrote, but at best only with one of several things that I wrote. I guess he thinks that if he can be seen to quote a post and write something -- anything -- then he can convince someone he's keeping up with the debate.

When Buddha says he doesn't have time to pursue everything he wants to, I tend to believe him. But I don't approve of how he budgets his time. He doesn't spend it on research or analysis. He just takes whatever little knowledge he has at hand and skips to the part where he produces the final result. He read perhaps one book on philosophy and then considers himself the ultimate philosopher. But all he can do is try to shoehorn everything into the topic of that one book. He clearly didn't study much biology before he wrote his book on evolution. He wants quick-and-dirty adulation, the illusion of erudition. Shortcuts work for a while, but then sooner or later you come up against a problem where you needed to have studied in depth before jumping to the conclusion. Sadly Buddha seems to have a disposition that precludes him from ever admitting failure.

His two most recent posts haven't been exactly content-free. It's just that the content is woefully naive and comically wrong. And he is probably just hoping most people will buy it without questioning it too much. But when he gets actual, in-depth criticism he runs away claiming it's all "irrelevant." All he ever does is pontificate and gaslight from a position of easily-seen ignorance. I shudder to think of the contexts he's been in where that might have actually worked for him.
 
All he ever does is pontificate and gaslight from a position of easily-seen ignorance. I shudder to think of the contexts he's been in where that might have actually worked for him.


Hopefully sales. In past jobs I’ve had the misfortune of working with people who made whatever promises they thought would get the sale. When reality hit they dealt with the impossibility of making any of those promises come true by gaslighting and lying to keep their commission at all costs.

I say “hopefully” because I don’t really want to contemplate the consequences of those tactics being used in designing any sort of a production or test environment. I’ve had to clean up some of those messes and it’s a nightmare, especially when the poor souls who were fooled are still under the sway of the incompetent consultant.
 
Hopefully sales.

I say “hopefully” because I don’t really want to contemplate the consequences of those tactics being used in designing any sort of a production or test environment

Indeed, I've had plenty of experience with unbridled sales teams and clueless "consultants." Frankly I had something darker in mind -- interpersonal relationships. The term "gaslighting" comes from the Angela Lansbury film Gaslight, which deals with the eponymous behavior in a marriage. The incapacity to admit error in a relationship is heinously bad.

Years ago I helped produce an unaired pilot for the History Channel that involved interviewing the late Apollo hoax proponent Ralph Rene. The guy was bat-crap crazy, and if you could have put a face to Stockholm Syndrome, it would have been his wife's. This guy was utterly convinced he was the smartest guy on the planet and that evil forces were conspiring to prevent him from being recognized as such.
 
Ingrid Bergman. Not Angela Lansbury. I mention this only because now I can say I got to correct JayUtah!

Great stuff as always.
 
Ingrid Bergman. Not Angela Lansbury. I mention this only because now I can say I got to correct JayUtah!

Well, yes and no. There were two adaptations made in 1940 and 1944. I remember that the better one was whichever one Angela Lansbury was in. Don't ask me why I don't remember that the better version is the Ingrid Bergman one, but it probably has something to do with it being Lansbury's introductory film role.
 
Well, yes and no. There were two adaptations made in 1940 and 1944. I remember that the better one was whichever one Angela Lansbury was in. Don't ask me why I don't remember that the better version is the Ingrid Bergman one, but it probably has something to do with it being Lansbury's introductory film role.

With respect, let us settle this like gentlemen, sir. Angela Lansbury has a secondary role in the 1944 film, cast list here. She was not in the 1940 film, a British production, cast list here.
 
With respect, let us settle this like gentlemen, sir. Angela Lansbury has a secondary role in the 1944 film,cast list here. She was not in the 1940 film, a British production, cast list here.

There's not much to settle as far as I'm concerned, but thanks for completing the research. The "yes and no" was in response to Garrette's "I got to correct JayUtah." Yes, because -- as you point out -- Lansbury has only a supporting role, not the lead. It would have been more proper to cite the leading lady. I will happily accept the correction to avoid future confusion. No, because she is, in fact, in the film -- the one I regard as the better adaptation, and that's just the way I remember it. It probably has more to do with the way I watch films. I tend to look for the first performances of actors who later became famous. Those associations then stick out in my mind.
 
Oh, I was joshing. As a matter of fact, the 1940 version was almost lost. MGM, the studio that brought out the 1944 version, bought all rights from the British studio and the contract called for the British company to destroy all prints and negatives so there would be no competition. A good print did survive, and now both versions are available. Before the tangent ends, I admire your patience and cogency, Mr. Utah.
 
You can see that he's devolved into quoting my entire posts -- especially the long ones that go into great detail -- and simply writing one or two dismissive sentences. They might have something to do with what I wrote, but at best only with one of several things that I wrote. I guess he thinks that if he can be seen to quote a post and write something -- anything -- then he can convince someone he's keeping up with the debate.
When Buddha says he doesn't have time to pursue everything he wants to, I tend to believe him. But I don't approve of how he budgets his time. He doesn't spend it on research or analysis. He just takes whatever little knowledge he has at hand and skips to the part where he produces the final result. He read perhaps one book on philosophy and then considers himself the ultimate philosopher. But all he can do is try to shoehorn everything into the topic of that one book. He clearly didn't study much biology before he wrote his book on evolution. He wants quick-and-dirty adulation, the illusion of erudition. Shortcuts work for a while, but then sooner or later you come up against a problem where you needed to have studied in depth before jumping to the conclusion. Sadly Buddha seems to have a disposition that precludes him from ever admitting failure.

His two most recent posts haven't been exactly content-free. It's just that the content is woefully naive and comically wrong. And he is probably just hoping most people will buy it without questioning it too much. But when he gets actual, in-depth criticism he runs away claiming it's all "irrelevant." All he ever does is pontificate and gaslight from a position of easily-seen ignorance. I shudder to think of the contexts he's been in where that might have actually worked for him.
As to the highlighted, we see this a lot. These appeals to lurkers, who just don't exist on this forum, show up in so many threads where the poster pretends he is doing some public service by battling the big bad stupid closed-minded skeptics.

Have any of them been proved correct? Has anybody... delurked... and exclaimed that they were swayed by a woo argument?

In the JFK and 9/11 sections, when somebody starts posting in another's defense, they can't make it past one or two points before their positions diverge into opposite convictions and don't support one another. They often then ignore each other and bloviate on their own versions of fantasy.



Just who are these lurkers who follow the ISF yet never seem to jump into a thread and proclaim their agreement with the woo du jour?
 
