Is ESP More Probable Than Advanced Alien Life?

Puppycow said:
I disagree about the "default position".
Good, because it's a flat-out fabrication. There's no default position in science for possibility. If it violates well-established theories and has no evidence supporting it, it'll get rejected, but that's about as much of a default position as we have. Each scientist makes up their own mind about whether to accept an idea as possible. I'm willing to accept ideas that don't obviously contradict known data or principles for the sake of discussion, while others demand a certain degree of evidence (which varies between researchers) before they'll even discuss an idea. It all depends on the person, the context, and how much beer everyone's had. (I tend to increase my standards as I get more drunk, which in my experience is not unusual among scientists...)

This is typical pseudo-science nonsense: make up arbitrary rules, declare that that's how science works, and you get to declare your opponents to be unscientific when they object.

Inasmuch as ESP violates everything we know about how the brain works, the conclusion any competent researcher would arrive at is that it's not possible.
 
No. Only necessary truths are "100%" true. There's nothing necessary about the laws of physics (either taken separately OR as a whole). They are contingently true and are based on observations that fallible humans have made.

I didn't say necessary, I said true. As in, they simply are what is.

As to contingency -- it depends on the assumptions one makes about what we don't know.

If you assume that there is a set of immutable laws underlying everything, and we simply have imperfect observations of the effects of those laws, then we can use mathematics to be sure of whether a model we make is tolerable in all cases. You can be 100% sure that if you drop a boulder on an egg it will break the egg. 100%.

If you on the other hand assume that what underlies everything might be imperfect, and that for a reason we can't fathom the rules could suddenly change (for example, if we are in a simulation and the owners decide to flip a switch and change the fundamental constants of physics to double their current values), then all bets are off anyway.
 
Good, because it's a flat-out fabrication. There's no default position in science for possibility. ...

I noticed that Puppycow put the term in double quotation marks:
... I disagree about the "default position". ...
The default position refers to the default position of the null hypothesis, which Fudbucker mangled or reversed in his OP:
...
Is ESP possible? Yes. The default position is that a thing is possible until it's been proven impossible. A tea cup floating around Jupiter is possible (though very very unlikely).
...

I suspect Puppycow does not agree with Fudbucker's erroneous use of the term "default position". Rightly so, since Fudbucker has mangled the null hypothesis by simply reversing it.
 
...
There is no set for ESP.
You've made that statement more than once. What does it even mean?
...
I suspect that "set for ESP" refers to incidence of ESP.

Use of "set" instead of "incidence" in this thread originates with Fudbucker's post 66:
...
I'm claiming the set of possible worlds that contain alien life is equal to the set of possible worlds where ESP occurs.
Hilite by Daylightstar
 
That's not an argument I'm currently making, but I think one could have some success making an argument along those lines.
Could you then spell out precisely what the argument is that you're currently making? You seem to have changed several times what it is you're arguing.



Actually, I would have no problem assigning a probability to the hypothesis that a "grue" is in my house without knowing what a "grue" is. I would have two competing hypotheses: H1, a grue is in my house, and H0, no grue is in my house. Since I don't even know what a "grue" is, I can have no evidence that would cause me to favor either H1 or H0 over the other. Moreover, I have no reason to think it more likely that you were telling me the truth or not. Consequently, I have two competing propositions, exactly one of which has to be true, and absolutely no reason to believe one more than the other. Therefore, my probability for each proposition must be 0.5.
Isn't this where the null hypothesis comes in then? Maybe I'm confused though, but I can't see how the probability can be 0.5 for each proposition.


In contrast, we have the very successful Standard Model of particle physics, according to which no particles or forces exist that would allow for ESP. The probability that the Standard Model is wrong is thus an upper bound on the probability of ESP. The theoretical physicist Sean Carroll has suggested that that probability is less than 1-in-1-million.

According to his blog:
http://www.preposterousuniverse.com/blog/2008/02/18/telekinesis-and-quantum-field-theory/

He says, "...I would put the probability that some sort of parapsychological phenomenon will turn out to be real at something (substantially) less than a billion to one." I personally think that's still too high, but I couldn't find any million-to-one odds statement that he made.



Now you, too, are just making stuff up. A prior probability is subjective, personal. For better or worse, some people have irrational beliefs, and hence irrational priors. For example, some people believe exaggerated claims about the effect on human health of how livestock is raised. Other people believe in ESP. Such beliefs might be irrational, but we, or they, can assign them a probability which we can update as new evidence arrives. A key, foundational fact of Bayesian inference is that no matter how irrational our hypothesis is, it will always be overcome by sufficient evidence. This is a feature of Bayesian inference that skeptics, in particular, should appreciate: Bayesian inference can shoot down irrational beliefs about subjects like homeopathy, psi, etc. Your prior probability of homeopathy might be .99, but if enough evidence shows that homeopathy is ineffective, then Bayesian inference guarantees that your probability will (asymptotically) approach 0. No matter how irrational your initial beliefs are, with sufficient evidence they must become rational if you update them in accordance with Bayesian principles.
This is an excellent post, thank you.
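The updating claim in the post above can be sketched numerically. Everything in this example is hypothetical and chosen only for illustration: a "remedy" hypothesis H1 with an irrationally high prior of 0.99, a success rate of 0.7 under H1 versus 0.5 under H0, and data actually generated under H0 (the remedy does nothing). The point is just that repeated application of Bayes' rule drives the posterior toward 0 regardless of the biased starting point.

```python
# Sketch: sufficient evidence overwhelms even a strongly irrational prior.
# The scenario and all the numbers here are made up for illustration.
import random

random.seed(42)

p_h1 = 0.99                       # irrational prior: "the remedy works"
likelihood = {True:  (0.7, 0.5),  # P(success | H1), P(success | H0)
              False: (0.3, 0.5)}  # P(failure | H1), P(failure | H0)

for _ in range(500):
    success = random.random() < 0.5        # outcomes generated under H0
    l1, l0 = likelihood[success]
    # Bayes' rule: P(H1 | data) = P(data | H1) P(H1) / P(data)
    p_h1 = l1 * p_h1 / (l1 * p_h1 + l0 * (1 - p_h1))

print(p_h1)  # tiny: the 0.99 prior has been overwhelmed by the evidence
```

Note that the prior only shifts as fast as the likelihood ratio allows; a more extreme prior just takes more data to overcome, which is exactly the "asymptotically approach 0" behavior described above.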
 
The Norseman said:
Isn't this where the null hypothesis comes in then? Maybe I'm confused though, but I can't see how the probability can be 0.5 for each proposition.
If you have enough math acting as a shield between yourself and reality, you can make any argument you want and "support" it by equations. Take the log of any curve enough times and it's a straight line, as a professor of mine used to say.

Note that his argument is entirely an argument from ignorance--since he refuses to learn what a grue is, his math magically works! If he'd bother to learn the term, however....
 
Actually, I would have no problem assigning a probability to the hypothesis that a "grue" is in my house without knowing what a "grue" is. I would have two competing hypotheses: H1, a grue is in my house, and H0, no grue is in my house. Since I don't even know what a "grue" is, I can have no evidence that would cause me to favor either H1 or H0 over the other. Moreover, I have no reason to think it more likely that you were telling me the truth or not. Consequently, I have two competing propositions, exactly one of which has to be true, and absolutely no reason to believe one more than the other. Therefore, my probability for each proposition must be 0.5.


Isn't this where the null hypothesis comes in then? Maybe I'm confused though, but I can't see how the probability can be 0.5 for each proposition.


H0, the hypothesis that there is no "grue" in my house, is the null hypothesis, but the null hypothesis has no special standing in Bayesian statistics. Bayesians don't, as frequentists (sometimes) do, assume the hypothesis with the word "not" in it is true and then "reject" it only if they collect data that would be unusual if the hypothesis were true. Frequentists are forced to use such procedures because they aren't allowed to assign probabilities to hypotheses in the first place. But frequentist procedures aren't inherently correct or even logically coherent.

Bayesians have it better. Given two hypotheses, if I have absolutely no reason to believe one more likely to be true than the other, then I see no way to represent those beliefs probabilistically other than to assign them equal probabilities that sum to one; that is, to represent my degree of certainty in each hypothesis as 0.5. I hope we can agree that in the abstract this is true. You might argue that for the "grue" problem it is not. For example, you might reason that a "grue" could represent an infinite number of things, and that since my house contains a tiny subset of those things, that the probability of H1 should be less than the probability of H0. But that entails an assumption about how the thing represented by "grue" was determined that I don't think is justified by how the problem was posed. To me it seemed like the problem provided no reason whatsoever to believe one hypothesis more likely than the other.

According to [Sean Carroll's] blog:
http://www.preposterousuniverse.com/blog/2008/02/18/telekinesis-and-quantum-field-theory/ he says, "...I would put the probability that some sort of parapsychological phenomenon will turn out to be real at something (substantially) less than a billion to one." I personally think that's still too high, but I couldn't find any million-to-one odds statement that he made.


That's the quote I was thinking of. I evidently misremembered the number. I'm actually relieved, because I was troubled by thinking that the probability should be lower than that proposed by a subject-matter expert.
 
jt512 said:
Bayesians have it better. Given two hypotheses, if I have absolutely no reason to believe one more likely to be true than the other, then I see no way to represent those beliefs probabilistically other than to assign them equal probabilities that sum to one; that is, to represent my degree of certainty in each hypothesis as 0.5.
This is one reason I dislike this emphasis on statistics: all of this is basically a verbose and numeric way of avoiding the fact that you have no idea what you're talking about. You don't know what a grue is--therefore it is logically impossible--and intellectually indefensible--to assign ANY probability to it. The honest thing to do is to say "I have no idea what the probability is; what the hell is a grue?" To assign its presence a probability of 0.5 is to go far beyond what the data can support. Remember, there is no data here; there can't be, since you don't know what counts as data.

I honestly don't understand what the aversion to saying "I don't know" is. Why go through the mental gymnastics of drafting an equation to get a result that's fundamentally meaningless? If you don't know what a grue is, you don't know how likely it is to be in your house, and that's where you have to stop, at least until you learn what a grue is.
 
This is one reason I dislike this emphasis on statistics: all of this is basically a verbose and numeric way of avoiding the fact that you have no idea what you're talking about. You don't know what a grue is--therefore it is logically impossible--and intellectually indefensible--to assign ANY probability to it. The honest thing to do is to say "I have no idea what the probability is; what the hell is a grue?" To assign its presence a probability of 0.5 is to go far beyond what the data can support. Remember, there is no data here; there can't be, since you don't know what counts as data.

I honestly don't understand what the aversion to saying "I don't know" is. Why go through the mental gymnastics of drafting an equation to get a result that's fundamentally meaningless? If you don't know what a grue is, you don't know how likely it is to be in your house, and that's where you have to stop, at least until you learn what a grue is.


You apparently live in a bubble where each thing is either known or unknown with complete certainty. For those of us in the real world, certainty is often partial, and it is often useful to quantify it. This is true in innumerable situations—playing poker, making an investment decision, designing a spam filter, searching for a lost child, preventing terrorist attacks, to name a mere handful.
 
The honest thing to do is to say "I have no idea what the probability is; what the hell is a grue?" ... If you don't know what a grue is, you don't know how likely it is to be in your house, and that's where you have to stop, at least until you learn what a grue is.
I learned long ago what a grue is, so I know how likely a grue is to be in my house.
crane; (Slang) hooker, tart
http://dictionary.imtranslator.net/more-translations/french-english/grues/
 
jt512 said:
You apparently live in a bubble where each thing is either known or unknown with complete certainty.
Nope. YOU have stated that YOU don't know what a grue is. Then you did some magical math to assign a probability to it. That's what I am objecting to. If you were uncertain, that'd be different--but YOU said YOU don't know.

I'm saying nothing about knowledge as such. I am saying that your self-reported knowledge OF THIS SITUATION is nil. We can state with certainty that you do not know what a grue is, because you said so. It's a given. Anything that comes after that, therefore, that isn't "How do I find out what a grue is?", is a statement not based on evidence.

For those of us in the real world, certainty is often partial, and it is often useful to quantify it. This is true in innumerable situations—playing poker, making an investment decision, designing a spam filter, searching for a lost child, preventing terrorist attacks, to name a mere handful.
Yeah, I get that--but how do you assign the probability of winning a poker game when you've never heard the term before? Because that's what you're doing.
 
Actually, it's rather hilarious that you accuse me of not acknowledging shades of confidence in conclusions, given my area of expertise. Paleontology is basically a case-study in how to determine how confident one can be in one's conclusions. However, it also demands a very strict acknowledgement of the limits of one's knowledge, and that's where I find you committing an error. If you have no data, you can't say anything--that's the principle I was taught, and the principle every scientist ostensibly if not actually accepts. If you can't define the term, how can you assign probability? How can you possibly know that the probability is 50%? Why not 25%? Why not 99%? Saying "Well, there are two possibilities" doesn't work. My dad used to say that all statistics were 50/50--either it happened or it didn't. You're using the same logic here, except that when my dad did it he was at least joking.

ETA: I'm always willing to be proven wrong. Please provide the data you used to assign the probabilities--the data regarding grues, not the number of possible outcomes. If you can't do that, you can't assign probability, not rationally. And the only rational conclusion is "I don't know"--which you've already stated.
 
H0, the hypothesis that there is no "grue" in my house, is the null hypothesis, but the null hypothesis has no special standing in Bayesian statistics. Bayesians don't, as frequentists (sometimes) do, assume the hypothesis with the word "not" in it is true and then "reject" it only if they collect data that would be unusual if the hypothesis were true. Frequentists are forced to use such procedures because they aren't allowed to assign probabilities to hypotheses in the first place. But frequentist procedures aren't inherently correct or even logically coherent.

Bayesians have it better. Given two hypotheses, if I have absolutely no reason to believe one more likely to be true than the other, then I see no way to represent those beliefs probabilistically other than to assign them equal probabilities that sum to one; that is, to represent my degree of certainty in each hypothesis as 0.5. I hope we can agree that in the abstract this is true. You might argue that for the "grue" problem it is not. For example, you might reason that a "grue" could represent an infinite number of things, and that since my house contains a tiny subset of those things, that the probability of H1 should be less than the probability of H0. But that entails an assumption about how the thing represented by "grue" was determined that I don't think is justified by how the problem was posed. To me it seemed like the problem provided no reason whatsoever to believe one hypothesis more likely than the other.
Cool, thanks for the clarification. I'm still trying to learn how to apply BT to real life examples.




That's the quote I was thinking of. I evidently misremembered the number. I'm actually relieved, because I was troubled by thinking that the probability should be lower than that proposed by a subject-matter expert.
Glad to have helped! I understand the need not to make appeals to authority, but Carroll's position is well-reasoned and has evidence behind it, so I tend to trust what he's said about ESP/etc. and how it's a done deal that it doesn't exist.
 
Also remember that all scientists work on an assumption that the future will resemble the past. This, of course, is Hume's famous "riddle of induction" (which was updated by Goodman). While we can assume it's highly probable the future will resemble the past, it's not necessarily true (100% certain) that future observations will be consistent with past observations. There is a non-zero chance that future observations will be totally different from past observations.

Um, this is just sophistry and belongs in R&P. The speed of light is constant, and that goes for all of the observable past as well.

When you show that the observations of nature vary in measurement, and not through refinement, then let us know.

Otherwise this is twaddle.

Hume, huh? How much modern physics did he practice?
 
There is no debate- scientific claims never have a probability of 1. Anyone who thinks they do is wrong.

I wouldn't bother debating this fact any more than I would bother debating a Young Earth Creationist. Anyone who disputes it is either an idiot or extremely ignorant.

Nope, the earth orbits the sun: 1
 
There is no debate- scientific claims never have a probability of 1. Anyone who thinks they do is wrong.

I wouldn't bother debating this fact any more than I would bother debating a Young Earth Creationist. Anyone who disputes it is either an idiot or extremely ignorant.

More sophistry
 
Nope. YOU have stated that YOU don't know what a grue is. Then you did some magical math to assign a probability to it. That's what I am objecting to. If you were uncertain, that'd be different--but YOU said YOU don't know.

I'm saying nothing about knowledge as such. I am saying that your self-reported knowledge OF THIS SITUATION is nil. We can state with certainty that you do not know what a grue is, because you said so. It's a given. Anything that comes after that, therefore, that isn't "How do I find out what a grue is?", is a statement not based on evidence.


The propositions H1, there is a grue in my house, and H0, there isn't, have equal probabilities to me because of my total ignorance about them. The logic is simple: I have no reason to believe that P(H1) > P(H0) and no reason to believe that P(H0) > P(H1), but P(H1) + P(H0) = 1; therefore, P(H1) = P(H0) = 0.5. This is the principle of indifference. As the article explains:

The principle of indifference (also called principle of insufficient reason) is a rule for assigning epistemic probabilities. Suppose that there are n > 1 mutually exclusive and collectively exhaustive possibilities. The principle of indifference states that if the n possibilities are indistinguishable except for their names, then each possibility should be assigned a probability equal to 1/n. In Bayesian probability, this is the simplest non-informative prior....

The "Principle of insufficient reason" was renamed the "Principle of Indifference" by the economist John Maynard Keynes (1921), who was careful to note that it applies only when there is no knowledge indicating unequal probabilities.


A principle of how to assign probabilities to two mutually exclusive and collectively exhaustive possibilities that are indistinguishable except for their names, you know, like "grue" and "no grue," that applies when there is no knowledge indicating unequal probabilities.
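The 1/n rule quoted above is simple enough to write out directly. This is a minimal sketch; the function name and the hypothesis labels are mine, chosen only for illustration.

```python
# Sketch of the principle of indifference: n mutually exclusive,
# collectively exhaustive possibilities, indistinguishable except
# by name, each receive probability 1/n.
from fractions import Fraction

def indifference_prior(hypotheses):
    """Return a non-informative prior assigning 1/n to each hypothesis."""
    n = len(hypotheses)
    return {h: Fraction(1, n) for h in hypotheses}

prior = indifference_prior(["grue in my house", "no grue in my house"])
print(prior["grue in my house"])  # 1/2

# With three indistinguishable possibilities, each would get 1/3 instead.
print(indifference_prior(["A", "B", "C"])["B"])  # 1/3
```

Note that the probabilities automatically sum to 1, which is what forces the 0.5 answer in the two-hypothesis grue case.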

Actually, it's rather hilarious that you accuse me of not acknowledging shades of confidence in conclusions, given my area of expertise. Paleontology is basically a case-study in how to determine how confident one can be in one's conclusions.


On the contrary, it is rather sad that, if that is your job, you don't understand how to apply Bayesian statistics.


However, it also demands a very strict acknowledgement of the limits of one's knowledge, and that's where I find you committing an error. If you have no data, you can't say anything--that's the principle I was taught, and the principle every scientist ostensibly if not actually accepts.


On the contrary, most scientists, when doing a Bayesian hypothesis test, would give equal prior probability to the two competing hypotheses, to avoid biasing the test in one direction or the other.


If you can't define the term, how can you assign probability? How can you possibly know that the probability is 50%?


As I've said more than once in the thread, it is a mistake to think that there is a "the probability" to know. Bayesian probabilities are numbers used to quantify our degree of uncertainty about things. In the grue problem, my degree of uncertainty is equal for both propositions precisely because I know nothing about either of them. Equal uncertainty about two propositions whose uncertainties have to add to 1 implies that my degree of uncertainty in each should be represented by the number 0.5.


Why not 25%? Why not 99%?


Because 25% would imply that I think the other proposition is 3 times as likely, but I don't; and 99% would imply that I think the proposition is 99 times as likely as the other, but I don't.
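The arithmetic behind that reply can be made explicit: a probability fixes the odds against its rival, so 25% and 99% carry claims that the poster doesn't hold. A quick sketch (the function name is mine):

```python
def odds(p):
    """Odds in favor of a proposition with probability p, against its rival."""
    return p / (1 - p)

print(odds(0.25))         # ~0.333: the rival proposition is 3 times as likely
print(odds(0.5))          # 1.0: even odds, the indifference case
print(round(odds(0.99)))  # 99: the proposition is 99 times as likely as its rival
```

Only p = 0.5 expresses no preference between the two propositions, which is why indifference forces that value and no other.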


Saying "Well, there are two possibilities" doesn't work.


It does if you have no reason to believe that one possibility is more probable than the other.


My dad used to say that all statistics were 50/50--either it happened or it didn't. You're using the same logic here, except that when my dad did it he was at least joking.


No; the numbers may be the same, but the logic is not.


ETA: I'm always willing to be proven wrong. Please provide the data you used to assign the probabilities--the data regarding grues, not the number of possible outcomes. If you can't do that, you can't assign probability, not rationally. And the only rational conclusion is "I don't know"--which you've already stated.


No, you are not willing to be proven wrong; you're not even willing to consider that you might be wrong. But your steadfastness is easy to understand. After all, you are a better scientist than Feynman, and apparently a better mathematician than Laplace.
 
