Proof of Immortality III

Status
Not open for further replies.
- We haven't worked out the specific day and time, but the statistics professor with whom I've been corresponding has agreed to meet me for lunch to discuss my claim of one finite life (at most). I'll ask him to write down his opinion.

-1. Why are you telling us this nothingness?
-2. Why do you need lunch for him to write down his opinion?
-3. How will you remember to turn up?
-4. Why can't a statistics professor work out a specific day and time?
-5. Am I doing the " - " right?
-6. Don't let the "Sally" find out. It might get jealous.
 
Are you discussing the Bayesian statistics part of your claim with this professor, or the philosophical aspects?

Are you willing to share how you calculated the probability of ~OFL to be 0.01 - or will you concede that you made that number up?
Agatha,

- We'll be discussing the Bayesian statistics part.

- Actually, P(~OFL) is calculated from P(OFL); i.e., P(OFL)+P(~OFL) = 1.00.
- P(OFL) is only estimated.
- Being quite generous (IMO), I estimated P(OFL) to be .99. Consequently, P(~OFL) = 1.00-.99 = .01.

- I suspect that you really want to know how I estimated P(H).
 
- Actually, P(~OFL) is calculated from P(OFL); i.e., P(OFL)+P(~OFL) = 1.00.
- P(OFL) is only estimated.
- Being quite generous (IMO), I estimated P(OFL) to be .99. Consequently, P(~OFL) = 1.00-.99 = .01.

I think you'll find that ROFL is more appropriate.
 
This is Dr. Roger Hoerl, still, right?

Couple of things to keep in mind: Sooner is better than later, because finals rapidly approach, and after that, commencement.

Where did you decide to go for lunch?
js,

- Yes. It is Dr. Hoerl.

- We're set for next Wed.

- For Dr. Hoerl's sake, I probably shouldn't advertise just where we're meeting.
 
Agatha,

- We'll be discussing the Bayesian statistics part.

- Actually, P(~OFL) is calculated from P(OFL); i.e., P(OFL)+P(~OFL) = 1.00.
- P(OFL) is only estimated.
- Being quite generous (IMO), I estimated P(OFL) to be .99. Consequently, P(~OFL) = 1.00-.99 = .01.

- I suspect that you really want to know how I estimated P(H).


Just to clarify, if two numbers add up to 1 and you make up one of the numbers, you are actually making up both of them. You can't just say that you "estimated" (read: made up) one number, but everything else is calculated.
 
For Dr. Hoerl's sake, I probably shouldn't advertise just where we're meeting.

This is cryptic. Why would you think Dr. Hoerl would be in any danger? Not that you need to advertise your meetings, of course. But why would you volunteer this particular sentiment?
 
This is cryptic. Why would you think Dr. Hoerl would be in any danger? Not that you need to advertise your meetings, of course. But why would you volunteer this particular sentiment?


Secret Shroud Stuff.
Jabba's wearing a bowler hat, and carrying a copy of Thursday's Times under his left arm.
His contact is wearing a yellow rose in his left lapel.
The secret greeting goes:
"How is the dog?"
"0.1"
 
We'll be discussing the Bayesian statistics part.

What assurances do we have that you'll present your entire model, not just the parts of it you think your authority will endorse? What assurances do we have that you'll faithfully and accurately represent his criticism? What assurances do we have that you're even in fact in contact with him and aren't just dropping his name?

By now you must realize I'm quite familiar with various common patterns of fringe argumentation. The offline consultation is fraught with them. To deal with our last concern first, very often fringe claimants will assert some relationship to some well-known or easily discovered authority who -- it is represented -- endorses the claimant's beliefs. But great lengths are typically taken to prevent the claimant's critics from verifying that supposed association.

If the authority actually exists and actually is consulted, the claimant almost always presents to him a sanitized, emasculated, motte-and-bailey version of the controversy. I remember one guy who asked a physicist to confirm his computations of radioisotopic decay. The problem with his claim was that the phenomenon he was discussing was not governed by that model. So while the expert confirmed his working of the model, the claimant had conveniently forgotten to mention what problem he was trying to solve with it.

Based on the thirty or so claimants who have invoked an offline authority, all but one of whom misrepresented their claim to that authority, I estimate the probability that you will misrepresent your present claims to Dr. Hoerl at p > 0.967.

Then there's the answer. If the authority exists and is actually consulted and actually gives an answer, about half the time the claimant cherry-picks the answer for only the parts that favor his beliefs.

Now who would do such a thing? Exactly you. You have a documented history of presenting weakened facsimiles of your arguments in order to garner agreement. You have a documented history of appealing to people like "Sally" whom you hide from your critics. You have a documented history of misstating and misusing other people's quotes.

Now please address these concerns.

Being quite generous (IMO), I estimated P(OFL) to be .99.

Your "generosity" is irrelevant. If you propose to quantify something, you must show how the quantity was arrived at. p = 0.99 may seem like a very high probability to you, but I work in a world dominated by component reliability requirements along the lines of p = 0.99999 -- five "nines." And I get to see those pass through a series of combinatorics -- including Bayesian models -- that rapidly bring those down to a system reliability of around p = 0.8.
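That kind of erosion can be sketched in a few lines: the reliability of a series system is the product of its component reliabilities. The component count below is a hypothetical figure chosen to show how "five nines" parts can still combine down to roughly p = 0.8; it is not a number from the post.

```python
# Series-system reliability: every component must work, so the
# system probability is the product of the component probabilities.
p_component = 0.99999          # five "nines" per component
n_components = 22_000          # hypothetical part count (illustrative)

p_system = p_component ** n_components
print(round(p_system, 3))      # ≈ 0.80 -- five nines, eroded
```

The point is not the exact count but the compounding: multiplying many near-certain probabilities together drives the total down fast.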

One of the classic examples of a Bayesian system involves a medical test with a measured reliability of 0.99. When you apply the real Bayes model (not your silly cobbled-up travesty of it), some very intriguing posterior probabilities emerge. That's why Bayes is so powerful -- as with all good statistical models, it challenges our intuition.

But it's also why Bayes is so often misused by fringe theorists. It traffics in quantified beliefs, which leads them wrongly to conclude it can inject rigor into what is never more than simply a statement of belief. Bayes can accurately show you the effect of information upon belief, but it does not test the truthfulness either of the information or the belief.

I suspect that you really want to know how I estimated P(H).

Naturally. You have a habit of telling us you'll explain where all these "estimates" come from, but never doing it.

But more importantly we really want to know whether Dr. Hoerl endorses the part of your model where you just make up the numbers you put into it. As I have explained many times to you, the strength of Bayesian reasoning is no greater than the validity of the priors. If you simply invent them, however "generously," then there simply is no strength.
 
Just to clarify, if two numbers add up to 1 and you make up one of the numbers, you are actually making up both of them. You can't just say that you "estimated" (read: made up) one number, but everything else is calculated.
Monza,
- You're right. In effect, I am "making up" both. But, at the same time, if I make up the first, I can calculate what the other one must be.
- And obviously, what I "make up" in your terminology, is "estimate" in mine.
 
In effect, I am "making up" both. But, at the same time, if I make up the first, I can calculate what the other one must be.

This is very wrong on several levels.

First, Bayes deals only in probability. Statistical probability by definition cannot determine what "must be" because it is precisely the mathematical manipulation of uncertainty. It can in no way produce certainty. Determination of what "must be" is the purview only of deductive reasoning. You don't have the requisite facts in your case to address this with that mode of reasoning.

Second, if you propose to use Bayes to determine actual probability, then your priors must be real data, not simply arbitrary numbers. There is no magical branch of mathematics that turns arbitrary numbers into fact.

Third, if your priors are simply quantified belief and not real data, then at most Bayes will tell you the effect of information upon that belief. It does absolutely nothing to test the validity of either the information or the belief.
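That third point can be shown numerically: run the same evidence through Bayes' theorem with three different invented priors, and the conclusion simply tracks the invention. The likelihoods below are arbitrary placeholders, not values from the thread.

```python
# Garbage in, garbage out: Bayes' theorem dutifully returns
# whatever the invented prior dictates.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

for prior in (0.01, 0.5, 0.99):        # three made-up priors
    print(prior, round(posterior(prior, 0.8, 0.4), 3))
```

Identical evidence, three wildly different posteriors: the machinery updates the belief, but it cannot tell you whether the starting belief was worth anything.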

And obviously, what I "make up" in your terminology, is "estimate" in mine.

Nonsensical equivocation. There is a clear distinction between estimation and invention. Estimation allows for uncertainty but is based on objective fact insofar as that fact can be determined. There must be some fact and articulable reasoning in order for it to be called estimation. Simply pulling a number out of your kiester and calling it "generous" is invention, not estimation.
 
Monza,
- You're right. In effect, I am "making up" both. But, at the same time, if I make up the first, I can calculate what the other one must be.
- And obviously, what I "make up" in your terminology, is "estimate" in mine.

-And an "outright lie" in the terminology of the intellectually virtuous. :)
 
I thought "estimates" had to have something to do with data or experience by definition.
 
This is very wrong on several levels.

(snip)
Second, if you propose to use Bayes to determine actual probability, then your priors must be real data, not simply arbitrary numbers. There is no magical branch of mathematics that turns arbitrary numbers into fact.

(snip)

On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

Charles Babbage

I think Mr Babbage would find jabba's ideas very confusing indeed...
 
js,

- Yes. It is Dr. Hoerl.

- We're set for next Wed.

- For Dr. Hoerl's sake, I probably shouldn't advertise just where we're meeting.
Sure. That is unnecessarily prudent, but prudent nonetheless.

But can you guess with whom I am about to communicate via email?
 