
10/10 scale

I don't see the meaning in your claim, "there are no real odds". If there are no real odds, how do casinos make a profit?

Casinos make money by having better information than other bettors. You acknowledge that yourself.

In some cases, like fair dice and properly balanced roulette wheels, our knowledge about future events is incomplete but the knowledge we do have is extraordinarily precise. We don't know what result will come up...

but we know the probability that any given result will come up very precisely.

There is no probability. That's the claim that was made above. With a fair die, it's still either going to come up an ace or it isn't, and a skilled manipulator can make it do so at will. Alternatively, I could roll a die ("fairly"), take a high-speed video tape of it and do an analysis of the bounce pattern by computer and predict how it will land. It's not a "random" event, but an event about which we have little knowledge of the details.
 
Casinos make money by having better information than other bettors. You acknowledge that yourself.

I'm still in the dark as to what you mean by "there are no such things as odds", then. Maybe it would help if you defined what you think "odds" means, because it's possible that you are denying the existence of something other than what I think you are.

There is no probability. That's the claim that was made above. With a fair die, it's still either going to come up an ace or it isn't, and a skilled manipulator can make it do so at will. Alternatively, I could roll a die ("fairly"), take a high-speed video tape of it and do an analysis of the bounce pattern by computer and predict how it will land. It's not a "random" event, but an event about which we have little knowledge of the details.

I agree with each individual statement of fact, but I can't see how they add up to an argument.

I may be misunderstanding you, but it seems like all you are saying is that there is no spooky ideal randomness involved, which I thought went without saying. We agree that these are just complex systems that generate results which are highly predictable in their frequency over time but very difficult to predict individually, right?

Now how do you get from that to the really kooky idea, that unknowns must be treated as 50/50s? I've already provided an argument that knocks that idea down, the one with the two different worlds in which people are running races.

If you have to bet on one of two unknowns you should be indifferent as to which one you bet on, but that is not at all the same thing as thinking that the odds are 50/50.

Because it allows us to represent aspects of probability theory and to solve other problems with extreme (and remarkable) accuracy. In fact, Bayesian probability theory is probably the single most powerful investigative tool that modern scientists have at their disposal, across all fields from aeronautics to zoology.

That's like asking "why should we accept written language if it doesn't represent pitch accurately?"

That sounds pretty impressive. What exactly does this Bayesian probability business have going for it that lets it do all this wonderful stuff that regular boring old probability and statistics can't?
 
I'm still in the dark as to what you mean by "there are no such things as odds", then. Maybe it would help if you defined what you think "odds" means, because it's possible that you are denying the existence of something other than what I think you are.

Under the Bayesian framework -- and under a long-standing philosophical tradition that dates back long before Bayes -- there are no such things as "random" events. An event will either occur or not. If you like, you can think about it in Calvinist religious terms; God has determined the path that the universe will take, and not a raindrop falls but that He has determined where and when. If we knew the mind of God, we would know whether or not any particular roulette wheel spin would land on red. As I said, this philosophical tradition goes back long before Bayes and in fact before probability.

We don't however, need to invoke the miraculous. In almost every area of inquiry (with the possible exception of quantum mechanics), any apparently "random" process can be shown to be at least semi-deterministic and if we had better information, we could predict the outcome of that process; in a limiting case, we would know beforehand what happens. Neither dice nor roulette wheels are random -- they are physically as deterministic as a billiard shot. It's just that we're not that good at predicting them on the fly.

The standard non-Bayesian philosophy, "frequentism," states simply that the probability of a repeated event is the number of times that the event occurs divided by the number of trials. (For strict formalism, throw in a limit process.) For example, since a die can be rolled repeatedly, we can say that we rolled a die 60,000 times, got an ace 10,000 times, and hence the probability is 1/6. The obvious flaw, of course, is that a one-off event has no probability in this framework. If we talk about an asteroid having a 1% chance of hitting the earth, there aren't an infinite number of identically situated asteroids of which one in one hundred will hit the earth.
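The frequentist recipe above is easy to sketch in a few lines of Python (the simulation here is illustrative, not from the thread): the estimated probability is just aces divided by trials, and it drifts toward 1/6 as the trial count grows.

```python
import random

def frequentist_probability(trials, seed=0):
    """Estimate P(ace) for a fair die as (# aces) / (# trials)."""
    rng = random.Random(seed)
    aces = sum(1 for _ in range(trials) if rng.randint(1, 6) == 1)
    return aces / trials

# The estimate wanders toward 1/6 as the number of trials grows.
print(frequentist_probability(60_000))
```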

The Bayesian framework handles things differently by accepting that probability is not a statement of likelihood, but a statement of knowledge. The likelihood of any event happening is either one or zero -- either it will happen, or it won't. A Bayesian probability of 1% is a recognition that a rational observer, in possession of all the available information at the time the computation was made, would consider the event to be unlikely, and specifically to be sufficiently unlikely that he would neither expect to win nor to lose money if offered a bet at 99:1.
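That break-even claim can be checked with a one-line expected-value calculation (a sketch, using a hypothetical unit stake): at a probability of 1% and odds of 99:1, the bet's expected profit is zero.

```python
def expected_value(p_win, odds_against, stake=1.0):
    """Expected profit: a win pays stake * odds_against, a loss forfeits the stake."""
    return p_win * stake * odds_against - (1 - p_win) * stake

# At p = 0.01 and 99:1 odds, the bettor breaks even (expectation ~ 0).
print(expected_value(0.01, 99))
```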

Now how do you get from that to the really kooky idea, that unknowns must be treated as 50/50s? I've already provided an argument that knocks that idea down, the one with the two different worlds in which people are running races.

Your first "world" does not exist. There is no situation in which two runners are equally likely to win the next race. Certainly, we could create a frequentist situation where two runners are likely to evenly split the next dozen races or so. But the outcome of the next race is determined -- perhaps one of the racers hasn't stretched out well enough, or ate one too many forkfuls of pasta, or something.

That sounds pretty impressive. What exactly does this Bayesian probability business have going for it that lets it do all this wonderful stuff that regular boring old probability and statistics can't?

It lets you assess the "probability" of an unrepeatable event.
 
Suppose two people are going to run a race.

In World #1, I happen to know that the two runners are so evenly matched that the race is a 50/50 proposition.

In World #2, I have no idea how good each runner is.

That's an important difference. In World #1, just as one example, I would be rational to automatically take any bet on either runner that offers favourable odds. In World #2 I would not be rational to automatically do so.

Knowing that things are 50/50 is epistemologically distinct from not having any idea what the odds are.
One of the ways you can think about this is that frequentist statistics describes what happens when you make runner 1 and runner 2 race each other again and again.

However, Bayesian statistics 'samples' from the set of all possible worlds: if you made two arbitrary runners race, then picked another pair of arbitrary runners and raced them, and did the same again and again, you would find that the frequency of runner 1 winning rather than runner 2 tends to 1/2.

This has important side effects. While the probabilities of runner 1 and runner 2 are hard-coded into the frequentist model and not subject to change, under the Bayesian viewpoint increased information reshapes the sampling space of possible worlds, changing the probability distributions.
 
Under the Bayesian framework -- and under a long-standing philosophical tradition that dates back long before Bayes -- there are no such things as "random" events. An event will either occur or not. If you like, you can think about it in Calvinist religious terms; God has determined the path that the universe will take, and not a raindrop falls but that He has determined where and when. If we knew the mind of God, we would know whether or not any particular roulette wheel spin would land on red. As I said, this philosophical tradition goes back long before Bayes and in fact before probability.

We don't however, need to invoke the miraculous. In almost every area of inquiry (with the possible exception of quantum mechanics), any apparently "random" process can be shown to be at least semi-deterministic and if we had better information, we could predict the outcome of that process; in a limiting case, we would know beforehand what happens. Neither dice nor roulette wheels are random -- they are physically as deterministic as a billiard shot. It's just that we're not that good at predicting them on the fly.

No arguments so far.

The standard non-Bayesian philosophy, "frequentism," states simply that the probability of a repeated event is the number of times that the event occurs divided by the number of trials. (For strict formalism, throw in a limit process.) For example, since a die can be rolled repeatedly, we can say that we rolled a die 60,000 times, got an ace 10,000 times, and hence the probability is 1/6. The obvious flaw, of course, is that a one-off event has no probability in this framework.

So in their terms probability is a measurement based on past events, right? Obviously that's a limited way to define probability.

If we talk about an asteroid having a 1% chance of hitting the earth, there aren't an infinite number of identically situated asteroids of which one in one hundred will hit the earth.

As I understand it such statements are about the margin of error in astronomical observation. The motion of asteroids is deterministic, as is the motion of the earth, but our measurements are imperfect. So that statement means "that asteroid could have a variety of trajectories for all we know, and about 1% of the possible trajectories will intersect the earth's".

The Bayesian framework handles things differently by accepting that probability is not a statement of likelihood, but a statement of knowledge. The likelihood of any event happening is either one or zero -- either it will happen, or it won't. A Bayesian probability of 1% is a recognition that a rational observer, in possession of all the available information at the time the computation was made, would consider the event to be unlikely, and specifically to be sufficiently unlikely that he would neither expect to win nor to lose money if offered a bet at 99:1.

This too sounds sane.

Your first "world" does not exist. There is no situation in which two runners are equally likely to win the next race. Certainly, we could create a frequentist situation where two runners are likely to evenly split the next dozen races or so. But the outcome of the next race is determined -- perhaps one of the racers hasn't stretched out well enough, or ate one too many forkfuls of pasta, or something.

Okay, I see where you went wrong. You misinterpreted what I said about World One to mean that I was saying that the outcome of the race would be non-deterministic. That's not necessary at all. All that is necessary is that I know with great certainty that the runners are such that it is very highly likely that over time, if they ran enough such races, they would have an even number of wins each, and that nobody can predict ahead of time which individual races each of them will win.

In that situation, I should accept any bet with beneficial odds on either runner in that race.

Contrast that with a situation where I know nothing about the runners. In that situation I would not be rational to accept any bet with beneficial odds on either runner in that race.

It lets you assess the "probability" of an unrepeatable event.

How does treating unknowns as 50/50 propositions do that? I've got no problems with the rest of what you are describing, it's that one peculiar idea I want to see justified.

You seem to be arguing that this peculiar idea is true because it's part of Bayesianism, and Bayesianism is better than Frequentism, but the conclusion just doesn't follow from that premise.
 
Wow, here's another thread that I can maybe contribute to.

Original post was the idea of attitude scaling (very different from a measurement issue like temperature, or something) and asking people to respond from a 0 point, which was scaled to "no opinion", to a positive 10, no doubt but agree, to a negative 10, no doubt but disagree.

What's the problem? We're trying to elicit degrees of agreement or disagreement to particular questions, in a way that gives the respondent a choice and some room for nuance.

There is a lot of research on how people respond to scales like this. Likert-types use odd numbers, generally. I discovered that 5 scale points in one direction are not sufficient. People need to be able to go "between" verbally anchored points, apparently. They do. If you give them 5 points, some will still mark in between the 4 and 5, no matter how hard you try to anchor the "strongly" to "somewhat" to "seldom" to the numbers.

I think a 10 point scale on the positive, coupled with a "zero" option for no opinion, coupled with a 10 point scale on the negative, is a very wise approach. Ask a bunch of difficult questions, but ones that are subtle, and that would be very interesting data.

My thinking now is that these are indeed ordinal scales. So the Pearson correlation is not useful. Spearman rho or the other rank-order correlations are much more useful. That said, I have never run into a decent question set that made those two numbers different enough that I doubted the value of doing a quick r correlation and seeing where it led.

Note that I am not creating a reality out of my head, called "opinion." I'm just talking about the various ways that measurement can be applied with grace and refined with insight. There has been a staggeringly intricate amount of work on scaling and processing the results of these kinds of verbal self-report questions. I got most of it out of journals in university libraries, like Journal of Educational Measurement or Journal of Personality and Social Psychology. Those authors are required to explain their scales, pilots, findings, precedents.

-10 to 0 to 10 makes fine sense. You can always recode if you bear in mind the ordinal nature of the numbers. Relative rank order, not 0.1 bigger or 0.98 smaller. I think this is what set Dr. Adequate off.
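As a rough illustration of the Pearson-versus-Spearman point (the survey responses below are invented for the example, not real data), here is a self-contained comparison of the two correlations on ordinal -10..10 style scores:

```python
def pearson(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    """1-based ranks, with tied values given their average rank."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho: Pearson correlation applied to the ranks."""
    return pearson(ranks(x), ranks(y))

# Invented ordinal -10..10 responses to two hypothetical survey items.
item_a = [-10, -4, 0, 2, 5, 7, 10]
item_b = [-8, -5, -1, 1, 4, 8, 9]
print(pearson(item_a, item_b), spearman(item_a, item_b))
```

For monotonically related responses like these, Spearman rho is exactly 1 while Pearson stays slightly below it, which matches the post's experience that the two rarely disagree much on decent question sets.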
 
Okay, I see where you went wrong. You misinterpreted what I said about World One to mean that I was saying that the outcome of the race would be non-deterministic. That's not necessary at all. All that is necessary is that I know with great certainty that the runners are such that it is very highly likely that over time, if they ran enough such races, they would have an even number of wins each, and that nobody can predict ahead of time which individual races each of them will win.
But if the situation is deterministic, it is always theoretically possible to predict the outcome given sufficient information. The frequentist assumptions boil down to no one having any more relevant information than you.

In that situation, I should accept any bet with beneficial odds on either runner in that race.
Yes.

Contrast that with a situation where I know nothing about the runners. In that situation I would not be rational to accept any bet with beneficial odds on either runner in that race.
No. Let's apply your frequentist assumptions to the Bayesian situation. Assume you're betting against someone who also knows nothing about the runners; you both have no idea what they look like or can do, they're just arbitrary labels on a sheet, numbers 1 and 2, and as with the frequentist situation you're going to just keep on betting long enough for the long-term trends to emerge and dominate the situation.

The important difference here from the frequentist situation is that in each race you still know nothing about the runners: they might be new runners, they might not, they might have just switched places.

Now by symmetry, runner 1 is just as likely to win any race as runner 2. If you pick any faster runner he's just as likely to be labelled runner 1 as he is to be labelled runner 2.

And this is sufficient to guarantee that if your equally ignorant friend starts betting on one of the runner labels (1 or 2) winning with better than 50:50 odds you should take him up on that.

With frequentist statistics, if you are betting on a single race, you can still choose your behaviour to maximise the long-term expectation, and that's exactly what you do in the Bayesian situation. You know nothing about the two runners, your friend knows nothing about the two runners, but he's making a stupid bet so you can still take advantage of it.
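The symmetry argument above sketches nicely as a simulation (the uniform "hidden speed" model is an assumption made up for illustration): every race is fully determined by the runners' abilities, yet an arbitrary label wins almost exactly half the time.

```python
import random

def label_one_win_rate(races, seed=0):
    """Race pairs of runners with hidden speeds under arbitrary labels.

    Each race draws two runners with unknown (randomly chosen) speeds and
    assigns labels 1 and 2 arbitrarily; by symmetry, label 1 should win
    about half the time even though every single race is deterministic.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(races):
        speeds = [rng.random(), rng.random()]  # hidden abilities
        rng.shuffle(speeds)                    # arbitrary labelling
        if speeds[0] > speeds[1]:              # label 1 wins this race
            wins += 1
    return wins / races

print(label_one_win_rate(100_000))
```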
 
But if the situation is deterministic, it is always theoretically possible to predict the outcome given sufficient information. The frequentist assumptions boil down to no one having any more relevant information than you.

Yes. So?

No. Let's apply your frequentist assumptions to the Bayesian situation. Assume you're betting against someone who also knows nothing about the runners...

You just changed my scenario into something very different. Please don't do that.

I want to talk about the case where you don't know you are betting against someone equally ignorant. That was the whole point.
 
Yes. So?

You just changed my scenario into something very different. Please don't do that.

I want to talk about the case where you don't know you are betting against someone equally ignorant. That was the whole point.
But the frequentist model is basically a statement that you are not betting against someone less ignorant than you. If you throw that away, of course you can't talk about the expectation in the same way.

That's it. Even if the roulette wheel comes up black half the time, you wouldn't take a bet from a man who already knew what the outcome of the next spin was going to be, regardless of what odds he offered.
 
But the frequentist model is basically a statement that you are not betting against someone less ignorant than you. If you throw that away, of course you can't talk about the expectation in the same way.

Sorry, you might as well be speaking Mandarin for all the sense I can make of that.

How on earth can any model, frequentist or otherwise, be based on whether you are hypothetically betting with an uninformed human or an informed human? What am I "throwing away", and why does it matter to the question of why you would equate total uncertainty with a 50/50 proposition?

That's it. Even if the roulette wheel comes up black half the time, you wouldn't take a bet from a man who already knew what the outcome of the next spin was going to be, regardless of what odds he offered.

This comes across to me as a total non sequitur, so I'd appreciate it if you tried again to explain why this is relevant. Like a lot of things DrKitten has said it seems obviously true but equally obviously irrelevant to the question I am trying to get an answer to.

I may well be missing something or just being stupid today, but for the life of me I can't see it.
 
Sorry, you might as well be speaking Mandarin for all the sense I can make of that.
I'll try again, but I've probably had too much coffee to be entirely coherent.

Hopefully you agree with the idea that it is possible to know, in advance, the outcome of an event we are assigning probability to (like the roulette wheel).

Now if you do know the outcome of the event already (it's going to be black) the probabilities you assign to the event, P(It will be black)=1, will be different to the probability you would naively assign only knowing it is a fair wheel where P(It will be black)=0.49ish.

Hence in a deterministic universe, probabilities are a function of how much you know and so (big jump here) can be used to express how certain you are that an event will occur/did occur given your prior information.

Now all of this means that Bayesian probabilities are not bound to the repetition of a certain scenario (a particular horse race or a single roll of the dice) but to how much you know about the scenario. Someone with different information will assign different probabilities to the same event.
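A minimal sketch of "probability is a function of how much you know", assuming a European wheel so that the "0.49ish" above becomes concretely 18/37:

```python
def p_black(known_outcome=None):
    """P(next spin is black) as a function of the observer's knowledge.

    Assumes a European wheel: 18 black pockets out of 37. An insider who
    already knows the outcome assigns probability 1 or 0; a naive observer
    who only knows the wheel is fair assigns 18/37.
    """
    if known_outcome is not None:      # the fully informed observer
        return 1.0 if known_outcome == "black" else 0.0
    return 18 / 37                     # the naive "fair wheel" observer

print(p_black())            # about 0.486
print(p_black("black"))     # 1.0 for the insider
```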

The same thing happens in frequentist statistics, but it is all wrapped away in the formulation of the problem: one statistician coming to the race will assign a different distribution over the horses winning than another statistician with differing information.

When we're assigning probabilities to different events, then to be consistent we have to assign the same probabilities to events about which we have the same information; it shouldn't make any difference if the events are switched. Hence the 50:50 odds on an arbitrary person winning the 2-man race.

If you're wondering what Bayesian statistics tells you about gambling, what it says, and what frequentist statistics doesn't make a fuss of, is: don't bet with people who know more than you, they have better odds. :D

I hope that helps.
 
I want to talk about the case where you don't know you are betting against someone equally ignorant. That was the whole point.
Ah. I didn't realize that was the point.

If you're betting against someone who knows the runners, then you aren't completely ignorant about them. Even if you were initially ignorant, you get some information about who is faster by seeing what sorts of bet the knowledgable someone is willing to make with you.
 
Okay, I see where you went wrong.

I haven't gone wrong yet. You just haven't followed the argument.

You misinterpreted what I said about World One to mean that I was saying that the outcome of the race would be non-deterministic. That's not necessary at all. All that is necessary is that I know with great certainty that the runners are such that it is very highly likely that over time, if they ran enough such races, they would have an even number of wins each, and that nobody can predict ahead of time which individual races each of them will win.

I.e., you're assuming a frequentist definition of probability.

Now what happens if we know that this race can only be raced once, so the idea of "ran enough races" doesn't make sense?

Under the frequentist framework, you can't get any probability at all.

How does treating unknowns as 50/50 propositions do that?

Because in Bayesian statistics, all events have (prior) probability distributions associated with them (and the associated information you have about them). If you have no information, then the probability distribution must, by definition, be the minimally informative one. The minimally informative one (for a binary choice) can be shown mathematically to be 50/50. (You can see that intuitively by noting that the Shannon information, a measure of the uncertainty contained in a distribution, is maximized at p=0.50. If you assume any distribution other than a 50/50 split, you're making an unwarranted assumption that biases the inference results.)
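The entropy claim is easy to verify numerically; a small sketch using nothing beyond the standard definition of binary Shannon entropy:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a binary event with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Uncertainty peaks at the 50/50 prior and falls off on both sides.
print(entropy(0.5), entropy(0.3), entropy(0.1))
```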


You seem to be arguing that this peculiar idea is true because it's part of Bayesianism, and Bayesianism is better than Frequentism, but the conclusion just doesn't follow from that premise.

Yes, it does. Bayesianism necessitates that all events have prior probabilities, and the maximally uninformative prior is the one that you use if you are maximally uninformed.
 
How on earth can any model, frequental or otherwise, be based on whether you are hypothetically betting with an uninformed human or an informed human? What am I "throwing away", and why does it matter to the question of why you would equate total uncertainty with a 50/50 proposition?

Because you can get information when someone accepts (or not) your bet.

This is one reason that bookies adjust the odds (and option pricing in the market shifts); if the "smart money" consistently bets one way or another, then obviously the odds you're offering aren't balanced.

Perhaps an example might help. Let's pretend that you're a bookmaker, and you need to offer odds on a game between the Florin Brutes and the Guilder Eels, two teams about which you know nothing save the names; you don't even know the rules of the game they're going to play. At what do you set your initial odds?

Let's say you set the initial odds at 100:1 in favor of Florin. As soon as you do this, your shop is overrun by punters wanting to get bets down on Guilder, because they know easy money when they see it. Not being a total idiot, you will start to lower the odds, perhaps to 50:1, which will reduce the flood somewhat. Bayes' theorem provides a firm mathematical formula on which to base your new odds, based on the old odds and the amount of information you've gained through the betting.
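One way to sketch the bookmaker's update is Bayes' rule in odds form, posterior odds = prior odds × likelihood ratio. The likelihood ratio of 1.5 per observed Guilder bet below is an invented punter model, not anything the story above specifies:

```python
def update_odds(prior_odds, likelihood_ratio, n_bets=1):
    """Bayes' rule in odds form: divide the Florin odds by the likelihood
    ratio once for each bet observed on Guilder.

    prior_odds: odds in favour of Florin (100.0 means 100:1).
    likelihood_ratio: how much more likely each observed Guilder bet is
    if Guilder really is the stronger side -- an assumed model of the
    punters, made up for this illustration.
    """
    posterior = prior_odds
    for _ in range(n_bets):
        posterior /= likelihood_ratio
    return posterior

# Ten punters backing Guilder drag the 100:1 line down to roughly 1.7:1.
print(update_odds(100.0, 1.5, n_bets=10))
```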

But that's a key aspect of Bayesian statistics. Everything has a prior probability. If you object to the idea of using 50/50 as a prior probability, what would you use instead? Because in the Bayesian formulation, everything has a prior. You can't simply refuse to set one.
 
Ah. I didn't realize that was the point.

If you're betting against someone who knows the runners, then you aren't completely ignorant about them. Even if you were initially ignorant, you get some information about who is faster by seeing what sorts of bet the knowledgable someone is willing to make with you.

That assumption negates the point of the question again.

Assume that I don't know if the person offering me the bet is knowledgeable, or misinformed. Maybe they know something I don't. Then again, maybe they have false information I don't.

I don't know which runner will win, and I don't know if the people offering me bets are better informed than me or misinformed.

In that situation, it is not rational to accept any favourable-odds bet offered to me.

Whereas if I knew that each runner was equally likely to win, it would be rational to accept any favourable-odds bet offered to me.
 
Because you can get information when someone accepts (or not) your bet.

No. See my response to 69dodge.

But that's a key aspect of Bayesian statistics. Everything has a prior probability. If you object to the idea of using 50/50 as a prior probability, what would you use instead? Because in the Bayesian formulation, everything has a prior. You can't simply refuse to set one.

That's not an argument that doing so is rational. It's just a statement that Bayesian statistics does so. I believe you when you say Bayesian statistics does so, but I don't believe it's rational.

I haven't gone wrong yet. You just haven't followed the argument.

I.e., you're assuming a frequentist definition of probability.

Now what happens if we know that this race can only be raced once, so the idea of "ran enough races" doesn't make sense?

Under the frequentist framework, you can't get any probability at all.

Actually, I hate to contradict you but it's you that haven't followed the argument.

I didn't say the runners had run a lot of races. I said that the runners were such that if they did, then they would win about half each and nobody could predict in advance which one would win any individual race.

I don't care what "ism" you call it, but in that scenario it is very definitely the case that I would be rational to take any favourable-odds bet offered to me on either runner. Even if the race is a one-off, this is still true.

Whereas if I knew absolutely nothing about the runners, and did not know that the people offering me bets were rational or well-informed, I would not be rational to do so.
 
That assumption negates the point of the question again.

Assume that I don't know if the person offering me the bet is knowledgeable, or misinformed. Maybe they know something I don't. Then again, maybe they have false information I don't.

I don't know which runner will win, and I don't know if the people offering me bets are better informed than me or misinformed.

In that situation, it is not rational to accept any favourable-odds bet offered to me.
Hmm. I think I'm back to not knowing what the point of the question is.

I understand that you're saying it's not rational to bet on either runner. I just don't understand why you're saying that. I mean, it's definitely the case that one runner or the other will win. So how can it be irrational to accept a favourable-odds bet on one, and also be irrational to accept a favourable-odds bet on the other?

If you accepted both bets simultaneously, you'd be guaranteed to come out ahead. Doesn't that seem inconsistent with both bets individually being a bad idea? How can making two bad bets be good?

I understand the desire to say "I don't have enough information to make a decision." But, in the end, you have to do something or other. Deciding not to bet is also a decision. Is it the best decision you could make? Given that you know nothing about the runners but that you do know you'll be paid more money if you win the bet than you'll pay if you lose it?

When a Bayesian says, about a (single) race between runner A and runner B, that the probability is 1/2 that A will win, he is not making any claims about the relative athletic abilities of the runners. All he's saying is that he doesn't know who will win the particular race in question. The reason he doesn't know might be that he knows the runners are evenly matched, and the reason might be that he knows nothing about the runners.

It's not true, I admit, that these two states of knowledge are equivalent for all purposes---for example, they're not equivalent when trying to decide whether to bet that A will win between 40 and 60 out of his next 100 races against B---but they are equivalent, I claim, when it comes to betting that A will win his next race against B.

The latter bet is not about whether you can correctly say who's the better runner; it's only about whether you can correctly say who will win this particular race. Knowing that they're evenly matched gives you no advantage in predicting who will win this race, as compared to not knowing anything about them. In both cases, you don't know who will win the race. What difference does it make why you don't know?
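The "accept both bets and come out ahead" step earlier in this post can be made concrete with the usual arbitrage staking arithmetic (the decimal odds of 2.2 on each side are an invented example of "favourable odds"):

```python
def arbitrage_profit(odds_a, odds_b, bankroll=100.0):
    """Split the bankroll so the payout is identical whichever runner wins.

    odds_a, odds_b: decimal odds offered on runner A and runner B.
    Returns the guaranteed profit (negative if no arbitrage exists).
    """
    inv_a, inv_b = 1.0 / odds_a, 1.0 / odds_b
    stake_a = bankroll * inv_a / (inv_a + inv_b)
    stake_b = bankroll - stake_a
    payout = stake_a * odds_a          # equals stake_b * odds_b by construction
    return payout - bankroll

# Favourable odds on *both* runners (2.2 each) lock in about 10 profit
# on a 100 bankroll, no matter who wins the race.
print(arbitrage_profit(2.2, 2.2))
```

At exactly even money on both sides (2.0 each) the guaranteed profit is zero, which is why the argument needs *favourable* odds on each runner.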
 
Hmm. I think I'm back to not knowing what the point of the question is.

I understand that you're saying it's not rational to bet on either runner. I just don't understand why you're saying that. I mean, it's definitely the case that one runner or the other will win. So how can it be irrational to accept a favourable-odds bet on one, and also be irrational to accept a favourable-odds bet on the other?

If you accepted both bets simultaneously, you'd be guaranteed to come out ahead. Doesn't that seem inconsistent with both bets individually being a bad idea? How can making two bad bets be good?

Yet again you are "answering" the question by tacking on a new assumption, that you are offered two bets, one on each runner. Once again, that negates the point of the question.

The point is, I'm questioning (refuting, actually, by the look of things) the Bayesian claim that it is rational to treat an event you know nothing about as a 50/50 proposition. Your answer avoids that point by redefining the situation so that it doesn't matter what the "real" odds are, because with a favourable bet each way you win whether the odds are 50/50 or 100/0.

I understand the desire to say "I don't have enough information to make a decision." But, in the end, you have to do something or other. Deciding not to bet is also a decision. Is it the best decision you could make? Given that you know nothing about the runners but that you do know you'll be paid more money if you win the bet than you'll pay if you lose it?

In that situation you simply don't know what your best option is, nor can you make any kind of probabilistic judgement about what your best option is likely to be.

When a Bayesian says, about a (single) race between runner A and runner B, that the probability is 1/2 that A will win, he is not making any claims about the relative athletic abilities of the runners. All he's saying is that he doesn't know who will win the particular race in question.

I know that. My point is that this is a silly thing to do, because the Bayesian does not have the information to support that claim.

In both cases, you don't know who will win the race. What difference does it make why you don't know?

I already explained this. The difference is that in one case it is rational to take any positive-expectation bet offered to you, and in the other it is not. That right there is proof that total uncertainty is not the same thing as a known 50/50 proposition.
 
I already explained this. The difference is that in one case it is rational to take any positive-expectation bet offered to you, and in the other it is not. That right there is proof that total uncertainty is not the same thing as a known 50/50 proposition.

You've said this a few times, but I don't see that you've demonstrated it.

Let's look at your two scenarios again:

1. Two runners of equal ability are facing off against each other.
2. Two runners of unknown ability are facing off against each other.

Someone offers you a favorable bet. In which circumstance is it necessarily correct to accept it?

Neither. In the first circumstance, the offerer may have privileged information that you don't, which has led him to an understanding that one runner is more likely to win than the other. In a deterministic universe such information must exist. You can get out of this by saying that you know it's impossible for anyone to gather or analyse that information, of course. In which case, if you can be certain that no one can have more information about the outcome of the race than you already have, you should necessarily accept that favourable bet in scenario 1.

Now let's look at scenario 2. Again, if the person making the bet against you has information about the race's competitors, conditions, etc. that you don't have, making the bet might be a bad idea. If you can be certain that the other party doesn't have information that you don't have, again you should accept the bet.
Why?

Let's look at it this way. Someone offers you favourable odds. You've never heard of these runners, but then neither has he. You decide, ah, I'll flip a coin, heads I take the bet, tails I don't.

Here are the possible outcomes:
You take the bet and win - C
You take the bet and lose - I
You don't take the bet, but would have won - I
You don't take the bet, but would have lost - C

In 50% of these outcomes, you made the right decision (denoted with a C). In 50% of them you made the wrong decision (denoted with an I).

In those outcomes where you made the right decision and won, your winnings were greater than your losses in those outcomes where you made the wrong decision and lost. But the two kinds of outcome are equally common.
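That tally can be checked numerically. The sketch below models total ignorance as a uniform prior on the unknown win probability (an assumption; any prior symmetric between the two runners gives the same average) and uses a made-up favourable bet of win 120 / lose 100:

```python
import random

def average_gain(trials=200_000, win=120.0, lose=100.0, seed=1):
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        p = random.random()       # unknown true chance your runner wins
        if random.random() < p:   # the race is run
            total += win
        else:
            total -= lose
    return total / trials

print(round(average_gain(), 1))  # hovers around 0.5*120 - 0.5*100 = 10
```

Flipping a coin about whether to accept, as in the text, simply halves this figure without changing its sign: across total ignorance, the favourable bet still comes out ahead on average.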

You point out that this is only true when the other party has no information that you don't have. But the same is true of both scenarios 1 and 2, and so can't be used to distinguish between them, except insofar as you have more reason to suspect that the other party might have more information than you in scenario 2 than in scenario 1.
 
Yet again you are "answering" the question by tacking on a new assumption, that you are offered two bets, one on each runner. Once again, that negates the point of the question.

The point is, I'm questioning (refuting, actually, by the look of things) the Bayesian claim that it is rational to treat an event you know nothing about as a 50/50 proposition. Your answer avoids that point by redefining the situation so that it doesn't matter what the "real" odds are, because with a favourable bet each way you win whether the odds are 50/50 or 100/0.
I'm not saying, "here's a new situation; forget about the old one." I'm saying, "here's a new situation that I think sheds some light on the old one."

The important point is (just to repeat), how can making two bad bets be good?

In that situation you simply don't know what your best option is, nor can you make any kind of probabilistic judgement about what your best option is likely to be.
But you've said that, in that situation, you know it's not rational to accept the bet. Or did you simply mean that you don't know that it is rational to accept it? What would you actually do?

I know that. My point is that this is a silly thing to do, because the Bayesian does not have the information to support that claim.
He doesn't have the information to support the claim that he doesn't know who will win the race? He certainly doesn't know who will win the race. That's not a very hard claim to support.

I already explained this. The difference is that in one case it is rational to take any positive-expectation bet offered to you, and in the other it is not. That right there is proof that total uncertainty is not the same thing as a known 50/50 proposition.
That explanation doesn't help me, because I think it is rational to take either bet and I don't know why you think it isn't.
 
