eight bits
Graduate Poster
- Joined
- Sep 5, 2012
- Messages
- 1,580
OK, let's work backwards.
Pr(X/Y) changes (unless the evidence is unclear), which is the whole point of doing a Bayesian analysis. If you believe all swans are white (X), and you see a white swan (Y), Pr(X/Y) will increase.
No, not in the same problem, that is, so long as the same individual's beliefs are being represented by the probabilities discussed. Different people will typically have different beliefs, and the same person may change beliefs "between problems." Looking at the problem you pose:
Assuming that 0 < Pr(X) < 1 (you believe it to some degree, but are uncertain), then viewing a white swan can only increase your confidence that all swans are white. In your notation, that is Pr(X/Y) > Pr(X).
Neither Pr(X/Y) nor Pr(X) changes when you observe something that makes Y true. Your current degree of confidence in X, however, is no longer Pr(X), but rather Pr(X/Y). Representing that change of belief is "the whole point of doing a Bayesian analysis." What represents the change is conditioning on different events while maintaining a single, unchanging, comprehensive probability distribution over all possible states of the world, evaluated as of the beginning of the problem.
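A toy numerical sketch may make this concrete (the numbers are invented for illustration). We fix one distribution at the start of the problem; observing the white swan changes neither Pr(X) nor Pr(X/Y), only which of the two fixed numbers represents your current confidence:

```python
# Hypothetical numbers, all fixed at the start of the problem.
p_x = 0.30              # prior Pr(X): "all swans are white"
p_y_given_x = 1.00      # if all swans are white, the next swan seen is white
p_y_given_not_x = 0.80  # white swans are still common even if X is false

# Total probability of seeing a white swan.
p_y = p_y_given_x * p_x + p_y_given_not_x * (1 - p_x)

# Bayes' rule: Pr(X|Y) = Pr(Y|X) * Pr(X) / Pr(Y).
p_x_given_y = p_y_given_x * p_x / p_y

print(f"Pr(X)   = {p_x:.3f}")          # Pr(X)   = 0.300
print(f"Pr(X|Y) = {p_x_given_y:.3f}")  # Pr(X|Y) = 0.349
```

Both quantities sit in the same unchanging distribution; seeing the swan simply moves you from reporting the first number to reporting the second, and Pr(X|Y) > Pr(X) as claimed.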
There may be a temptation to denote the current probability of X as Pr(X)... but you've already used that notation for a different quantity, the prior probability of X, and in any case Pr(_) has been used for constants, not variables. So you can't use Pr(X) for something else, unless you declare "Let's call the search for more evidence a new problem." OK, you could do that.
But nature's way of telling you that that would be a bad idea is when six posters have eight ideas about which probability is being discussed, and that's what happened in this thread with the New York Lawyer, and the Apocalyptic Preachers of Old Jerusalem, and so on.
In Hans's case ...
I'd like to reserve comment on Hans's case, if you don't mind. He's just posted, and it's probably best if you and I stay with these issues for now.
For example, a racist detective may believe "Suspect A (who's black) committed the crime" has a high probability. His non-biased partner might assign a much lower probability. If they don't straighten it out, the evidence won't be interpreted right.
Different people often have different beliefs. Subjective probability represents those beliefs. So, yes, "the" probability that A committed the crime will differ depending upon whose beliefs the probability represents.
There are some problems where we expect people to agree if they have the same information, as when there is a well-understood chance set-up, like a bag of marbles. "Whether A committed the crime" has no such chance set-up. We can only expect people to agree if there is hefty shared information, and they may well disagree about how hefty any particular state of information is. The racist may see some pieces of evidence as having less bearing than his partner does.
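The difference between the two situations can be sketched in a few lines. With a shared, well-understood chance set-up, the same information forces the same number out of everyone; the bag below is hypothetical:

```python
# A bag both people can inspect: a fully shared chance set-up.
bag = {"green": 3, "red": 7}

# Anyone with this information computes the same probability of
# drawing a green marble, so disagreement has nowhere to hide.
total = sum(bag.values())
p_green = bag["green"] / total

print(p_green)  # 0.3
```

"Whether A committed the crime" offers no such bag to inspect, which is why the detectives' priors can legitimately differ.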
As The Norseman remarked:
Cool! Now where is the bag full of green marbles with regards to an historical Jesus?
There isn't any. The Christ-myther may see some pieces of evidence as having less bearing than the Christian apologist does. May?
There is no Bayesian way to broker these disagreements. There are two "right" interpretations of the evidence, right because each one faithfully represents the changing beliefs of the person whose interpretation it is. Which is "correct" depends on which person, if either, is correct, and ultimately, in the detectives' problem, whether A did in fact commit the crime. Neither of the detectives knows, nor do we. Hence the name, inference under uncertainty.
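The detectives' stalemate is easy to sketch. Even if both agreed on the evidential weight of a clue (the same likelihood ratio), their different priors yield different posteriors, and Bayes itself offers no verdict between them. All numbers below are invented for illustration:

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Update prior odds by a likelihood ratio; return the posterior probability."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Both detectives agree the clue is 4x likelier if A is guilty...
lr = 4.0

# ...but they start from different priors that A committed the crime.
biased_prior, fair_prior = 0.60, 0.10

print(round(posterior(biased_prior, lr), 3))  # 0.857
print(round(posterior(fair_prior, lr), 3))    # 0.308
```

Each posterior faithfully represents its owner's updated beliefs; nothing in the machinery says which prior was the right one to start from.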
If you think about it, it would be magical if a method for representing beliefs caused only correct beliefs to be represented. Bayes offers many wonderful things, but magic is not one of them.
It's the same hypothesis ("Will I pick a green marble out of the bag?"), but the prior probabilities differ because of the different sets of background information.
That's fine if you were talking about two different problems, each with its own background information. But you spoke of some one probability changing (skyrocketing), not differing between two problems, and what occasioned the change was an "added" detail, suggesting a change in the available information within the same problem, which is what usually happens in a Bayesian problem.
Then you did discuss two different believers (OK! Two different problems here we come), but then used the same "k" for their different prior beliefs (huh?), and ... well, it all needed to be sorted out, I thought. Hopefully, we have, or at least made a dent in it.