So what am I allowed to say? Am I allowed to say “if”? “When”? “Evaluated at”?
Any of those options indicates that you’re assuming, at least provisionally and in the “mathematical” sense you identified in another post, that H is true. It doesn’t mean that you believe H, or that you’re committing yourself to act as if H were true. The assumption is purely for argument’s sake.
This type of assumption is as if I’m an official for a city and, because earthquakes sometimes occur there, I design the building code to require a certain level of earthquake protection in building structures. That doesn’t mean, however, that I’m saying an earthquake WILL occur. Further, if no earthquake occurs, I’m not liable for having demanded too much from builders.
This type of assumption is also evident in "devil's advocate" arguments. The arguer could be quite sure the assumption is false, but s/he makes it for the sake of showing its implications.
I’ve described a simple hypothetical. Put aside your issues regarding testing, and just answer it directly. I’m not talking about testing; I’m just talking about inferences.
Before I can address that, we have foundational questions (raised above) to answer about confidence & its difference, if any, from probability. The only way I know to talk about statistical inference involves probabilities of hypotheses. Probability as inference obeys laws & can be discussed; confidence (in your as-yet undefined sense) as inference I don’t know how to discuss meaningfully.
Well, they are statements about the hypothesis. They are not statements about the probability of the hypothesis.
Well, when the data neither demonstrate H nor falsify H, we can’t say whether H holds. So, if we can’t even speak of Pr(H given data), then what can we say?
As for how I would try to prevent fallacious reasoning, I would be sure to go through the precise mathematical formulation, and emphasize that validity flows from the mathematical foundation.
If, as I believe you hold, statistical inference involves assigning confidence to hypotheses (or is it to people?), then does confidence get included in this “precise mathematical formulation?”
The full statement is that if 0<P(A),P(B)<1, and P(B|A)>P(B), then P(A|B)>P(A).
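For completeness, here is a one-step sketch of why the full statement holds; it is just Bayes’ theorem applied to the stated conditions:

```latex
% Given 0 < P(A), P(B) < 1 and P(B|A) > P(B):
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)} > \frac{P(B)\,P(A)}{P(B)} = P(A).
```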
Good point.
That makes no sense. B supports A regardless of whether C does.
The degree of statistical support that B gives A can only be judged in relation to the degree B supports C. But I’m seeing now that this, too, depends on unanswered questions about “support.” Because I believe statistical “support,” or statistical evidence, in favor of H is anything that increases Pr(H), I can talk meaningfully about support. But you, with your emphasis on “confidence,” may have something completely different in mind.
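To make that probability-raising reading of “support” concrete, here is a minimal sketch with made-up numbers (the prior and likelihoods are purely illustrative, not from this exchange):

```python
# "Support" read as probability-raising: evidence E supports H exactly when
# P(H | E) > P(H). All numbers below are hypothetical, for illustration only.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' theorem for a binary hypothesis H."""
    marginal = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / marginal

prior = 0.30
post = posterior(prior, p_e_given_h=0.8, p_e_given_not_h=0.4)
print(round(post, 3), post > prior)   # 0.462 True -> E raises Pr(H), so E supports H
```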
“Guess” refers to issues of fact, not choices in general.
Any rational choice depends on (imperfect) knowledge of issues of fact. E.g., if I have to choose whether to invest in Stock A, I will gather & use as much information as possible about whether Stock A will appreciate or depreciate. If I have no knowledge about those issues of fact, as in the choice of alpha, I am guessing, rather than choosing rationally.
“Confidence” is a general term. “Bayesian confidence” is a type of confidence, and it does follow rules.
I ask again: Does your confidence in general follow rules? Or only Bayesian confidence?
Except that that does not do what you claimed it does, nor what you are now claiming. Earlier you said it’s neutral, but now you’re saying it supports Ho.
That was a different example. Art, we need to focus on foundational issues first. What is confidence, does it obey rules, etc.
Probability is what’s studied in probability theory. Confidence is not.
You’ve said more about what confidence is not than what it is.
If I say that I trust someone, am I saying something about that other person? Or myself? Or both?
Good question; I think it leads in a potentially productive direction.
Humans experience (elements of) the world in what is often called the subject-object relationship. The subject is usually the one who acts or experiences; the object is the one that is acted upon or is experienced. Subjects & objects only exist together; without subjects there are no objects, & vice versa. Confidence, as a function within the s-o relationship, applies subjectively to humans, & objectively to hypotheses. The answer to your question is BOTH.
The s-o relationship “happens” in a variety of “aspects,” or types of properties and laws: quantitative, logical, biological, legal, ethical, certitudinal, and others. Confidence, trust, certainty, reliance, etc. are all functions within the certitudinal aspect. They involve the degree of certainty subjects have in objects. Without objects, there’s nothing to be certain about. Without subjects, there’s no one to be certain.
I could go on...
Or if he rejects the hypothesis, and the hypothesis is correct.
True. Either way, once someone has concluded a state (true or false) of a hypothesis, they are wrong if the other state (false or true) actually holds for the hypothesis. So the probability that they are wrong is the probability of that other state.
No, the focus was on the specific value of .05, and that specific value is not a feature of hypothesis testing.
Well, I guess we were not communicating, because I was talking about alpha.
Or you could have a situation where P(Ho)=1 but p comes out at .001 or less. Such a situation will happen only .1% of the time. If you’re willing to have it happen that frequently, then you should set alpha at .001.
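One quick way to see that “.1% of the time” figure: when Ho is true and the test statistic is continuous, the p-value is (approximately) uniform on (0, 1), so p ≤ .001 occurs in roughly 0.1% of repeated experiments. Here is a rough simulation sketch; the one-sample t-test setup is my own illustrative choice, not anything from this exchange:

```python
# Simulate many experiments in which Ho (mean = 0) is exactly true and record
# how often the p-value falls at or below .001. Setup is illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n = 100_000, 30
pvals = np.empty(n_sims)
for i in range(n_sims):
    x = rng.normal(loc=0.0, scale=1.0, size=n)          # data generated under Ho
    pvals[i] = stats.ttest_1samp(x, popmean=0.0).pvalue
print(np.mean(pvals <= 0.001))                           # close to 0.001, i.e. ~0.1%
```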
You didn’t comment on the more important part of what I was saying in “For instance, in many applications, p=0.03 and Pr(Ho|data)=.20, say, could be realized given the same data. Mistaking p for Pr(Ho|data) leads here to an unjustified strength of belief that Ho is false. That strength of belief could, in turn, support a decision to bet 7 to 1 against Ho, whereas the max justified bet against Ho would be 4 to 1.” Maybe that is because you doubt probabilities apply to hypotheses at all (even though you said here “P(Ho)=1”, but perhaps just to participate in the discussion)?
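On the betting arithmetic in that quoted passage: fair odds against Ho are (1 − Pr(Ho|data))/Pr(Ho|data), so Pr(Ho|data) = .20 caps the justified bet at 4 to 1, while treating p = .03 as if it were Pr(Ho|data) would make odds as steep as 7 to 1 (indeed up to about 32 to 1) look acceptable. A quick check of those figures, as my own sketch of the quoted numbers:

```python
# Odds against Ho implied by a given probability that Ho is true.
def odds_against(p_ho):
    return (1 - p_ho) / p_ho

print(odds_against(0.20))   # 4.0  -> the "4 to 1" maximum justified bet
print(odds_against(0.03))   # ~32  -> mistaking p=.03 for Pr(Ho|data) makes even
                            #         a 7-to-1 bet against Ho look safe
```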
It’s impossible “to say whether someone is right for interpreting inductively a p-value (say) in such and such a way?” I’m surprised you would say that, and I wonder about the reason. Is that because you believe there are no norms for induction, or the only norm is complete freedom, or what?