The short answer is: in statistics, you usually want to answer a question that cannot be answered. The Bayesian approach is to make an answer up; the frequentist approach is to answer a different question.
For example, suppose I toss a coin 10 times and get HHHHHHHHHH. What is the probability that it is biased? If it's a coin I just found in my pocket, I'd be pretty confident it is unbiased, while if it just came out of a Christmas cracker, I'd assume it was double-headed.
Assume for simplicity that the tosses are definitely independent, so the only question is the probability p that each one is heads. A Bayesian would of course assign different priors in the two situations, but the prior would still be made up: what is the chance that a coin from my pocket is double-headed? Worse, what is the chance that p is between 0.700 and 0.701?
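To make the made-up-ness concrete, here is a minimal sketch of how two invented Beta priors, one for the pocket coin and one for the cracker coin, lead to different posteriors after seeing HHHHHHHHHH. The particular Beta parameters are arbitrary assumptions chosen for illustration, not canonical choices; note that a point mass at p = 1 (a truly double-headed coin) cannot even be expressed as a Beta prior.

```python
# Sketch: posteriors for p under two made-up Beta priors, after HHHHHHHHHH.
# The prior parameters below are illustrative assumptions only.
from scipy.stats import beta

heads, tails = 10, 0

priors = {
    "pocket coin":  (50, 50),    # concentrated near p = 0.5
    "cracker coin": (0.5, 0.5),  # spread out towards p = 0 and p = 1
}

for name, (a, b) in priors.items():
    # A Beta(a, b) prior with binomial data gives a Beta(a+heads, b+tails) posterior.
    post = beta(a + heads, b + tails)
    print(f"{name}: P(p > 0.9 | data) = {post.sf(0.9):.4f}, "
          f"P(0.700 < p < 0.701 | data) = {post.cdf(0.701) - post.cdf(0.700):.6f}")
```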
In simple cases, hypothesis testing basically amounts to asking `what is the probability of an unbiased coin giving a result this extreme?' That's not what you want to know, but the advantage is that you can answer it. Of course, the problem now (apart from getting an answer to the wrong question) is that you need to choose sensible hypotheses to test: why did I decide in advance to test p=0.5, rather than p=0.7? But in some cases, such as this one, there's a single obvious choice, so at least you know what to do, unlike the Bayesian. (Should we assume p is uniform on [0,1]?)
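The answerable question really is easy to answer; a quick sketch for the data above, where the two-sided version counts ten tails as equally extreme:

```python
# How likely is an unbiased coin (p = 0.5) to give a result as extreme
# as 10 heads in 10 tosses?
from scipy.stats import binom

n, k = 10, 10
p_one_sided = binom.sf(k - 1, n, 0.5)  # P(X >= 10 | p = 0.5) = 1/1024
p_two_sided = 2 * p_one_sided          # counting HHHHHHHHHH and TTTTTTTTTT
print(p_one_sided, p_two_sided)        # approx 0.00098 and 0.00195
```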
Of course, there are situations where the Bayesian approach is sensible, and in which any statistician would use it. Many medical examples are like this, because the question you want answered is close to one that can be answered: `if a person A is chosen at random from a population in which the incidence of a certain disease is x%, and A is tested with a test with known rates of false positives/negatives, then given a positive result, what is the chance that A has the disease?' In the real question, A is not a random person but me; still, especially if you modify things by taking additional data into account, the random-person approximation is reasonable. For the coin question, there's no corresponding reasonable approximation by an answerable question.
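A small sketch of that computation; the incidence and error rates below are made-up numbers for illustration:

```python
# Bayes' rule for the medical example, with invented rates.
incidence = 0.01        # P(disease) for a random member of the population
sensitivity = 0.95      # P(positive | disease), i.e. 1 - false-negative rate
false_positive = 0.05   # P(positive | no disease)

# P(positive) by the law of total probability.
p_positive = sensitivity * incidence + false_positive * (1 - incidence)
# P(disease | positive) by Bayes' rule.
print(sensitivity * incidence / p_positive)  # approx 0.161
```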
Confidence intervals are confusing at first: again, they don't answer the question you want, but something different. Formally, we have a family of probability distributions: here, for each p in [0,1], there's a distribution Pr_p, corresponding to H having probability p. (Still assuming independence.) If X denotes a sequence of coin tosses, then a 99% confidence interval for p is a pair of functions a(X), b(X) such that, for any p, if X is in fact chosen randomly according to the distribution Pr_p, then the probability that a(X)<p<b(X) is at least 99%. The right way to think of this is that p is fixed and unknown, and the probability that the random interval produced surrounds p is at least 99%. This is not what you want, but it's what you get. Of course, the point is that a(X) and b(X) can be calculated from X without knowing p.
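One way to internalise the `random interval surrounds fixed p' reading is to simulate it. Here is a sketch using the exact interval built from Beta quantiles (the Clopper-Pearson construction, one standard choice); p = 0.3 is an arbitrary assumption for the simulation:

```python
# Fix p, draw X ~ Pr_p repeatedly, and check how often the random
# interval (a(X), b(X)) surrounds p.
import numpy as np
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.01):
    """Exact 99% confidence interval for p given k heads in n tosses."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

rng = np.random.default_rng(0)
p, n, trials = 0.3, 10, 100_000
intervals = [clopper_pearson(k, n) for k in range(n + 1)]  # one per possible k

ks = rng.binomial(n, p, size=trials)
covered = sum(intervals[k][0] < p < intervals[k][1] for k in ks)
print(covered / trials)  # at least 0.99; exact intervals over-cover for small n
```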
Part of the point is that there is often a single best way to calculate them: in this case (for a symmetric interval, with the allowed 1% error split equally between the two tails) it boils down to choosing b(X) as small as possible so that max_p Pr_p(b(X)<p) <= 0.005, and similarly choosing a(X) as large as possible. So you get a definite answer, just to the wrong question. The answer is still useful, because one can say in advance that 99 out of 100 times you do this, the true value will lie in the interval. But think of it as: 99 out of 100 times the interval will surround the true value.
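For the data above (ten heads out of ten), the construction collapses to a one-line computation; a sketch:

```python
# For X = HHHHHHHHHH: the upper endpoint is b(X) = 1, since Pr_p(b(X) < p)
# is then 0 for every p.  The lower endpoint a(X) is the largest p at which
# ten heads in a row still has probability at least 0.005: solve p**10 = 0.005.
a = 0.005 ** (1 / 10)
print(a)  # approx 0.589, so the 99% interval for this data is roughly (0.589, 1]
```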