One objection to "the" Bayesian program is that, however well-oiled the machine, it runs on nothing but human intuition as input.
Technically it's an objection, but that doesn't make it true.
Let me talk you through an application of Bayesian probability to computer vision; feel free to point out where human intuition kicks in.
We want to find all of the grass in a photo.
For tractability, we model the image as a Markov random field in which each pixel is connected to its nearest neighbours.
We set the unary potential of each pixel to the probability that a pixel of that colour, in that location, is grass, estimated from the frequency of such occurrences in a training set with labelled ground truth.
We form the pairwise links between pixels similarly, learning how often two pixels with a given colour difference between them, in a given location, share the same label.
So the pairwise links look like:
[latex] P_{A_s B_t} = \frac{P(A_s, B_t)}{P(A_s)\,P(B_t)} [/latex]
and the unary terms:
[latex] U_{A_s} = P(A_s) [/latex]
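To make the learning step concrete, here is a minimal sketch of estimating both terms as frequency ratios. The data layout is my own invention for illustration: each training image is a grid of (colour bin, label) pairs, with labels 0 = not-grass and 1 = grass; the post itself doesn't specify a representation.

```python
# Sketch (assumed data layout, not from the post): estimate the unary and
# pairwise terms as frequency ratios over a labelled training set.
from collections import Counter
from itertools import product

# Hypothetical labelled training data: tiny images as (colour_bin, label) grids.
train = [
    [[(0, 1), (0, 1)], [(1, 0), (0, 1)]],
    [[(1, 0), (1, 0)], [(0, 1), (1, 0)]],
]

label_counts = Counter()   # counts for P(A_s): each (colour, label) occurrence
pair_counts = Counter()    # counts for P(A_s, B_t): neighbouring-pixel pairs
single_total = 0
pair_total = 0

for img in train:
    h, w = len(img), len(img[0])
    for r, c in product(range(h), range(w)):
        label_counts[img[r][c]] += 1
        single_total += 1
        for dr, dc in ((0, 1), (1, 0)):  # right and down neighbours only,
            if r + dr < h and c + dc < w:  # so each edge is counted once
                pair_counts[(img[r][c], img[r + dr][c + dc])] += 1
                pair_total += 1

def unary(a):
    """U_{A_s} = P(A_s), estimated by frequency."""
    return label_counts[a] / single_total

def pairwise(a, b):
    """P_{A_s B_t} = P(A_s, B_t) / (P(A_s) P(B_t))."""
    return (pair_counts[(a, b)] / pair_total) / (unary(a) * unary(b))
```

A pairwise value above 1 means the two (colour, label) states co-occur more often than independence would predict, which is exactly the correlation the links are meant to carry.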
We then look for the labelling that maximizes:
[latex] \prod_{s} \left( U_{A_s} \times \prod_{t > s} P_{A_s B_t} \right) [/latex]
Here t ranges over the neighbours of s with t > s, so each edge is counted once. You can normally find the maximum by running graph-cuts or TRW-S, provided the problem is sufficiently tractable.
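Graph-cuts and TRW-S are the serious tools here; as a much weaker but easy-to-read stand-in (my substitution, not what the post recommends), iterated conditional modes greedily maximizes the same product objective one pixel at a time, working in log space to avoid underflow. The potentials in the demo are hypothetical.

```python
# Sketch: iterated conditional modes (ICM), a greedy coordinate-ascent
# substitute for graph-cuts/TRW-S on the objective
#   prod_s ( U(x_s) * prod_{neighbours t} P(x_s, x_t) ).
import math

def icm(unary, pairwise, labels, init, iters=10):
    """Greedily relabel each pixel to maximize its local log-score."""
    h, w = len(init), len(init[0])
    x = [row[:] for row in init]
    for _ in range(iters):
        changed = False
        for r in range(h):
            for c in range(w):
                def score(lab):
                    s = math.log(unary(r, c, lab))
                    for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < h and 0 <= cc < w:
                            s += math.log(pairwise(lab, x[rr][cc]))
                    return s
                best = max(labels, key=score)
                if best != x[r][c]:
                    x[r][c] = best
                    changed = True
        if not changed:  # local maximum reached
            break
    return x

# Hypothetical demo: a noisy observation with one outlier pixel.
obs = [[1, 1, 0], [1, 1, 1], [1, 1, 1]]
u = lambda r, c, lab: 0.8 if lab == obs[r][c] else 0.2
p = lambda a, b: 2.0 if a == b else 0.5  # neighbours prefer to agree
smoothed = icm(u, p, [0, 1], obs)
```

In the demo the lone disagreeing pixel gets flipped, because the two agreeing neighbours outweigh its unary evidence; unlike graph-cuts, though, ICM only finds a local maximum.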
Now, this is without question an exclusively Bayesian method: I'm talking about the probability that a single pixel in a single image is grass, an idea which just doesn't make sense in the old frequentist framework.
But it is still an objective method (no intuition used here) for finding the closest model from a set of models and assigning a confidence to it.