Unwin recently wrote a letter to the editor responding to a review of Dawkins's The God Delusion, a review that mentioned Dawkins's criticism of Unwin's Bayesian argument for the existence of God (PZ Myers helpfully quotes the relevant selection from Dawkins here). This has raised the question of the status of Bayesian arguments of this sort, so I thought I'd say something about it.
Bayesian epistemology is a noxious weed that first entered the philosophical garden through philosophy of science. Struggling to develop an account of the scientific confirmation of theories that was both adequate and formal, people began adapting the work done on Bayes' Theorem in the probability calculus to the topic at hand. From there it has spread all over the place, with a number of variations. It's important to keep in mind that this sort of adaptation of mathematical Bayesianism is not the same as mathematical Bayesianism itself. The strengths and weaknesses of Bayesian statistics don't automatically translate into the strengths and weaknesses of every adaptation of it to a different field, because every adaptation makes use of assumptions that may or may not be good assumptions. Bayes' Theorem is quite cool on its own; there's no reason, of course, to think that every absurd use of it is the fault of the innocent little Theorem.
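For reference, the Theorem itself in its standard form is just this (where H is a hypothesis and E a body of evidence; the notation is mine):

$$ P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)} $$

It is an elementary consequence of the definition of conditional probability; everything contentious lies in the philosophical uses to which it gets put.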
Philosophical Bayesianisms of all sorts may be roughly characterized as claiming the following things:
(1) There are degrees of belief.
(2) There are degrees of evidence.
(3) Degrees of belief and degrees of evidence are commensurable to the extent that they can both be characterized by a single probability calculus.
(4) The relation between degrees of belief and degrees of evidence must be consistent with the probability calculus in all epistemic agents who are acting rationally.
(5) Given an original degree of belief and the addition of new evidence, the rational agent (at least usually) arrives at a new degree of belief by taking the probability of what is believed conditional on the new evidence (the rule of conditionalization, sketched just below).
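In its simplest form the rule says: writing P_old for your degrees of belief before the evidence E comes in and P_new for your degrees of belief afterward (again, the notation is mine; Bayesians state the rule in various ways),

$$ P_{\text{new}}(H) = P_{\text{old}}(H \mid E) = \frac{P_{\text{old}}(E \mid H)\, P_{\text{old}}(H)}{P_{\text{old}}(E)} $$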
There are slight variations on these, but these are more or less the positions held by philosophical Bayesians, whether they are Bayesian about scientific confirmation, about decision theory, or about whatever else. The idea is that this is supposed to be a model of rational belief change (or non-change). I happen to think that all five claims are false in the senses required for all five together to be a good model of rational belief change. I'm a Newmanian about assent, which means that I don't believe that belief has degrees -- although, naturally, there are other things (like feelings) that are closely related and that do have degrees, and these allow us to talk about degrees of belief in a loose sense. And the arguments in favor of the claim that belief does have degrees are all very bad in any case. My position on evidence is similar. Even setting that aside, we have no stable way to measure either (the most plausible candidate is wagering behavior, which won't do), so we can't establish the clear commensurability that (3) requires. I disagree with (4), again because I agree with Newman about assent; and, while initially plausible, (4) seems to require implausible ideas of rationality (the logical omniscience problem, for instance). Bayesians themselves generally recognize that (5) has to have a qualification (like "at least usually"); I just think the exceptions are much more common than Bayesians take them to be.
But my interest in this post is not to give my own reasons for thinking philosophical Bayesianism false, but to protest a certain slipperiness in the use of Bayesian arguments. So let's set aside all my qualms and suppose that (1)-(5) are all true (presumably in some form slightly more precise than the vague one I've given). What then?
A common approach seems to be to use the machinery in this way. Suppose you want to consider the question, "Is there a black hole in such-and-such region of space?" You assess your initial estimate of the probability (degree of belief) that there is, identify the evidential factors you think relevant, and then establish a new probability for (degree of belief in) the claim that there is a black hole in such-and-such region of space, given those factors. Up to this point all is well and good, at least if you accept a Bayesian epistemology.
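To make the procedure concrete with some purely illustrative numbers (nothing in the discussion fixes them): suppose your initial degree of belief that there is a black hole there is 0.1, you take the relevant evidence to be, say, an observed X-ray source in that region, and you judge that such a source would be found with probability 0.8 if there is a black hole and 0.2 if there is not. Conditionalizing gives

$$ P_{\text{new}}(\text{black hole}) = \frac{0.8 \times 0.1}{0.8 \times 0.1 + 0.2 \times 0.9} = \frac{0.08}{0.26} \approx 0.31 $$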
However, a problem arises if this rational thought process is taken as an objective argument that there is a black hole in such-and-such region of space, because it's a horrible way to argue for anything (except, perhaps, for the claim that you are believing rationally in believing as you do, given your assessments). In essence, what we have in philosophical Bayesianism is a theory of rational belief change: given an original set of beliefs, we have a set of rules for how they should change (or not change) as new evidence comes in. But precisely because this is an account of rational belief change, it works very poorly as a means of arguing. The Bayesian account covers every rational belief change (at least in probabilistic circumstances), for every combination of initial probabilities, evidences, and claims. By presenting a Bayesian argument you haven't presented a good argument that there is a black hole in such-and-such region of space; you have presented one of infinitely many rational justifications of belief changes that favor or don't favor the conclusion. No one else, even if they are Bayesian, will accept that particular line of reasoning unless they agree with your assessments of prior probability, of the relevant evidences, and of the relations between the evidences and the claim. You've not given an argument; you've just given an example of how one might come to be persuaded without ceasing to be rational. This can be an interesting result, if you don't try to make it into something it is not. Unfortunately, there is a widespread tendency to make it into something it is not.
To put it another way: assuming that Bayesian epistemology is true, and assuming that you are accurately portraying your own assessments of evidences and probabilities, all you have shown is that you are not irrational in believing (or disbelieving) the claim that there is a black hole in such-and-such region of space, given those assessments. But this is a very weak result, much weaker than the conclusion that there really is a black hole in such-and-such region of space. The weaker result can only become the stronger result if you have good, solid arguments that everyone, or almost everyone, who is rational will assess things more or less as you have done. But then the actual work of showing that there is a black hole in such-and-such region of space is done by those other arguments, not by the Bayesian argument you are running. The Bayesian argument is just a way of organizing those arguments so that you can see their point. It itself does no serious work in the actual proof.
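The illustrative numbers from before make the gap vivid. Keep exactly the same evidence and the same likelihoods, but start from a prior of 0.5 rather than 0.1, and conditionalization yields a posterior of 0.8; start from a more skeptical 0.01 and it yields about 0.04:

$$ \frac{0.8 \times 0.5}{0.8 \times 0.5 + 0.2 \times 0.5} = 0.8, \qquad \frac{0.8 \times 0.01}{0.8 \times 0.01 + 0.2 \times 0.99} \approx 0.04 $$

All three updates are equally rational by Bayesian lights; nothing in the machinery itself tells anyone else which starting point, or which likelihoods, to accept.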
So it makes no sense to run 'Bayesian arguments' for a given conclusion, unless the conclusion is simply that you are rational, on the assessments you've made, in accepting it -- even if Bayesianism is true. It's fairly clear, however, that philosophers do this a horrid amount. And the result is always just silly, since anyone can point out that a different set of assessments leads to a different conclusion. The result is both that Bayesian epistemology looks much more stupid than it really is, and that the conclusion looks laughable in light of how apparently arbitrary you had to be to reach it. This leads to the irritation of intelligent mathematicians and others who wonder what in the world you think you're doing, and it is bad for the reputation of philosophy in general. I'm very much opposed to philosophical Bayesianism, but even I have to protest how silly this rush to argument makes philosophical Bayesianism (and philosophers generally) look. Even Bayesian epistemologists deserve better.