Wednesday, March 19, 2025

Probabilities and Just-So Stories

The notion of a 'just-so story' goes back to Rudyard Kipling's 1902 book, so called because his daughter kept demanding that he tell stories 'just so', in exactly the right way; Just So Stories are fanciful tales about how the leopard got its spots, how the elephant got its trunk, and so forth. The phrase 'just-so story' became popular in the late 1970s when Stephen Jay Gould used it to criticize some of the explanations used in evolutionary psychology. Of course, as with all such labels, people have tried at various times to claim both that just-so stories are good and that just-so stories don't really exist. But of course they do, and they fail as explanations.

I saw one just recently, in which someone, on Substack I think, argued that people have difficulty with critical thinking because it takes up so much energy. It's the sort of thing that has a narrative plausibility but is completely unfounded. Current views of the brain all indicate that thinking takes very little energy (so little, in fact, that neuroscientists sometimes struggle even to estimate it with the methods available to them). Having a brain is immensely energy-expensive; but, given that, thinking hard costs almost nothing in comparison. The human brain is like an extremely powerful engine: just keeping it running consumes fuel like crazy, but all that energy goes toward making it possible to do a great many things with only a little extra, so the difference between the engine idling and the engine applying its power this way or that is relatively tiny. It's a lot of work to have a brain; it's not a lot of work to use it. But it makes a good story, particularly since we've all had the experience of being tired after a lot of thinking (which is not, contrary to what one might think, due to the brain itself, but mostly to the fact that we use far more muscles when concentrating than we usually realize, combined with the fact that we often concentrate hardest when we are already stressed about something).

Note that it's not the bare fact of explaining by a story that's the problem here; it's that the story, however plausible it may sound, is just a story despite the pretense that it is more. It's not even, in Plato's sense, a 'likely story', a story that is not a closely reasoned account but might be true, more or less, in some sense or other, because it captures things that are known to be true and that can be given a closely reasoned account. It's not a likely story; it just looks like one.

While the notion of a just-so story developed in the context of philosophy of biology, broadly construed, there are many other fields in which just-so stories are found. And I think they have been spreading like weeds in philosophy, in part due to uncritical and careless uses of broadly Bayesian kinds of reasoning -- uncritical, because the reasoning often goes with a failure to think through what the probabilities involved actually are, and careless, because there are often no serious safeguards. We're led through a story about how one might reason, but the story hangs in the air and doesn't connect to anything.

Probabilities are not random numbers, nor do we have direct insight into them, on any account of probability. Strictly speaking, a probability is a comparison of a subset of possibilities (like the ways five dice can land on 2) to the larger set of possibilities (like the ways five dice can land on any number). There are lots of different ways one can interpret these sets of possibilities, and preferred ways become elaborated in various philosophical accounts of probability, but to have any probability in any interpretation of probability, you have to know something about the possibilities on the table, and you have to be able to measure those possibilities in such a way as to compare the relevant subset to the total. If you significantly change the possibilities or how they are measured (for instance, if you switch from five dice to twelve dice, or from dice to coins), your probabilities stop being straightforwardly comparable -- you have shifted the comparison on which the numerical probability is based. The short of it is that, for a probability to mean anything, you have to have some way of knowing what possibilities are on the table, and some method of measurement.
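To make the comparison concrete, here is a minimal sketch (in Python, purely illustrative; I construe the dice event, for definiteness, as all five dice landing on 2). The point to notice is that the function needs both a total set of possibilities and a way of measuring the event against it, and that changing the setup silently changes the comparison:

```python
from itertools import product
from fractions import Fraction

def probability(outcomes, event):
    """A probability as a comparison: the measure of an event
    (a subset of the possibilities) against the total set."""
    total = list(outcomes)
    favorable = [o for o in total if event(o)]
    return Fraction(len(favorable), len(total))

# Five dice, taking the event to be all five landing on 2:
five_dice = product(range(1, 7), repeat=5)
print(probability(five_dice, lambda roll: all(d == 2 for d in roll)))  # 1/7776

# Switch to twelve dice and the denominator of the comparison changes:
# 6**12 outcomes is too many to enumerate casually, but the analytic
# value shows the number is no longer answering the same question.
print(Fraction(1, 6 ** 12))  # 1/2176782336

# Switch to coins and both the possibilities and the measure change:
five_coins = product("HT", repeat=5)
print(probability(five_coins, lambda flips: all(f == "H" for f in flips)))  # 1/32
```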

People will argue like this. Suppose, just to take one example, naturalism, N, and suppose some particular evidence, E, and our background evidence, R. Then we can find the probability of E given N&R, and let us suppose it is much, much lower than the probability of E given ~N&R. Thus E is evidence against N, at least with respect to R. All well and good, if the numbers, even handled algebraically, mean anything. But there are problems right at the beginning in this particular case, because N doesn't really predict things. Naturalism is not a predictive hypothesis; it doesn't tell you what will be the case, but is rather an incomplete framework fitting entire families of hypotheses -- it tells you (if true) what kind of hypothesis could possibly tell you what will be the case. Suppose we take something like the existence of the human mind, and claim on the basis of some version or other, however modified, of the above argument that it is evidence against naturalism. The problem, though, is that naturalism on its own doesn't predict anything about the human mind; whether or not the human mind exists, naturalism purports to describe what kind of explanation you'll need to look for in order to explain that fact. It does, of course, rule out some things, namely, those things that are inconsistent with its assumptions or constituent features, but it tells us nothing either way about most things.
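For reference, the schema being invoked is just the odds form of Bayes' theorem, which the prose above paraphrases; in standard notation:

```latex
\frac{P(N \mid E \,\&\, R)}{P(\neg N \mid E \,\&\, R)}
  = \frac{P(E \mid N \,\&\, R)}{P(E \mid \neg N \,\&\, R)}
    \cdot \frac{P(N \mid R)}{P(\neg N \mid R)}
```

If P(E|N&R) is much lower than P(E|~N&R), the first ratio on the right is small, and E lowers the odds on N relative to R. The identity itself is unimpeachable; the question is where its terms get their values.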

But, Brandon, you might say, surely it makes some of those other things more or less probable? No, not necessarily. For one thing, naturalism doesn't identify a well-defined set of possibilities; it is an open-ended framework, and we don't know, even in a general way, all the possibilities consistent with it. We can, again, definitely rule some things out, based on what we can show to be inconsistent with its assumptions and definition, but that's hard work, and we certainly have not plumbed the complete depths of that inquiry, which involves entirely different kinds of argument from the one above. But it's also the case that these kinds of arguments involve no method of measurement that lets us compare one set of possibilities with another. How are we getting these numbers (or ranges, as the case may be)? Are we taking field surveys of possible worlds? Are we drawing on metaphysical experiments about the nature of reality? No, the numbers and ranges are made up. They are completely made up. Does naturalism or theism better predict the existence of a rational animal that suffers? Show me the analysis of the possibilities on the table, and the method you are using to measure predictiveness, and the model you are using to put numbers to it all, and then maybe I will regard you as not just telling a Bayesian version of a just-so story. Otherwise, all you are telling me is a story based on narrative plausibilities, or what you think are narrative plausibilities, about the journey of discovery; that you throw some fictitious numbers into it doesn't change that.
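Just to make vivid how completely the output is driven by the inputs, here is a small sketch (in Python; every number in it is invented, which is exactly the point). The same update rule delivers whatever verdict the stipulated likelihoods feed it:

```python
def posterior_odds(prior_odds, p_e_given_n, p_e_given_not_n):
    """Odds form of Bayes' theorem: posterior odds = prior odds * Bayes factor.
    The formula supplies none of its own numbers; all three inputs
    have to be justified from somewhere outside the formula."""
    return prior_odds * (p_e_given_n / p_e_given_not_n)

# Same evidence, same formula, opposite verdicts -- because the
# likelihoods P(E|N&R) and P(E|~N&R) are simply stipulated:
print(posterior_odds(1.0, 0.01, 0.5))  # 0.02 -> E 'strongly disconfirms' N
print(posterior_odds(1.0, 0.5, 0.01))  # 50.0 -> E 'strongly confirms' N
```

Nothing in the machinery constrains the inputs; the appearance of rigor is borrowed entirely from them.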

It is probably impossible to break analytic philosophers of the superstitious habit of assuming that they have direct mystical insight into the probabilistic structure of reality, but that doesn't mean that any of the rest of us have to treat it as more than a charming fictional story about how an elephant's nose got stretched into a trunk.

In the broad, traditional sense of 'probable', you can make probable inferences without any numbers even implied; all you need to know is that something is possible, that there are causal tendencies for it, and that the things that can prevent it are missing. Or else you establish what is required for a demonstrative argument for a conclusion and show that, while you can't fully deliver on all of the requirements, you can deliver enough to make extrapolation of the rest reasonable. Nothing prevents these traditional kinds of probable inference from still being available. But once you start using numbers, as in Bayesian arguments, you have to justify the numbers. (This is true, although there are subjective Bayesians who might try to deny it, even on the most subjective of subjective Bayesianisms, because in an argument for something you need to have something connecting the argument to reality.)

A different kind of case. I recently came across another argument (unfortunately I cannot find it again) on a different subject. The argument went something like this:

(1) On moral realism, we would expect morality to robustly make sense in a unified way.
(2) Morality does not robustly make sense in a unified way, but in fact is very patchwork.
(3) Therefore, moral realism is wrong.

Whatever might be said of moral realism, this is an obviously bad argument. We have the same attempt to tell a story about "what we would expect" from moral realism. In this case, though, it's not just a matter of our not having done the work of measurement; it's that (1) is just wrong, and in the most glaring way. Moral realism is a position on which moral facts can't merely be assumed to depend on what makes sense to us; therefore it's false that we would expect morality to make sense given moral realism on its own, at least if we are going on the definition of moral realism rather than on associations in our imagination. It's a just-so story about how an inquiry might possibly reach a particular, pre-selected conclusion, and it floats free of anything that is necessary to deliver the conclusion. It is not moral realism but moral anti-realism, or at least some forms of it, that would have a problem if it turned out that (2) is true, because on some forms of moral anti-realism, morality would in fact just be a byproduct or outgrowth of our own sense-making capacities. Moral realism, on the other hand, is consistent with the moral world being quite baroque and surprising, in need of extensive exploration and discovery in order to know what it even involves. It is also consistent, to be sure, with the expectation in (1); here again we have a situation in which, short of going through all the possibilities, we have no way of getting any definite probabilistic numbers or ranges, so there is no way to say which is more probable -- they are just possibilities on the table, and we don't have the means to compare them to the totality of all the relevant possibilities. If we try to get more specific, all we get is another just-so story.