Thinking about this further, I realized that I could be more precise about which aspects of casuistry decision theory models. A very important distinction in traditional casuistry is that between safe and unsafe conclusions. The distinction is not intended to be absolute, but rather marks the two endpoints of a continuum: on one end we have completely unsafe conclusions, on the other completely safe conclusions, and other conclusions are more or less safe or unsafe. What I would suggest is that decision theory models apparent relative safety. It models safety because its concepts (risk, uncertainty, cost, gain, etc.) are the concepts used to determine whether a practical conclusion is a safe or unsafe one to accept. It models relative safety because it is essentially comparative. And it models apparent safety because it models the safety of conclusions from particular perspectives. (Note that this is not due to the distinction between objective and subjective expected utility, both of which are apparent in this sense, but to the distinction between the real status of the question and its status on the available evidence; the latter is apparent.)
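To make the 'relative' and 'apparent' points concrete, here is a minimal sketch in Python. All the probabilities and utilities are invented for illustration; they stand in for the status of the question on the available evidence, not its real status:

```python
# A minimal sketch of apparent relative safety. The probabilities and
# utilities below are hypothetical; they represent how the options look
# on the available evidence.

def expected_utility(outcomes):
    """Expected utility of an option, given (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Two courses of action with uncertain outcomes.
option_a = [(0.6, 100), (0.4, -50)]   # EU = 40
option_b = [(0.9, 30), (0.1, -10)]    # EU = 26

# The model delivers only a comparative ranking on this evidence;
# it does not by itself tell anyone what to do.
ranking = sorted({"A": expected_utility(option_a),
                  "B": expected_utility(option_b)}.items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking)  # [('A', 40.0), ('B', 26.0)]
```

Notice that the output is nothing but a ranking: the model is comparative through and through, and perspectival in the sense that everything depends on the numbers the evidence makes available.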
If this is so, two corollaries follow. (1) The aspect of casuistry that decision theory models is extremely important. (2) People who think that decision theory models rational decision are bonkers. If decision theory models apparent relative safety, it cannot be applied to decision at all except on supposition of some principle of application. Suppose you have two positions, one of which has a higher expected utility than the other. This conclusion is absolutely useless unless you have some principle that tells you what to do with it. For decision we need not merely relative safety but real safety; and that requires a principle that establishes a threshold of sufficient safety, i.e., a dividing line between conclusions that are safe enough to act on and conclusions that are not.

It is often assumed that the principle is: do whatever has the higher expected utility. However, this decision-theoretic rigorism is not only psychologically impossible (as a matter of fact our grasp on probabilities, risks, gains, etc., isn't that clear and precise; everyone is, as a matter of fact, laxist rather than rigorist about matters of indifference; and so forth), it also fails to model rational decision except in cases where the higher expected utility is known to be the only safe conclusion. This is often not so. We often make choices between alternatives both of which are safe enough for action, although not equally so; and we are often faced with situations in which we lack that certainty and have only probabilities that a given conclusion is safe enough. Likewise, we often have cases where the difference between the higher expected utility and its rival is not enough for a rational person to niggle about.

Related to this point is the fact that decision theory doesn't model the purposes according to which one applies the matrices and arguments to practical situations. Further, part of rational decision is recognizing the means for the decision, and decision theory doesn't directly model means. So decision theory lacks purposes and principles of action, and it lacks a practical model of means to ends, and as such does not adequately model rational decision. (Traditional decision theory also lacks a clear way of handling noncommensurable and nonquantitative values, with which casuistry also has to deal.) As I've already noted, however, it can model the apparent relative safety of possibilities, and in cases where one's purposes, principles, and means are clear, that's largely all one needs.
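A rough sketch of the corollary, with the threshold of sufficient safety and the margin of indifference written out as explicitly invented parameters. The point is precisely that decision theory itself supplies neither; they must come from some principle of application outside the model:

```python
# SAFE_ENOUGH and NEGLIGIBLE_GAP are invented parameters standing in for
# a principle of application. Decision theory itself supplies neither
# the threshold of sufficient safety nor the margin of indifference.

SAFE_ENOUGH = 20     # minimum EU for a conclusion to count as safe enough to act on
NEGLIGIBLE_GAP = 5   # EU differences below this are not worth niggling about

def apply_principle(eu_x, eu_y):
    safe = [eu for eu in (eu_x, eu_y) if eu >= SAFE_ENOUGH]
    if not safe:
        return "neither conclusion is safe enough to act on"
    if len(safe) == 2 and abs(eu_x - eu_y) < NEGLIGIBLE_GAP:
        return "both safe enough; the difference is a matter of indifference"
    return f"act on the conclusion with EU {max(safe):.1f}"

print(apply_principle(40.0, 26.0))  # clear case: both safe, large gap
print(apply_principle(22.0, 24.0))  # both safe enough; nothing to niggle about
print(apply_principle(10.0, 15.0))  # a bare EU comparison yields no safe action
```

The second and third cases are the ones decision-theoretic rigorism mishandles: in one the comparison is real but practically idle, and in the other the comparison exists while neither option is safe enough to act on at all.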
It's worth noting, by the way, that there is an epistemological as well as a moral casuistry, and the two are remarkably parallel. An epistemological rigorist holds that nothing should be accepted unless it is certain, i.e., that the only conclusions safe enough to be rationally held are those that strictly meet a particular standard of high certainty; an epistemological laxist holds that anything may be regarded as safe enough for rational belief if there is any authoritatively recognized evidence for it at all; and, of course, there are positions in between.
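One rough way to display the parallel, on the simplifying (and entirely invented) assumption that safety for belief can be treated as a credence threshold:

```python
# A rough formalization of the epistemological continuum. Treating
# safety for belief as a credence threshold, and the specific numbers,
# are illustrative assumptions, not part of the historical positions.

def rigorist_accepts(credence):
    # Accept only what strictly meets a high standard of certainty.
    return credence >= 0.99

def laxist_accepts(has_recognized_evidence):
    # Accept anything with any authoritatively recognized evidence at all.
    return has_recognized_evidence

def intermediate_accepts(credence, threshold=0.75):
    # Positions in between set the bar somewhere between the extremes.
    return credence >= threshold
```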
It's interesting to think about how this might affect interpretation of Pascal's Wager. The most probable interpretation of the Wager sees it as addressed to a type that would have been common at the time: someone who makes claims that presuppose epistemological rigorism, but who is a moral libertine. (A libertine is someone who is not even laxist -- a laxist requires that one follow only opinions that are probably safe according to some recognized authority who has reasoned through the matter on the basis of recognized principles, whereas a libertine doesn't expect even that much of himself.) As a Jansenist, Pascal is a moral rigorist, and part of the point of the Wager is clearly to start libertines off on the road to moral rigorism -- not to make them moral rigorists, obviously, but to get them closer to it. Because Pascal wishes to argue that the case is one where the difference between the safer conclusion (believing God exists) and the less safe conclusion (disbelieving) is massive, the Wager itself does not commit one to any particular position in epistemological casuistry; it does, however, commit whoever uses it to a denial of epistemological rigorism. (The part of the Wager usually modeled by decision theory doesn't say why; the Wager as Pascal presents it in his fragmentary notes does, however, because Pascal explicitly relates his Wager reasoning to the utility-of-believing reasoning found in Augustine and Montaigne -- and such reasoning is an argument against epistemological rigorism.)
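For reference, the fragment of the Wager usually modeled by decision theory can be sketched as follows. The payoffs are the conventional textbook ones (infinite gain for belief if God exists), not values Pascal himself tabulates, and I take the conservative reading on which disbelief has finite payoffs either way:

```python
# The decision-theoretic fragment of the Wager, with conventional
# textbook payoffs rather than anything Pascal tabulates. p is any
# nonzero credence that God exists.

from math import inf

def eu_believe(p, cost_if_not=-1):
    # Infinite gain if God exists; a finite cost of belief otherwise.
    return p * inf + (1 - p) * cost_if_not

def eu_disbelieve(p, loss_if_god=-1, gain_if_not=1):
    # Finite payoffs either way, on the conservative reading.
    return p * loss_if_god + (1 - p) * gain_if_not

for p in (0.5, 0.01, 1e-9):
    print(p, eu_believe(p), eu_disbelieve(p))
# For any p > 0 the gap between the safer conclusion (believe) and the
# less safe one (disbelieve) is infinite -- the 'massive difference'
# the Wager turns on.
```

This is exactly why the fragment doesn't settle anything in epistemological casuistry: it establishes only that the difference in apparent relative safety is as large as could be, and the work of connecting that to rational belief is done by the utility-of-believing reasoning outside the matrix.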