A number of philosophy accounts on Substack have recently been talking a bit about moral intuitionism. It is a reminder of why I'm not on Substack -- philosophy Substack is generally not all that good. Don't get me wrong, there's still good work on there, but there is a lot of fairly verbose and pretentious trash. In any case, I thought I would say a thing or two about moral intuitionism. I will start by clearing away a number of the obvious errors I have seen people on Substack making about the subject.
(1) Moral intuitionists are not committed to moral truths being self-evident.
(2) Moral intuitionists are not committed to moral truths being easy to discern.
(3) Moral intuitionists are not committed to snap judgments in moral matters being generally trustworthy.
(4) Moral intuitionists do not necessarily build their intuitionism on the concept of intuition.
(5) Moral intuitionists are not committed to basic moral judgments being indefeasible or incorrigible.
There is a very common, but very bad, habit of basing one's assessments of a position or family of positions not on any definition or survey of the position(s), but on what one imaginatively associates with the label that came to be attached to them. Moral intuitionism is in fact a very large family of philosophical positions; it's only called 'moral intuitionism' because of some of the things that 'intuition' meant in the nineteenth century, and it is for purely contingent historical reasons that this was the label that stuck to the whole family. You should not assume that any moral intuitionist is necessarily committed to any baggage that your imagination, in the twenty-first century, happens to pile on top of the word 'intuition'.

In the case of (1), I allow some leeway; the SEP article, Intuitionism in Ethics, gets this (somewhat) wrong, claiming that all classical intuitionists hold that basic moral propositions are self-evident; this is certainly not strictly true, even with the qualifications the author makes (although it would cover a considerable amount of the historical ground). It would be more accurate to say that all classical intuitionists take some basic moral classifications to be reasonably evident in human experience. But even the SEP article is explicit that it is using 'self-evident' to cover things that we would not usually label as self-evident, and it confines itself to 'classical' intuitionism, recognizing that you can have intuitionisms with somewhat weaker positions.
Moral intuitionism is a very large family of positions in moral epistemology whose general position we might put colloquially as (1) at least some moral matters are, as such, naturally recognizable in experience of some kind and (2) this natural recognition plays an important role, of some kind, in calibrating moral judgments generally. It's generally contrasted with its major historical enemy, utilitarianism (considered as a position in moral epistemology), for which moral matters properly speaking are calculated, and sometimes with various kinds of conventionalism, for which moral matters are things that are invented or created, although there are lots of other positions that are potentially inconsistent with it.
Charles Darwin, who was a moral intuitionist, has an interesting argument against utilitarianism and for intuitionism in The Descent of Man. Darwin's own form of intuitionism is a moral sense theory, which in turn is one of the classical intuitionisms that does not require moral foundations to be self-evident. Darwin's argument is that we have extensive evidence that morality, moral assessment, and moral behavior predate anything on which utilitarianism is based. In particular, Darwin argues that the biological evidence indicates that moral assessment existed long before any ability to rationally calculate or even estimate something like overall happiness, and, against Mill, Darwin argues that the social feelings that Mill thinks are acquired in the course of moral development are clearly found innate and instinctive in animals closely related to human beings. Thus, Darwin says, the general theory of evolution makes utilitarian accounts of moral epistemology extremely improbable, but it is very consistent with the idea that moral assessment is a natural capacity that does not need to be acquired by calculation or anything like it. Darwin's argument, if accepted, would also work against many forms of moral conventionalism and constructionism.

Note, incidentally, that Darwin is not committed in any way to the human moral sense (or any animal's moral sense) being infallible or perfectly reliable, or to its deliverances being self-evident or necessarily true or indefeasible. Indeed, it's quite clear that Darwin does not hold any of this. But in this sense, the moral sense is just like all our cognitive and perceptual systems; the fact that our visual system is not absolutely perfect does not mean that we don't have it, and in fact, significant portions of our reasoning necessarily presuppose a natural human ability to perceive things visually, even if human eyesight turned out to be comparatively poor -- indeed, even if this or that individual happens to be blind.
Darwin's kind of moral sense theory is very far from being the most popular form of intuitionism, but it is genuinely a form of intuitionism. Another kind of argument that is very common, with variations, among many different kinds of intuitionists is one we find in William Whewell (who, like Darwin, is arguing against John Stuart Mill). Whewell holds that we could very well create a utilitarian system of calculation that could deliver correct moral judgments. But, paraphrasing heavily,
(1) human beings in general do not learn to make correct moral judgments by learning such a system;
(2) there are infinitely many possible and mutually exclusive utilitarianisms, some of which are obviously insane and others which are subtly wrong, and the only way to find the kind of utilitarian system that does not at some point go wrong is repeatedly to check it against reasonable moral assessments that we already have;
(3) and utilitarian systems are complicated to use correctly (even committed utilitarians regularly take shortcuts by just assuming things that a strict utilitarian would have to calculate), so while such systems might be useful for particular technical purposes, there will be many situations in which they will not be the best way to make moral assessments.
Whewell is much nicer to utilitarians than intuitionists have often been, but the argument that a moral system, of whatever kind, requires calibration in light of moral experience, and therefore always presupposes that we are already capable of some moral judgment independently of the system, is a common intuitionist idea. That is, we are capable of moral assessment prior to being utilitarian, or whatever, and utilitarianism, or whatever, can only be established, if at all, by reference to such a pre-existing capability for moral assessment.