Friday, March 30, 2012

Free-Rider Problem

The free-rider problem has been discussed a fair amount in the past few weeks in talking about health insurance, and the more I hear about it the less certain I am that there is really such a thing. There are free-riders, of course, in some sense of the term, but people rarely define the sense in which they are using it, and when you try to pin it down, the sense in which the free-rider problem constitutes a genuine problem becomes very elusive. For instance, children are in some sense always free-riders. Infants don't pay taxes; in fact, they don't do much at all except inconvenience other people. But they benefit from all sorts of collective action. If you look at the definitions most people use to describe free-riders, infants are clear examples, receiving benefits they do nothing to support. But obviously there is no problem with infants riding free on the system. No one complains that infants get the full benefits of fire, police, and other emergency services despite not contributing to the support of these services. Likewise, nobody complains about the massive free-rider problem of children receiving rather expensive public educations on the public dime, all paid for by people who are manifestly not children.

Thus the mere fact that anyone is riding free on the system doesn't mean that there is any actual problem. If people receive services to which they are entitled, but are actually unable to help pay for them, there is likewise no free-rider problem: if they're entitled to the services independently of whether they can pay, then they are fully within their rights to take the services whether they pay for them or not. If the entitlement doesn't depend on the payment, who is paying for the service is a completely different issue from who is receiving it. The one is simply not relevant to the other. And in fact a lot of services and benefits are set up this way precisely so that no one will miss out even if they can't pay. It's not the only reason, but it is one of the reasons we engage in collective action in the first place -- so that some people can ride free, if they have to.

What most people are really talking about when talking about the free-rider problem is evasion of the responsibility to contribute, perhaps combined with the question of how to make the system sustainable. But this is a completely different matter from people receiving benefits and services to which they do not contribute. We see this with the infant case: the reason there is no problem with infants riding free on the system is that they have no responsibility to contribute. But the reverse direction shows it as well: someone may have the responsibility to contribute even if they themselves don't benefit at all. We see this with taxes: as a citizen your tax responsibilities are set without any regard whatsoever for whether you get any benefits at all. To be sure, virtually everyone who pays taxes does receive benefits that are supported by taxes. But there is no intrinsic necessity to this, and the responsibility itself is not fixed (for instance) with any concern for whether the rich are getting their fair share given all that they put in, or whether they are in fact just subsidizing a lot of poor people who can't pay taxes. The sort of collective action involved in government simply doesn't work that way. We do it because we all recognize the benefit; only then does the practical problem arise of how to make the benefit sustainable, and this happens any way we can manage it. Only in light of this plan for sustainability does the responsibility to contribute actually get decided, and the best plan for sustainability might well allow for, or even guarantee, free-riders. And if it's really the best plan for sustainability, there is simply no problem with there being free-riders.

So the problems people are really talking about have nothing whatsoever to do with whether anyone is riding free on the system; what people are really talking about are responsibility-evaders who endanger the sustainability of the system. Whether the free-riders are responsibility-evaders, or vice versa, has to be shown, not assumed. And thus the whole talk of free-riders ends up being otiose and obfuscating; all the work is done by assumptions about who is responsible for contributing. Which is perhaps a good thing, since if we actually understood 'free-rider problem' as broadly as Hardin does in his Stanford Encyclopedia of Philosophy article on the topic, we are almost all free-riding almost all the time.

I think the problem here is that there is a genuine abstract 'free-rider problem', in game-theoretical discussion of the logic of collective action, that has begun to be used to model all sorts of behaviors that don't necessarily meet the original game-theoretical assumptions (e.g., that responsibility is equal or at least proportional to ability to contribute and that the benefits are made possible by consent of the contributors), and which has no moral or political implications on its own, being simply a problem about optimal and suboptimal strategies. But we all know the irritating character of the Freeloader who manages unfairly to get by on our work rather than his own. The combination of the two leads people to use a loose, vague mix of them to describe all sorts of situations that seem unfair. But it doesn't actually contribute anything. What you really want to know is whether the system is worth it, whether it is sustainable, what division of responsibilities is best for sustaining the worthwhile systems, who is actually evading their actual responsibilities, and what can be done (if anything) to prevent such evasions. These make the problem precise, whereas it seems to me, more and more, that 'the free-rider problem' is just thrown in as a vague diagnosis without much critical examination, and always signals that the discussion is going to become less useful.
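For readers who want to see the abstract problem in its bare game-theoretical form, here is a minimal sketch of the standard public-goods game (the parameters n, endowment, and r are illustrative choices of mine, not anything from the discussion above). It shows exactly the point made here: free-riding is the individually optimal strategy whatever others do, even though universal contribution leaves everyone better off, and nothing moral or political follows from this until further assumptions are added.

```python
# Minimal public-goods game: n players each hold an endowment and choose
# whether to contribute it to a common pool. The pool is multiplied by a
# factor r (with 1 < r < n) and split equally among ALL players,
# contributors and free-riders alike.

def payoff(contributes: bool, others_contributing: int,
           n: int = 4, endowment: float = 10, r: float = 2.0) -> float:
    """One player's payoff given their choice and how many others contribute."""
    pool = (others_contributing + contributes) * endowment
    share = r * pool / n          # everyone gets an equal share of the pool
    kept = 0 if contributes else endowment
    return kept + share

n = 4
# Whatever the others do, each player does strictly better by free-riding...
for k in range(n):                # k = number of OTHER players contributing
    assert payoff(False, k) > payoff(True, k)

# ...yet everyone contributing beats everyone free-riding:
all_contribute = payoff(True, n - 1)   # each player's payoff when all are in
all_free_ride = payoff(False, 0)       # each player's payoff when none are
assert all_contribute > all_free_ride  # 20.0 > 10.0 with these parameters
```

Note that nothing in the payoff table says who is *responsible* for contributing; the suboptimality of universal free-riding is a fact about strategies, and any moral reading of it has to be supplied from outside the game.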

10 comments:

  1. Mark Sloan4:04 PM

    Brandon, why do you say that “… there is a genuine abstract 'free-rider problem', in game-theoretical discussion of the logic of collective action, … which has no moral or political implications on its own, being simply a problem about optimal and suboptimal strategies.”

    Solutions to free rider problems are chock full of moral implications.

    Biology based moral emotions such as empathy, loyalty, and guilt that motivate altruism exist because they were selected for in our ancestors by the increased reproductive fitness benefits of the cooperation that altruism instigated and sustained. If the moral emotion of indignation – which motivates altruistic punishment of free-riders – did not exist, then empathy, loyalty, and guilt (except perhaps regarding close kin) could never have evolved. Free-riders would have made altruistic strategies losers regarding reproductive fitness.
    Do empathy, loyalty, and guilt have no moral implications?

    Cultural norms such as “Do unto others as you would have them do unto you” that advocate altruism exist in a parallel fashion because they were selected for by whatever synergistic benefits of cooperation people found attractive (but not necessarily reproductive fitness). Without the emotion of indignation, which we can now identify with “righteous indignation” when advocated as a social norm, free-riders might go unpunished and could similarly make following the Golden Rule a losing strategy. Does enforcement of the Golden Rule have no moral implications?

    Finally, optimum strategies for enforcing social norms (punishing free-riders) are now a hot topic in game theory. Here “optimum” means the punishment strategy that is most likely (in morally relevant situations) to “increase the benefits of altruistic cooperation in groups”.

    For example, people are sometimes uncertain about how to apply the Golden Rule when dealing with criminals and in time of war. The science of the matter is that it is immoral (based on the evolutionary function of morality in cultures, the primary reason it exists) to follow the Golden Rule if doing so is likely to decrease, rather than increase, the benefits of cooperation in groups.

    Do answers to the question “When is it immoral to follow the Golden Rule?” have no moral implications? 

    ReplyDelete
  2. branemrys5:14 PM

    All of your examples require adding moral and political assumptions that are independent of anything involved in game theory. Game theory merely identifies abstract relations; actual implementation, whether political or moral, requires the addition of assumptions relevant to those domains. Whenever people claim that these kinds of problems have moral or political implications, they are either equivocating or claiming that they have moral or political implications in combination with assumptions that have to be verified independently for the practical domain.

    Your example of the Golden Rule is a good example. What actually does almost all the substantial work here are the assumptions about "the evolutionary function of morality," its relevance to the particular question at hand (criminals and time of war), the domain-specific goals that the combination of these require, and the bridge principles that link these ends to particular abstract features of game problems. In short, what's really doing the work here is an entire moral and political theory, and game theory simply identifies acceptable conclusions given that theory. It would be a mistake to say that these conclusions follow from game theory; they follow from the combination of an extraordinary number of substantive moral and political assumptions.

    An analogy can be given to logic -- and in fact, it's an exact analogy. Logic has no moral or political implications. But, someone might say, logic obviously has moral or political implications, look at all the moral and political conclusions drawn logically. To which the reply is that these particular conclusions are simply the logical implications of prior moral and political principles, not implications of logic as such. Likewise, you are really just pointing to game theoretical implications of moral and political principles that have been assumed prior to any application of game theory. Game theory can't tell you when it is immoral to follow the Golden Rule, any more than the predicate calculus can tell you when a scientific claim is correct.

    ReplyDelete
  3. Mark Sloan8:56 PM

    Brandon,

    I like, and largely agree with, your comment: “In short, what's really doing the work here is an entire moral and political theory, and game theory simply identifies acceptable conclusions given that theory. It would be a mistake to say that these conclusions follow from game theory; they follow from the combination of an extraordinary number of substantive moral and political assumptions.” (As I will describe, though, I think “an extraordinary number of substantive moral and political assumptions” overstates your case.)

    I also like your analogy to logic.

    However, a closer analogy may be to “science”.  As a matter of logic, science can’t tell us what we ought (imperative ought that is somehow binding regardless of our needs and preferences) to do, but science is extraordinarily useful for telling us what we ought (instrumental) to do if we desire to achieve given goals.

    Perhaps some clever philosopher will someday produce a generally accepted argument for the reality of such “imperative” oughts. Despite many attempts by very clever people, all such attempts have failed. Because of the nature of physical reality, I expect all such attempts to fail in the future, but don’t know how to conclusively show that. Until I learn different, I will refer to such imperative oughts as ‘magic’ oughts.

    While we are waiting for such ‘magic’ oughts to be somehow proved real, it seems to me obviously culturally useful to define a morality that is the best instrumental choice for both 1) groups to enforce, and 2) individuals to, almost always, accept the burdens of.

    As a matter of science, the primary reason that enforced cultural norms (moral standards) exist is to “increase the benefits of altruistic cooperation in groups”.

    My position does rely on the following idea. The best instrumental choice (to meet common needs and preferences) for a morality for 1) groups to enforce, and 2) individuals to, almost always, accept the burdens of, is defining as moral “altruistic acts that increase the benefits of cooperation in groups”.
    I expect you would call this an assumption. I think I can defend it as fact, but that is too complicated a position to defend here. But note it is not “an extraordinary number of assumptions”.

    In my initial reply I did not mention this idea.  

    I thought that pointing to the emotions of empathy, loyalty, guilt, and indignation, and to enforced cultural norms such as the Golden Rule (indirect reciprocity) and other norms I did not mention (such as circumcision and not eating pigs), made the moral implications of a particular class of game theory strategies obvious even if the connection to morality was not obvious.

    Thanks for pointing out that this is not the case.

    Could you clarify

    1. Do you believe imperative oughts are actually somehow binding on people regardless of their needs and preferences?

    2. Do you believe that cultural utility is not the overriding basis for groups deciding what norms to enforce (moral standards to enforce), if imperative oughts are indeed just an illusion of our emotions?

    I appreciate your conversation.

    ReplyDelete
  4. rex crater9:16 PM

    lol, please

    Educating children - an admirable task - will never ever be the equivalent of giving stuff out for free to ADULTS. 

    Giving handouts for free is a socialist idea that bankrupts nations and only gets worse with time. 

    I think the main problem is people's perception of government. It's not a charity service, it's a law enforcement vehicle to combat the existence of fraud and exploitation. 

    Charities and non-profits should be the organizations that deal with poverty and lack of education. 

    ReplyDelete
  5. branemrys9:51 PM

    Mark,

    I think you have more assumptions going on than you suggest. Consider:

    (a) you have an unknown number of assumptions giving a basic account of science and identifying it as relevant to the moral question in the first place (it would have to be a very large set of assumptions -- science is not a simple thing and its relation to morality is a highly controversial thing);
    (b) you are assuming that it is culturally useful to define a best instrumental morality;
    (c) which in turn presupposes the assumption that there is any such thing, and an unknown number of assumptions establishing that it is coherent;
    (d) you are assuming that an instrumental morality that would be best would involve both group enforcement and individual acceptance of burdens;
    (e) you have an unknown number of assumptions about common needs and preferences;
    (f) you have an unknown but very large number of assumptions backing your definition of moral acts ('very large' because you said it was complicated to defend, which indicates a lot of assumptions);
    (g) you have an unknown number of assumptions backing the implicit idea that this is the only relevant kind of moral act (i.e., that there are no relevant moral acts answering to other definitions);
    (h) and then there are assumptions about all the fields you would apply this to -- empathy, loyalty, guilt, indignation, enforcement, cultural norms, indirect reciprocity, and so forth.
    and that's just running through very quickly; I'm quite sure that closer analysis would uncover more.

    None of this is a bad thing, but my point is that all you are doing is very briefly summarizing a very complex moral theory, and that you already have this very complex moral theory in place before you even get to anything that looks game-theory-like. I quite agree that once you have all these things in place the strategies would be obvious; once you have all these things in place, game theory doesn't have much left to do except for a few minor calculations here and there.

    On your questions:

    (1) I don't think there are any 'oughts' that are binding independently of our needs and preferences. (I think 'oughts' are defined relative to practical problems in obtaining goods.) I don't think they are illusions of our emotions, either, since I don't think our emotions give us any such illusions -- people who think there are such 'oughts' do so on the basis of rational arguments that I happen to think incorrect. I know of no one who accepts such a position solely for emotional reasons.

    (2) Cultural utility is obviously not the overriding basis for groups deciding what norms to enforce. Groups clearly do not in fact generally take cultural utility to be overriding in this way, so the only thing that can be meant here is that groups ought to give priority to cultural utility in deciding norms, which requires that there is some more fundamental basis for deciding which norms to enforce (namely, whatever basis grounds the claim that groups ought to give overriding priority to cultural utility). Further, any such claim would be very costly, so to speak: such a claim is inconsistent with the view that there are rights independent of those granted by the group, for instance. Further, the formulation raises the questions of 'Which group?' and 'How does one handle inconsistencies among the various different cultural utilities of various different groups?'

    ReplyDelete
  6. branemrys10:12 PM

    In other words you agree with me that it's not about free-riding but evasion of responsibility.

    ReplyDelete
  7. Mark Sloan9:01 PM

    Brandon, I appreciate you continuing the discussion when our positions are so different.

    a) I am not aware of any controversy relating to science being fully able to 1) arrive at descriptive facts about, for instance, the evolutionary origins of the biology underlying empathy, loyalty, guilt, and indignation, and their present function in cultures, 2) uncover underlying unifying (descriptive) moral principles (if there are any) for past and present enforced cultural norms, and 3) provide critical information for instrumental choices regarding common specified goals such as determining what cultural norms should be enforced to increase the benefits of altruistic cooperation in groups.

    b) By the normal understanding of cultural evolution, enforced cultural norms (moral standards) are selected by groups based on whatever the group ‘finds’ appealing, and the primary selection force has most commonly been increased benefits of altruistic cooperation within the group. This process of a group defining what norms to enforce is complex, but I am aware of no controversy about this process being in the domain of science in its normal descriptive sense. I have spoken of this process as being “defining culturally useful moral standards”. If a group did not think they were useful, then, over time, I expect they would reject them and formulate others.

    Your points about supposed assumptions such as “should morality be culturally useful?” and c) through h) would be quite telling if there somehow actually were imperative moral obligations that were binding regardless of our needs and preferences.

    However, I was glad to see that you also reject the reality of such “magic” oughts.

    Without such ‘magic’ oughts, I assume we agree that the only rational justification for accepting the burdens of any morality is as an instrumental justification in aid of fulfilling, ultimately, personal goals?

    If we agree about ‘magic’ oughts, it seems to me that no assumptions are required of the kind you propose. Perhaps you might say that groups rationally ‘deciding’ to enforce whatever norms they think are their best instrumental choices and individuals rationally ‘deciding’ to accept the burdens of whatever norms they think are their best instrumental choices is an assumption. I see it as a rational choice, which is the kind I am interested in here.

    You said, “I don't think our emotions give us any such illusions -- people who think there are such 'oughts' do so on the basis of rational arguments that I happen to think incorrect. I know of no one who accepts such a position solely for emotional reasons.”

    It is clear we know very different kinds of people. I would say the opposite of people I know.

    Michael Ruse irritates me no end when he takes apparent perverse pleasure in misleadingly saying “Morality is an illusion!” when all he means is that the emotional basis of our intuition that cultural norms are somehow ‘magically’ imperative is an illusion.

    If, as you claim, groups do not instrumentally choose, and thus evolve, what norms are enforced based on their common goals (choose moral standards based on cultural utility) what do you suggest those choices are based on?

    I have the feeling we are missing the key point in our differences that might solve the puzzle for both of us (as to what the other's position actually is). I am just not yet sure what that key point is.

    ReplyDelete
  8. branemrys10:54 PM

    Hi, Mark,

    Of course science can give a descriptive account of the origin and preservation of empathy and the like; but this is not relevant unless you make assumptions about what such accounts get you regarding the actual moral questions concerned with these. For instance, if morality derives from needs and preferences, evolutionary accounts have no direct relevance -- whatever the origin, the needs and preferences either exist or not, here and now. In order to make the evolutionary accounts relevant, you are actually modifying a basic needs-and-preferences account. Again, my point is not that any of your assumptions are wrong, but that you are actually assuming a rather massive amount of moral theory before even getting to the stage where game theory is relevant.

    On your question: "Without such ‘magic’ oughts, I assume we agree that the only rational justification for accepting the burdens of any morality is as an instrumental justification in aid of fulfilling, ultimately, personal goals?" I'm not sure what you mean by "personal goals" here. If you mean 'private goals', this doesn't seem to follow; but it would be inconsistent with making cultural utility overriding as you were suggesting, since cultural utility goes well beyond private goals. If, on the other hand, you mean simply, 'goals recognizable and pursuable by persons', it becomes trivially true; even someone like Kant, who is the major philosophical defender of imperatives that are independent of any need or preference, could agree with this. So I'm not really sure what you mean.

    On your point, "If we agree about ‘magic’ oughts, it seems to me that no assumptions are required of the kind you propose," I think you are missing the point. To get to the point of agreeing about what you call "magic oughts" people have to already have in hand a massive amount of moral theory, so if this is where we start, my point is automatically made -- you are getting the results you are getting with game theory because you've already assumed a large-scale moral theory by the time you get to it.

    On your question, "If, as you claim, groups do not instrumentally choose, and thus evolve, what norms are enforced based on their common goals (choose moral standards based on cultural utility), what do you suggest those choices are based on?" I'm not sure if you are asking about what groups actually base their choices on or what they ought to base their choices on. What groups actually base their choices on is a matter of historical record: cultural utility, yes, but also private utilities of people with power and influence, rules just assumed without any calculation of utilities, habits, feelings, etc. The things you can base a choice on are as varied as human psychology, and group choices are even more complicated because they are the negotiations of many different such choices according to many different kinds of mechanisms. If you are asking what they ought to base their choices on, I think this would necessarily depend on the nature of the choices with which they are faced.

    (If you respond, start a new comments thread above rather than replying to this comment; as comments threads with more than a few comments slip down the list in my dashboard, I lose the ability to see at a glance that someone has replied, because the reply is on the second or third page. If you start a new thread, though, I'll be able to see your reply at once.)

    Cheers,
    Brandon

    ReplyDelete
  9. Mark Sloan9:12 PM

    Brandon,

    As per your suggestion, I am posting my reply to your last comment as a new thread. I'll note the comment of yours I am responding to and then give my reply.

    Brandon’s comment: “Of course science can give a descriptive account of the origin and preservation of empathy and the like; but this is not relevant unless you make assumptions. …”

    Mark’s reply: Try this line of reasoning:

    Science can be very useful for informing us as to instrumental oughts, what we ought (instrumental) to do to accomplish a goal. 

    So science might tell us what we ought (instrumental) to do if our goal is to “Grow a big crop of beans”. But science is also logically able to tell us what we ought (instrumental) to do if we wish to meet our common needs and preferences and think the best way to do that is to “Increase the benefits of altruistic cooperation in groups”. If we are not interested in cooperation, we have no need to continue in the group. 

    Science, largely based in biological and cultural evolution and game theory, provides an instrumental ought to best meet a common goal of “increasing the benefits of cooperation in groups”. (This is a common goal, though usually unrecognized, because it is highly effective in meeting our common needs and preferences regardless of what these are.) If this is the overriding goal of our group, then we ought (instrumental) to enforce norms (moral standards) that advocate “Altruistic acts that increase the benefits of cooperation in groups”. Or stated as a definition of morality (that underlies enforced cultural norms), “Altruistic acts that increase the benefits of cooperation in groups are moral”.

    Assuming the science is right, the only assumption that I see here is that there is some group whose overriding goal is to meet their common needs and preferences and who think the best way to do that is to “increase the altruistic benefits of cooperation in the group”.

    Prime examples of such enforced norms selected for by cultural evolution are norms advocating “reciprocity” (the game theory strategy direct reciprocity) and “Do unto others as you would have them do unto you” (the game theory strategy indirect reciprocity if punishment of cheaters is added).

    It seems to me that such a moral principle fits people like a key in a well-oiled lock. This is because this key is what largely shaped this lock, the lock being the psychological parts of people that make us social animals, animals that are remarkably successful in cooperating in groups. I still don’t see my “numerous assumptions about morality”.
     
    Brandon: On your question: "Without such ‘magic’ oughts, I assume we agree that the only rational justification for accepting the burdens of any morality is as an instrumental justification in aid of fulfilling, ultimately, personal goals?" I'm not sure what you mean by "personal goals" here. …

    Mark: No, I do mean private goals. That is, I am rationally justified in accepting the burdens of at least my brand of social morality (enforced cultural norms) because I think doing so is, almost always, likely to best meet my private goals, perhaps durable well-being over my lifetime.

    You correctly point out that enforced cultural norms (moral standards) often advocate behaviors against the self-interest of individuals but in the interest of the group. But there is a powerful interaction going on between the norms the group enforces and the norms that individuals accept the burdens of. It is counter-productive for the group to try to enforce norms that people do not accept the burdens of. And it is, almost always, counter-productive for meeting the individual’s [...]

    ReplyDelete
  10. branemrys9:58 PM

    Mark,

    Every step in your line of reasoning skips several steps and thus increases the number of assumed moral principles. To give some examples: You are assuming that the goal of "our common needs and preferences" is a well-defined and adequate end capable of grounding a problem well-defined enough that science is capable of addressing it; you are assuming an account of what it is for a group to have needs and preferences; you are assuming accounts of altruistic cooperation and of criteria for determining what counts as a genuine benefit; you are assuming an account of what it is for a goal to be overriding in the context of common needs and preferences; you are assuming an account of group membership and the way in which an ought for the group could be binding on individuals, since individuals are different from the groups of which they are members; and you are assuming that benefits should be maximized rather than satisficed (i.e., an account of what counts as an increase). All these are moral presuppositions. Again, none of this is definitely bad; my point is that you are bringing a pretty hefty account of what human moral life is.


    "because I think doing so is, almost always, likely to best meet my private goals, perhaps durable well-being over my lifetime."

    You see, this, again, is something that has been assumed prior to being able to recognize any game-theoretical conclusions as relevant to the moral problem.

    "Dismissing the reality of ‘magic’ oughts implies to me no assumptions worth mentioning"

    There are no such things as 'assumptions not worth mentioning' when the question is what is being assumed. We both know that you aren't 'dismissing the reality of 'magic' oughts' for no reason at all, and you conceded in your earlier comments that if there are any of what you call "magic oughts" then it gets around your reasoning entirely, so you cannot afford to dismiss them for no reason. Likewise, you have already conceded that there are many people who hold that there are such oughts. Therefore dismissing them requires an entire moral account already: you are presupposing an account of what 'oughts' are, what the problems with "magic oughts" are that make them non-obvious, and why they can be dismissed rather than (say) merely taken tentatively, etc.

    There really is no way around it. You are not getting moral conclusions from "science and logic"; you are getting moral conclusions from science and logic combined with a moral theory about how to logically derive genuinely moral conclusions from scientific theories. But the moral theory can be examined and defended on its own; there is no reason to hide it misleadingly behind the labels "science and logic". You seem to want to get around having magic oughts by substituting magic inferences for them, i.e., inferences that mysteriously get you right up to the conclusion. But a closer look at these inferences shows that they don't work that way; they get to their conclusions because they involve moral theories of group membership, of the relation between group goals and individual goals, of what counts as a genuine benefit, of how goals become overriding (which in and of itself gets dangerously near magic ought territory and therefore has to be carefully distinguished from it), etc.

    There is also the problem that the group you are proposing in your example doesn't exist, or at least rarely does: groups do not in general have norms of altruistic cooperation but norms of limited cooperation (which they often allow to be non-altruistic). It is not a norm of my family to increase the benefits of altruistic cooperation; cooperation pretty much stays at its ordinary benefit level and no one is thrown out of the family for not contributing to an increase in [...]

    ReplyDelete

