Laura Niemi, Edouard Machery, and John M. Doris have an op-ed in Scientific American, Milgram’s Infamous Shock Studies Still Hold Lessons for Confronting Authoritarianism, in which they defend a certain interpretation of Milgram's shock studies from criticism:
By reexamining the data from Milgram’s experiments and considering the outcomes of several conceptual replications (more recent studies that used different approaches to probe people’s susceptibility to authority figures), we determined that, in fact, Milgram’s work and conclusions still stand. That finding has several important implications, particularly for confronting the knotty question of how people might overcome the tendency to submit to malevolent authority.

I am unconvinced of the broader conclusion. For one thing, it's unclear how generalizable Milgram's actual results would be. People often take them, as the authors do, to provide some sort of insight into authoritarian regimes. But none of Milgram's own experiments concerned obedience in authoritarian regimes; they (and almost all of the replications the authors discuss) are about direct or indirect obedience to a scientific expert in a scenario constructed within what the participants recognize as a formal experiment. These are not the same kind of obedience, nor the same kind of authority. If we take the experiments at face value, one can very well argue that what they would seem to show is the grave danger posed by purported experts like scientists, arising from the tendency of people not to obey, simpliciter, but to assume that such purported experts know what they are doing. (That the authors take the Milgram experiments to imply that we should be careful in selecting political leadership, and not, as would be much more justified, that we should be careful about letting people like themselves determine practical policies, is, I think, an example of a very common kind of wishful thinking among academics, in which they jump to assuming that their work has broad, grand relevance on a basis that shows only some local relevance.)
Further, it's not at all clear that the experiment is correctly described as concerned with "the tendency to submit to malevolent authority"; certainly the participants did not regard themselves as submitting to malevolent authority, nor were the experimenters actually malevolent authorities. There are other complications as well. For instance, it has been argued that as experimenters move from encouraging an action in ways that are obviously requests to giving what are clearly orders, people become less likely to comply. That is to say, people are less likely to obey when it is clearly made a matter of obedience. Relatedly, it's unclear whether the experiments are actually tracking obedience or submission to authority, as opposed to (to take just one example) participants' interpretation of how responsibility is distributed.
The authors of the op-ed essentially jump from answering one particular objection -- that perhaps the participants didn't think the shocks were real* -- to reinstating the original interpretation in all its promiscuous applicability. This is not a legitimate inference, and the gaps are not filled by their more formal work-up.
It's also unclear how far their claim extends on the authoritarian-regime side. We have plenty of evidence (e.g., the testimony of people who have lived under authoritarian regimes) that compliance under such regimes is heavily motivated by the belief that compliance is necessary for survival, either literally or figuratively in terms of 'getting by'. For understanding most of the obedience problems in an authoritarian regime, the supposed tendency to obey authority that people infer from the Milgram experiments does not seem particularly useful.
----
* It's unclear to me that they have quite established that this objection fails. What they have established is that when people are specifically asked whether they believed it was real, they often say that they did. But when you look at the examples, I'm not sure some of this couldn't be interpreted as people affirming that they did indeed believe that this was what was proposed as part of the experiment in which they were participating. If you have people play a murder mystery game, you can get them afterward to say that they did indeed believe that so-and-so was the murderer; what they mean is not that they believed that so-and-so was the murderer in real life, since they don't even believe that there was a real murder, but that they believed that the murder mystery game did in fact work in such a way that so-and-so had the murderer role. I'm not saying that this is how the participants' comments should be taken; my point is that the authors of the op-ed (like, it seems, many of the psychologists doing these kinds of experiments) seem to be assuming that belief is a single, unitary, straightforward thing. But there are lots of situations in which people's explicit beliefs as participants are not necessarily their beliefs simpliciter -- and while some of these situations are very different from an experiment, people are nonetheless involved in these experiments as participants. And unless I'm missing something, none of the evidence to which the authors point is sufficiently precise and careful to close this gap.