Moore's Paradox is a well-known epistemological paradox; in it we notice that something is wrong with statements of the general form:
I believe such-and-such, but it is not true.
You can certainly believe that other people believe things that are definitely false; and you can also believe that you yourself probably believe some false things; but if you could identify which ones were false, you couldn't still reasonably believe them. The point, of course, is not so much about psychology (an ingenious writer could come up with a scenario in which someone could say it and it would make sense in context) as about reasonable belief: the ingenious writer's scenario would have to be one in which something strange and unreasonable is going on. And the most obvious implication of this is that belief has something to do with truth by its very nature.
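Put schematically (my own shorthand, not Moore's or anyone else's), writing 'B(p)' for 'I believe that p', the Moorean assertion comes to:
B(p), but not-p.
Nothing in this is a logical contradiction, since B(p) and not-p can both be true at once; the oddity lies in reasonably asserting both together in one's own voice.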
Scott F. Aikin and Robert B. Talisse have a post at "3 Quarks Daily" that suggests that the following have what they call a "quasi-Moorean flavor":
[1] I believe that p, but my evidence has been rigged in order to favor p.
[2] I believe that p, but my sources of information are highly censored by those who favor p.
[3] I believe that p, but my evidence is unreliable and spotty.
[4] I believe that p, but my evidence is consistent with not-p.
[5] I believe that p, but all critics of p have been intimidated into silence or otherwise marginalized.
[6] I believe that p, but I always lose well-conducted arguments with reasonable critics of p.
Except for the first, and then only if it is taken to suggest that I myself have made up the evidence for my belief, I don't think any of these are quasi-Moorean, in flavor or otherwise. Take [2]: this is entirely consistent with reasonable belief, because the censorship could very well be nothing more than a mistake on the part of others who favor the belief in question. Sometimes, through prejudice, or ignorance, or pressure, people simply do not trust arguments that are actually good arguments. [5] is an uncomfortable situation to be in, socially, but it is entirely consistent with it actually being the case that all the best arguments and reasons support p. [6] is a little complicated, because there is no single sense in which one can 'lose' or 'win' an argument; in most arguments you can identify losses and wins, of very different kinds, for both sides. But [6] is also consistent with my being a poor arguer, or with my not understanding my reasonable critics' arguments well enough to give them their proper refutations. Arguing with critics can be difficult: sometimes reasonable critics come at a problem from such a different perspective that it's difficult to know what to say to their arguments without much thought (and sometimes much subsidiary argument).
[3] and [4] are somewhat different. For [3], unreliable and spotty evidence may well be the only evidence one has, and may still support p much better than not-p. Historians, I think, are often in this position: there are always areas in historical study in which (current) evidence is patchwork and uncertain, but in which it still supports one side of a dispute better than the other. Historical evidence doesn't just fall into one's lap; it needs to be distilled, sometimes over many years. There may be decades in which the only evidence at all is weak, uncertain, and limited. But during those decades that weak, uncertain, and limited evidence may provide at least some basic support for one position over its alternatives. In [4], I suspect Aikin and Talisse are using 'consistent' in an unusual way; lots of reasonable beliefs are based on evidence consistent with opposing beliefs, because most evidence allows for more than one interpretation. But the fact that evidence allows for alternative interpretations doesn't mean that all the interpretations are equally reasonable.
I think that what worries me about the argument here is that labeling these as quasi-Moorean doesn't take into account the actual nature of inquiry. In actual inquiry, one often has to guess; but not all guesses are equally reasonable, and some guesses later turn out to receive at least some evidential confirmation, with no such confirmation turning up for rival guesses. In such a situation, why wouldn't one believe one's guess to be right? To be sure, the evidence is weak and limited, and it could very well turn out that new evidence will later show that even our interpretation of the old evidence wasn't quite right; but even so, the weak and limited evidence we have might so far be unchallenged, the guess may well be a reasonable extrapolation from plausibly analogous cases, and the work of showing any flaws in our reasoning might be extraordinarily difficult, or even require some ingenious way of approaching the problem that we have not yet thought up.
I also find it somewhat strange that they come to the conclusion that responsible believing requires "an Open Society of the kind championed by J. S. Mill, Karl Popper, Bertrand Russell, John Dewey, and John Rawls." Their argument suggests that if you don't live in such an Open Society, even through no fault of your own, you can't responsibly believe anything at all. This seems simply absurd. I once wrote a (somewhat mediocre) paper on Longino in grad school, arguing that considerations of good science couldn't be divorced from certain kinds of ethical values, and one of the arguments was that not all social values are equally consistent with effective scientific practice. Although I would approach the matter differently today, I still think that social values play a major role in inquiry, and that the quality of one's inquiry can be affected by the society in which one is conducting it. But this is a very different argument from one suggesting that we can't have responsible inquiry at all unless we have "a social epistemic system marked by norms of free inquiry, freedom of expression, freedom of conscience, and reasonable disagreement." All these things are truly important for inquiry, but clandestine inquiry in an oppressive society may sometimes be quite as responsible as any other; it is simply false to say, as Aikin and Talisse do, that "Proper epistemological practice entails democratic social norms." There is no entailment between the two, no matter how conducive the latter may be to the former.