I've been looking over the Edge Question for 2011. Usually the questions are pretty pointless, but this one was a good one: What scientific concept would improve everybody's cognitive toolkit? Of course, good questions don't automatically get good answers, and, as you might expect, most of the answers are nonsense, being either very dubious philosophy or bad pop psychology. But some of them were interesting.
I've roughly divided them into two groups, tactical (things that are good to consider in approaching problems generally) and technical (how to do or handle specific kinds of things that come up a lot). There are a few, of course, that could be put into either, depending on how precisely one took the answer. It's very noticeable that, with a few exceptions, almost everyone who gave a tactical answer to the question (which was most people) massively overstated the importance and value of their particular favored tactic, sometimes to the point of making obviously untenable claims about it. I've only picked out the ones that seem to me to be remotely plausible, leaving out those where (1) the person's underlying explanation was so utterly wrongheaded as to be useless; (2) the 'cognitive tool' could hardly have much of a role in solving actual practical problems in thinking; (3) the cognitive tool is unlikely to be useful outside of a very narrow range of study, and therefore not by any stretch likely to be useful to everyone. With (3) there were some judgment calls, but I was generous wherever I could see the case actually being made. I did leave out one, Free Jazz, which was a good answer to a cognitive toolkit question, but whose status as a scientific concept -- which was the point of the question -- was more than a little unclear.
Tactical
Howard Gardner, How Would You Disprove Your Viewpoint? Popper's account is problematic in a number of ways, but it's certainly true that our viewpoints need room to bruise themselves against discoveries, to borrow George Eliot's phrase.
Gino Segre, Gedankenexperiment. The SEP article on Thought Experiments is quite good, for those interested in the underlying philosophical issues; one of the authors is James Robert Brown, whose Laboratory of the Mind, also on thought experiments, is quite good as well.
Rebecca Newberger Goldstein, Inference to the Best Explanation. Very important idea; also very difficult to pin down properly. What counts as the best explanation? Do we use domain-specific or domain-general criteria, and how? There's lots of argument on these questions.
Nicholas Carr, Cognitive Load. Basically this boils down to saying how hard things are to learn, either intrinsically (because of the complexity of the material) or extrinsically (because of the format in which it is presented) or in terms of how much of a person's attention can actually be brought to bear. But I suppose you can't turn in grant proposals for experiments on the different ways things are hard to learn. Still, it's definitely true that there's more to the concept than we usually think, and much of it would be valuable to keep in mind.
Kevin Kelly, The Virtues of Negative Results. Hume somewhere notes that one of the most important features of the human mind is our ability to learn even from our mistakes; and one can certainly broaden that to include not just mistakes but also simple failure to obtain a result.
Lee Smolin, Thinking in Time versus Thinking Outside of Time. Platonism is rather more flexible and sophisticated than anything talked about here; but one gets used to physicists thinking in philosophical cartoons. Smolin seems to be trying to argue that one is better than the other, but the thrust of his examples is in a different direction: that it's odd to expect the two to be in competition with each other at all, as long as you recognize the distinction.
Paul Kedrosky, Shifting Baseline Syndrome. The paper by Daniel Pauly from which this term comes can be read online.
John McWhorter, Path Dependence. Scott Page has a good paper (PDF) discussing uses and abuses of this idea.
Jonah Lehrer, Control Your Spotlight. Finally someone who really and truly understood the question. It comes very close to being the only answer given that completely fits the question: scientific in a straightforward sense, genuinely useful to everyone, applicable to solving problems.
Tania Lombrozo, Defeasibility. Michael Sudduth's article at the IEP gets into some of the details.
Kathryn Schulz, The Pessimistic Meta-Induction from the History of Science. It probably is useful as a guard against too simplistic a view of scientific progress, although how valuable it is for more than that is a controvertible question.
Evgeny Morozov, The Einstellung Effect. New term for an old idea: we try to solve new problems in old ways, especially if the new problem is superficially like old problems we have often met before. While it's sometimes given a negative tone, this is probably a mistake: it can be argued that most of the time it really improves our reasoning. It's the exceptional cases that are the problem.
Sue Blackmore, Correlation Is Not Causation.
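Since this one is easy to see concretely, here is a small sketch of my own (made-up numbers, nothing from Blackmore) in which two quantities correlate strongly even though neither causes the other, simply because both track a third variable.

    import random

    # Two variables that correlate only because both depend on a confounder
    # (temperature): neither causes the other.
    random.seed(0)
    temps = [random.uniform(0, 35) for _ in range(1000)]
    ice_cream_sales = [2.0 * t + random.gauss(0, 5) for t in temps]
    drownings = [0.5 * t + random.gauss(0, 2) for t in temps]

    def correlation(xs, ys):
        # Pearson correlation coefficient, computed from scratch.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    print(correlation(ice_cream_sales, drownings))  # high, despite no causal link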
David Eagleman, The Umwelt. John Deely discusses the concept.
Brian Knutson, Replicability. A trickier concept than it seems: it's a modal notion, and raises the question of how similar things have to be to count as replicated, which is only sometimes easy to answer. But replicability is quite significant when it comes to deciding new paths of inquiry and invention.
Timo Hannay, The Controlled Experiment.
Richard Dawkins, The Double-Blind Control Experiment. The only overlapping instance of a tactical suggestion. There were others that touched on experiment; these were the two best, although neither provides a genuinely satisfactory account of experimentation. Dawkins is one of a handful who really grasps the meaning of the question, but he ruins it by making the absurd claim that the mere idea of double-blind control experiments will improve everyone's thinking automatically if we just understand it and "revel in its elegance". Really? I mentioned before that most of those who proposed tactical answers massively overstated the importance of their particular answer; compared to Dawkins almost everyone else looks modest in their claims. I doubt that the idea alone would really have much effect, especially if we begin our understanding of it with such a blatant case of magical thinking. In any case, Allan Franklin's article on Experiment in Physics at the SEP deals with some of the philosophical issues involved in the concept of experiment.
Technical
John Allen Paulos, A Probability Distribution. A statistics tutorial on it.
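As a concrete toy case (my own, not from the tutorial), the probability distribution of the sum of two dice can be built just by counting equally likely outcomes.

    from collections import Counter
    from itertools import product

    # Distribution of the sum of two fair dice: count the 36 equally likely outcomes.
    counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
    n_outcomes = sum(counts.values())  # 36

    for s in sorted(counts):
        print(f"P(sum = {s:2d}) = {counts[s] / n_outcomes:.3f}")

    # The probabilities are non-negative and sum to 1 -- that's what makes it a distribution.
    assert abs(sum(counts.values()) / n_outcomes - 1.0) < 1e-12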
W. Daniel Hillis, Possibility Spaces. Less useful than Hillis suggests, but sometimes essential.
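For what it's worth, a toy sketch of my own of what it means to lay a possibility space out explicitly; even a few independent choices multiply quickly.

    from itertools import product

    # Enumerate the full possibility space of a few independent binary choices.
    choices = {
        "material": ["wood", "steel"],
        "finish": ["matte", "gloss"],
        "size": ["small", "large"],
    }

    space = list(product(*choices.values()))
    print(f"{len(space)} possibilities")  # 2 * 2 * 2 = 8
    for combo in space:
        print(dict(zip(choices, combo)))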
Steven Pinker, Positive-Sum Games. Sometimes pretty much everyone can win; ignoring that possibility shuts down some serious opportunities.
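A minimal sketch of my own, with made-up payoffs, of what 'positive-sum' amounts to: the total payoff is not fixed, so mutual cooperation can leave both sides better off than any one-sided outcome.

    # A toy trade game; each entry is (payoff to A, payoff to B).
    # The totals differ across outcomes, so the game is not zero-sum.
    payoffs = {
        ("trade", "trade"): (3, 3),
        ("trade", "refuse"): (0, 1),
        ("refuse", "trade"): (1, 0),
        ("refuse", "refuse"): (1, 1),
    }

    for (a_move, b_move), (a_pay, b_pay) in payoffs.items():
        print(f"A {a_move:6} / B {b_move:6}: payoffs {a_pay}, {b_pay}; total {a_pay + b_pay}")
    # In a zero-sum game every total would be the same constant; here mutual trade creates value.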
Rob Kurzban, Externalities. Externalities are benefits or harms that are not compensated in exchange. For instance, if a factory makes something and in the process releases a small amount of pollution in the air, this is a small negative externality to all of us, because the factory does not have to compensate everyone affected by this small amount of pollution. Likewise, if the factory owners clean up the area and put in a park where previously it was desolate, this is a small positive externality for people driving by, because they don't have to pay to see it. Externalities are the things in our lives that are exchanged but invisible to anyone who considers only money and contract. Assessing externalities is a pretty important part of civic life.
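To put rough numbers on the factory example (my own, purely illustrative): the point is that the cost the factory actually pays and the cost society bears come apart once uncompensated harms and benefits are counted.

    # Purely illustrative numbers for the factory example.
    private_cost_per_unit = 10.00    # what the factory pays to make one unit
    pollution_harm_per_unit = 1.50   # uncompensated harm to neighbours (negative externality)
    park_benefit_per_unit = 0.25     # unpaid-for enjoyment by passers-by (positive externality)
    units = 10_000

    private_cost = private_cost_per_unit * units
    social_cost = private_cost + (pollution_harm_per_unit - park_benefit_per_unit) * units

    print(f"Private cost: ${private_cost:,.2f}")
    print(f"Social cost:  ${social_cost:,.2f}")
    print(f"Externality borne by others: ${social_cost - private_cost:,.2f}")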
Terrence Sejnowski, Powers of 10.
Carl Page, Power of 10. The only clearly overlapping technical suggestion. A fun Java applet tutorial for powers of 10.
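The basic move, in a sketch of my own: strip a quantity down to its order of magnitude so that wildly different scales can be lined up and compared.

    import math

    # A few lengths in metres, reduced to their order of magnitude (power of 10).
    lengths_m = {
        "hydrogen atom": 1e-10,
        "human": 1.7,
        "Earth diameter": 1.27e7,
        "Earth-Sun distance": 1.5e11,
    }

    for name, metres in lengths_m.items():
        print(f"{name:20} ~ 10^{math.floor(math.log10(metres))} m")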
Giulio Boccaletti, Scale Analysis.
Keith Devlin, Base Rate.
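A standard worked example (my numbers are made up but typical) of why base rates matter: even a fairly accurate test for a rare condition yields mostly false positives.

    # Base-rate illustration: rare disease, fairly accurate test.
    base_rate = 0.001           # P(disease): 1 in 1,000
    sensitivity = 0.99          # P(positive | disease)
    false_positive_rate = 0.05  # P(positive | no disease)

    p_positive = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
    p_disease_given_positive = sensitivity * base_rate / p_positive

    print(f"P(disease | positive) = {p_disease_given_positive:.3f}")  # about 0.019
    # Ignore the base rate and you'd guess something near 0.99.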
Diane Halpern, Statistically Significant Difference. Given how pervasive statistics are, it certainly is important for statistical ideas to be more widespread than they are. A brief discussion.
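A minimal sketch of my own of the core idea, using a permutation test: a difference between two groups counts as 'statistically significant' when shuffling the labels at random hardly ever produces a difference that large.

    import random

    # Permutation test: how often does random relabelling of the data produce a
    # difference in means at least as large as the one observed?
    random.seed(1)
    group_a = [5.1, 4.9, 5.6, 5.3, 5.8, 5.0, 5.4, 5.2]
    group_b = [4.6, 4.8, 4.5, 5.0, 4.7, 4.4, 4.9, 4.6]

    def mean_diff(a, b):
        return sum(a) / len(a) - sum(b) / len(b)

    observed = mean_diff(group_a, group_b)
    pooled = group_a + group_b
    trials, extreme = 10_000, 0
    for _ in range(trials):
        random.shuffle(pooled)
        if abs(mean_diff(pooled[:len(group_a)], pooled[len(group_a):])) >= abs(observed):
            extreme += 1

    print(f"observed difference = {observed:.2f}, p ~ {extreme / trials:.4f}")
    # A small p means a difference this large rarely arises by chance alone.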
Kevin Hand, The Gibbs Landscape.
So, what did you think?