There has been an interesting discussion lately on the ASA list about global warming. As a preface, I should note that I’m not really a global warming skeptic. Based on the evidence I’ve been able to evaluate, I think global warming is a real problem that is significantly caused by human activity. (I am deeply skeptical, however, of the Kyoto treaty). The ASA list discussion morphed into a general discussion of the authority of consensus in peer reviewed scientific literature. Here, I become a bit more hesitant:
A commenter on the ASA list said: “… who are the final arbiters of controversies in scientific debates? I think we need to educate the public about scientific methodology and the need to rely on the scientific publication process as part of authoritative opinion. Without that, there’s no resolution.”
These are my thoughts:
Perhaps it’s because of my background that I’m not quite ready to apply the word “authoritative” to the scientific publication process or the general progress of professional science. Early in my legal career, I worked on big product liability litigation — asbestos, DES, Prozac, and breast implants — on behalf of pharma companies and other manufacturers. As a result, I think I have some hands-on experience with how the scientific process works in a politically charged context.
We have recently seen reports of how the peer review and publication process with respect to pharmaceuticals has been influenced by the industry in response to regulatory and product liability concerns. Many of the published studies concerning the safety and efficacy of compounds that become blockbuster drugs are directly funded by the pharma companies, the academic centers that produce the research are heavily funded by industry, and the journal peer reviewers often have ties to the industry. This doesn’t mean the science is all bad, but it does mean that it isn’t beyond criticism, or authoritative simply because it has passed peer review. Indeed, when I was working on the Prozac cases, the literature consistently denied any causal link between SSRI-class antidepressants and suicidal thoughts; in recent years, however, the contrarian position has caught the FDA’s attention, at least as to the use of these drugs in adolescents. The evidence presented in the recent Vioxx cases also demonstrates pretty convincingly how the publication and peer review process and the scientific consensus can be captured by special interests.
This illustrates that, while courts must give heavy weight to scientific consensus, there is always room to challenge the consensus (this is what the controversial Daubert v. Merrell Dow case on the admissibility of expert scientific testimony is all about). At the end of the day, a court’s decision is supposed to be based on evidence and reason, not on any expert’s purported authority. Experts assist the court and the jury, but they do not decide the matter. I think this is exactly as it should be in the courts in a free and democratic society. It is also, I think, as it should be in the political process in a free and democratic society. The “final arbiter,” ultimately, is and must be the people, not any one community, scientific or not.
Of course, with respect to global warming, the big money interests are the contrarians, so perhaps that gives us even more reason to trust the literature in this particular instance. However, I think a general principle of “just trust the literature” ultimately is anti-intellectual and dangerous.
Let me further illustrate this with an example from a field that, at least to me, is far more impenetrable than climate science: theoretical physics. Recently I read Lee Smolin’s interesting book “The Trouble With Physics.” Smolin decries the “consensus” among theoretical physicists that string theory must be correct. Smolin’s chapters entitled “How Do You Fight Sociology,” “What is Science,” and “How Science Really Works” are well worth the price of the book. Here is how Smolin describes the sociology of the string theory community:
1. Tremendous self-confidence, leading to a sense of entitlement and of belonging to an elite group of experts.
2. An unusually monolithic community, with a strong sense of consensus, whether driven by the evidence or not, and an unusual uniformity of views on open questions. These views seem related to the existence of a hierarchical structure in which the ideas of a few leaders dictate the viewpoint, strategy, and direction of the field.
3. In some cases, a sense of identification with the group, akin to identification with a religious faith or political platform.
4. A strong sense of the boundary between the group and other experts.
5. A disregard for and disinterest in the ideas, opinions, and work of experts who are not part of the group, and a preference for talking only with other members of the community.
6. A tendency to interpret evidence optimistically, to believe exaggerated or incorrect statements of results, and to disregard the possibility that the theory might be wrong. This is coupled with a tendency to believe results are true because they are “widely believed,” even if one has not checked (or even seen) the proof oneself.
7. A lack of appreciation for the extent to which a research program ought to involve risk.
(The Trouble With Physics, at p. 284.) Does this sound familiar? To some extent, I think each of these points could apply to some people in the environmentalist community (and dare I say it, I think they also can apply in many ways to some people in evolutionary biology).
In the chapter “How Science Really Works,” Smolin makes the following observation about university hiring and peer review:
There are certain features of research universities that discourage change. The first is peer review, the system in which decisions about scientists are made by other scientists. Just like tenure, peer review has benefits that explain why it’s universally believed to be essential for the practice of good science. But there are costs, and we need to be aware of them…. An unintended by-product of peer review is that it can easily become a mechanism for older scientists to enforce direction on younger scientists. This is so obvious that I’m surprised at how rarely it is discussed. The system is set up so that we older scientists can reward those we judge worthy with good careers and punish those we judge unworthy with banishment from the community of science. This might be fine if there were clear standards and a clear methodology to ensure our objectivity, but, at least in the part of the academy where I work, there is neither.
(The Trouble With Physics, at p. 333) (I should be clear that Smolin seems to be speaking of “peer review” primarily in terms of departmental hiring decisions, but I think he intends to cover everything from hiring to what constitutes an acceptable research agenda for publication).
At the conclusion of his book, Smolin says the following: “To put it more bluntly: If you are someone whose first reaction when challenged on your scientific beliefs is ‘What does X think?’ or ‘How can you say that? Everybody knows that…,’ then you are in danger of no longer being a scientist.” (The Trouble With Physics, at p. 354).
Smolin certainly has a personal axe to grind, since his research agenda swims against the consensus in his field (he rejects string theory and promotes something called loop quantum gravity). But, IMHO, his observations are trenchant, particularly when I factor them into my personal experience with a politically charged scientific consensus that directly impacts public policy.
A final point, given the ASA’s faith perspective: IMHO, it’s dangerous to speak in terms of “authority” when dealing with scientific consensus because we must recognize that the scientific community, like every other human community, is deeply affected by sin. I don’t think this implies an anti-science attitude, YEC thinking, or any such thing. It is simply an appropriately Christian epistemic and social realism. The scientific community is a human community, which means it is not entirely objective and free from distorted interests and misplaced priorities.
So I would say this: yes, we must take seriously the consensus of working scientists in any given field as reflected in the peer reviewed literature. However, we must also retain the rational and political freedom to evaluate consensus claims on the merits, being always mindful that the authority of all human communities, including communities of science, is necessarily limited by social dynamics and sin. Because of this, it’s irresponsible to ignore contrarian views, even if they are not a significant part of the peer reviewed literature. This is particularly true where the science in question is critical to public policy and democratic debate. If the contrarian view is clearly wrong, that should be demonstrable based on the rational strength of the consensus view, without resort to arguments from authority.