The March of Science — The True Story

“The human understanding when it has once adopted an opinion…draws all things else to support and agree with it.” Francis Bacon, the “Father of Empiricism,” came to this conclusion in the 17th century, and some 350 years later, three Stanford psychologists confirmed its validity. They recruited participants with strong beliefs about the death penalty and showed them two studies that had used similar methods, one suggesting that capital punishment effectively deters crime and the other suggesting the opposite. Asked to evaluate the evidence’s quality and persuasiveness, participants rated research that contradicted their prior beliefs poorly in both respects, and unexpectedly, exposure to it resulted in more, not less, polarization between the two groups. Speculating about the mechanisms of such “biased assimilation,” the authors noted that we may interpret weakness of disconfirming evidence as proof of our own beliefs and cling to “any information that suggests less damaging ‘alternative interpretations.’”
In an era when alternative interpretations are degenerating into “alternative facts,” I was reminded of the Stanford study during Boston’s March for Science. Tens of thousands of people in some 600 cities around the world marched and rallied to remind the public of science’s importance, demand science-informed policy, object to science denialism in matters such as climate change and vaccines, and advocate for sustained science funding. But in a polarized society, what we really need to resist may be human nature — this impulse to believe what we want to believe.
Whether or not marching for science will affect policy or public perception, the rallies, the current political climate, and the evolving dynamics of science communication raise some fundamental questions. What does trust in science mean? And given the uncertainty and error in science’s inevitably stuttering process, how do we avoid further fueling distrust?

WHERE TRUST BREAKS DOWN
The belief that distrust in science is widespread is actually somewhat unscientific itself. Cary Funk of the Pew Research Center tells me that public trust in science has in fact remained stable for decades, according to one well-known indicator that tracks attitudes over time. Recent survey data reveal that people trust scientists more than any other group except the military to act in the public’s interest, and surveys suggest that about 7 in 10 Americans believe “the effects of scientific research are more positive than negative for society.” Where trust breaks down is around specific topics — most notably, climate science and the safety of genetically modified foods, about which less than half the people surveyed trust information from scientists “a lot.” But the topics on which scientific consensus is rejected are many, ranging from organic food’s lack of nutritional superiority to alternative medicine’s many unproven benefits. Though people may trust science in the abstract, when faced with facts they don’t want to believe, they seek to “prove” that the process that generated those facts is untrustworthy.
So are there particular “pain points” in the scientific process that people invoke to dismiss scientific findings they dislike? As Harvard psychologist Daniel Gilbert told me, “Just the phrase ‘scientific fact’ is a bad beginning.” Recalling the cautionary teaching from his first psychology class, Gilbert repeated the warning many of us hear on the first day of medical school: “Half of what we’re going to teach you is wrong — the problem is we don’t yet know which half.” Just as Winston Churchill observed that democracy is the worst form of government except for all the others, Gilbert notes that science is the worst way to find truth — except for every other option. He emphasizes that what is often perceived as a failure of science is in a sense its greatest strength: “Only by being in the business of constantly changing our minds are we getting closer and closer to truth.”
Yet it is precisely this fickleness that is often invoked in dismissing evidence-based recommendations. Nutrition science may be the area that provides the most ammunition for distrust, given the combination of uncertainty, public interest, and powerful preferences. Indeed, skepticism of most nutrition science is warranted, given the often insurmountable challenges of controlled, blinded experimentation. But the “science is hard” justification is unsatisfying to many people who are seeking guidance and are infuriated by conflicting “facts.” Nutrition science has unique salience because we all eat, and it’s upsetting to hear that a food we love may cause Alzheimer’s disease or stroke, especially if we’d previously been assured of its safety. The confluence of these factors creates fertile ground for the logic often invoked to condemn the scientific process more generally: Why should I believe evidence about x when you people are always changing your minds? The fact that we are, and that doing so is our job, seems to provide little solace to a weary public. Can we do better?
IMPROVING SCIENCE COMMUNICATION
As tempting as it is to call for better education, I’m not sure how effectively that serves us in real time. I’m familiar with the scientific process, for instance, and still believe evidence on the benefits of chocolate and procrastination, while dismissing anything that calls into question my way of life. But when we present specific scientific findings to the public, I think we could frame them more effectively to signal their degree of uncertainty and thus enduring credibility. As Tim Caulfield, an expert in science communication at the University of Alberta, has suggested, the media could preface any new finding with what the literature says, on balance, about the topic in question; readers might then understand that any marked aberration is less likely to be true.
Another factor often lost in translation is evidence quality. Just as published clinical guidelines indicate the level of evidence supporting them, perhaps similar background on the hierarchy of evidence could accompany reports of new findings. Observational studies, which are more abundant and often more provocative than randomized, controlled trials, tend to be widely covered in the media. But whereas industry sponsorship of trials is frequently emphasized and used to call findings into question, no warning accompanies database analyses in which causality can be misleadingly implied.
Relatedly, in Caulfield’s experience, the justification people most frequently invoke for dismissing scientific consensus that contradicts their beliefs is that science is corrupted — by political meddling, scientists’ ambitions, and industry funding. Yet, illogically, research published by a mindfulness practitioner is often believed, whereas a consensus from the National Academy of Sciences on genetically modified organisms isn’t. Unfortunately, when we are told our views are illogical, we don’t generally respond with more logical beliefs. Moreover, perceptions of corruption often arise from stories that, even if rare, are true.
If we are ever to change perceptions, it is critical to recognize the power of such narratives in fueling distrust of science. The disproportionate representation of science’s warts typifies a broader “science is broken” narrative that emphasizes the ways science “isn’t working” at the expense of the ways that it is. We hear about experiments that can’t be replicated, negative findings that remain unpublished, and the ubiquity of bias; much of this criticism arises from within our own ranks. Academia is lambasted for an incentive structure that favors quantity over quality and secrecy over transparency, and that rewards exaggerating the significance of our results. Meanwhile, remarkable gains in human longevity are just one manifestation of science’s success — but as a reporter once told me, “No one wants to hear about the plane that lands.”
This preference for exposed folly, in a world where social media rewards those who speak loudest and with the most moral certitude, may foster a phenomenon social psychologists call pluralistic ignorance, in which most members of a group disagree with a norm or idea but think everyone else believes in it and so don’t speak up. Gilbert thinks a similar dynamic may be at play in the debate among psychologists regarding the field’s “replication crisis.” In 2015, a group of prominent psychologists published a study, widely covered in the media, concluding that over half of psychology experiments had failed to replicate. Gilbert and colleagues then published a letter pointing out three key flaws in the study’s own methods, suggesting that it therefore didn’t clarify the true frequency of failed replication. Unsurprisingly, the article saying psychology is in crisis received far more attention than the letter that said actually, we don’t really know. The letter did receive significant attention from psychology researchers, however, many of whom wrote to Gilbert, saying they agreed with him but had been afraid to speak up.
Gilbert attributes that fear to a shift in the tone of public discussions of science, which I suspect contributes to broader conclusions that science is corrupt and thus can legitimately be ignored. Whereas people debating different viewpoints, a process that is critical to the advance of science, might once have concluded that “Dan Gilbert is wrong,” notes Gilbert, they now conclude that “Dan Gilbert is evil.” The fear of venturing into the fray means that the public hears far more from science’s critics than its champions. This imbalance contributes to “science is broken” narratives ranging from claims about the pervasiveness of medical error to the insistence that benefits of our treatments are always overhyped and their risks underplayed. The real uncertainty, if not frank falsehood, of many of these claims is thus obscured. Meanwhile, the consequent impressions of scientific foul play are easily generalized to the entire scientific enterprise the next time people encounter evidence they’d rather not believe.
CHANGING THE NARRATIVE
In this charged environment, how do we communicate that science, by its nature, has breaks but isn’t broken? Arguing against marching for science, Robert Young, a coastal geologist who is concerned about worsening politicization of issues such as climate science, urged scientists instead to go into their communities and familiarize people with the scientific process. “We need storytellers,” he wrote, “not marchers.” While I remain sympathetic to the marchers’ impulse, I agree fundamentally with Young: we have lost control of the narrative about science and need to find ways to retell it.
The renowned psychologist Daniel Wegner, who died of amyotrophic lateral sclerosis in 2013, offered one such narrative, a theory he developed at age 11 about the two types of scientists — the bumblers, who plod along, only once in a while accomplishing something but enjoying the process even if they often end up being wrong, and the pointers, who do only one thing: point out that the bumblers are bumbling. Though Wegner noted that when a bumbler bumbles, the pointers “announce it so widely and enthusiastically that the typical bumbler is paralyzed in shame for quite some time,” he also emphasized the pointers’ necessity. Citing William James, Wegner described two fundamental impulses driving scientific progress: “We must know the truth” and “We must avoid error.” He concluded, “We need both bumbling and pointing, grinning credulity and glowering skepticism, if we are ever to establish knowledge. If we go overboard in either direction, though, we risk a field that is not knowledgeable at all.”
Twenty-five years ago, Wegner worried that psychological science was shifting too far toward error avoidance, at the expense of novel, albeit potentially wrong, insights. Today, the metaphor seemingly extends beyond knowledge’s genesis to its communication. Striking the right balance between truth seeking and skepticism is critical to both our process and how we frame its findings. Our current climate of disbelief, I suspect, reflects less an increase in scientific error or uncertainty than a communication environment in which the pointers have seized the megaphones. Being loud is easily perceived as being representative.
As we strategize about changing the narrative, Wegner’s better-known work on thought suppression may be equally germane. As he famously demonstrated, when people are told not to think of a white bear, they find themselves unable to think of anything else. Moreover, there is a rebound effect: if we are initially trying to suppress a thought and are then given permission to indulge it, we focus on the thought far more than if it had never been forbidden in the first place. So although communicating science’s dynamic nature by focusing heavily on its failings risks heightening public disbelief, the remedy is not to hide our errors. Such suppression will “rebound” and undoubtedly fuel further distrust. Instead, I think we have to learn to tell stories that emphasize that what makes science right is the enduring capacity to admit we are wrong. Such is the slow, imperfect march of science.

http://www.nejm.org/doi/full/10.1056/NEJMms1706087