Don't change your worldview based on one study

In the past few weeks, the blogosphere has been abuzz about the dangers of non-replication and the “decline” effect, triggered by Jonah Lehrer’s interesting piece in the New Yorker (mostly behind a paywall). The central claim in the piece is that initially strong or provocative findings diminish in strength over time. The decline might well come from more stringent methodology or better experimental controls rather than from mysterious forces, but that’s not what concerns me today.

My concern is about media reporting and even blogging about new and provocative scientific findings, the very findings that tend to decline. Following a murder, the arrest of a suspect is splashed across the front pages, but when that suspect is exonerated, the correction ends up on the back of the local section months later (if it appears at all). The same problem holds for flawed scientific claims. The thoroughly debunked Mozart Effect still receives media coverage, just as other unsupported findings remain part of the popular consciousness despite a lack of replicability.

Part of the problem is the rush to publicize unusual or unexpected positive findings, particularly when they run counter to decades of established science. The excitement about a new result is palpable and understandable. Who wants to write about the boring old stuff? The media loves controversy, and new results that counter the establishment are inherently interesting. Scientists seek out such controversy as well: what scientist doesn’t relish the idea of overturning an accepted theory?

Scientists understand that initially provocative claims don’t always hold up to scrutiny, but media coverage rarely withholds judgment. If well-established ideas can be shot down by a single study, and that single study gets extensive media coverage, the public understandably won’t know what to trust. The result, from the perspective of a consumer of science, is that science itself appears unstable. It gives people license to doubt non-controversial claims and theories (e.g., evolution). To the public eye, a single contradictory study has the same standing as established theory.

Over the past few days, a paper in PLoS ONE has received extensive attention in the media and on science blogs. The paper reports a study in which patients showed a placebo effect even when they knew they were receiving a placebo. If true, the result would undermine the idea that placebos are effective because people think they are getting the real treatment. The result is shocking and intriguing. It inspired headlines like “Placebos Work Even When You Know” and “Sugar Pills Help, Even When Patients Are Aware of Them.”

The study is small in scope (80 patients), and some bloggers have already begun raising concerns about the method (e.g., Orac, Ed Yong). The bigger issue, though, is that the paper runs counter to long-established theories about the nature of placebo effects. That alone should inspire caution rather than exuberance. This one study, essentially a pilot study, should not lead anyone to reject a long-established empirical tradition. Sure, it can raise questions about the established idea, and it should trigger further research with larger samples and alternative methods. Critically, scientists know that new claims like this one are more likely to “decline” with replication than are well-established results, and they know that such preliminary results require further study. The media, though, gives the same weight to a pilot study like this one as to a larger body of research. Controversial results are reported as the new truth, meaning that scientific “facts” change with each new study.
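To see why a result from a sample this small deserves extra caution, here is a rough back-of-the-envelope sketch. It is not the study’s actual analysis, and the arm sizes and response rates below are purely hypothetical; the point is only how wide the uncertainty around an 80-patient comparison can be.

```python
# A rough sketch, not the study's actual analysis: how uncertain is a
# difference in response rates when an 80-patient pilot is split into
# two arms of 40? All numbers below are hypothetical.
from math import sqrt

n1, n2 = 40, 40          # hypothetical arm sizes
p1, p2 = 0.60, 0.35      # hypothetical response rates (open placebo vs. no treatment)

diff = p1 - p2                                         # observed difference: 0.25
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)     # standard error of that difference
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se   # approximate 95% confidence interval

print(f"difference = {diff:.2f}, 95% CI roughly ({ci_low:.2f}, {ci_high:.2f})")
# Prints an interval of roughly (0.04, 0.46): anything from a trivial effect
# to a huge one is consistent with data this sparse, which is one reason
# preliminary findings so often shrink when larger replications are run.
```

With intervals that wide, a single pilot result simply cannot carry the same evidential weight as a large, repeatedly replicated literature.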

When facts are so easily undermined in the public presentation of science, the public justifiably distrusts scientific claims. Ironically, conveying uncertainty when reporting new results, particularly those that run counter to well-established findings, might increase the public’s confidence in science over time. Acknowledging the tentativeness of new findings avoids the danger of having the “facts” change with each new result. It avoids having the truth wear off.

Sources Cited:
Chabris, C. F. (1999). Prelude or requiem for the ‘Mozart effect’? Nature, 400(6747), 826–827. PMID: 10476958

Kaptchuk, T. J., Friedlander, E., Kelley, J. M., Sanchez, M. N., Kokkotou, E., Singer, J. P., Kowalczykowski, M., Miller, F. G., Kirsch, I., & Lembo, A. J. (2010). Placebos without deception: A randomized controlled trial in irritable bowel syndrome. PLoS ONE, 5(12).

Comments on “Don’t change your worldview based on one study”


  • Thank you for your book and your blog. It’s a great relief to find a reliable source to help me sort the cognitive wheat from the chaff. (For example, why did I seem to be the only person to find Gladwell’s logic flawed in Blink?)

    I hope you will write more about Lehrer’s article in the New Yorker.

    In the meantime, we cannot lay all the blame on “the media” for misreporting scientific studies. They surely could do a better job. Yet, in my work following ADHD-related research, for example, I see time and time again that it is actually the researchers or their institutions who overstate the findings. This makes it doubly dangerous for reporters to write from the press release, as seems to be the accepted strategy nowadays.

    For example, consider the NIMH study showing that ADHD is associated with slowed cortical growth. NIMH and one of the investigators strongly implied that these kids catch up over time; in other words, they “normalize.” I saw the consequences immediately: parents would stop giving their children medication for ADHD, feeling confident that their children would in fact grow out of it. (The longitudinal studies, however, suggest that up to 90 percent of children with ADHD do NOT outgrow it.) And, once again, ADHD would come under fire from the skeptics.

    I could not fathom the logic behind the press release or the reporting. I’m no brain scientist, but surely cortical growth could not be the entire story; what about the myriad neural connections that must form at key times during development and presumably would not, given this delayed cortical growth?

    When I contacted one of the co-investigators at the Montreal Institute, he admirably refused to follow his American colleagues’ lead in hyping the findings; essentially, he confirmed my reservations.

    Why did no other American journalist (to my knowledge) ask this question? Is it a lack of science education? Or is it the all-too-common tendency to write from the press release? Perhaps all of the above. Twenty years ago, when I left the newsroom, I did so with increasing disgust at too many newspapers writing from the press release, sometimes almost verbatim! We can blame tight deadlines and the corporatization of the media. But we can also blame laziness and a lack of intellectual curiosity.

    A similar story happened more recently with The Lancet, which overhyped a controversial study as the first direct proof of ADHD’s genetic underpinnings. However flawed The Lancet’s press release was, the media should have asked better questions of better sources. Here’s a smattering of the coverage:

    http://adhdrollercoaster.org/the-basics/adhds-genetic-basis-old-news-real-news-from-recent-research-first-direct-genetic-link/

    And here is the only critical assessment of the study I have seen so far (one I had to solicit personally):

    http://adhdrollercoaster.org/the-basics/and-now-the-rest-of-the-story/

    Please keep up the great work, and have a wonderful 2011!

    Gina Pera



  • Just to clarify, I have no specific expertise in the field of medical placebos. I do have expertise in the design of appropriate control conditions for testing the effectiveness of an experimental manipulation. In psychology, a central concern when designing a control condition is making sure that the experimental (treatment) group doesn’t have any particular motivation to outperform the control (placebo) group. One way to do that is to make it difficult for participants to tell which condition they’re in. That has been the tradition in medical experimentation as well: if people know they are in the experimental group (or the placebo group), then there is a danger that any differences between the experimental and control conditions could be due to the effects of that knowledge. That’s why researchers often worry about using an appropriate placebo, one that can fool subjects into thinking they are actually in the experimental condition. Belief in the efficacy of a treatment can have powerful effects on the perceived effectiveness of that treatment (and possibly on its actual effectiveness as well).

    More broadly, my point in the post has nothing to do with placebo effects in particular. Rather, it is about the tendency to treat any new report that runs counter to established findings as if it had the same force as the established literature. With the accumulation of additional evidence, replication, and experimentation, the “established” ideas might eventually be overturned. But people should not treat a single new result as definitive evidence that an established view is wrong. Science can produce the occasional contradictory result by chance, and it is the accumulation of evidence, with repeated replication, that leads to a change in theories. I used this placebo paper as an example not because I have any particular belief about it as a study, but because media coverage treated it as definitive. It is an interesting finding, but that doesn’t mean we should throw out the accumulated evidence that it contradicts. Rather, we should withhold judgment unless or until sufficient evidence accumulates to overturn an established view. Established views can be wrong, but in some cases, they are established for a reason.

  • G.D.

    “The bigger issue, though, is that the paper runs counter to long-established theories about the nature of placebo effects.”

    I don’t know very much about the field of medicine, and I have wondered before to what extent a placebo effect requires that you believe that the purported treatment will work (although, to be fair, I do not know if this is what constitutes the “long-established theories” or what those theories are or how well confirmed they are). I cannot, as a layperson, really see any good reason why the effect should depend on a person’s conscious beliefs about the treatment. In fact, if the efficacy of a treatment (including the placebo) depended on the subject’s conscious belief that the treatment works, why does it not follow that gullible people in general respond better to medical treatments than non-gullible people? And if it does follow, are gullible people in fact generally less sick than non-gullible people? Or are gullible people perhaps more prone to get sick in the first place?

    Forgive me if I am making some banal error here, but I’ve really wanted to ask someone knowledgeable about it for some time.

  • On the other hand, if your worldview happens to be wrong, you should change it. Those “long-established theories”? They happen to be wrong and inconsistent with the data, and they were even before this study.

    http://daedalus2u.blogspot.com/2007/04/placebo-and-nocebo-effects.html

