Silly humans think they can do two things at once despite overwhelming evidence that they can barely do one thing at once. If you can’t walk and chew gum at the same time, what makes you think you can walk and text simultaneously? Read on to see some dramatic examples.
I was interviewed for a brief segment on CBS Sunday Morning yesterday. The segment was entitled “The Truth about Lies” and was part of a feature called “The Fast Draw” by Josh Landis and Mitch Butler. The impetus for the interview was last month’s allegation by journalist Brian Deer that some of the evidence Andrew Wakefield used in promoting a vaccine/autism link was fraudulent. As I and many others have noted, there is no evidence for a link between vaccines and autism. But, the lack of evidence hasn’t dissuaded people from believing that there is a link. The question is why people hold onto beliefs about perceived causes and risks when there is little evidence.
The segment is about 2.5 minutes long, and features an animated version of me (apparently, television adds a few pounds and some facial hair, even for animated people). Overall, I thought they did a nice job (excepting, perhaps, some of the neurobabble).
In case the embed code doesn’t work, here’s the link: http://www.cbsnews.com/video/watch/?id=7299406n
This semester at the University of Illinois, I am teaching a graduate seminar on speaking and writing for a general audience. The class has advanced graduate students from all sub-disciplines of psychology, and each week, they will be writing about scientific research in psychology. I have started a new blog site, ionpsych, which will feature their blogging and some of my own. Whenever an ionpsych post touches on topics related to intuitions about the mind, I will link to it from The Invisible Gorilla. Please check out the site — it launched Wednesday, and new posts on a wide range of psychology-related topics will be appearing regularly.
The ionpsych.com website eventually will add another feature: links to particularly insightful or helpful blog posts about psychology from around the blogosphere. I’m especially interested in gathering together posts that could be used when teaching an introductory psychology class. If you know of or have written posts that discuss classic experiments, relate new findings to older research and theory, generalize research to daily life, provide historical overviews, etc., please email me directly or post links in the comments. Once that part of the ionpsych.com website goes live, I will blog about it here as well as there.
Imagine you’re taking an introductory psychology class and you have to study for your first test. You’ve read the assigned text, and now you have three more days to prepare. What should you do?
- Re-read the text once more each day
- Spend each day studying the text to identify critical concepts and the links among them
- Quiz yourself the first day, reread the text the second day, and quiz yourself again the third day
Do you think you know the answer?
Students in my introductory psychology class regularly come to my office hours after failing the first exam to ask what they did wrong. Some even claim to have spent hours re-reading the text, highlighting important concepts, and even taking notes. Where did they go wrong?
In The Invisible Gorilla, Chris Chabris and I argue that these students fell victim to the Illusion of Knowledge—they thought they had a deeper understanding of the material than they actually did. But why did they have that mistaken intuition? The answer seems to be that they mistook familiarity and fluency for real understanding.
The same principle explains why you might think you know how a toilet works when all you really understand is how to work a toilet—your familiarity with using a toilet leads you to the false impression that you know far more than you actually do. What’s most remarkable about the illusion of knowledge is how easily we can overcome it. What’s most disturbing is how rarely we actually do.
To determine whether you have genuine knowledge about toilets, just ask yourself a few diagnostic questions and force yourself to answer. For example, how does water fill up the bowl? What causes the water to leave the bowl? Why does water leave the tank? Each time you can produce the correct answer, ask yourself a slightly deeper, next-step question. Eventually, you will reach the limits of your knowledge. You’ll know what you don’t know.
The same principle applies to studying and learning a text. If you read the text over repeatedly, you will familiarize yourself with it, but you won’t know the limits of your knowledge. Only by testing whether you can produce the answers yourself can you verify what you know. And, in a study just published online in Science, Karpicke and Blunt find that testing yourself leads to more effective learning and retention than does re-reading the text repeatedly or even mapping the core concepts of the text. It works because it overcomes the illusion of knowledge. Forcing yourself to test your knowledge is the most reliable way to identify the limits of your knowledge.
The reason my students come to me after failing their exam is that they have the wrong intuitions about what makes for effective learning. They thought that reading the text repeatedly would engender the best learning, and it apparently never occurred to them to check their own understanding. The same was true for Karpicke & Blunt’s subjects. They predicted that repeated studying would lead to better learning than would trying to retrieve what they had already learned. That is, they favored the approach that would lead to illusory knowledge rather than real knowledge.
Karpicke, J. D., & Blunt, J. R. (2011). Retrieval practice produces more learning than elaborative studying with concept mapping. Science. PMID: 21252317
Deadly Choices explores the history of the introduction of new vaccines and the anti-vaccine movements that tend to follow. Offit is a prominent virologist and he doesn’t hide his scorn for some in the anti-vaccine movement. His book is unlikely to change the mind of anyone who is firmly within that movement, but I’m not sure what would. The book is a must-read for anyone interested in a detailed, well-written, and thoroughly sourced discussion of the scientific basis of vaccines, the real and imagined risks of vaccination, and the consequences of the choices we make about vaccines.
Given that we wrote about the vaccine/autism debate in The Invisible Gorilla in our chapter on the Illusion of Cause (people tend to draw causal inferences from anecdotes, narrative, and temporal associations), I requested and received a review copy of Offit’s new book from the publisher.
Not surprisingly, the book thoroughly documents some of the unfounded claims that the anti-vaccine movement has made and explains the biological reasons why some of the perceived risks of vaccines either are not a risk or physiologically CAN’T be a risk. For example, many of the “green our vaccines” campaigns are based on the concern that there are nasty chemicals in vaccines, which is true. As Offit notes, though, it’s not the substance that’s the problem, it’s the dose. Even water is toxic when taken in a large enough dose (on occasion, college students die during frat hazings when required to drink too much water at once). Most of the substances that scare people away from vaccines (e.g., aluminum and formaldehyde) are in our bodies and bloodstream all the time. Our foods contain them, and the quantities in vaccines are relatively negligible. Similarly, babies are exposed to countless bacteria and viruses, so the fact that children get a seemingly large number of vaccines does not mean that those vaccines tax the immune system at all (Offit also notes that it’s not the number of vaccines, but the number of elements within those vaccines that require an immune response).
The book is not one-sided. The first chapters discuss all of the well-documented cases of actual vaccine injury and the real side effects of vaccines (e.g., the live polio vaccine could cause polio, although most vaccines don’t use live viruses). The book uses these tragic cases to document how the CDC and regulatory agencies now catch really rare side effects that didn’t show up in the large-scale testing necessary for approval (side effects that are 1 in a million sometimes don’t show up in testing with 50,000 people). Vaccines undergo more rigorous testing than other drugs, and the mechanisms in place to detect rare side effects work far more effectively than they do for other drugs. In order to introduce a new vaccine into the recommended schedule, testing must show that it doesn’t interact in any way with the remainder of the schedule.
Offit also discusses some of the things that vaccine safety advocates could do (but haven’t done) to help make vaccines safer. For example, people who have egg allergies cannot get vaccines that are made using chicken eggs (e.g., flu vaccine). There might well be alternative ways to make such vaccines, but the pharmaceutical industry has no financial or government-initiated incentives to develop those alternatives. Vaccine safety advocates could push them to do so.
Offit makes the case that anti-vaccine movements raise fears of vaccines that are inconsistent with the science. In so doing, he draws parallels between current anti-vaccine claims and those made over a century ago after the introduction of the smallpox vaccine. Many of the fears of that vaccine are laughable by today’s standards (e.g., that children would develop cow-like facial features because the vaccine was initially taken from cows infected by cowpox). But Offit argues, fairly convincingly, that the logic and nature of current anti-vaccine scares are largely the same as those raised over a century ago and in each subsequent anti-vaccine movement. He also shows that most of the anti-vaccine proponents as well as self-identified vaccine safety advocates (including Dr. Bob) lack any relevant background in virology, epidemiology, or statistics, and that they typically lack the training to evaluate the actual risks of vaccines.
The most compelling chapter is the last one, in which Offit describes what happens when someone who could not be vaccinated (because they were too young) comes into contact with an infected child whose parents decided not to vaccinate. The choice not to vaccinate affects people other than your own child–it puts at risk young infants and others who can’t be vaccinated because their bodies lack a typical immune response. The chapter is reminiscent of PBS Frontline’s unforgettable documentary The Vaccine War, which depicted the impact of vaccine-preventable diseases like pertussis (full show embedded below).
At times, the book can be a bit heavy-handed in its tone–Offit’s perspective is clear throughout, and he doesn’t pull his punches. Sometimes his parallels between historical anti-vaccine movements and current ones are a little forced, and in a few cases, the book is perhaps a bit more dismissive than is necessary. For example, in passing Offit implies that all chiropractors reject the germ theory of disease. Although a rejection of the germ theory might have motivated chiropractors at the start, I’d hazard a guess that most present-day chiropractors accept the germ theory of disease. Overall, though, the book presents the scientific evidence in a compelling, comprehensive, thoroughly documented, and engaging way.
For prospective parents whose prior information about vaccines comes from friends, the internet, or even their pediatrician, this book is a must-read by one of the top scientific experts in the field. It provides the background and evidence you need to evaluate claims about the dangers and benefits of vaccines and to make the best choice for your children AND your community.
My review is based on a pre-release version of Deadly Choices. I requested and received a review copy from the publisher. I have corresponded with Offit on several occasions since the publication of our own book, mostly to discuss media coverage of vaccine science and the anti-vaccine movement. A slightly different version of this review was posted on Amazon.com.
In the past few weeks, the blogosphere has been abuzz about the dangers of non-replication and the “decline” effect, triggered by Jonah Lehrer’s interesting piece in the New Yorker (mostly behind a paywall). The central claim in the piece is that initially strong or provocative findings diminish in strength over time. The decline might well come from more stringent methodology or better experimental controls rather than via mysterious forces, but that’s not what concerns me today.
My concern is about media reporting and even blogging about new and provocative scientific findings, the very findings that tend to decline. Following a murder, the arrest of a suspect is broadcast on the front pages, but when that suspect is exonerated, the correction ends up on the back of the local section months later (if it appears at all). The same problem holds for flawed scientific claims. The thoroughly debunked Mozart Effect still receives media coverage, just as other unsupported findings remain part of the popular consciousness despite a lack of replicability.
Part of the problem is the rush to publicize unusual or unexpected positive findings, particularly when they run counter to decades of established science. That excitement about a new result is palpable and understandable. Who wants to write about the boring old stuff? The media loves controversy, and new results that counter the establishment are inherently interesting. Scientists strive for such controversy as well—what scientist doesn’t relish the idea of overhauling an accepted theory?
Scientists understand that initially provocative claims don’t always hold up to scrutiny, but media coverage rarely withholds judgment. If well-established ideas can be shot down by a single study, and that single study gets extensive media coverage, the public understandably won’t know what to trust. The result, from the perspective of a consumer of science, is that science itself appears unstable. It gives people license to doubt non-controversial claims and theories (e.g., evolution). To the public eye, a single contradictory study has the same standing as established theory.
Over the past few days, a paper in PLoS has received extensive attention in the media and on science blogs. The paper reports a study in which patients showed a placebo effect even when they knew they were receiving a placebo. If true, the result would undermine the idea that placebos are effective because people think they are getting the real treatment. The result is shocking and intriguing. It inspired headlines like “Placebos Work Even When You Know” and “Sugar Pills Help, Even When Patients Are Aware of Them.”
The study is small in scope (80 patients), and some bloggers have already begun raising concerns about the method (e.g., Orac, Ed Yong). The bigger issue, though, is that the paper runs counter to long-established theories about the nature of placebo effects. That alone should inspire caution rather than exuberance. This one study, essentially a pilot study, should not lead anyone to reject a long-established empirical tradition. Sure, it can raise questions about the established idea, and it should trigger further research with larger samples and alternative methods. Critically, scientists know that new claims like this one are more likely to “decline” with replication than are well-established results, and they know that such preliminary results require further study. The media, though, gives the same weight to a pilot study like this one as to a larger body of research. Controversial results are reported as the new truth, meaning that scientific “facts” change with each new study.
When facts are so easily undermined in the public presentation of science, the public justifiably distrusts scientific claims. Ironically, conveying uncertainty when reporting new results, particularly those that run counter to well-established findings, might increase the public’s confidence in science over time. Acknowledging the tentativeness of new findings avoids the danger of having the “facts” change with each new result. It avoids having the truth wear off.
Chabris, C. F. (1999). Prelude or requiem for the ‘Mozart effect’? Nature, 400 (6747), 826-827. PMID: 10476958
Kaptchuk, T. J., Friedlander, E., Kelley, J. M., Sanchez, M. N., Kokkotou, E., Singer, J. P., Kowalczykowski, M., Miller, F. G., Kirsch, I., & Lembo, A. J. (2010). Placebos without deception: A randomized controlled trial in irritable bowel syndrome. PLoS ONE, 5 (12)
During the summer of 2010, the California Office of Traffic Safety conducted a survey of 1671 drivers at gas stations throughout California. The survey asked drivers about their own driving behavior and perceptions of driving risks. Earlier this year I posted about the apparent contradiction between what we know and what we do—people continue to talk and text while driving despite awareness of the dangers. The California survey results (pdf) reinforce that conclusion.
59.5% of respondents listed talking on a phone (hand-held or hands-free) as the most serious distraction for drivers. In fact, 45.8% of respondents admitted to making a mistake while driving and talking on a phone, and 54.6% claimed to have been hit or almost hit by someone talking on a phone. People are increasingly aware of the dangers. As David Strayer has shown, talking on a phone while driving is roughly comparable to driving under the influence of alcohol (pdf). Yet, people continue to talk on the phone while driving.
Unlike some earlier surveys that only asked general questions about phone use, this one asked how often the respondents talked on a phone in the past 30 days. 14.0% report regularly talking on a hand-held phone (now illegal) and another 29.4% report regularly talking on a hands-free phone. Fewer than 50% report never talking on a hands-free phone while driving (and only 52.8% report never talking on hand-held phones). People know that they are doing something dangerous, but they do it anyway (at least sometimes).
Fewer people report texting while driving than talking while driving: 9.4% do so regularly, 10.4% do so sometimes, and another 10.6% do so rarely. In other words, more than 30% of subjects still text while driving, at least on occasion, even though texting is much more distracting than talking and is substantially worse than driving under the influence.
68% of respondents thought that a hands-free conversation is safer than a hand-held one, a mistaken but unfortunately common belief. The misconception is understandable given that almost all laws regulating cell phones while driving focus on hand-held phones. The research consistently shows little if any benefit from using a hands-free phone—the distraction is in your head, not your hands.
Fortunately, there is hope that education (and perhaps regulation) can help. The extensive education campaigns about mandatory seat belt use and the dangers of drunk driving have had an effect over the years: 95.8% report always using a seat belt, and only 1% report never wearing a seat belt. Only 5.9% reported having driven when they thought they had already had too much alcohol to drive safely.
Strayer, D., Drews, F., & Crouch, D. (2006). A comparison of the cell phone driver and the drunk driver. Human Factors: The Journal of the Human Factors and Ergonomics Society, 48 (2), 381-391. DOI: 10.1518/001872006777724471
I just learned from a friend that staffers at the Association for Psychological Science (APS) in Washington have formed a soccer team for a DC-area league named Invisible Gorillas. Fantastic! They’ll have a great advantage given all the additional players they’ll be able to sneak onto the field during the game.
Reto Schneider has a great post about a study by Youngme Moon. The study took advantage of the well-known reciprocity effect. People are more likely to provide information, money, assistance, etc. to someone who has helped them. The reciprocity effect is central to many methods of persuasion. If you do someone a favor, even a small one, they will feel pressure to reciprocate. That’s why fundraisers often send you “free” return address labels — they hope that reciprocity will lead you to make a donation that far exceeds the cost of the labels (which you never wanted in the first place).
In the studies, people responded to questions about themselves on a computer. For example, the computer might ask “What do you dislike about your appearance?” or “What have you done in your life that you feel most guilty about?” Getting people to provide detailed responses to such personal questions can be challenging, but Moon found that engaging reciprocity helped. If the computer revealed something “personal” about itself, people were more likely to do the same. For example, the computer might state:
“You may have noticed that this computer looks just like most other PCs on campus. In fact, 90% of all computers are beige, so this computer is not very distinctive in its appearance. What do you dislike about your physical appearance?”
People were much more likely to reveal personal details about themselves when they were responding to the computer’s revelation. By revealing something about itself, the computer led people to reciprocate. What’s most interesting about this case is that people were affected by the reciprocity effect even though they weren’t interacting with another person — they were responding to questions on a survey.
This approach provides a great tool for researchers who need their participants to provide personal information. But it could also be used for more nefarious purposes. Telemarketers could use it to solicit personal information. I would bet that people would be more likely to give up their online banking password in response to a reciprocity prompt as well.
Moon, Y. (2000). Using computers to elicit self-disclosure from consumers. The Journal of Consumer Research, 26 (4), 323-339