Remarkable false memories

Update: Fixed some typos.

You probably think that you remember how you first heard about the attacks on the World Trade Center on September 11, 2001. When I think back to the events of that day, I feel like I’m playing back a video recording of the events as I experienced them. That experience of vividness is misleading, though. Highly emotional and meaningful events produce memories that are more vivid, but those memories are subject to the same distortions that affect more run-of-the-mill memories. In The Invisible Gorilla, we reported a test of my memories of 9/11. I wrote out a detailed description of everything I could recall: who was with me, what I was doing, where I was, what I did after first hearing about it, and what I did for the rest of that day. The narrative I produced was extensive and detailed. I then emailed each of the people I remembered having been with me that day and asked them to recall their own memories of that day’s events. I told them nothing about what I remembered. As it turns out, two of the people I remembered being there had definitive evidence that they weren’t. And I hadn’t remembered the person who was actually in the room with me when we first heard and who was there for the entire morning!

One of the main reasons we suffer from the Illusion of Memory is that we rarely have the opportunity to test the accuracy of our vivid memories. We just trust that those rich details we can remember must reflect what actually happened. I encourage you to try this test for yourself — you might be surprised by the discrepancies between your own memories and those of the people who were with you.

Although most people don’t get regular evidence of memory distortions, celebrities and politicians do, because the media often examines the claims they make about their past. For example, Hillary Clinton famously claimed during the 2008 presidential primary to have come under sniper fire when landing in Bosnia some years earlier. Yet contemporaneous media coverage revealed a greeting ceremony on the tarmac in which a Bosnian child read her a poem — no snipers. We can’t say for certain whether Hillary was lying or misremembering, but what was striking during the primary was the assumption that she must have been lying. That sort of recollection would be entirely consistent with more typical memory distortion.

Thanks to a tip from a reader of our blog, I encountered an even more dramatic example of proven memory distortion. In a 1995 film, Captain Robert Daniell was interviewed about his experiences with the British army when he helped liberate the Belsen concentration camp. The details of the interview are documented in an edited volume about the Belsen camp (Reilly et al., 1997):

Daniell recounted how he was the first British soldier to go into Belsen and how he “saw the gas ovens, which had been cleaned out because there was no fuel to run them. This was why there were so many corpses lying around … It was pathetic. There were worn paths to each of the gas chambers and on the side a pile of spectacles at least 6ft high.”

Daniell reported that “it is as clear to me now as it was then.” The only problem is that Belsen didn’t have gas chambers. Daniell’s memory conflated his personal experiences with later coverage of other extermination camps as well as inaccurate popular media coverage of Belsen. Think about that for a moment — most of us would assume that our memory of liberating a Nazi camp would be indelibly imprinted in our minds. It would be something we couldn’t forget. How could you forget it? But memory doesn’t work like a video recording, and our memories can (and often do) change over time.

Unfortunately, people use mistaken memories like Daniell’s to support revisionist conspiracy theories about what actually occurred in Nazi Germany. They assume that he was lying about his experiences to cover up some conspiracy. Rather than supporting conspiracy theories, the example illustrates why we shouldn’t rely solely on individual memory to document historical events. Even if it feels as clear now as it did at the time, what we remember is not necessarily what happened.

As we discuss in The Invisible Gorilla, many conspiracy theories capitalize on memory distortions. A similar pattern emerged from George W. Bush’s mistaken recollection of having seen the first plane hit the World Trade Center (footage of that plane hitting the towers didn’t exist at the time — some footage was discovered months later by a documentary film crew that had accidentally recorded it). Conspiracy theorists assumed that Bush must be remembering correctly, which meant that he must have known about the attack in advance. Far more likely is that his memory was distorted by the same mechanisms that led to my memory distortion as well as those of politicians and soldiers.

The next time you hear a politician or celebrity make a false claim about what they remember, keep in mind that they might not be lying maliciously. They might not even realize their memory is wrong (and if you tell them, they might not believe you).

Big chest thump to Michael Stopp for directing me to the quotes from Daniell.

Sources cited:
Reilly, J., Cesarani, D., Kushner, T., & Richmond, C. (1997). Belsen in history and memory. London: Frank Cass.

Other Sources:
Just about anything written by Elizabeth Loftus about memory distortion.

more gorilla hijinks

People in gorilla suits are tarnishing my good name and reputation. First, they stand idly by while a neighbor’s house burns.

[Image: a gorilla watching a house fire]

Firefighters wrap up at the scene of a fire at 15 M. St. in Hampton this morning, where a home was gutted by two fires. Right, Wayne McGowen, who was sleeping in the basement of the house when it caught fire, watches firefighters at the scene along with neighbor Kali Burns, who was dressed as a gorilla. (JASON SCHREIBER)

That’s bad enough, but then on Halloween, a person in a gorilla suit stabbed someone.

And now, two people in gorilla suits (and one in a chicken suit) punched a kid in the head and then stole his bike. At least it was the chicken-man who stole the bike — the gorillas ran away. Not very gorilla-like of them. Even worse, why are they keeping company with evil chickens?

I hope this isn’t a trend. It gives gorillas a bad name. More importantly, this might be the tip of the iceberg. These are just the gorilla crimes that were noticed. The smart gorillas are probably getting away with white-collar crimes.

The first study of inattentional blindness?

Over on our Gorilla Guys blog at Psychology Today, I’ve just added a new post on what might well be the first series of studies documenting inattentional blindness. It comes from an unlikely source, but it’s seasonally appropriate. Check it out.

Science or Sciencey [afterword]

On 1 October 2010, I completed an extended series of posts examining the interplay between science and marketing. In that series, I used a blog post on the Posit Science website as a case study of the ways in which appeals to science may lead to effective marketing but might oversell the science itself. After my third post in the series, the Posit Science post’s author, Peter Delahunt, wrote a comment citing some additional evidence in support of the claims he and the Posit Science website have made. In this afterword, I discuss that evidence.

I would first like to again thank Peter for engaging in a discussion about these issues and for putting me in touch with the authors of one of the papers that is not yet in print (thanks to both of them for sending it as well). I hope this post series will lead to a convergence of marketing and science in which claims about training are more directly based on what can be concluded from the research. Ideally, it will lead to additional studies that might more directly support the claims used in marketing the products.

In his comment, Peter notes about my post series that:

you have focused your attention on the Roenker (2003) paper, suggesting that this is the main evidence for Posit Science’s claims that DriveSharp can make you a safer driver.

That’s accurate, but I focused on the Roenker et al. (2003) paper for two reasons: it was the only study to directly measure the effects of cognitive training on driving performance, and it was the basis for the following strong claims about the effectiveness of DriveSharp:

Training allows drivers to react faster providing an additional 22 feet of stopping distance at 55 mph.

Training reduces dangerous driving maneuvers by 36%.

As I documented, the first claim is unsupported by the actual study—there is no published evidence that training has any effect on actual stopping distances in driving. I also argued that the second claim is not nearly as impressive as it sounds because: (a) the number of such maneuvers was small, (b) the paper provided no evidence about the distributions of such maneuvers across subjects, (c) the coding may have been subjective, and (d) the coders likely were not entirely blind to the training condition. Peter’s comment did not provide any further evidence in support of these claims. Nor did it challenge my claim that any conclusions from those studies cannot be applied to unimpaired drivers or younger drivers. There is no published evidence that DriveSharp or other cognitive training programs improve driving for unimpaired younger drivers, but the Posit Science website does not mention this limitation on the generalizability of their claims.

Peter’s comment cites a few additional papers that, he argues, provide converging evidence in support of the effectiveness of DriveSharp training. My final post in the series already addressed one of those papers (Edwards et al., 2009b). That paper showed that older drivers who underwent training were more likely to still be driving 3 years later. As I noted, that finding says nothing about driving ability. In fact, it could reflect overconfidence by those participants in the effectiveness of training. If so, then it might mean that training actually leads to more dangerous driving by encouraging impaired drivers to stay on the road longer!

A second Edwards et al. (2009a) paper also used 10 hours of speed training and obtained self-report measures of mobility, driving exposure, and driving difficulty 3 years later (no new training was conducted as part of this study — the data come from two larger training studies). Edwards et al. (2009a) noted that, “a limitation of this study is the use of self-report to assess driving mobility outcomes.” As with the other Edwards et al. paper, people might think that training helps even if it objectively doesn’t. The control group in this second paper was an internet training session in which older impaired adults were taught to use computers and set up email accounts. The training group received speed training on a task that involved spotting vehicles. The central finding was that for some of the measures, the speed training group showed less of a decline than did the computer-training group. Unfortunately, the study did not report the most crucial statistical test: did the speed training group decline significantly less than the computer training group? Instead, each training group was compared to a reference group of unimpaired older subjects. The rate of decline for the speed training group did not differ significantly from the reference group, but the rate of decline for the computer group did. That test is inadequate to determine whether the two training groups differed from each other. From the graphs, it appears that they might not have differed significantly on many of the measures. If they didn’t, it’s hard to argue that speed training in particular did anything for their driving.
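The missing comparison matters because a difference in significance is not itself a significant difference. Here is a minimal sketch with invented numbers (none of these counts come from the papers; they are chosen purely to illustrate the pattern) showing how one training group can look indistinguishable from a reference group while the other looks reliably worse, even though the two training groups do not differ from each other:

```python
# Hypothetical counts, chosen only to illustrate the statistical point:
# "A didn't differ from the reference, but B did" does not imply A beat B.
from math import sqrt, erfc

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided z-test p-value for the difference between two proportions."""
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    return erfc(abs(z) / sqrt(2))  # equals 2 * P(Z > |z|)

# Number of participants showing a decline, out of n (all invented):
ref, n_ref = 100, 1000      # unimpaired reference group
speed, n_speed = 13, 100    # speed-trained group
comp, n_comp = 18, 100      # computer-trained control group

p_speed_vs_ref = two_proportion_p(speed, n_speed, ref, n_ref)     # ~0.35
p_comp_vs_ref = two_proportion_p(comp, n_comp, ref, n_ref)        # ~0.01
p_speed_vs_comp = two_proportion_p(speed, n_speed, comp, n_comp)  # ~0.33
```

With these numbers, the speed group does not differ significantly from the reference, the computer group does, and yet the direct comparison between the two training groups is nowhere near significant. Only that last test speaks to whether speed training outperformed the control training.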

Finally, Peter cited an in-press paper based on the large-scale ACTIVE trial (Ball et al., in press). In that study, elderly people received 10 hours of training in the late 1990s, and in the years since then, they have answered questions and completed follow-up studies (including those in the Edwards et al., 2009a paper). What did the study show? Speed-of-processing training doubled the risk of being hit by another car.

Yup. You read that right. Older participants in the speed training group were twice as likely to have other cars hit them. That’s not the conclusion that Ball et al (in press) draw from their study or what the Posit Science website claims, but it follows just as logically from the actual results.

Here is what Peter claimed about that paper in his comment:

The researchers found that drivers who did the cognitive training included in DriveSharp had at-fault crash rates that were almost half of the non-trained control group over the 5-year period following training.

Peter’s conclusion is also consistent with the results, but both my conclusion and Peter’s are misleading for the same reason: There was no significant difference in the overall rates of accidents between the training group and the control group. That’s right. There was no difference in accident rates as a result of training. About 22.5% of the subjects in the control group and 19.6% in the training group had an accident, not a reliable difference. (Note that the Posit Science website incorrectly hypes the result that training “cuts risk of a car accident by 50%.” That’s not just imprecise. It’s wrong. The study didn’t show any difference in the overall accident rate as a result of training. The version in Peter’s comment is more precise.)

The doubling (or halving) of accident rates for the trained subjects only appeared when analyzing specific subsets of the accidents, non-fault accidents for my conclusion and at-fault accidents for Peter’s conclusion. The average at-fault accident rate was 18.34% (75/409) in the control group and 10.65% (18/169) in the training group. Yes, that difference is statistically significant and could be viewed as a 50% reduction. The average non-fault accident rate was 4.2% (17/409) for the control group and 9.5% (17/169) for the trained group. So, training makes people more than twice as likely to be hit by someone else! Both claims are equally (in)valid, but really, neither is an appropriate conclusion without acknowledging the alternative conclusion. These conclusions illustrate the danger of breaking down a non-significant difference into sub-components.
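The same point can be checked numerically. In the sketch below, the at-fault and non-fault counts are the ones reported above; the overall accident counts (92 of 409 and 33 of 169) are my back-calculation from the reported 22.5% and 19.6%, so treat them as approximate. A simple two-proportion z-test reproduces the pattern: no overall difference, but each subset taken alone crosses the significance threshold, in opposite directions:

```python
# Two-proportion z-tests on the accident counts quoted above.
# Overall counts (92, 33) are approximated from the reported percentages.
from math import sqrt, erfc

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided z-test p-value for the difference between two proportions."""
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    return erfc(abs(z) / sqrt(2))  # equals 2 * P(Z > |z|)

p_overall = two_proportion_p(92, 409, 33, 169)    # all accidents: ~0.43
p_at_fault = two_proportion_p(75, 409, 18, 169)   # "50% safer": ~0.02
p_no_fault = two_proportion_p(17, 409, 17, 169)   # "twice the risk": ~0.01
```

Neither subgroup result should be reported without the overall null result, since slicing a non-significant total into subsets is exactly how spurious headline numbers get made.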

If training really helps driving, people should be less likely to be in accidents. Period. They should be better able to avoid situations that put them at risk. The results show that they aren’t able to avoid such situations: they are in accidents at the same overall rate as the control group. They are not 50% safer. They are 50% less likely to cause an accident, but by the same token, they are twice as likely to be in an accident that wasn’t their fault.

Peter’s comment ends with another bit of marketing: “However, in aggregate across multiple studies, true scientific results emerge. That is certainly the case for DriveSharp training, where across multiple studies and thousands of participants, we reliably see improvements in multiple measures of driving safety.”

First, these studies did not have thousands of participants. Although the ACTIVE study did have thousands of participants in total, the critical training group in the Ball et al study had only 179. Edwards et al (2009a) had 66 subjects in the training group. Edwards et al (2009b) had 276, but those were the same trained subjects as in the other two papers and this paper just involved analyzing a different outcome measure from the same training group. The Roenker paper included 44 subjects in the speed group. So, in total, fewer than 350 speed-trained subjects form the basis for the conclusions, not thousands of participants.

The second part of this claim, that these studies reliably reveal improvements in multiple measures, is more sciencey. Reliable measures are those that can be replicated across studies, but none of these studies have been replicated (to my knowledge, nobody has tried to replicate any of them directly), and none of the outcome measures are repeated across multiple experiments. These papers also did not constitute independent replications as would be needed for strong converging evidence. The same trained subjects were used for different analyses across papers. (Although it likely had no impact on the results, it’s also worth noting that the authors of these papers are stockholders in Posit Science and explicitly acknowledge their conflict of interest in the paper.) Another interpretation of “reliably” is that training consistently produces better performance, but that’s not true either. In the Roenker et al study, only one outcome measure showed any benefit of speed training.

Peter concludes his comment by noting that Posit Science is “very comfortable with the statements we have made regarding the efficacy of our programs.” The fact that Posit is comfortable making such strong claims for direct benefits of training despite fairly minimal (albeit suggestive and encouraging) evidence is sciencey marketing, not science.

The scientific (rather than sciencey) response would be to remove unsubstantiated claims and qualify overstated ones on the website and in marketing materials. Yet, at the time of this posting (nearly 4 weeks after my original post), the Posit Science website continues to make all of the same claims with no additional qualifications (see the “clinical proof” tab). In fact, it even distorts the results of Ball et al. (in press) to make training seem more potent: the site incorrectly claims that training “cuts risk of a car accident by 50%” when there was no difference in overall accident rates as a function of training (see above).

Again, I hope that science will bear out many of the claims on the Posit Science site and that their training program will eventually be proven successful. That would be a boon to society. In the meantime, though, marketing claims should be treated skeptically. Several of them are entirely unsupported by the existing evidence, and others lack qualifications they need. It will be interesting to see whether some of these sciencey claims are reined in so that Posit Science can more justifiably support their claim to be science-based.

Sources Cited:

Ball KK, Edwards JD, Ross LA, & McGwin G (2010). Cognitive training decreases motor vehicle involvement among older drivers. Journal of the American Geriatrics Society.

Edwards JD, Myers C, Ross LA, Roenker DL, Cissell GM, McLaughlin AM, & Ball KK (2009). The longitudinal impact of cognitive speed of processing training on driving mobility. The Gerontologist, 49 (4), 485-94 PMID: 19491362

Edwards JD, Delahunt PB, & Mahncke HW (2009). Cognitive speed of processing training delays driving cessation. The journals of gerontology. Series A, Biological sciences and medical sciences, 64 (12), 1262-7 PMID: 19726665

Roenker DL, Cissell GM, Ball KK, Wadley VG, & Edwards JD (2003). Speed-of-processing and driving simulator training result in improved driving performance. Human factors, 45 (2), 218-33 PMID: 14529195

Don't text and fly

A Halloween warning (sent to me by my Auntie M.):

[Image: a witch hitting a post, captioned “Don't Text and Fly”]

summary - 'invisible gorillas' in your life

Thank you to everyone who submitted a comment or sent me email describing your own ‘invisible gorilla’ experiences. In addition to the many great comments on the original post, many people emailed me personal examples in which they misperceived or misremembered the world around them. A few of you also sent in suggestions of ways that other people might have succumbed to everyday illusions. Those are great too. I’ve posted a few of the emailed examples below (with permission). Please keep sending in your examples!

As promised, I have randomly selected one submitter to receive a free copy of the Audio CD version of the Invisible Gorilla, and the winner is … drumroll … Jeremy. [DAN: I will contact you directly, Jeremy]. We’ll have more giveaways in the near future.

A few examples of ‘invisible gorillas’ I’ve gotten by email:

  • From Cathleen Moore: I have a funny story to tell you! I was driving to a Dr.’s appointment during the day a few weeks ago and you were on Science Friday with Ira Flatow. That was cool enough. But I was intently listening to how you guys talk about this stuff…and driving…and I totally missed my turn! Isn’t that awesome?! I can’t even drive and listen to the radio at the same time!
  • From Tom Rhoads: I failed to notice an invisible gorilla late last week on my way home from work. I live in Vermont and have a beautiful drive through farm country on winding country roads. There is about a one mile straight stretch during the drive and everyone drives a little over the 50 mph limit there. I was driving along on my way home on that stretch with a car with an out of state license well ahead of me and driving faster than I was. The leaves have started to change here, and on that night there was a very impressive rainbow clearly visible behind the trees. There was no room to pull over, so the car in front of me stopped dead in the road, probably to get some great photographs. I had plenty of time to stop, but I didn’t even notice that I was closing on the car until I got right behind it. I still don’t know how I missed it, but I had to swerve into the left lane and barely made it around without going off the road or hitting the other car. Luckily there were no cars coming the other way so once I got around the other car my drive was uneventful the rest of the way home. I still have no idea how I missed seeing the car in front of me. I might have been thinking about work or about how lucky I am to live in such a lovely place. I’m just glad I was able to act quickly and glad I was lucky.
  • Anonymous: I thought of you as we passed a sign urging drivers to “Report impaired driving. Dial 911 on your cellular phone.” [EDITOR: I wonder if you would then need to call 911 to report yourself for calling 911 while driving...]
  • From Julie: My “invisible gorillas” often pop up when I’m walking around with friends. Every few minutes, someone will point out an attractive girl (or guy), an interesting outfit, or some other fact that they considered remarkable… and I’ll have completely missed it. Every time. I don’t know whether they’ve just got a wider attentive span than I do, or if I’m just focusing too much on maintaining my conversation with them, or what, but it seems that I’m constantly out of the loop when it comes to interesting social phenomena.

invisible gorillas in your life

I’ve gotten a number of great examples of “invisible gorillas” by email and in comments on my earlier post. Please continue sending in your personal examples. I’m looking for any examples in which your intuitions about your own mind were wrong, ideally in dramatic fashion. They don’t need to be cases in which you failed to see something obvious — they can include any case in which your intuitions about your mind (attention, perception, memory, confidence, knowledge, etc.) diverged from the reality.

You can send your experiences directly to me by email or, even better, post them as a comment on my earlier post. In a week or so, I will compile all the comments and will randomly select one submitter to receive a free copy of our book on CD.

'Invisible gorillas' in your life?

Have you ever failed to notice an “invisible gorilla” in your daily life? Perhaps you were in a car accident and never saw the other car. Maybe you were in a bicycle or motorcycle accident in which a car driver never saw you coming. Or, maybe you missed an obvious business opportunity that was staring you in the face because you weren’t expecting it and were focused on something else. Or, perhaps you missed an “obvious” opportunity in your personal life.

We’d like to hear about your experiences with those metaphorical ‘invisible gorillas’ in your own life. If you can think of a way that invisible gorillas have affected you, please email me about it or post a comment on this blog post. I’ll compile the examples, and then I’ll randomly select one submitter to receive a free copy of the audiobook version of The Invisible Gorilla.

Let’s hear how invisible gorillas have affected you!

Texting while driving -- really, folks?!

In just the past week, this humble primate has received multiple emails, Twitter messages (and followers), and blog comments promoting wonderful new products that claim to permit safe texting while driving. I think most are in response to my post entitled “silly ideas about safe texting.” I’m guessing most of these are just spammers who see something about “safe texting” and think I’m sympathetic to their promotions. Silly hairless apes.

Here are a couple of examples. One company posted the following comment to my previous post (I subsequently blocked it as spam):

With 39 States having “No Phone or Texting While Driving” Laws in effect by October 1, 2010 you can save yourself the embarrassment and humiliation by getting a new HandsFree way to make phone calls and texting while driving assistance from “Kylee” your new Virtual Assistant provided here.

Oooh… wouldn’t want to be humiliated, would we?

My tweets (@invisgorilla) are now being followed by VoiceAssist, whose website has a big banner stating:

“Keep your hands on the wheel and eyes on the road. Text, call and email all by voice command.”

To their credit, many of their tweets are about the dangers of distracted driving. But, the danger from talking on a phone has nothing to do with keeping your hands on the wheel, and people can miss critical events on the road even when looking right at them. The problem is with your head, not your eyes.

The idea behind all products like these is fundamentally flawed — the premise is that texting while driving can somehow be made safe by using voice recognition. But, just as hands-free phones aren’t really safer than hand-held phones, voice recognition won’t solve the problem for texters. (It might help a tiny bit for bad texters, but texting will still be even more dangerous than driving while talking on a phone.) The problem is that generating a text message is hard — it requires your attention and your focus. If you have ever used dictation software, you know how challenging it can be to generate a concise, clear message verbally. That’s much harder than holding a natural conversation in which your meaning can be clarified and your grammar isn’t as critical. And, holding a hands-free conversation is comparable to driving under the influence of alcohol. Texting is many times worse than that! If you’re focusing on generating a message, you’re not focusing on the road. That’s where the danger comes from, even more than from typing.

These sorts of claims have dangerous implications. If people believe the hype and think that they can safely text and talk on phones by using voice recognition, they will be even more overconfident in their ability to do so. And, we know that people don’t tend to realize how distracted they are. Anything that makes people more likely to use their phones while driving is a bad thing. Need some evidence? Check out this white paper by the NSC.

I throw 5 really rotten, stinky bananas at companies hawking products that encourage people to text and drive.

Important discovery by Muppet Labs

Once again, Dr. Honeydew of Muppet Labs solves a pressing real-world problem.