Science or Sciencey [part 4]

The final part of a 4-part series examining what happens when science is used for marketing (using brain-training software as the central example).

[part 1 | part 2 | part 3 | part 4]

[Full disclosure: I am a co-PI on federal grants that examine transfer of training from video games to cognitive performance. I am also a co-PI on a project sponsored by a cognitive training company (not Posit Science) to evaluate the effectiveness of their driver training software. My contribution to that project was to help design an alternative to their training task based on research on change detection. Neither I nor anyone in my laboratory receives any funding from the contract, and the project is run by another laboratory at the Beckman Institute at the University of Illinois. My own prediction for the study is that neither the training software nor our alternative program will enhance driving performance in a high fidelity driving simulator.]

The Posit Science post on the effectiveness of DriveSharp emphasized the scientific backing for their training regimen. Over the past few days, I have been examining the claims and the research underlying them. In this final post of the series, I will attempt to draw some broader conclusions from this analysis of science and of the use of science in marketing. Next week, I will add an “afterword” discussing some of the comments I have received about the series and examining a couple of additional papers on the topic.

I greatly admire attempts to examine the ways in which scientific tasks can be brought to bear on real-world problems, and the Roenker et al. (2003) paper in Human Factors was an important first attempt to test whether training on a basic laboratory measure of attention and perception (the UFOV) could enhance real-world driving performance. The study was promising, showing what appeared to be a reduction in dangerous maneuvers by subjects who were trained to improve their speed of processing. Without such clinical-trial training studies, there is no way to determine whether improvements on laboratory tasks generalize to real-world problems. That is why clinical trials with double-blind assignment to conditions are the gold standard for determining the efficacy of any treatment, including new drug therapies and medical interventions. Without meeting those rigorous experimental conditions, claims that a treatment definitively causes an effect are unsupported. Few studies in the brain-training literature even attempt to meet these requirements, and even fewer actually succeed. The Roenker et al. (2003) study was a good first attempt, even though it had some shortcomings.

Any clinical trial, of course, has limitations that can affect whether causal claims are merited and whether the results generalize to other situations or populations. This initial study was limited in scope, tested only elderly and already-impaired subjects, and raised some possible concerns about coding objectivity and subject motivation. None of those limitations are surprising, but they do point to the need for further study. That’s especially true given that relatively few studies in the training literature show any transfer at all beyond the specific tasks trained, and even fewer show generalization from laboratory tasks to practical, real-world performance on untrained tasks.

The problem here isn’t with the science. Science is a work in progress, and any study has its limitations. The problem comes when an initial, promising result is taken to be definitive “proof,” or when speculative claims are treated as scientific fact. The Posit Science blog took these results as proof, without any qualification or discussion of the limited scope and generalizability of the findings. It also presented a speculative analogy illustrating the potential importance of faster responses (faster choice response times translated into shorter stopping distances) as if it were evidence of actually improved stopping distances.

Could training on the UFOV improve driving performance in general? Possibly, but that’s not what the study showed. It showed improvement on just one of many outcome measures, for one type of subject. If training had such a big effect on driving, it is actually somewhat surprising that only one of the measures showed any benefit at all. That’s a far cry from proving the benefits of training for driving in general or for the population at large.

I have written about this example of marketing primarily because it comes from a company, Posit Science, that prides itself on backing its programs with science. And the fact that most of their programs are connected in some way to published research and award-winning researchers lends an air of scientific credibility to their marketing claims. It makes them sciencey. People who lack the training (or time) to examine the underlying science will be inclined to trust the claims of organizations with the imprimatur of scientific credentials, in the same way that people tend to believe that studies with pretty pictures of brains are more scientific (McCabe & Castel, 2008). The science might eventually back some of the stronger claims, and I truly hope that it does, but unlike science, marketing typically does not mention the limitations, problems, or shortcomings of the studies.

People are already inclined to believe that simple interventions can produce dramatic results (the illusion of potential that we discuss in The Invisible Gorilla), and they are primed to believe claims like “about 10 hours of DriveSharp training will improve your ability to spot potential hazards, react faster and reduce your risk of accidents.” Using claims of scientific proof in marketing this sort of quick fix capitalizes on the illusion and can be particularly persuasive.

If people take these claims about the power of training to heart, the results could actually increase the danger to drivers. Another paper, co-authored by the blog post’s author, Peter Delahunt, found that elderly subjects with cognitive impairments who underwent speed-of-processing training were more likely to still be driving 3 years later (14% of untrained subjects stopped driving, but only 9% of trained subjects did). If training actually improved driving performance, that could be a great outcome. But think about what it means if the training didn’t actually improve driving performance. If the subjects in these studies are convinced that the training helped them, and that the benefits of cognitive training for driving are proven, they might believe that the training they underwent justifies remaining on the road. People often lack insight into their own driving performance. That’s one reason people keep talking on their phones while driving: when distracted, we don’t realize how poorly we’re driving. If drivers believe that training improved their driving when it actually didn’t, they might not notice their driving problems and might become unjustifiably confident in their ability to drive well, leading them to continue driving longer than they should!

DriveSharp and programs like it might well produce some benefits, and scientific study is the only way to test whether they do. I truly hope that future clinical studies replicate the Roenker et al (2003) result and show sustained benefits of training on driving performance. That would be a boon for drivers and for society. I credit Posit Science with conducting and supporting research that could test the effectiveness of such products (and I am thankful to Peter Delahunt for reading and commenting on this discussion and engaging in a dialog about the link between scientific results and marketing of those results).

Before the evidence of benefits is conclusive, sciencey marketing can be harmful, potentially giving people an unjustified confidence in their own abilities. And, if future studies fail to find benefits of training, it may be hard to counter the firmly held beliefs in the efficacy of training that result from such persuasive marketing. Sciencey marketing conveys a level of certainty that often isn’t merited by the underlying science.

More broadly, sciencey marketing claims contribute to the persistence of the illusion of potential. If people trust the claim that just 10 hours of training on what appears to them to be an arbitrary computer task can lead to dramatic improvements on something as important as driving, why shouldn’t they also believe that playing arbitrary brain games can help them remember their friend’s name, that listening to Mozart could increase their IQ, or that they have other hidden powers just waiting to be released by the right arbitrary task? Quick fixes are rarely genuine, and strong scientific claims often must be reined in later. Yes, tempered claims make for less enticing marketing, and a blog post stating that “preliminary evidence suggests a possible benefit of cognitive training for driving performance in impaired older drivers” might sell fewer products. But that would be a scientific claim rather than a sciencey one.

Sources Cited:

McCabe, D. P., & Castel, A. D. (2008). Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition, 107(1), 343-352. PMID: 17803985

Edwards, J. D., Delahunt, P. B., & Mahncke, H. W. (2009). Cognitive speed of processing training delays driving cessation. The Journals of Gerontology, Series A: Biological Sciences and Medical Sciences, 64(12), 1262-1267. PMID: 19726665

Roenker, D. L., Cissell, G. M., Ball, K. K., Wadley, V. G., & Edwards, J. D. (2003). Speed-of-processing and driving simulator training result in improved driving performance. Human Factors, 45(2), 218-233. PMID: 14529195

Chabris, C., & Simons, D. (2010). The Invisible Gorilla, and Other Ways Our Intuitions Deceive Us. New York: Crown.

Comments on Science or Sciencey [part 4]


  • Justin Sipe

    Forget the invisible monkey the problem with this experiment is the people controlling the experiment have a problem counting. These so called scientists need to recheck their video, for the simple reason that there are sixteen passes of the basketball between the players in white not fifteen you idiots.

    [EDITOR -- This comment is off-topic and entirely inappropriate, but I have kept it here and added this note because it is also instructive. It's sad that people can be so certain of themselves that they will actually insult people rather than just asking politely for an explanation. Ironically, it is the people who are so confident who also are often wrong. In this case, the YouTube version of the video actually does have 15 passes. One pass is ambiguous, and some people interpret it as 2 passes, but it's not. You can tell by noticing that the passes always go in the same sequence from one person to the next (few people pick up on the pattern, another interesting aspect of the task). This is one of many comments and emails I have received, almost all with the same sort of certainty, although rarely as obnoxious as this one. In the Invisible Gorilla, we discuss the illusion of confidence and the finding that the most overconfident people often are the least competent. I'd encourage the commenter to read that section of our book.]
