Science or Sciencey [part 3]

Part three of a 4-part series examining what happens when science is used for marketing (using brain-training software as the central example).

[part 1 | part 2 | part 3 | part 4]

[Full disclosure: I am a co-PI on federal grants that examine transfer of training from video games to cognitive performance. I am also a co-PI on a project sponsored by a cognitive training company (not Posit Science) to evaluate the effectiveness of their driver training software. My contribution to that project was to help design an alternative to their training task based on research on change detection. Neither I nor anyone in my laboratory receives any funding from the contract, and the project is run by another laboratory at the Beckman Institute at the University of Illinois. My own prediction for the study is that neither the training software nor our alternative program will enhance driving performance in a high-fidelity driving simulator.]

In my last post, I examined some of the claims on the Posit Science blog to see what science they were using as the basis for those claims. Posit Science prides itself on being rooted in science, and unlike most purveyors of brain training, they actually can point to published scientific results in support of some of their claims. The primary evidence used to support the claim that DriveSharp training can improve driving appears to be a 2003 paper by Roenker et al. that examined the effects of training on the Useful Field of View (UFOV) task. Today I will examine what that article actually showed to see what sorts of claims are justified.

First, I should say that the Roenker et al (2003) study is an excellent first attempt to study transfer of training from a simple laboratory task to real-world performance. It used performance measures during real-world driving, and was far more systematic than most road-test studies of this sort. As with any such study, though, it is limited in scope to the experimental conditions and subjects it tested. It also had several methodological shortcomings that somewhat weaken the conclusion that training transfers to untrained driving tasks. Here are some characteristics of the study that you might not know if you relied solely on the description of the scientific findings touted on the Posit Science site:

1. The subjects were all 55 or older and were recruited specifically because they might benefit from training. At least some (we don’t know how many) were recruited because they were involved in crashes. Screening criteria excluded participants who performed normally on the UFOV, so these were older participants with driving problems who had existing impairments on a demanding perception/attention task. Given that this transfer-of-training study tested only impaired older drivers, don’t count on any benefits if you are unimpaired, a good driver, under age 55, etc. The claims on the Posit Science website don’t mention these potentially important limitations.

2. The study involved 3 conditions: (a) the critical “speed of processing” training group, (b) a simulator training group, and (c) a relatively unimpaired control group. Not surprisingly, the two training groups tended to improve on the tasks that were specifically trained. The simulator training was a more standard driver training program, and those subjects showed improvements on the same tasks that were emphasized in the training (e.g., proper turning into a lane and using a signal). The critical “speed of processing” group showed no improvements on signaling or turning. Not surprisingly, though, their UFOV performance improved. That’s effectively what their training task was. Similarly, the speed training group responded faster in a choice response time task. Again, these sorts of task-specific benefits are not surprising because we know that training tends to improve performance on the trained task.

Even if the training itself did nothing, we might still find improvements on the outcome measures if people believed the training should help. Subjects in the simulator condition knew that they were being trained to use their signal correctly and to turn into their lane appropriately, so they would be highly motivated to perform well on those aspects of the driving test (and some even said that they worked hard on doing those tasks well in the post-test). Similarly, a group trained to respond quickly would be motivated to respond quickly on other tasks.

There also was a lot of variability in the outcome measures, and in some cases the speed-trained group underperformed the other groups 18 months later (e.g., on the position-in-traffic composite measure). Given the number of statistical tests involved (3 training conditions, about 10 outcome measures, multiple follow-up tests), some of the statistically significant differences are likely to be spurious in any case (the first sketch at the end of this list shows how easily that happens).

3. In the pre-training driving segment, the raters were blind to the condition. Following training, however, one or both of the coders knew the training condition. Even if the coders weren't told the condition outright, they might well have been able to infer it anyway: because the training subjects were impaired to start with, the differences between them and control subjects might have been apparent in their driving performance. The paper provided no evidence that the coders actually were unaware of the condition or that they couldn't guess it. More importantly, the coders apparently were informed that a subject was in either the critical training group or the unimpaired group (that is, they knew the subject wasn't in the other training condition). Why does that matter? If the coders knew that a subject was in the speed training condition and believed that the training might improve some aspects of driving performance, then any subjective measures of driving could be colored by those expectations.

4. The one significant benefit of speed training found in the paper was a reduced number of dangerous maneuvers. Recall the claim that training reduced dangerous maneuvers by 36%. As I noted in the second post of this series, the judgment of what counts as dangerous could be somewhat subjective. It would be interesting to see the data on what constituted a dangerous maneuver: did the raters spot the same dangerous maneuvers, or did they just arrive at the same overall count? Were the two raters blind to each other? That is, could they see each other taking notes about what was dangerous? Either of these factors, in addition to the possibility that the raters knew the training condition, could lead to a spurious claim of improvement. Given that such events were rare in the study, a slight bias to code a borderline event as dangerous or as safe could produce what looks like a large relative improvement in performance (the second sketch below works through the arithmetic).
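To make the multiple-comparisons worry from point 2 concrete, here is a minimal simulation. The numbers are my own illustrative assumptions (roughly 30 independent tests, none reflecting a true effect), not figures from the paper; it simply shows how often at least one "significant" result appears by chance alone.

```python
# A minimal simulation (illustrative, not from the paper) of the
# multiple-comparisons problem: with ~30 independent tests at alpha = .05
# and NO true effects, how often is at least one test "significant"?
import random

random.seed(1)

ALPHA = 0.05
N_TESTS = 30          # roughly 3 conditions x 10 outcome measures (my assumption)
N_SIMULATIONS = 10_000

false_alarms = 0
for _ in range(N_SIMULATIONS):
    # Under the null hypothesis, each p-value is uniform on [0, 1).
    p_values = [random.random() for _ in range(N_TESTS)]
    if any(p < ALPHA for p in p_values):
        false_alarms += 1

print(f"P(at least one spurious 'significant' result): "
      f"{false_alarms / N_SIMULATIONS:.2f}")   # ~0.79, matching 1 - 0.95**30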
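And to illustrate the rare-events point from item 4: when the baseline count is small, a tiny absolute shift in coding produces a dramatic-sounding relative change. The numbers below are purely hypothetical, chosen only to show how a headline figure like "36%" can rest on a small absolute difference.

```python
# Illustrative arithmetic (my numbers, not the paper's): when an event is
# rare, shifting the coding of just a few borderline cases produces a
# large *relative* change even though the absolute change is tiny.

def relative_reduction(before: float, after: float) -> float:
    """Percent reduction from `before` to `after`."""
    return 100 * (before - after) / before

# Suppose a group averages 2.5 dangerous maneuvers per drive before
# training. If raters code just 0.9 fewer events per drive -- whether from
# real improvement or a slight bias in judging borderline maneuvers --
# the headline figure is a 36% reduction.
before, after = 2.5, 1.6
print(f"{relative_reduction(before, after):.0f}% reduction")  # -> 36% reduction
```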

Summary
These criticisms are not intended to cast aspersions on the Roenker et al. (2003) study; I actually found the study to be quite impressive. If I had been a reviewer, I would have raised some of these concerns, but I likely would have recommended publication (after requesting some weakening of the claims). It is an important first attempt to study transfer of training from the laboratory to actual driving, a topic that deserves further study. What I find problematic is not the science itself, but the way the science is applied in marketing the effectiveness of training more generally. The DriveSharp post claimed that training improves driving and made no mention of these limitations and qualifications. Someone reading the post or the Posit Science website might conclude that training has a proven effect on driving for all people, when the effects are limited to one measure in an already-impaired older population. Untrained readers might not delve into the paper itself to see what other limitations the study had. In the final part of this series, I will return to the DriveSharp blog post and briefly discuss the possible negative consequences of sciencey marketing.

Sources Cited:

Roenker, D. L., Cissell, G. M., Ball, K. K., Wadley, V. G., & Edwards, J. D. (2003). Speed-of-processing and driving simulator training result in improved driving performance. Human Factors, 45(2), 218-233. PMID: 14529195

4 comments to Science or Sciencey [part 3]

  • [...] superb series of posts from Daniel Simons on the problems with sciencey marketing (part 1, part 2, part 3, part [...]

  • [...] After my third post in the series, the Posit Science post’s author, Peter Delahunt, wrote a comment citing some additional evidence in support of the claims he and the Posit Science website have [...]

  • Thanks for your comments, Peter. The last post in the series is already written and will appear Friday. After people have had a chance to read and comment, I will post an "afterword" in which I discuss your comment and any other comments on the series. I hope that these posts and comments will inspire an ongoing dialog about the nature of scientific evidence and the interplay between scientific communication and marketing.

    If anyone reading the series has thoughts or input on that broader theme (or on specific points), please don't hesitate to post them as comments or email me directly, and I'll make sure to incorporate them into my discussion at the end of the series (if you'd like your comments addressed but want to remain anonymous, just email me your thoughts and say so).

    -Dan

  • Hi Dan:

    My name is Peter Delahunt and I am a scientist at Posit Science. I wrote the blog post that helped spark this topic. Thanks for your interest in our work – we think it is really important that the science of brain training be subject to intensive review. I would like to make a few comments on your recent postings:

    First, I would like to clarify that it was not my intention to imply that our software would help overcome 'inattentional blindness'. I often use your video to demonstrate how important attention is in visual perception. As you no doubt know, most people are completely unaware of this fact, and I admire the work you have done to educate the general public.

    Second, you have focused your attention on the Roenker et al. (2003) paper, suggesting that it is the main evidence for Posit Science's claims that DriveSharp can make you a safer driver. However, this is not the case. Any broad claim like this must rest on a constellation of reinforcing data rather than any single paper. The Roenker paper is an important first step in showing that DriveSharp training can make you a better driver by reducing the number of dangerous driving maneuvers performed. The specific data in that paper are reinforced by results from other studies showing that DriveSharp helps older drivers continue to drive in difficult circumstances such as at night and in heavy traffic (Edwards et al., 2009a) and also helps reduce the risk of having to give up driving (Edwards et al., 2009b). Most recently, this pattern of results has been further reinforced by data from Karlene Ball and her colleagues. They tracked actual crash rates for drivers in the ACTIVE study, a large government-funded randomized controlled clinical trial set up to test the effect of cognitive training on a variety of outcomes. Participants in the ACTIVE study were specifically recruited to be broadly similar to the population at large and did not have specific visual attention impairments. The researchers found that drivers who did the cognitive training included in DriveSharp had at-fault crash rates over the 5-year period following training that were almost half those of the untrained control group. The results were first presented at the annual Transportation Research Board meeting in January 2009, and have now been peer-reviewed and accepted for publication in the Journal of the American Geriatrics Society. I am happy to provide you with an 'in-press' copy if you like.

    It is quite typical in science that no one study stands on its own, because every individual study has strengths and weaknesses. In aggregate across multiple studies, however, true scientific results emerge. That is certainly the case for DriveSharp training, where across multiple studies and thousands of participants we reliably see improvements in multiple measures of driving safety. Based on this published scientific record, we are very comfortable with the statements we have made regarding the efficacy of our programs.

    Sincerely,

    Peter Delahunt, Ph.D.
    Research Scientist,
    Posit Science

    References

    Ball, K., Edwards, J. D., Ross, L. A., & McGwin, G. (in press). Cognitive training decreases motor vehicle involvement among older drivers. Journal of the American Geriatrics Society.

    Edwards, J. D., Myers, C., Ross, L. A., Roenker, D. L., Cissell, G. M., McLaughlin, A. M., & Ball, K. K. (2009a). The longitudinal impact of cognitive speed of processing training on driving mobility. The Gerontologist, 49(4), 485-494.

    Edwards, J. D., Delahunt, P. B., & Mahncke, H. W. (2009b). Cognitive speed of processing training delays driving cessation. Journal of Gerontology: Medical Sciences, 64A(12), 1262-1267.

    Roenker, D. L., Cissell, G. M., Ball, K. K., Wadley, V. G., & Edwards, J. D. (2003). Speed-of-processing and driving simulator training result in improved driving performance. Human Factors, 45(2), 218-233.
