Science or Sciencey [afterword]

On 1 October 2010, I completed an extended series of posts examining the interplay between science and marketing. In that series, I used a blog post on the Posit Science website as a case study of the ways in which appeals to science may lead to effective marketing but might oversell the science itself. After my third post in the series, the Posit Science post’s author, Peter Delahunt, wrote a comment citing some additional evidence in support of the claims he and the Posit Science website have made. In this afterword, I discuss that evidence.

I would first like to again thank Peter for engaging in a discussion about these issues and for putting me in touch with the authors of one of the papers that is not yet in print (thanks to both of them for sending it as well). I hope this post series will lead to a convergence of marketing and science in which claims about training are more directly based on what can be concluded from the research. Ideally, it will lead to additional studies that might more directly support the claims used in marketing the products.

In his comment, Peter notes about my post series that:

you have focused your attention on the Roenker (2003) paper, suggesting that this is the main evidence for Posit Science’s claims that DriveSharp can make you a safer driver.

That’s accurate, but I focused on the Roenker et al (2003) paper for two reasons: it is the only study to directly measure the effects of cognitive training on driving performance, and it was the basis for the following strong claims about the effectiveness of DriveSharp:

Training allows drivers to react faster providing an additional 22 feet of stopping distance at 55 mph.

Training reduces dangerous driving maneuvers by 36%.

As I documented, the first claim is unsupported by the actual study—there is no published evidence that training has any effect on actual stopping distances in driving. I also argued that the second claim is not nearly as impressive as it sounds because: (a) the number of such maneuvers was small, (b) the paper provided no evidence about the distributions of such maneuvers across subjects, (c) the coding may have been subjective, and (d) the coders likely were not entirely blind to the training condition. Peter’s comment did not provide any further evidence in support of these claims. Nor did it challenge my claim that any conclusions from those studies cannot be applied to unimpaired drivers or younger drivers. There is no published evidence that DriveSharp or other cognitive training programs improve driving for unimpaired younger drivers, but the Posit Science website does not mention this limitation on the generalizability of their claims.

Peter’s comment cites a few additional papers that, he argues, provide converging evidence in support of the effectiveness of DriveSharp training. My final post in the series already addressed one of those papers (Edwards et al, 2009b). That paper showed that older drivers who underwent training were more likely to still be driving 3 years later. As I noted, that finding says nothing about driving ability. In fact, it could reflect overconfidence by those participants in the effectiveness of training. If so, then it might mean that training actually leads to more dangerous driving by encouraging impaired drivers to stay on the road longer!

A second paper (Edwards et al, 2009a) also used 10 hours of speed training and obtained self-report measures of mobility, driving exposure, and driving difficulty 3 years later (no new training was conducted as part of this study; the data come from two larger training studies). Edwards et al (2009a) noted that “a limitation of this study is the use of self-report to assess driving mobility outcomes.” As with the other Edwards et al paper, people might think that training helps even if it objectively doesn’t help at all. The control group in this second paper completed an internet training session in which older impaired adults were taught to use computers and set up email accounts. The training group received speed training on a task that involved spotting vehicles. The central finding was that, for some of the measures, the speed-training group showed less of a decline than did the computer-training group. Unfortunately, the study did not report the most crucial statistical test: Did the speed-training group decline significantly less than the computer-training group? Instead, each training group was compared to a reference group of unimpaired older subjects. The rate of decline for the speed-training group did not differ significantly from that of the reference group, but the rate of decline for the computer group did. That test is inadequate to determine whether the two training groups differed from each other. From the graphs, it appears that they might not have differed significantly on many of the measures. If they didn’t, it’s hard to argue that speed training in particular did anything for their driving.
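
To make the problem with that reference-group comparison concrete, here is a minimal sketch using purely hypothetical numbers (invented for illustration, not taken from Edwards et al, 2009a). With these made-up decline scores, one training group “differs significantly” from the reference group while the other does not, yet the two training groups do not differ reliably from each other:

    # A purely hypothetical illustration (numbers invented, not from Edwards et al, 2009a)
    # of why "group A did not differ from the reference group, but group B did" fails to
    # show that A and B differ from each other.
    from scipy.stats import ttest_ind_from_stats

    # Hypothetical decline scores: (mean decline, SD, n)
    speed_training    = (1.0, 2.0, 66)
    computer_training = (1.4, 2.0, 63)
    reference_group   = (0.7, 2.0, 200)

    def compare(label, a, b):
        t, p = ttest_ind_from_stats(*a, *b)
        print(f"{label}: t = {t:.2f}, p = {p:.3f}")

    compare("speed vs. reference   ", speed_training, reference_group)     # not significant
    compare("computer vs. reference", computer_training, reference_group)  # p < .05
    compare("speed vs. computer    ", speed_training, computer_training)   # not significant: the test that matters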

Finally, Peter cited an in-press paper based on the large-scale ACTIVE Trial (Ball et al, in press). In that study, elderly people received 10 hours of training in the late 1990s, and in the years since then, they have answered questions and completed followup studies (including those in the Edwards et al, 2009a paper). What did the study show? Speed of processing training doubled the risk of being hit by another car.

Yup. You read that right. Older participants in the speed training group were twice as likely to have other cars hit them. That’s not the conclusion that Ball et al (in press) draw from their study or what the Posit Science website claims, but it follows just as logically from the actual results.

Here is what Peter claimed about that paper in his comment:

The researchers found that drivers who did the cognitive training included in DriveSharp had at-fault crash rates that were almost half of the non-trained control group over the 5-year period following training.

Peter’s conclusion is also consistent with the results, but both my conclusion and Peter’s are misleading for the same reason: There was no significant difference in the overall rates of accidents between the training group and the control group. That’s right. There was no difference in accident rates as a result of training. About 22.5% of the subjects in the control group and 19.6% in the training group had an accident, not a reliable difference. (Note that the Posit Science website incorrectly hypes the result that training “cuts risk of a car accident by 50%.” That’s not just imprecise. It’s wrong. The study didn’t show any difference in the overall accident rate as a result of training. The version in Peter’s comment is more precise.)

The doubling (or halving) of accident rates for the trained subjects only appeared when analyzing specific subsets of the accidents, non-fault accidents for my conclusion and at-fault accidents for Peter’s conclusion. The average at-fault accident rate was 18.34% (75/409) in the control group and 10.65% (18/169) in the training group. Yes, that difference is statistically significant and could be viewed as a 50% reduction. The average non-fault accident rate was 4.2% (17/409) for the control group and 9.5% (17/169) for the trained group. So, training makes people more than twice as likely to be hit by someone else! Both claims are equally (in)valid, but really, neither is an appropriate conclusion without acknowledging the alternative conclusion. These conclusions illustrate the danger of breaking down a non-significant difference into sub-components.
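
As a rough check on that point, here is a small sketch that runs Fisher’s exact test (my choice of test; the paper’s own analyses may have used different tests or adjustments, so its exact statistics may differ) on the counts quoted above. Summing the quoted subgroup counts gives an overall trained-group rate slightly different from the 19.6% cited above, but the pattern is the same: each subgroup comparison looks impressive on its own, while the overall comparison does not.

    # Re-running the subgroup breakdown with the raw counts quoted in this post.
    # Fisher's exact test is my choice here; the paper's own statistics may differ.
    from scipy.stats import fisher_exact

    control_n, trained_n = 409, 169

    def compare(label, control_hits, trained_hits):
        table = [[control_hits, control_n - control_hits],
                 [trained_hits, trained_n - trained_hits]]
        _, p = fisher_exact(table)
        print(f"{label}: control {control_hits / control_n:.1%}, "
              f"trained {trained_hits / trained_n:.1%}, p = {p:.3f}")

    compare("at-fault accidents ", 75, 18)            # trained group looks better
    compare("non-fault accidents", 17, 17)            # trained group looks worse
    compare("all accidents      ", 75 + 17, 18 + 17)  # no reliable overall difference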

If training really helps driving, people should be less likely to be in accidents. Period. They should be better able to avoid situations that put them at risk. The results show that they aren’t: trained participants were in accidents at the same rate as the control group. They are not 50% safer. They are 50% less likely to cause an accident, but by the same token, they are twice as likely to be in an accident that wasn’t their fault.

Peter’s comment ends with another bit of marketing: “However, in aggregate across multiple studies, true scientific results emerge.  That is certainly the case for DriveSharp training, where across multiple studies and thousands of participants, we reliably see improvements in multiple measures of driving safety.”

First, these studies did not have thousands of participants. Although the ACTIVE study did have thousands of participants in total, the critical training group in the Ball et al study had only 179. Edwards et al (2009a) had 66 subjects in the training group. Edwards et al (2009b) had 276, but those were the same trained subjects as in the other two papers and this paper just involved analyzing a different outcome measure from the same training group. The Roenker paper included 44 subjects in the speed group. So, in total, fewer than 350 speed-trained subjects form the basis for the conclusions, not thousands of participants.

The second part of this claim, that these studies reliably reveal improvements in multiple measures, is more sciencey. Reliable measures are those that can be replicated across studies, but none of these studies have been replicated (to my knowledge, nobody has tried to replicate any of them directly), and none of the outcome measures are repeated across multiple experiments. These papers also did not constitute independent replications as would be needed for strong converging evidence. The same trained subjects were used for different analyses across papers. (Although it likely had no impact on the results, it’s also worth noting that the authors of these papers are stockholders in Posit Science and explicitly acknowledge their conflict of interest in the paper.) Another interpretation of “reliably” is that training consistently produces better performance, but that’s not true either. In the Roenker et al study, only one outcome measure showed any benefit of speed training.

Peter concludes his comment by noting that Posit Science is “very comfortable with the statements we have made regarding the efficacy of our programs.” The fact that Posit is comfortable making such strong claims for direct benefits of training despite fairly minimal (albeit suggestive and encouraging) evidence is sciencey marketing, not science.

The scientific (rather than sciencey) response would be to remove unsubstantiated claims and qualify overstated ones on the website and in marketing materials. Yet, at the time of this posting (nearly 4 weeks after my original post), the Posit Science website continues to make all of the same claims with no additional qualifications (http://www.positscience.com/our-products/drivesharp; see the clinical proof tab). In fact, it even distorts the results of Ball et al (in press) to make training seem more potent: The site incorrectly claims that training “cuts risk of a car accident by 50%” when there was no difference in overall accident rates as a function of training (see above).

Again, I hope that science will bear out many of the claims on the Posit Science site and that their training program will eventually be proven successful. That would be a boon to society. In the meantime, though, marketing claims should be taken skeptically. Several of them are entirely unsupported by the existing evidence, and others are stated without the qualifications they need. It will be interesting to see whether some of these sciencey claims are reined in so that Posit Science can more justifiably support their claim to be science-based.

Sources Cited:

Ball KK, Edwards JD, Ross LA, & McGwin G (2010). Cognitive training decreases motor vehicle involvement among older drivers. Journal of the American Geriatrics Society.

Edwards JD, Myers C, Ross LA, Roenker DL, Cissell GM, McLaughlin AM, & Ball KK (2009a). The longitudinal impact of cognitive speed of processing training on driving mobility. The Gerontologist, 49 (4), 485-94. PMID: 19491362

Edwards JD, Delahunt PB, & Mahncke HW (2009b). Cognitive speed of processing training delays driving cessation. The Journals of Gerontology, Series A, Biological Sciences and Medical Sciences, 64 (12), 1262-7. PMID: 19726665

Roenker DL, Cissell GM, Ball KK, Wadley VG, & Edwards JD (2003). Speed-of-processing and driving simulator training result in improved driving performance. Human Factors, 45 (2), 218-33. PMID: 14529195
