
An article published by NPR on November 12, “Depressed? Look For Help From A Human, Not A Computer,” by Lynne Shallcross of Kaiser Health News (KHN), reported on the REEACT study recently published in the British Medical Journal. Gilbody et al. (2015) sought to examine the relative efficacy of two computerized guided self-help cognitive behavioral therapy (CBT) programs, Beating the Blues and MoodGYM, against usual care by general practitioners (GPs). Lantern’s Chief Science Officer, Dr. Megan Jones, along with an international group of researchers at ICare, finds parts of the KHN article, as well as the researchers’ analysis in the study, to be speculative rather than fact-based. Dr. Jones and the ICare researchers submitted a research-focused response to the British Medical Journal, “Why didn’t patients use it? Engagement is the real story in Gilbody et al. (2015), not effectiveness.”

Here is the response they submitted to KHN:

The Shallcross article, “Depressed? Look For Help From A Human, Not A Computer,” reveals a widespread misunderstanding of the true lessons of the Gilbody et al. (2015) study and makes non-factual assumptions that could discourage people from obtaining digital treatments for depression that have been proven to work.

The Gilbody et al. (2015) study has an important story to tell, but it isn’t about effectiveness; it is about engagement and adherence. Gilbody et al. (2015) found that participants who were offered computer-delivered CBT experienced “no additional improvement in depression compared with usual GP care at four months.” Word choice matters a lot here. These participants were “offered” online guided self-help CBT, but nearly all of them did not take it up. In fact, nearly everyone in all three arms of the trial received the same thing: GP care supplemented by a small dose of education (e.g., one session of online CBT).

When the majority of participants completed only one or two modules of computerized CBT, it is quite a leap to conclude that the interventions are “not effective.” In the NPR-published article, Shallcross states that online programs to fight depression are “not effective because depressed patients aren’t likely to engage with them or stick with them.” This statement is an inference and is not backed by evidence.

It isn’t only Shallcross’ interpretation of the findings that is misleading. The authors of the study did not focus their analysis on exploring predictors of adherence to computerized CBT, nor did they include sensitivity analyses addressing interactions between adherence and outcome, both key questions that could help advance research in this field. Neglecting to measure adherence in self-help intervention trials compromises the interpretation of research findings and, in turn, can lead to inappropriate decisions regarding the implementation of such interventions. If a program is found to be ineffective while participants are not exposing themselves to a sufficient “dose” of it, further research must investigate augmenting the program with components that improve adherence (e.g., regular prompts or additional personal support) instead of discarding the idea of self-help altogether. It is vital to study both the effects of and adherence to self-help interventions simultaneously; in the opinion of some researchers, it is scientific malpractice not to do so.

The real issue in this study is the way the treatments were presented to participants. It would be important to understand how GPs described the online CBT programs and how they encouraged patients to use them. Context matters most in such implementations.

The research methods used by Gilbody et al. were not appropriate to determine if online CBT “worked” or “didn’t work”.

If the main finding of the study, the lack of additional improvement from computerized CBT, proves anything at all, it is that when people have access to high-quality GP care for depression, that might be all they need. The problem is that most people don’t have access to regular GP care or have to wait a long time for it. The KHN article, in its abstraction of the facts, also fails to address the promise of digital treatments, which surely would have been noted if Shallcross had interviewed a digital health expert for this article.

Only a small proportion of people who need mental health treatment actually receive it. Computerized CBT for depression offers a more accessible and affordable option for people who do not have access to GP care.

Digital programs also help people avoid other barriers to care, such as stigma. Computerized CBT programs are therefore a critical tool for expanding treatment to more people, because for some, they may be the only care available.

KHN’s article reflects a larger failure to accurately investigate the evidence behind digital mental health technology, using language that implies smoke and mirrors rather than a field grounded in strong scientific research. Like most healthcare services, not all online or digital programs are created, or implemented, equally. Digital programs should therefore not be dismissed wholesale based on assumptions.

Signed by:

Megan Jones, Lantern, San Francisco, United States and Stanford University School of Medicine, Stanford, United States

Rosa Baños, Universidad de Valencia, Spain

Ina Beintner, Technische Universität Dresden, Dresden, Germany

Thomas Berger, Universität Bern, Switzerland

Cristina Botella, Universitat Jaume I, Spain

David Daniel Ebert, Friedrich-Alexander University Erlangen-Nuremberg, Nuremberg, Germany

Dennis Görlich, Westfälische Wilhelms-Universität Münster, Germany

Corinna Jacobi, Technische Universität Dresden, Dresden, Germany

Heleen Riper, VU University Amsterdam, Amsterdam, The Netherlands

Michael P. Schaub, Swiss Research Institute for Public Health and Addiction, WHO Collaborating Center, University of Zurich, Zurich, Switzerland

Ulrike Schmidt, King’s College London, United Kingdom

On Behalf of the ICare Investigators