Real-World Evidence: The Challenges for Digital Health



The gold standard for clinical trials is the randomized controlled trial (RCT) design: control and treatment groups are compared while all else is held equal to determine whether a treatment makes a significant difference. From inception to completion, this process can take 3-5 years. For new digital health companies, waiting that long to demonstrate outcomes may mean running out of money before they can show their product works. The ubiquitous nature of smartphones offers new avenues for trial recruitment and real-world data collection, which raises the question: what is gained or lost by being able to recruit far more people into clinical trials?

A new paper by Abhishek Pratap and colleagues from the University of Washington, published in npj Digital Medicine, examines retention in remote, real-world digital health studies as a means of offering new research models that can procure more rapid answers in the digital age. The ability to recruit thousands of patients would, in theory, allow much more extensive population-based data collection, but what about attrition? Clinical studies are often challenged by dropout rates, and participants are frequently incentivized to continue in a study.

Pratap and colleagues looked at eight different remote digital health studies, covering conditions like asthma and heart disease, that together totaled over 100,000 participants. The trials were conducted between 2014 and 2019 and tracked individual app usage across different clinical areas.

The findings showed that retention varied significantly across the studies, ranging from 2 to 26 days. Prior research on app use suggests that over 77% of people stop using apps after two weeks. Even with such large samples, the authors concluded that most of the studies did not manage to recruit and retain participants who were demographically representative of the US population.

The authors reported four distinct patterns of use in the data sets. Participants who used the app for at least seven days were coded as high, moderate, or sporadic users. Those who didn't use the app beyond seven days were coded as app abandoners. Real-world data collection holds excellent promise, but this study demonstrates that the same issues that plague traditional trials, such as high dropout rates, persist.
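As a rough illustration, the seven-day coding rule described above can be sketched as a simple classifier. Note that the paper derived its high/moderate/sporadic groupings from observed usage patterns; the rate thresholds below are hypothetical stand-ins, not the study's actual cut-offs.

```python
def classify_participant(active_days: int, study_days: int) -> str:
    """Code a participant's engagement pattern.

    active_days: distinct days on which the participant used the app
    study_days:  length of the study window in days

    The seven-day abandoner split comes from the study; the
    high/moderate/sporadic rate thresholds are illustrative only.
    """
    if active_days <= 7:
        return "abandoner"
    usage_rate = active_days / study_days
    if usage_rate >= 0.5:
        return "high"
    if usage_rate >= 0.2:
        return "moderate"
    return "sporadic"

# Example: over a 12-week (84-day) window
print(classify_participant(3, 84))   # abandoner
print(classify_participant(60, 84))  # high
print(classify_participant(10, 84))  # sporadic
```

A run-in period, as the authors propose, would effectively apply the first branch of this rule before randomization, filtering out likely abandoners up front.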

The lack of use beyond a week will make it challenging to determine how useful an app can be in preventing or treating chronic conditions. The authors suggest a two-week run-in period for new trials that would weed out the abandoners early. While this may yield smaller samples, it would potentially lead to more stable levels of engagement for testing the interventions.

Those who were most engaged were non-Hispanic whites; the danger of bias and lack of representation in study design is another aspect that will have to be addressed in future trials, lest we repeat the biases that already exist in the medical literature.

The bottom line: real-world data collection holds promise but shares pitfalls with traditional study design. Engagement remains ill-defined, and clarity on what matters is critical to the success of digital tools if they are to gain traction and become widely used in health care. Do people drop out because the user experience has not been designed for their needs and their culture? Ensuring diversity and inclusion must be a foundational aspect of app design this decade. Likewise, data use, privacy, and transparency are table stakes for any company developing tools for use in the clinical arena.

Thanks for reading – Trina
(Opinions are my own)



Pratap, A., Neto, E.C., Snyder, P. et al. Indicators of retention in remote digital health studies: a cross-study evaluation of 100,000 participants. npj Digit. Med. 3, 21 (2020).
