In September, the Journal of Clinical Oncology published a study on machine learning–directed clinical evaluations during cancer treatment. The System for High-Intensity EvaLuation During Radiation Therapy (SHIELD-RT) trial is, broadly across medicine, “one of the first randomized interventional studies to see if machine learning can improve patient outcomes, and an important step in an area that has gotten a lot of enthusiasm but has been relatively unproven in the clinical setting,” said first author and study lead Julian Hong, MD, MS, a former resident at Duke Radiation Oncology and now an Assistant Professor at the University of California, San Francisco.
With the knowledge that unplanned acute care can drastically impact cancer patients’ quality of life, outcomes and cost of care, Dr. Hong began work on a machine learning model in 2017 during his residency research year at Duke. He collaborated with faculty to develop the model using historical electronic health record (EHR) data from 2013-2016. “The idea was that we have this clinical need where patients end up with unplanned acute care during treatment,” said Dr. Hong. “Can we predict it and then mitigate the risk?”
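The article does not reproduce the SHIELD-RT model itself, but the general approach described above, training a classifier on historical EHR data so that each treatment course receives a risk score for unplanned acute care, can be sketched roughly as follows. The feature names, file path, and model choice are illustrative assumptions, not details from the study.

```python
# Illustrative sketch only: trains a simple classifier on historical EHR
# features to predict unplanned acute care during treatment. The CSV path,
# feature names, and model choice are hypothetical, not from the study.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical table: one row per treatment course, with EHR-derived features
# and a label marking whether unplanned acute care (an ED visit or admission)
# occurred during treatment.
df = pd.read_csv("ehr_courses_2013_2016.csv")
features = ["age", "num_prior_admissions", "baseline_weight_loss_pct",
            "chemo_concurrent", "treatment_site_code"]
X, y = df[features], df["acute_care_during_rt"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Predicted probabilities become per-course risk scores that could be used
# to flag patients for closer monitoring during treatment.
risk = model.predict_proba(X_test)[:, 1]
print("Held-out AUC:", round(roc_auc_score(y_test, risk), 3))
```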
After the machine learning model was built, preliminary data suggested that it was quite good at predicting which patients would need unplanned acute care. The challenge was getting it into a clinical setting where it could actually improve care for patients; thus, the SHIELD-RT study was born.
Physics resident (and current Duke medical physicist) Neville Eclov, PhD, developed the methods for identifying the patients who would be part of the study and for extracting the radiation therapy treatment information the algorithm used to predict which patients were at higher risk of requiring acute care during their course of treatment. “The process required careful testing during the design phase and some manual review of data,” said Dr. Eclov, explaining that the data structures he worked with were designed for day-to-day scheduling purposes, not for automated extraction of patient lists.
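Dr. Eclov’s actual pipeline is not described in detail here; the sketch below only illustrates the general pattern of turning scheduling-style records into a weekly list of on-treatment patients whose model risk scores exceed a threshold. The record layout, field names, and the 0.10 threshold are assumptions for illustration.

```python
# Illustrative sketch only: builds a list of patients currently on treatment
# from scheduling-style records, then flags those whose model risk score
# exceeds a threshold. All field names and the threshold are assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class ScheduledCourse:
    mrn: str       # patient identifier
    start: date    # first scheduled fraction
    end: date      # last scheduled fraction
    modality: str  # e.g. "RT" or "chemoRT"

def on_treatment(course: ScheduledCourse, week_of: date) -> bool:
    """True if the scheduled course overlaps the evaluation week."""
    return course.start <= week_of <= course.end

def high_risk_list(courses, risk_scores, week_of, threshold=0.10):
    """Return identifiers of on-treatment patients at or above the risk threshold."""
    return [c.mrn for c in courses
            if on_treatment(c, week_of) and risk_scores.get(c.mrn, 0.0) >= threshold]

# Hypothetical usage with two scheduled courses and their model risk scores.
courses = [
    ScheduledCourse("A001", date(2019, 1, 7), date(2019, 2, 15), "chemoRT"),
    ScheduledCourse("A002", date(2019, 1, 14), date(2019, 1, 25), "RT"),
]
risk_scores = {"A001": 0.27, "A002": 0.04}
print(high_risk_list(courses, risk_scores, week_of=date(2019, 1, 21)))  # ['A001']
```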
With guidance from faculty lead Manisha Palta, MD, and because supplemental visits appeared to improve patient outcomes, Dr. Hong settled on twice-weekly evaluation of patients, who were seen by Dr. Palta; resident Sarah Jo Stephens, MD; and advanced practice providers Mary Malicki, RN, MSN, ACNP; Stacey Shields, RN, MSN, ANP-BC; and Alyssa Cobb, RN, BSN.
“APPs played a vital role,” said Malicki. “We worked directly in the clinic and could both assess patient symptoms and assist with the overall effects of treatment on caregivers.”
A major strength of the SHIELD-RT study was the inclusion of this broad team of providers, which allowed for “integrating machine learning with different steps in the care process,” explained Dr. Hong. Alongside the providers delivering the patient intervention, the team included Jessica Tenenbaum, PhD (expertise in electronic health records), Dr. Eclov (expertise in data processing and workflow challenges), Yvonne Mowery, MD, PhD (assistance with quality assurance), Nicole Dalal, MD (assistance with data collection), and statisticians Donna Niedzwiecki, PhD, and Samantha Thomas, MS.
The results of the study showed that machine learning accurately triaged patients undergoing radiation therapy and chemoradiation therapy by risk, and that twice-weekly evaluation of these higher-risk patients reduced rates of unplanned acute care during treatment from 22.3% to 12.3%.
Implementation of a tool like this model could result in fewer admissions and emergency department (ED) visits, explained Dr. Palta. “Especially in the COVID-19 environment, this is critical to reduce burden on the healthcare system and patient and staff exposure. Long-term, the implementation of this model could have a significant impact on reducing healthcare costs.” The team is still collecting cost data to determine the financial impact of the published clinical trial results.
Dr. Hong was also optimistic about future randomized controlled studies of machine learning approaches in healthcare. “There’s a relatively short list of studies that have prospectively tested how well an algorithm works in a clinical setting, and then an even shorter list of studies that use machine learning to drive a specific clinical intervention, particularly with gold-standard randomization,” he said. “As a physician, I’m hoping we’ll see more studies that evaluate how AI can make a clinical impact, because there’s some really exciting computational work being done out there that I hope can reach patients.”
Though the study had promising results, important questions still linger for Dr. Hong: how well the results would translate at other institutions; how to minimize bias and make sure that algorithms are fair across socioeconomic factors; and how to ensure sustainability, since both practice and data change over time.
Long-term, Dr. Hong envisions machine learning as a way of improving care pathways in oncology and medicine, including “smart clinics” in which technology supplements patient care. Getting there will take work: more studies like SHIELD-RT, proven in the clinical sphere rather than just in theory. “Clinicians don’t necessarily trust algorithms to deliver high-quality care,” said Dr. Hong. “We need some of this high-level evidence, especially since AI and machine learning models frequently lose accuracy when they’re deployed, and it’s challenging to show that they’re actionable to improve patient outcomes.”