Catherine Saunders, PhD Student
A person newly diagnosed with cancer is suddenly faced with the need to learn about all aspects of the disease, including the risks and benefits of various treatment options. Most healthcare providers offer their patients educational materials, such as informational pamphlets, to help them understand their diagnosis and available treatments. However, few techniques exist for systematically reviewing the quality of written health information, leaving patients with little guidance about which sources to trust. In frustration, many patients turn to Google, where the information they find can be inconsistent or unreliable.
How we’re meeting it
In an effort to improve what can often be a frustrating and confusing search for information, two Dartmouth Institute PhD students, Katie Saunders MPH '16 and Curtis Petersen MPH '14, are using machine-learning technology to evaluate whether the information patients receive is clear and helpful. They recently co-authored a paper in JCO Clinical Cancer Informatics with faculty co-authors Glyn Elwyn, MD, PhD, MSc, and Marie-Anne Durand, PhD, MSc, MPhil, investigating how machine-learning technology has been applied to analyze a variety of patient education materials, including handouts, decision aids, and brochures. They found that the full potential of machine learning to assess such materials has yet to be leveraged.
Now, building on that work, the duo is developing their own machine-learning model to assess patient education materials. While researchers have developed ways to score whether a document is trustworthy, unbiased, or user friendly, most current rating systems are manual: a real person must do the assessment, a process that is time-intensive and not particularly reliable. Technologies like natural language processing can perform even more nuanced assessments on larger quantities of text than human raters can. Saunders and Petersen say that, from the patient's perspective, the goal of their work is to produce material that conveys unbiased and actionable information. From the provider's perspective, it is to supply material that providers feel confident can help patients understand their health better.
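To make the idea of automated text assessment concrete, here is a minimal sketch of one classic readability metric, the Flesch Reading Ease score, implemented from scratch. This is only an illustration of the kind of signal an automated rater can extract; it is not the model Saunders and Petersen are building, and the syllable-counting heuristic is a simplification.

```python
import re

def count_syllables(word: str) -> int:
    # Heuristic: count runs of consecutive vowels, then drop a silent
    # trailing "e". Every word counts as at least one syllable.
    word = word.lower()
    vowel_groups = re.findall(r"[aeiouy]+", word)
    count = len(vowel_groups)
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease: higher scores mean easier text
    # (roughly, 90+ reads at a 5th-grade level, below 30 is very hard).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# A patient-friendly sentence scores far higher than dense clinical prose.
simple = "The test is safe. It does not hurt."
dense = ("Chemotherapeutic interventions necessitate "
         "comprehensive physiological evaluation.")
print(flesch_reading_ease(simple) > flesch_reading_ease(dense))
```

Formulas like this capture only surface features (sentence and word length); the appeal of machine-learning approaches is that they can go beyond such proxies toward judgments about clarity, bias, and actionability.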
Saunders and Petersen have assembled a small team to help them tackle this problem and build their new model, including three Dartmouth Institute MPH students: Hema Karunakaram, a program manager at IBM's Watson Health; Samuel Verkhovsky, the manager of interpreter services at Dartmouth-Hitchcock Medical Center; and Arian Khoshgowari, an aspiring physician. They are joined by Josh Levy, a PhD student in Dartmouth's graduate program in quantitative biomedical sciences.