A deep learning model that identifies imaging biomarkers on screening mammograms predicted breast cancer risk more accurately than traditional risk assessment tools.
November 30, 2020 — Researchers at Massachusetts General Hospital (MGH) have developed a deep learning model that identifies imaging biomarkers on screening mammograms to predict a patient’s risk for developing breast cancer with greater accuracy than traditional risk assessment tools. Results of the study are being presented at the annual meeting of the Radiological Society of North America (RSNA).
“Traditional risk assessment models do not leverage the level of detail that is contained within a mammogram,” said Leslie Lamb, M.D., M.Sc., breast radiologist at MGH. “Even the best existing traditional risk models may separate sub-groups of patients but are not as precise on the individual level.”
Currently available risk assessment models incorporate only a small fraction of patient data such as family history, prior breast biopsies, and hormonal and reproductive history. Only one feature from the screening mammogram itself, breast density, is incorporated into traditional models.
“Why should we limit ourselves to only breast density when there is such rich digital data embedded in every woman’s mammogram?” said senior author Constance D. Lehman, M.D., Ph.D., division chief of breast imaging at MGH. “Every woman’s mammogram is unique to her just like her thumbprint. It contains imaging biomarkers that are highly predictive of future cancer risk, but until we had the tools of deep learning, we were not able to extract this information to improve patient care.”
Lamb and a team of researchers developed the new deep learning algorithm to predict breast cancer risk using data from five MGH breast cancer screening sites. The model was developed on a population that included women with a personal history of breast cancer, breast implants, or prior biopsies.
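For readers who want a concrete picture of what an image-only risk model looks like, the following is a minimal, purely illustrative PyTorch sketch. The architecture, layer sizes, and class name are assumptions chosen for demonstration; they are not the MGH team's published model.

```python
# Illustrative sketch only: a generic CNN that maps a single-channel mammogram
# to a 5-year risk estimate. Not the authors' architecture.
import torch
import torch.nn as nn

class MammogramRiskModel(nn.Module):          # hypothetical class name
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(          # extracts image features from a 1-channel mammogram
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.risk_head = nn.Linear(32, 1)      # single logit for 5-year cancer risk

    def forward(self, x):
        features = self.encoder(x).flatten(1)
        return torch.sigmoid(self.risk_head(features))  # predicted probability of cancer within 5 years

model = MammogramRiskModel()
risk = model(torch.randn(1, 1, 256, 256))      # one synthetic 256x256 screening image
```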
The study included 245,753 consecutive 2D digital bilateral screening mammograms performed in 80,818 patients between 2009 and 2016. From the total mammograms, 210,819 exams in 56,831 patients were used for training, 25,644 exams from 7,021 patients for testing, and 9,290 exams from 3,961 patients for validation.
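As a rough illustration of how such a split can be organized at the patient level (so that all exams from one patient stay in the same subset), here is a short Python sketch. The function name, data format, and split fractions are assumptions chosen to roughly mirror the proportions reported above; this is not the authors' code.

```python
# Illustrative sketch only: patient-level split into training, testing, and
# validation sets, keeping every exam from a given patient in one subset.
import random
from collections import defaultdict

def split_by_patient(exams, seed=0, train_frac=0.85, test_frac=0.10):
    """exams: list of dicts, each with a 'patient_id' key plus exam metadata."""
    by_patient = defaultdict(list)
    for exam in exams:
        by_patient[exam["patient_id"]].append(exam)

    patient_ids = sorted(by_patient)
    random.Random(seed).shuffle(patient_ids)   # reproducible shuffle of patients

    n = len(patient_ids)
    n_train = int(n * train_frac)
    n_test = int(n * test_frac)
    splits = {
        "train": patient_ids[:n_train],
        "test": patient_ids[n_train:n_train + n_test],
        "val": patient_ids[n_train + n_test:],
    }
    # Expand each patient group back into its exams.
    return {name: [e for pid in ids for e in by_patient[pid]]
            for name, ids in splits.items()}
```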
Using statistical analysis, the researchers compared the accuracy of the deep learning image-only model to a commercially available risk assessment model (Tyrer-Cuzick version 8) in predicting future breast cancer within five years of the index mammogram. The deep learning model achieved a predictive rate (area under the receiver operating characteristic curve) of 0.71, significantly outperforming the traditional risk model, which achieved 0.61.
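The comparison above is the kind of discrimination statistic commonly summarized as an area under the ROC curve. The snippet below shows how such a figure is computed with scikit-learn; the arrays are synthetic placeholders, not study data.

```python
# Illustrative sketch only: comparing the discrimination of two risk models
# with the area under the ROC curve. All values below are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
developed_cancer = rng.integers(0, 2, size=n)   # 1 = cancer within 5 years (synthetic labels)
deep_learning_risk = rng.random(n)              # model's predicted 5-year risk (synthetic)
tyrer_cuzick_risk = rng.random(n)               # traditional model's risk estimate (synthetic)

auc_dl = roc_auc_score(developed_cancer, deep_learning_risk)
auc_tc = roc_auc_score(developed_cancer, tyrer_cuzick_risk)
print(f"Deep learning AUC: {auc_dl:.2f}  Tyrer-Cuzick AUC: {auc_tc:.2f}")
```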
“Our deep learning model is able to translate the full diversity of subtle imaging biomarkers in the mammogram that can predict a woman’s future risk for breast cancer,” Lamb said.
Lamb said the new deep learning model has been externally validated in Sweden and Taiwan, and additional studies are planned for larger African-American and minority populations.
At MGH, the deep learning risk assessment is available in the reporting software when the radiologist reads a patient's screening mammogram.
“Traditional risk models can be time-consuming to acquire and rely on inconsistent or missing data,” Lamb said. “A deep learning image-only risk model can provide increased access to more accurate, less costly risk assessment and help deliver on the promise of precision medicine.”
Co-authors are Adam Yala, M.Eng., Peter Mikhael, B.S., and Regina Barzilay, Ph.D.
For more information: www.rsna.org