Despite significant advances in genetics and modern imaging, the diagnosis still catches the vast majority of breast cancer patients by surprise.
For some, it comes too late. Later diagnosis means more aggressive treatments, greater anxiety, and more uncertain outcomes. Identifying patients at risk before the disease develops has therefore been a central pillar of breast cancer research and of effective early detection programs.
A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Massachusetts General Hospital (MGH) has created a new deep learning model that can predict from a mammogram if a patient is likely to develop breast cancer in the future. They trained their model on mammograms and known outcomes from over 60,000 patients treated at MGH, and their model learned the subtle patterns in breast tissue that are precursors to malignancy.
MIT professor Regina Barzilay, herself a breast cancer survivor, says that the hope is for systems like these to enable doctors to customize screening and prevention programs at the individual level, making late diagnosis a relic of the past.
Though mammography has been shown through randomized clinical trials to reduce breast cancer mortality, there is continued debate on when to start screening and how often to perform it. While the American Cancer Society recommends annual screening starting at age 45, the US Preventive Services Task Force recommends biennial screening starting at age 50.
“Rather than taking a one-size-fits-all approach, we can personalize screening around a woman’s risk of developing cancer,” says Barzilay, senior author of a new paper in Radiology about the project. “For example, a doctor might recommend supplemental MRI screening for women with high model-assessed risk.”
The team’s model was significantly better at predicting risk than existing approaches: it accurately placed 31 percent of all cancer patients in its highest-risk category, compared to only 18 percent for traditional models.
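One common way to report a statistic like this is to rank all patients by predicted risk, define the highest-risk category as some top fraction of scores, and measure what share of the patients who later developed cancer fall into that bucket. The sketch below illustrates the idea on synthetic data; the bucket definition (top decile) and all numbers are assumptions for illustration, not the paper's actual evaluation protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: risk scores for 1000 patients, 50 of whom
# later develop cancer. These are made-up numbers, not study data.
labels = np.zeros(1000, dtype=bool)
labels[:50] = True
scores = rng.random(1000) + 0.3 * labels  # future cancers skew slightly higher

# Assume the "highest-risk category" is the top 10% of scores (the paper's
# exact category definition is not given in this article).
threshold = np.quantile(scores, 0.9)
top_bucket = scores >= threshold

# Fraction of all future cancer patients placed in the top-risk bucket.
share_captured = labels[top_bucket].sum() / labels.sum()
print(f"{share_captured:.0%} of future cancer patients fall in the top decile")
```

A better model pushes future cancer patients toward the top of the ranking, raising this captured share, which is the sense in which 31 percent beats 18 percent.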
Constance Lehman, professor of radiology at Harvard Medical School and division chief of breast imaging at Massachusetts General Hospital, noted that “support in the medical community for risk-based, rather than age-based, screening strategies has been low because we have not had accurate risk assessment tools that work for individual women … until now.” Barzilay and Lehman co-wrote the paper with lead author Adam Yala, a PhD student at MIT CSAIL. Other co-authors include MIT PhD student Tal Schuster and former MIT master’s student Tally Portnoi.
How it works
Since the first breast cancer risk model was published in 1989, model development has largely been driven by human knowledge and intuition about which factors constitute risk: age, detailed family history of breast and ovarian cancer, hormonal and reproductive factors, and breast density. However, most of these markers are only weakly correlated with breast cancer. As a result, even after decades of refinement, these models are still not very accurate at the individual level, and most organizations continue to regard risk-based screening programs as impractical given those limitations.
Rather than manually identifying the patterns in a mammogram that drive future cancer, the MIT/MGH team trained a deep learning model to induce the patterns directly from the data. By training on over 90,000 mammograms, the model can learn to pick up on patterns too subtle and too complex for the human eye to detect.
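The general approach, learning predictive image patterns directly from pixels rather than from hand-chosen risk factors, can be sketched with a small convolutional network. This is a minimal, hypothetical illustration in PyTorch; the team's actual architecture, image resolution, and training setup are not described in this article, and every name and size below is an assumption.

```python
import torch
import torch.nn as nn

class RiskSketch(nn.Module):
    """Toy convolutional network: mammogram-like image in, risk score out.
    Illustrative only; NOT the MIT/MGH model's actual architecture."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # grayscale input
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # global average pool
        )
        self.classifier = nn.Linear(16, 1)               # single risk logit

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h))         # risk score in (0, 1)

model = RiskSketch()
dummy = torch.randn(4, 1, 64, 64)  # batch of 4 fake 64x64 "mammograms"
risk = model(dummy)
print(risk.shape)  # torch.Size([4, 1])
```

Trained end-to-end on tens of thousands of mammograms with known outcomes, such a network adjusts its convolutional filters to respond to whatever tissue patterns best predict future cancer, including patterns no one specified in advance.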
“Since the 1960s, radiologists have noticed women have unique and widely variable patterns of breast tissue visible on the mammogram,” says Lehman. “These patterns can represent the influence of genetics, hormones, pregnancy, lactation, diet, weight loss and weight gain. We can now leverage this detailed information to be more precise in our risk assessment at the individual woman level.”
Making cancer detection more equitable
The project also takes a pointed step toward making risk assessment more accurate for racial minorities in particular.
Many early detection models were developed on populations of white women, which makes them much less accurate for women of other races. The MIT/MGH model, by contrast, is equally accurate for white and Black women, a major step toward more inclusive models. This is especially important for African American women, who are 43 percent more likely to die from breast cancer than white women.
“It’s particularly striking that the model performs equally as well for black and white people, which has not been the case with prior risk assessment tools,” says Allison Kurian, associate professor of medicine and of health research and policy at Stanford University School of Medicine. “If validated and made available for widespread use, this could really improve on our current strategies to estimate risk.”
“The recent federal legislation that requires all women be informed of their mammographic breast density is a strong message from Congress that women deserve access to their health information,” says Lehman. “We are eager to provide this power of information to all women undergoing screening mammography, not only by sharing their mammographic density but also their future risk of breast cancer.”
Barzilay says such a system could one day enable doctors to use mammograms to see whether patients are at greater risk for other health problems, like cardiovascular disease or other cancers. The researchers are eager to apply the approach to other diseases, especially those that lack effective risk models, such as pancreatic cancer.
“Our goal is to make these advancements a part of the standard of care,” says Yala. “By predicting who will develop cancer in the future, we can catch cancer before symptoms ever arise and hopefully save lives.”