AI improving digestive cancer diagnosis, but data-sharing obstacles remain
MedicalXpress Breaking News and Events, Mar 04, 2023
Artificial intelligence is helping to deliver earlier and better diagnoses of digestive cancers, but many challenges to widespread clinical application remain, not least the limited sharing of medical imaging data between hospitals and the lack of standardized protocols for medical imaging for AI, a group of researchers has concluded after a comprehensive survey of recent applications of the technology to these most deadly of cancers.
A paper describing their findings appeared in the journal Health Data Science.
Digestive system tumors are the leading cause of cancer deaths worldwide, with a five-year survival rate of under 20 percent. Five of the seven deadliest cancers arise from these digestive system tumors, or neoplasms, as physicians describe tumors: esophageal cancer, gastric cancer, colorectal cancer, primary liver cancer, and pancreatic cancer.
Clinical treatment of digestive system neoplasms (DSNs) has improved in recent decades, but the prognosis of DSN patients remains dismal. This is partly due to the aggressive nature of these cancers, but also likely due to challenges in achieving an early diagnosis and in accurately assessing treatment response.
"If earlier, superior diagnoses can be performed, then this should improve prognoses," said Jie Tian, an artificial intelligence specialist with the Key Laboratory of Big Data-Based Precision Medicine at Beihang University and co-author of the paper.
Tissue-based genomic and proteomic assessments of tumors offer enormous promise on the diagnostic front. These new technologies can sequence the whole genome and the full set of proteins produced by cells in a tissue sample from a tumor. But they are also intrinsically limited by the fact that a small portion of tumor tissue can never represent the whole of the tumor.
Medical imaging such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) can in principle provide a complementary but also more comprehensive characterization of the tumor. These medical imaging techniques are regularly used as part of the clinical routine for preoperative diagnosis and evaluation of treatment response.
But assessment of the clinical features of a tumor that assist in diagnosis is as much a qualitative art as a quantitative science, with a great deal of variability between the radiologists assessing the images. Such assessment of medical images is also extremely time-consuming and labor-intensive.
Artificial intelligence (AI) algorithms, however, can mine specific imaging features from medical images automatically, which could streamline this complex process, not by replacing clinicians but by assisting them and reducing their workload.
Even better, as has been demonstrated in many fields beyond medicine, an AI can mine imaging features and patterns that cannot be detected by the naked eye, or by humans at all.
"And so in addition to these amazing new genomic and proteomic sequencing technologies, and conventional medical imaging, over the last decade, there has been a lot of research and experimentation into whether artificial intelligence could help not only with the aim of earlier diagnoses," continued Professor Tian, "but to produce diagnoses that are much more tailored to the patient, to the whole of their tumor—what we call precision medicine."
The group wanted to survey the state of research into the use of AI systems to assist with DSN diagnosis, and how experimental efforts thus far had fared across the four most common digestive system cancers. Their paper gives an overview of how far such research has developed and lays out the challenges yet to be overcome.
They note that there are two main AI approaches to DSN medical imaging: radiomics and deep learning. The first uses data-characterization algorithms to extract imaging features from the image. This involves segmentation, or 'chunking' the image into different parts: which pixel is part of a tumor, and which is something else? With radiomics, radiologists often manually label these parts to train the AI to recognize and categorize the segments.
"This manual segmentation is once again labor-intensive, so hardly reducing workload compared to human assessment of a medical image," said Shuaitong Zhang, another author of the paper and associate professor at Beihang University.
"And it also, once again, introduces variability from radiologist to radiologist."
With deep learning, however, a more complex form of AI, the system needs only a very coarse segmentation of tumor regions before it trains itself and automatically comes up with its own segment labels.
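By way of contrast, the sketch below shows the kind of small convolutional network a deep-learning approach might train directly on coarsely cropped tumor patches, learning its own imaging features rather than relying on hand-crafted ones. The architecture, patch size, and two-class setup are assumptions made for illustration, not the models examined in the survey.

```python
import torch
import torch.nn as nn

class TumorPatchCNN(nn.Module):
    """Tiny CNN that learns its own features from coarse 64x64 tumor
    crops, instead of using hand-crafted radiomics features."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)          # learned feature maps
        x = x.flatten(1)              # (batch, 64)
        return self.classifier(x)     # logits, e.g. low- vs high-risk

# One training step on a dummy batch of 8 single-channel 64x64 patches.
model = TumorPatchCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
patches = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(patches), labels)
loss.backward()
optimizer.step()
print(float(loss))
```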
Both radiomics and deep learning profoundly depend on large, well-annotated datasets from a great many hospitals in order to build a robust internal model of tumors that can then be generalized to any patient.
This is where the first major challenge appears. Practices for acquiring and parameterizing such medical images vary substantially between hospitals, affecting the robustness and thus the generalizability of any AI model. In addition, although there are large numbers of medical images of DSNs, well-annotated image data are limited.
The researchers conclude, however, that there are multiple methods that can mitigate this problem, including image resampling; rotation, flipping, and shifting of images; and careful blurring of images to reduce image noise. In addition, standardized protocols for medical imaging for AI should improve reproducibility and comparability.
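As a rough illustration of those mitigations (not the protocols proposed in the paper), the sketch below applies resampling, random rotation, flipping, shifting, and mild Gaussian blurring to a synthetic scan using SciPy; all parameter ranges are assumed for demonstration.

```python
import numpy as np
from scipy import ndimage

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply the kinds of mitigation the survey mentions: resampling,
    rotation/flip/shift augmentation, and mild blurring to suppress noise."""
    out = image.astype(float)

    # Resampling: a random zoom near 1.0 stands in for harmonizing
    # pixel spacing between scanners and hospitals.
    out = ndimage.zoom(out, rng.uniform(0.9, 1.1), order=1)

    # Random rotation, flip, and shift to enlarge the effective dataset.
    out = ndimage.rotate(out, angle=rng.uniform(-15, 15), reshape=False, order=1)
    if rng.random() < 0.5:
        out = np.fliplr(out)
    out = ndimage.shift(out, shift=rng.uniform(-5, 5, size=2), order=1)

    # Careful blurring to reduce image noise.
    return ndimage.gaussian_filter(out, sigma=rng.uniform(0.0, 1.0))

rng = np.random.default_rng(42)
scan = rng.normal(0.0, 10.0, size=(128, 128))
print(augment(scan, rng).shape)
```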
Moreover, high-quality datasets are usually not publicly available, which can hinder the validation and comparison of different AI models.
"Much more generous data sharing will be vital for a robust and clinically applicable AI model," said Dr. Zhang.
For the moment, the majority of published studies on the use of AI for DSN assessment remain stuck in the realm of the more labor-intensive and time-consuming practice of radiomics rather than deep learning. Deep learning's current stumbles are likely due to the great complexity of tumors.
Both forms of AI diagnostic assistance do indeed demonstrate considerable benefit across the four cancers considered, performing better than human assessment of medical imaging alone and enabling, for example, better identification of high-risk patients who need intensive treatment. But the labor intensity of radiomics may hinder its widespread application.
A final challenge relates to the need for clinicians to be able to interpret what an AI's internal model is articulating. This is especially true for deep learning models, where what the AI identifies, and why, remains something of a 'black box'. And so prior to application in clinical practice, the researchers recommend that AI-assisted DSN diagnosis be validated in trials with far more participants across many more hospitals than the mainly small trials considered for their survey paper.
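One common way to peer inside such a 'black box' is a gradient saliency map, which highlights the pixels that most influenced a prediction. The sketch below is a generic illustration of that idea, not a method described in the survey; the stand-in classifier and the 64x64 input are assumptions.

```python
import torch
import torch.nn as nn

# A stand-in classifier; in practice this would be the trained deep model.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

# Gradient saliency: which pixels most influence the predicted class?
patch = torch.randn(1, 1, 64, 64, requires_grad=True)   # one image crop
logits = model(patch)
logits[0, logits.argmax()].backward()                    # gradient of top class
saliency = patch.grad.abs().squeeze()                    # (64, 64) importance map
print(saliency.shape, float(saliency.max()))
```

A clinician could overlay such a map on the original scan to check whether the model is attending to the tumor or to irrelevant background, one small step toward the interpretability the researchers call for.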
For their own part, the team now aim to develop their own AI model dedicated to esophageal cancer, but one that is straightforwardly interpretable by clinicians.
--Health Data Science