Research into effectiveness of AI at diagnosing disease

The first systematic review and meta-analysis into the effectiveness of AI at diagnosing disease suggests artificial intelligence may be as effective as health professionals.

But with only a small number of high-quality studies to draw on, the true power of AI remains uncertain, and the researchers call for higher standards of research and reporting to improve future evaluations.

AI appears to detect diseases from medical imaging with a level of accuracy similar to that of healthcare professionals, according to the first systematic review and meta-analysis to synthesise all the available evidence from the scientific literature, published in The Lancet Digital Health journal.

Nevertheless, only a few studies were of sufficient quality to be included in the analysis. The authors caution that the true diagnostic power of the AI technique known as deep learning (the use of algorithms, big data, and computing power to emulate human learning and intelligence) remains uncertain, because of the lack of studies that directly compare the performance of humans and machines, or that validate AI’s performance in real clinical environments.

“We reviewed over 20,500 articles, but less than 1% of these were sufficiently robust in their design and reporting that independent reviewers had high confidence in their claims. What’s more, only 25 studies validated the AI models externally (using medical images from a different population), and just 14 studies actually compared the performance of AI and health professionals using the same test sample,” explained Professor Alastair Denniston from University Hospitals Birmingham NHS Foundation Trust, UK, who led the research.

“Within those handful of high-quality studies, we found that deep learning could indeed detect diseases ranging from cancers to eye diseases as accurately as health professionals. But it’s important to note that AI did not substantially out-perform human diagnosis.”

With deep learning, computers can examine thousands of medical images to identify patterns of disease. This offers enormous potential for improving the accuracy and speed of diagnosis. Reports of deep learning models outperforming humans in diagnostic testing have generated much excitement and debate, and more than 30 AI algorithms for healthcare have already been approved by the US Food and Drug Administration.
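
To make the idea concrete, the toy example below sketches the kind of convolutional image classifier typically evaluated in such studies. It is purely illustrative and assumed for this article, not a model from the review: the architecture, input size, and two-class output are arbitrary choices.

```python
# Illustrative sketch only: a minimal convolutional classifier of the kind used in
# deep-learning diagnostic studies. Architecture, input size, and class count are
# assumptions for demonstration, not models evaluated in the review.
import torch
import torch.nn as nn

class TinyImagingClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Two convolution/pooling stages learn local image patterns (edges, textures)
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Pool to a fixed-size vector and map it to one score per diagnostic class
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))

    def forward(self, x):
        return self.head(self.features(x))

# A batch of four single-channel 128x128 "scans" (random tensors standing in for real images)
scans = torch.randn(4, 1, 128, 128)
logits = TinyImagingClassifier()(scans)
print(logits.shape)  # torch.Size([4, 2]) -- one score per class for each image
```

In a real study, a model of this kind would be trained on large sets of labelled scans and then evaluated against clinicians on a held-out test set.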

Despite strong public interest and market forces driving the rapid development of these technologies, concerns have been raised about whether study designs are biased in favour of machine learning, and about how applicable the findings are to real-world clinical practice.

To provide more evidence, the researchers conducted a systematic review and meta-analysis of all studies published between January 2012 and June 2019 that compared the performance of deep learning models and health professionals in detecting diseases from medical imaging. They also evaluated study design, reporting, and clinical value.

In total, 82 articles were included in the systematic review. Of these, 69 contained enough data to calculate test performance accurately and were analysed. Pooled estimates from the 25 articles that validated the results in an independent subset of images were included in the meta-analysis.

Analysis of data from the 14 studies that compared the performance of deep learning with that of humans in the same sample found that, at best, deep learning algorithms correctly detected disease in 87% of cases (sensitivity), compared with 86% achieved by healthcare professionals. The ability to accurately exclude patients who don’t have the disease was also similar for deep learning algorithms (93% specificity) compared with healthcare professionals (91%).
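
As a rough guide to what those two figures mean, the short sketch below shows how sensitivity and specificity are calculated from confusion-matrix counts. The counts are hypothetical, chosen only to produce numbers of a similar size; they are not data from the review.

```python
# Illustrative only: computing sensitivity and specificity from confusion-matrix counts.
# The counts below are made up for the example; they are not taken from the study.
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # share of diseased cases correctly detected
    specificity = tn / (tn + fp)  # share of disease-free cases correctly excluded
    return sensitivity, specificity

# Hypothetical test set: 100 diseased and 100 healthy patients
print(sensitivity_specificity(tp=87, fn=13, tn=93, fp=7))  # (0.87, 0.93)
```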

Importantly, the authors note several limitations in the methodology and reporting of the AI diagnostic studies included in the analysis. Deep learning was frequently assessed in isolation, in a way that does not reflect clinical practice. For example, only four studies provided health professionals with the additional clinical information they would normally use to make a diagnosis in clinical practice. Additionally, few prospective studies were done in real clinical environments, and the authors say that determining diagnostic accuracy requires high-quality comparisons in patients, not just in datasets. Poor reporting was also common, with most studies not reporting missing data, which limits the conclusions that can be drawn.