MSDS researchers aim for improved X-ray diagnostics via neural networks

From left: MSDS student researchers Timothy Rodriguez, Alanna Hazlett, and Naomi Ohashi saw their research on X-ray disease detection accepted for publication.

A trio of M.S. in Data Science students recently published a paper exploring image classification with convolutional neural networks. Guided through their research project by deep learning lecturer Sodiq Adewole, students Alanna Hazlett and Timothy Rodriguez and recent MSDS graduate Naomi Ohashi saw their paper, "Chest Disease Detection in X-Ray Images Using Deep Learning Classification Method," published on arXiv, an open-access research-sharing platform operated by Cornell University's Cornell Tech that hosts more than 2 million articles.

The research team compared the performance of multiple classification models at sorting chest X-ray images into four categories: COVID-19, pneumonia, tuberculosis, and normal. Using transfer learning, they fine-tuned state-of-the-art, pre-trained convolutional neural network models on labeled medical X-ray images.

The team was motivated by the common practice of using X-rays to detect upper-respiratory diseases, but they wanted to tackle two challenges: differentiating between the diseases and the subjective nature of interpreting results. Automated diagnostic tools, they believe, can minimize error-prone readings.

By applying gradient-weighted class activation mapping (Grad-CAM), which provides visual explanations for classification decisions, the researchers aimed to improve trust and transparency in clinical applications. Neural networks have been used before to detect respiratory diseases, some with significant accuracy, but the team sought to expand the interpretability of the model's results.
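Conceptually, Grad-CAM weights each channel of the last convolutional layer's activation maps by the average gradient of the target-class score with respect to that channel, sums the weighted maps, and applies a ReLU so only regions that push the score up remain. A minimal NumPy sketch of that weighting step (the activations and gradients here are random stand-ins, not outputs of the paper's models):

```python
import numpy as np

def grad_cam_heatmap(activations, gradients):
    """activations, gradients: (C, H, W) arrays captured at the last
    conv layer for one image and one target class."""
    # alpha_k: global-average-pooled gradient, one weight per channel
    weights = gradients.mean(axis=(1, 2), keepdims=True)
    # weighted sum over channels, then ReLU to keep positive evidence
    cam = np.maximum((weights * activations).sum(axis=0), 0.0)
    # normalize to [0, 1] so the map can be overlaid on the X-ray
    return cam / (cam.max() + 1e-8)

rng = np.random.default_rng(0)
heatmap = grad_cam_heatmap(rng.standard_normal((16, 7, 7)),
                           rng.standard_normal((16, 7, 7)))
```

In a real pipeline the activations and gradients would be captured with framework hooks during a backward pass, and the low-resolution heatmap upsampled to the input image size before overlay.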

And the initial results were promising, with high classification accuracy and strong performance in key machine-learning metrics such as precision, recall, and F1 score. The team hopes that the increased understanding of how the model is making these classifications spurs more widespread acceptance and adoption in practice.
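For readers unfamiliar with the metrics mentioned above: per-class precision, recall, and F1 are computed from true-positive, false-positive, and false-negative counts. A small pure-Python example (the labels below are illustrative, not the paper's data):

```python
# Per-class precision, recall, and F1 from predicted vs. true labels.
def metrics(y_true, y_pred, cls):
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted cls, how many right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of actual cls, how many found
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean of the two
    return precision, recall, f1

# Illustrative labels only
y_true = ["covid", "normal", "covid", "tb", "pneumonia", "covid"]
y_pred = ["covid", "covid",  "covid", "tb", "pneumonia", "normal"]
p, r, f1 = metrics(y_true, y_pred, "covid")  # each 2/3 here
```

In a multi-class setting like this four-way task, these per-class scores are usually averaged (macro or weighted) to summarize overall performance.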

Hazlett said that the experience helped her grow technically and professionally. "Along with achieving strong results we gained a deeper understanding of model interpretability," she said. "Grad-CAM helps visualize what the model is focusing on when making its classification, which helps improve trust and transparency in AI."