What is it about?
When RT-PCR falls short for early diagnosis and for gauging COVID-19 severity, Computed Tomography (CT) scans are needed for COVID-19 diagnosis, especially in patients presenting with ground-glass opacities, consolidations, and crazy-paving patterns. Manual lesion detection in CT is challenging and tedious for radiologists. Solo deep learning (SDL) models were tried previously but achieved only low to moderate performance. This study presents two new cloud-based quantized hybrid deep learning (HDL) UNet3+ models, which incorporate full-scale skip connections to improve lesion detection. Annotations from expert radiologists were used to train one SDL model (UNet3+) and two HDL models, namely VGG-UNet3+ and ResNet-UNet3+. A 5-fold cross-validation protocol was adopted in the cloud framework, with training on 3,500 CT scans and testing on 500 unseen CT scans. Two loss functions were used: Dice Similarity (DS) and binary cross-entropy (BCE). Performance was evaluated using (i) area error, (ii) DS, (iii) Jaccard Index, (iv) Bland–Altman plots, and (v) correlation plots.
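As a rough illustration of the two loss terms and two of the evaluation metrics named above, here is a minimal PyTorch sketch of a combined Dice + BCE segmentation loss and a DS/Jaccard metric helper. The class name, smoothing constants, and equal weighting of the two loss terms are assumptions for illustration, not the authors' published implementation.

```python
import torch
import torch.nn as nn

class DiceBCELoss(nn.Module):
    """Combined Dice + binary cross-entropy loss for binary lesion masks.

    A minimal sketch of the two loss terms named in the paper; the exact
    weighting and smoothing used by COVLIAS 3.0 are assumptions here.
    """
    def __init__(self, smooth: float = 1.0):
        super().__init__()
        self.smooth = smooth
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        probs = torch.sigmoid(logits)
        intersection = (probs * target).sum()
        dice = (2.0 * intersection + self.smooth) / (
            probs.sum() + target.sum() + self.smooth
        )
        # Equal weighting of BCE and (1 - Dice) is an assumed choice.
        return self.bce(logits, target) + (1.0 - dice)

def dice_and_jaccard(pred_mask: torch.Tensor, true_mask: torch.Tensor,
                     eps: float = 1e-7) -> tuple[float, float]:
    """Dice Similarity and Jaccard Index for binarized prediction masks."""
    intersection = (pred_mask * true_mask).sum()
    total = pred_mask.sum() + true_mask.sum()
    dice = (2.0 * intersection + eps) / (total + eps)
    jaccard = (intersection + eps) / (total - intersection + eps)
    return dice.item(), jaccard.item()
```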
Why is it important?
The study underscores the significance of developing advanced AI-based diagnostic tools for COVID-19, particularly for rapid and accurate lesion segmentation in medical imaging. The proposed hybrid deep learning (HDL) models, VGG-UNet3+ and ResNet-UNet3+, outperform traditional models in prediction accuracy and processing speed, making them suitable for real-time medical applications. These models reduce the computational load and improve lesion detection while requiring less training data. The cloud-based system, COVLIAS 3.0, further ensures accessibility and scalability, facilitating efficient diagnosis and patient care during pandemics like COVID-19.
Read the Original
This page is a summary of: COVLIAS 3.0: cloud-based quantized hybrid UNet3+ deep learning for COVID-19 lesion detection in lung computed tomography, Frontiers in Artificial Intelligence, June 2024. DOI: 10.3389/frai.2024.1304483