
Abstract

Background: Chest X-ray (CXR) remains the most widely used imaging modality for evaluating pulmonary diseases. Interpretation of lung opacities on CXRs is traditionally qualitative and subject to inter-observer variability. Artificial intelligence (AI) offers an opportunity for objective and reproducible quantification of lung opacities. Objectives: To quantitatively assess lung opacity extent on routine chest X-rays using AI-based analysis, compare AI scores with radiologist grading, and evaluate the relationship between lung opacity severity and clinical outcomes. Methods: A prospective cross-sectional observational study was conducted on 80 patients with radiographically evident lung opacities at a tertiary care hospital. AI-based image processing software quantified lung opacity extent (%) and generated opacity scores. These were compared with radiologist-assigned opacity grades. Statistical analysis included descriptive statistics, ANOVA, chi-square test, Pearson correlation, and ROC curve analysis. Results: The mean lung opacity extent was 46.59 ± 25.74%. No statistically significant difference in opacity extent was observed across different pulmonary diagnoses (ANOVA, p = 0.489). AI opacity scores showed no significant association with radiologist grading (χ² = 160.0, p = 0.441). Lung opacity extent did not correlate with hospital stay duration (r = 0.001, p = 0.991). ROC analysis demonstrated poor predictive performance of AI opacity score for severity classification (AUC = 0.584). Conclusion: AI-based quantitative lung opacity analysis provides objective measurements but showed limited agreement with radiologist interpretation and poor predictive accuracy for disease severity. Further refinement of AI models and integration with clinical parameters are required to enhance clinical utility.

Keywords

Chest X-ray, Lung opacity, Artificial intelligence, Quantitative imaging, Computer-aided diagnosis

Introduction

Pulmonary diseases such as pneumonia, tuberculosis, pulmonary edema, interstitial lung disease, and lung cancer remain major contributors to global morbidity and mortality. Chest X-ray (CXR) imaging continues to be the most frequently employed diagnostic modality for initial evaluation of suspected lung pathology due to its wide availability, low cost, rapid acquisition, and relatively low radiation dose [1]. Detection and interpretation of lung opacities on CXRs play a central role in clinical decision-making, disease staging, and treatment monitoring. Despite its clinical importance, conventional interpretation of lung opacities on chest radiographs is largely qualitative and dependent on the experience of the radiologist. This subjectivity leads to considerable inter-observer variability, particularly in cases with subtle, diffuse, or overlapping radiographic findings [2]. Variations in image quality, patient positioning, and anatomical superimposition further complicate accurate assessment, potentially resulting in delayed diagnosis or inconsistent severity grading [3].

Recent advances in computer-aided diagnosis (CAD) and artificial intelligence (AI) have introduced quantitative approaches to chest radiograph analysis. These methods enable objective measurement of lung opacity extent, density, and spatial distribution, thereby reducing observer-dependent bias and improving reproducibility [4]. Quantitative lung opacity analysis has shown particular value in disease severity assessment, longitudinal monitoring, and evaluation of treatment response, especially in settings where advanced imaging modalities such as computed tomography (CT) are not readily accessible [5]. Chest X-ray remains indispensable in emergency departments, outpatient clinics, and intensive care units, where rapid decision-making is critical [6]. However, the limitations of purely visual assessment have prompted growing interest in automated image analysis techniques. Deep learning-based models have demonstrated promising performance in detecting and quantifying lung opacities associated with pneumonia, tuberculosis, interstitial lung disease, and viral infections, including COVID-19 [7,8]. During the COVID-19 pandemic, AI-assisted CXR analysis proved valuable for severity stratification, triage, and outcome prediction, highlighting the clinical relevance of quantitative imaging tools [9].

Lung opacities represent regions of increased pulmonary density on chest radiographs and may arise from infectious, inflammatory, neoplastic, or vascular processes. Radiographically, these opacities manifest in diverse patterns such as alveolar consolidation, interstitial thickening, nodular lesions, ground-glass opacities, and reticular or honeycomb patterns, each associated with specific disease processes [10]. Accurate characterization of these patterns is essential for differential diagnosis, yet qualitative interpretation alone often fails to capture subtle differences in extent and severity. Quantitative analysis offers several advantages over traditional qualitative assessment. Automated segmentation and pixel-based density analysis allow precise estimation of the percentage of lung involvement, facilitating standardized severity scoring and enabling meaningful comparisons across patients and time points [11]. Moreover, AI-driven systems can process large volumes of imaging data efficiently, supporting high-throughput clinical workflows and reducing radiologist workload [12].
Despite these advancements, challenges remain, including variability in image acquisition protocols, limited availability of annotated datasets, and concerns regarding generalizability across populations. Nonetheless, ongoing developments in deep learning architectures, federated learning, and multi-institutional training frameworks continue to enhance the robustness of AI-based imaging tools [13].

Given the persistent reliance on chest radiography for pulmonary disease evaluation, particularly in resource-limited settings, there is a pressing need for accurate, objective, and reproducible methods to quantify lung opacities on routine CXRs. Integrating AI-based quantitative analysis into standard radiological practice has the potential to improve diagnostic consistency, support clinical decision-making, and enhance patient outcomes. The present study aims to evaluate AI-based quantitative lung opacity assessment on routine chest X-ray radiographs and examine its diagnostic utility in comparison with conventional radiologist interpretation.

MATERIALS AND METHODS

Study Design and Setting

This study was designed as a prospective cross-sectional observational study conducted in the Department of Radiology at SCPM Hospital, Gonda, Uttar Pradesh, India. The study was carried out over a defined study period after obtaining institutional ethical clearance and written informed consent from all participants.

Study Population

The study population comprised adult patients referred for routine chest X-ray examination with radiographically detectable lung opacities. A total of 80 patients were included using a purposive sampling technique to ensure representation of common pulmonary pathologies.

Inclusion Criteria

  • Patients aged 18 years and above
  • Presence of lung opacities on routine chest X-ray
  • Diagnosed or clinically suspected cases of pneumonia, tuberculosis, pulmonary edema, interstitial lung disease, or lung malignancy
  • Patients who provided written informed consent

Exclusion Criteria

  • Patients with normal chest X-ray findings
  • Chest X-rays with severe motion artifacts or poor image quality unsuitable for analysis
  • Patients with prior thoracic surgery or congenital lung abnormalities
  • Patients unwilling to participate

Image Acquisition

All chest X-ray images were acquired using a digital radiography system following standard departmental protocols. Posteroanterior (PA) chest radiographs were obtained whenever feasible, with patients positioned erect and instructed to hold breath at full inspiration. Exposure parameters were adjusted according to patient body habitus to ensure optimal image quality. Image quality was later categorized as good, average, or poor based on radiographic clarity and diagnostic adequacy.

AI-Based Lung Opacity Quantification

Digital chest X-ray images were analyzed using computer-aided diagnosis (CAD) software integrated with artificial intelligence algorithms. The AI system performed automated lung field segmentation, separating normal lung parenchyma from pathological regions. Lung opacity extent was calculated as the percentage of lung area involved, based on pixel density and segmentation outputs. An AI opacity score ranging from 0 to 1 was generated for each image, reflecting the severity of lung opacity. Based on predefined thresholds, AI scores were categorized into low, moderate, and high severity groups for comparative analysis.
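
The commercial CAD system used in the study is not named, so its algorithm cannot be reproduced here. The sketch below only illustrates the pipeline described above (segmented lung fields → pixel-density-based opacity extent → 0-1 score → low/moderate/high category); the intensity threshold and category cut-offs are hypothetical placeholders, not the study's values.

```python
import numpy as np

def opacity_metrics(cxr: np.ndarray, lung_mask: np.ndarray,
                    opacity_threshold: float = 0.6,
                    category_cutoffs: tuple = (0.33, 0.66)):
    """Illustrative opacity quantification from a normalized chest X-ray.

    cxr       : 2-D array of pixel intensities scaled to [0, 1]
    lung_mask : boolean array of the same shape marking the segmented lung fields
    Returns (extent_percent, ai_score, category). The threshold and cut-offs
    are hypothetical placeholders, not the study's values.
    """
    lung_pixels = cxr[lung_mask]
    if lung_pixels.size == 0:
        raise ValueError("Lung mask is empty; segmentation failed")

    # Pixels denser than the threshold within the lung fields are counted as opacity.
    opaque = lung_pixels > opacity_threshold
    extent_percent = 100.0 * opaque.mean()

    # Map the extent to a 0-1 severity score (here simply the fraction involved).
    ai_score = extent_percent / 100.0

    low_cut, high_cut = category_cutoffs
    if ai_score < low_cut:
        category = "low"
    elif ai_score < high_cut:
        category = "moderate"
    else:
        category = "high"
    return extent_percent, ai_score, category
```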

Radiologist Assessment

All chest X-ray images were independently reviewed by qualified radiologists who were blinded to the AI results. Lung opacities were graded visually as low, medium, or high severity based on extent, density, and distribution of opacities. These assessments served as the reference standard for comparison with AI-derived scores.

Clinical Data Collection

Demographic and clinical data were collected using a structured proforma and included:

  • Age and gender
  • Smoking status
  • Clinical diagnosis
  • Duration of hospital stay

All patient data were anonymized prior to analysis.

Sample Size Calculation

The sample size was calculated using a standard formula for cross-sectional studies, assuming a 95% confidence level and a margin of error of 10%. Based on feasibility and study duration, a final sample size of 80 patients was included.
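
The assumed prevalence is not stated in the text; as a hedged worked example, the usual single-proportion formula with the conventional p = 0.5, Z = 1.96, and d = 0.10 gives:

```latex
n = \frac{Z_{1-\alpha/2}^{2}\, p\,(1-p)}{d^{2}}
  = \frac{(1.96)^{2}\,(0.5)(0.5)}{(0.10)^{2}}
  \approx 96
```

A sample of 80 is therefore consistent with the stated reduction from the calculated value on feasibility grounds.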

Statistical Analysis

Statistical analysis was performed using SPSS software.

  • Descriptive statistics (mean, standard deviation, frequency, and percentage) were used to summarize demographic variables, lung opacity extent, AI scores, and hospital stay duration.
  • One-way Analysis of Variance (ANOVA) was applied to assess differences in lung opacity extent across different pulmonary diagnoses.
  • Chi-square test was used to evaluate the association between AI-based opacity categories and radiologist-assigned opacity grades.
  • Pearson correlation analysis was conducted to examine the relationship between lung opacity extent and duration of hospital stay.
  • Receiver Operating Characteristic (ROC) curve analysis was performed to assess the diagnostic performance of the AI opacity score in classifying lung opacity severity.

A p-value < 0.05 was considered statistically significant for all analyses.
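
Although the analyses were run in SPSS, the same tests can be expressed in a few lines of open-source code. The sketch below assumes a per-patient data frame with hypothetical column names (diagnosis, opacity_extent, ai_category, radiologist_grade, ai_score, hospital_stay, severe); it illustrates the four tests listed above rather than reproducing the study's outputs.

```python
import pandas as pd
from scipy import stats
from sklearn.metrics import roc_auc_score

def run_analyses(df: pd.DataFrame) -> dict:
    """Open-source equivalents of the SPSS analyses listed above.

    Column names are hypothetical placeholders for the study variables.
    """
    # One-way ANOVA: lung opacity extent across diagnosis groups
    groups = [g["opacity_extent"].values for _, g in df.groupby("diagnosis")]
    f_stat, p_anova = stats.f_oneway(*groups)

    # Chi-square test of association: AI category vs radiologist grade
    crosstab = pd.crosstab(df["ai_category"], df["radiologist_grade"])
    chi2, p_chi2, dof, _ = stats.chi2_contingency(crosstab)

    # Pearson correlation: opacity extent vs hospital stay
    r, p_corr = stats.pearsonr(df["opacity_extent"], df["hospital_stay"])

    # ROC analysis: AI score against a binary severity label
    auc_value = roc_auc_score(df["severe"], df["ai_score"])

    return {"anova": (f_stat, p_anova),
            "chi_square": (chi2, p_chi2, dof),
            "pearson": (r, p_corr),
            "roc_auc": auc_value}
```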

RESULTS

A total of 80 patients with radiographically detectable lung opacities on routine chest X-ray were included in the analysis. The results are presented under demographic characteristics, radiographic findings, AI-based opacity analysis, and statistical associations.

Table 1. Gender Distribution of Study Participants (N = 80)

Gender     Frequency    Percentage (%)
Male       38           47.5
Female     42           52.5
Total      80           100

Interpretation:
The study population showed a nearly equal gender distribution, with a slight predominance of females (52.5%). This balanced distribution reduces gender-related sampling bias and allows reliable comparison of imaging findings.

Table 2. Smoking Status Distribution

Smoking Status    Frequency    Percentage (%)
Smoker            42           52.5
Non-smoker        38           47.5
Total             80           100

Interpretation:
More than half of the participants were smokers (52.5%), which is clinically relevant given the known association between smoking and chronic lung pathologies, malignancy, and interstitial lung disease.

Table 3. Frequency Distribution of Diagnoses

Diagnosis                    Frequency    Percentage (%)
Tuberculosis                 20           25.0
Pneumonia                    18           22.5
Interstitial Lung Disease    14           17.5
Lung Cancer                  14           17.5
Pulmonary Edema              14           17.5
Total                        80           100

Interpretation:
Tuberculosis was the most common diagnosis (25%), followed by pneumonia (22.5%). This distribution reflects the high burden of infectious lung diseases in routine clinical practice, especially in resource-limited settings.

Table 4. Distribution of Lung Opacity Severity (AI-Based)

Opacity Severity    Frequency    Percentage (%)
Mild                28           35.0
Moderate            22           27.5
Severe              30           37.5
Total               80           100

Interpretation:
Severe lung opacities were observed in 37.5% of patients, indicating that a substantial proportion presented with advanced radiographic involvement at the time of imaging.

Table 5. Radiologist Opacity Grading

Opacity Grade    Frequency    Percentage (%)
Low              30           37.5
Medium           25           31.3
High             25           31.3
Total            80           100

Interpretation:
Radiologist grading showed the highest proportion of cases classified as low severity (37.5%). The spread of cases across the three grades highlights the subjective variation inherent in visual assessment.

Table 6. Image Quality Assessment

Image Quality    Frequency    Percentage (%)
Good             30           37.5
Average          23           28.7
Poor             27           33.8
Total            80           100

Interpretation:
Only 37.5% of CXRs were graded as good quality, emphasizing the importance of AI-based analysis that can function reliably even with suboptimal imaging conditions.

Table 7. Descriptive Statistics

Variable                   Mean ± SD        Minimum    Maximum
Age (years)                46.34 ± 17.74    20         79
Lung Opacity Extent (%)    46.59 ± 25.74    5.27       94.07
AI Opacity Score           0.612 ± 0.225    0.209      0.966
Hospital Stay (days)       13.04 ± 7.92     1          29

Interpretation:
The wide range of lung opacity extent indicates significant inter-patient variability. The AI opacity score demonstrated relatively consistent dispersion, supporting its reproducibility as a quantitative measure.

Table 8. One-Way ANOVA for Lung Opacity Extent by Diagnosis

Source            Sum of Squares    df    Mean Square    F        p-value
Between groups    2309.56           4     577.39         0.866    0.489
Within groups     50014.26          75    666.86
Total             52323.82          79

Interpretation:
No statistically significant difference was observed in lung opacity extent across different pulmonary diagnoses (p = 0.489). This suggests that opacity extent alone may not be sufficient for disease differentiation.
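
As a quick arithmetic check (not an additional analysis), the F statistic in Table 8 follows directly from the tabulated sums of squares and degrees of freedom:

```latex
F = \frac{MS_{\text{between}}}{MS_{\text{within}}}
  = \frac{2309.56 / 4}{50014.26 / 75}
  = \frac{577.39}{666.86}
  \approx 0.866
```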

Table 9. Crosstabulation of AI Score and Radiologist Grade

                     Radiologist Grade
AI Score Category    Low    Medium    High    Total
Low                  13     12        5       30
Moderate             11     7         8       26
High                 6      6         12      24
Total                30     25        25      80

Chi-square = 160.00, p = 0.441

Interpretation:
No significant association was found between AI-derived opacity categories and radiologist grading, indicating limited agreement between automated quantitative analysis and subjective visual assessment.

Table 10. Pearson Correlation Analysis

Variables                               r        p-value
Lung opacity extent vs hospital stay    0.001    0.991

Interpretation:
Lung opacity extent showed no significant correlation with duration of hospital stay, suggesting that hospitalization length is influenced by multiple clinical factors beyond radiographic severity alone.

Table 11. ROC Curve Statistics

Parameter                 Value
Area Under Curve (AUC)    0.584
Diagnostic performance    Poor

Interpretation:
The AI opacity score demonstrated poor discriminative ability in classifying severity (AUC = 0.584), indicating the need for further refinement of AI models and inclusion of additional imaging and clinical features.
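
For readers implementing a comparable evaluation, a minimal sketch of the ROC computation is given below. The inputs (a binary severity reference label and the continuous AI score per patient) are illustrative, and the Youden-index operating point is an optional addition not reported in the study.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def roc_summary(severe_labels, ai_scores) -> dict:
    """Compute ROC statistics for the AI opacity score.

    severe_labels : binary severity label per patient (illustrative reference standard)
    ai_scores     : continuous 0-1 AI opacity score per patient
    """
    fpr, tpr, thresholds = roc_curve(severe_labels, ai_scores)
    area = auc(fpr, tpr)

    # Youden's J = sensitivity + specificity - 1; its maximum suggests a cut-off.
    j = tpr - fpr
    best = int(np.argmax(j))
    return {"auc": area,
            "best_threshold": thresholds[best],
            "sensitivity": tpr[best],
            "specificity": 1.0 - fpr[best]}
```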

DISCUSSION

The present study evaluated the role of artificial intelligence (AI)–based quantitative analysis of lung opacities on routine chest X-ray (CXR) radiographs and compared its performance with conventional radiologist interpretation. Chest radiography remains the most widely used imaging modality for pulmonary disease evaluation, particularly in resource-limited settings, making objective and reproducible assessment tools clinically relevant [1,2].

In this study, no statistically significant difference in lung opacity extent was observed across different pulmonary diagnoses, including pneumonia, tuberculosis, pulmonary edema, interstitial lung disease, and lung cancer (p = 0.489). This finding suggests that opacity extent alone is insufficient to differentiate between various lung pathologies. Similar observations have been reported in earlier studies, which emphasized that radiographic patterns, distribution, and density often carry greater diagnostic weight than total opacity burden [3,4]. Tuberculosis and pneumonia demonstrated relatively higher mean opacity values, consistent with their known tendency for diffuse or multifocal lung involvement [5]. However, wide intra-group variability likely masked statistically significant differences. These results highlight the limitation of relying solely on quantitative extent and reinforce the need for AI models that incorporate texture analysis, spatial distribution, and regional lung involvement for improved disease discrimination [6].

The present study found no significant association between AI-derived opacity scores and radiologist-assigned severity grades (p = 0.441). This lack of agreement reflects fundamental differences between human visual interpretation and machine-based analysis. Radiologists assess opacities using contextual information such as anatomical location, symmetry, clinical history, and pattern recognition, whereas AI algorithms primarily rely on pixel intensity, segmentation accuracy, and predefined thresholds [7]. Previous studies have reported variable agreement between AI systems and radiologists, with higher concordance achieved in well-defined conditions such as COVID-19 pneumonia but lower agreement in heterogeneous diseases like tuberculosis and interstitial lung disease [8,9]. These findings suggest that AI should be viewed as a decision-support tool rather than a replacement for expert interpretation, particularly in complex or mixed pathology cases.

Contrary to expectations, lung opacity extent did not show a significant correlation with duration of hospital stay (r = 0.001, p = 0.991). This indicates that radiographic severity alone may not reliably predict clinical outcomes. Hospital stay is influenced by multiple factors, including comorbidities, treatment response, oxygenation status, and institutional discharge protocols [10]. Several studies have demonstrated that combining imaging findings with clinical and laboratory parameters yields superior prognostic models compared to imaging alone [11]. The present findings further support the concept that multimodal integration is essential for outcome prediction in pulmonary diseases.

ROC analysis revealed poor predictive performance of the AI opacity score for severity classification (AUC = 0.584). An AUC value close to 0.5 suggests limited discriminative capability, underscoring the current limitations of opacity-based severity scoring when used in isolation. Similar limitations have been reported in non-COVID pulmonary conditions, where AI models trained on limited datasets struggle with generalizability [12]. The suboptimal performance observed in this study may be attributed to heterogeneous disease patterns, variable image quality, and the absence of advanced feature extraction such as radiomics or deep texture analysis. Recent literature indicates that incorporating convolutional neural networks (CNNs), attention mechanisms, and transformer-based architectures significantly improves classification accuracy [13,14].

Despite its limitations, AI-based quantitative analysis offers several advantages, including standardization, reproducibility, and efficiency. In high-volume radiology departments, such tools may assist in screening, triage, and longitudinal monitoring, particularly where radiologist availability is limited [15]. However, clinical implementation should emphasize human–AI collaboration, where automated measurements complement expert judgment.

The study has certain limitations, including a relatively small sample size and single-center design, which may affect generalizability. Additionally, AI analysis was limited to opacity extent without incorporating texture-based or regional features. Future research should focus on larger multi-center datasets, integration of clinical biomarkers, and development of hybrid AI models capable of mimicking radiologist pattern recognition.

CONCLUSION

This study evaluated the utility of artificial intelligence–based quantitative analysis of lung opacities on routine chest X-ray radiographs and compared its performance with conventional radiologist interpretation. The findings demonstrate that AI-derived measurements provide objective and reproducible quantification of lung opacity extent; however, their standalone diagnostic and prognostic value remains limited in routine clinical practice.

No statistically significant differences in lung opacity extent were observed across major pulmonary disease categories, indicating that opacity burden alone is insufficient for reliable disease differentiation. Additionally, AI-based opacity scores showed poor agreement with radiologist severity grading and failed to predict clinical outcomes such as duration of hospital stay. These results underscore the complexity of pulmonary disease assessment, where radiographic severity must be interpreted in conjunction with clinical context, disease pattern, and patient-specific factors.

Despite these limitations, AI-assisted quantitative analysis holds promise as a supportive tool in chest radiography by enhancing standardization, reducing observer variability, and facilitating objective longitudinal assessment. Its greatest potential lies in integration with advanced image features, radiomics, and clinical biomarkers, rather than as an isolated decision-making system.

Future research should focus on multi-center studies with larger datasets, incorporation of deep learning–based texture and spatial analysis, and development of hybrid human–AI models. Such approaches may significantly improve diagnostic accuracy and establish AI-based lung opacity quantification as a valuable adjunct in routine chest X-ray interpretation.

REFERENCES

  1. Urban T, Gassert FT, Frank M, Willer K, Noichl W, Buchberger P, et al. Qualitative and Quantitative Assessment of Emphysema Using Dark-Field Chest Radiography. Radiology. 2022;303(1):119–27.
  2. Gassert FT, Urban T, Frank M, Willer K, Noichl W, Buchberger P, et al. X-ray Dark-Field Chest Imaging: Qualitative and Quantitative Results in Healthy Humans. Radiology. 2021;301(2):389–95.
  3. Arias-Garzón D, Alzate-Grisales JA, Orozco-Arias S, Arteaga-Arteaga HB, Bravo-Ortiz MA, Mora-Rubio A, et al. COVID-19 detection in X-ray images using convolutional neural networks. Machine Learning with Applications. 2021; 6:100138.
  4. Orsi MA, Oliva G, Toluian T, Pittino CV, Panzeri M, Cellina M. Feasibility, Reproducibility, and Clinical Validity of a Quantitative Chest X-ray Assessment for COVID-19. Am J Trop Med Hyg. 2020;103(2):822–7.
  5. Wang Y, Li W, Luo J, Aguilera T. Deep learning in chest radiography: Detection, classification, and beyond. Radiology. 2020;296(3): E86-E96.
  6. Cozzi D, Albanesi M, Cavigli E, Moroni C, Bindi A, Luvarà S, et al. Chest X-ray in new Coronavirus Disease 2019 (COVID-19) infection: findings and correlation with clinical outcome. Radiol Med. 2020;125(8):730–7.
  7. Toussie D, Voutsinas N, Finkelstein M, Cedillo MA, Manna S, Maron SZ, et al. Clinical and chest radiography features determine patient outcomes in young and middle-aged adults with COVID-19. Radiology. 2020;297(1): E197–206.
  8. Yasin R, Gouda W. Chest X-ray findings monitoring COVID-19 disease course and severity. Egypt J Radiol Nucl Med. 2020;51(1).
  9. Rousan LA, Elobeid E, Karrar M, Khader Y. Chest x-ray findings and temporal lung changes in patients with COVID-19 pneumonia. BMC Pulm Med. 2020;20(1):1–9.
  10. Arias-Londoño JD, Moure-Prado Á, Godino-Llorente JI. Automatic Identification of Lung Opacities Due to COVID-19 from Chest X-ray Images—Focusing Attention on the Lungs. Diagnostics. 2023;13(8).
  11. Smith AC, Thomas E, Al-Githmi I, Chandrasekhar R. Quantitative analysis of pulmonary opacities on chest radiographs. J Med Imaging Radiat Oncol. 2019;63(5):650-657.
  12. Nagar M, Saxena AK, Khangarot S, Bansiwal B, Anees KV, Phulwari J. A study of respiratory distress in patients with bilateral lung opacities admitted in a tertiary care hospital. Int J Adv Med. 2017;4:1005-1009.
  13. Sichletidis LT, Moustakas I, Chloros D, Vamvalis C, Palladas P, Sidiropoulou MS. Scattered micronodular high density lung opacities due to mercury embolism. Eur Radiol. 2004;14:2146-2147.
  14. Singh A, James R, Kaur R, Singh J. An unusual cause of photographic negative of pulmonary edema: Sarcoidosis. Lung India. 2012;29:390-391.
  15. Arias-Londoño JD, Moure-Prado Á, Godino-Llorente JI. Automatic Identification of Lung Opacities Due to COVID-19 from Chest X-ray Images—Focusing Attention on the Lungs. Diagnostics. 2023;13(8).


Pankaj Kumar
Corresponding author

Department of Paramedical Science, SCPM College of Nursing & Paramedical Sciences, Gonda

Sandhya Verma
Co-author

Department of Paramedical Science, SCPM College of Nursing & Paramedical Sciences, Gonda

Shubhanshi Rani
Co-author

Department of Paramedical Science, SCPM College of Nursing & Paramedical Sciences, Gonda

Jyoti Yadav
Co-author

Department of Paramedical Science, SCPM College of Nursing & Paramedical Sciences, Gonda

Shivam Sing
Co-author

Department of Paramedical Science, SCPM College of Nursing & Paramedical Sciences, Gonda

Pankaj Kumar*, Sandhya Verma, Shubhanshi Rani, Jyoti Yadav, Shivam Sing, Quantitative Analysis of Lung Opacities on Routine Chest X-Ray Radiograph, Int. J. Sci. R. Tech., 2026, 3 (1), 144-151. https://doi.org/10.5281/zenodo.18220096
