What is it about?
A new study found that ChatGPT-4, a popular AI language model, gives inconsistent risk assessments for patients with chest pain not caused by injury, even though its scores correlate strongly overall with established risk-scoring tools. Because ChatGPT-4 returned varying scores when given identical patient data, the authors question its reliability for clinical decision-making in evaluating these patients.
Why is it important?
This study is the first to comprehensively evaluate ChatGPT-4's ability to assess heart attack risk in patients with chest pain not caused by injury. The findings are timely and important because they highlight the need for further refinement and customization of AI language models before they can be safely integrated into clinical practice for this purpose. Addressing these limitations could help unlock the potential of AI to improve cardiac risk assessment and patient care.
Read the Original
This page is a summary of: ChatGPT provides inconsistent risk-stratification of patients with atraumatic chest pain, PLoS ONE, April 2024, PLOS, DOI: 10.1371/journal.pone.0301854.