What is it about?
Artificial intelligence (AI) is increasingly used in healthcare, from diagnosing diseases to planning treatments. While AI can bring many benefits, it also creates new risks for patients, such as unfair treatment, loss of privacy, and decisions that are hard to understand or challenge. This article proposes a new tool called the Patients’ Rights Impact Assessment (PRIA), which helps doctors, hospitals, and technology companies check whether their AI systems respect patients’ rights. By applying this tool before an AI system is introduced, healthcare providers can help ensure that patients are treated fairly and their rights are protected.
Why is it important?
AI is changing healthcare, helping doctors make faster and better decisions. But without safeguards, these technologies can harm patients, for example by making unfair decisions, invading privacy, or leaving people without clear answers about their care. It is important to ensure that AI respects patients’ rights, such as fairness, privacy, and the right to good care. This matters because everyone deserves to be treated safely and with dignity when receiving medical help. A tool like the Patients’ Rights Impact Assessment helps protect patients before problems happen.
Read the Original
This page is a summary of: A Human Rights-Based Approach to Artificial Intelligence in Healthcare: A Proposal for a Patients’ Rights Impact Assessment Tool, March 2025, De Gruyter,
DOI: 10.1163/9789004708389_004.