What is it about?

Artificial intelligence (AI) is rapidly becoming part of everyday healthcare. It can read scans, suggest treatments, and even help perform surgery. These systems promise faster diagnoses, more personalised care, and better outcomes for patients. But they also create new legal and ethical questions. If an AI system makes a mistake, who should be held accountable—the doctor who used it, the hospital that installed it, or the company that built it?

For decades, medical negligence law in the UK has been based on the “Bolam test,” which compares a doctor’s actions with those of a “responsible body” of other doctors. This works well when only humans are making decisions. But as AI systems become the norm, the law may shift. Doctors could be expected to use AI when it is clearly beneficial—or conversely be held responsible if they follow an AI recommendation that turns out to be wrong.

This article explores how the law might adapt. It looks at whether hospitals should carry more of the risk, whether software developers should be held responsible (as in new European Union rules), and how patients can give proper informed consent when AI is involved. It also explains the “black box” problem—the difficulty of understanding how AI reaches its decisions—and why that makes proving fault hard for injured patients. Ultimately, the piece calls for doctors, hospitals, and developers to work with policymakers to set fair rules. Clear, modern legal standards will protect patients and give health professionals confidence to use new technology responsibly.


Why is it important?

This article is timely because AI is moving from pilot projects to everyday clinical practice. Systems that once seemed futuristic—reading X-rays, planning treatments, even performing parts of operations—are now being deployed in hospitals. Yet the law still mostly assumes a human decision-maker. Without clear rules, doctors and hospitals risk being blamed for outcomes they cannot fully control, while patients may struggle to obtain redress if harmed by an algorithm. The piece draws together developments in English negligence law with the EU’s new AI and Product Liability Directives, showing how legal thinking is shifting on both sides of the Channel. It highlights not just the risks of using AI but also the risks of not using it, as courts may one day see AI as part of the standard of care. This makes it essential for clinicians, policymakers, and technology developers to engage now rather than wait for a crisis.

Perspectives

As a barrister working at the intersection of healthcare and law, I see daily how technological change outpaces legal frameworks. AI is no longer an abstract idea—it is influencing real clinical decisions about diagnosis, treatment and patient safety. I wrote this article because I believe doctors and hospitals need clarity and protection just as much as patients do. If clinicians fear being sued for using (or not using) AI, they may shy away from tools that could save lives. By identifying the legal gaps now, we can shape rules that encourage innovation while safeguarding patients. My hope is that this article helps spark informed debate and proactive regulation rather than reactive litigation after harm has occurred.

Robert Kellar

Read the Original

This page is a summary of: AI in Healthcare: Redefining Liability for Doctors and Hospitals, British Journal of Hospital Medicine, September 2025, Mark Allen Group. DOI: 10.12968/hmed.2025.0212.
You can read the full text via the DOI above.


Contributors

The following have contributed to this page: Robert Kellar.