What is it about?
This article explains how artificial intelligence (AI) is being used in Australian healthcare and what this means for clinicians, patients, and regulators. It reviews developments from 2020–2025 and looks at:

- How AI is currently used in diagnosis, documentation, triage, and data analysis.
- How Australia compares with the United States in adoption of, and attitudes toward, AI.
- What regulators require, including the roles of Ahpra, the National Boards, the Therapeutic Goods Administration (TGA), and the Australian Privacy Principles.
- What concerns patients have, including trust, safety, explainability, and data ownership.
- Key risks, such as algorithmic bias, privacy breaches, cross-border cloud storage, and unclear legal accountability.
- How clinicians and organisations can implement AI responsibly, including governance, training, patient communication, and ethical oversight.

In short, it is a national-level summary of how Australia is preparing for AI in healthcare and what challenges remain before safe and equitable adoption can be achieved.
Photo by Markus Winkler on Unsplash
Why is it important?
AI promises to make healthcare more efficient, personalised, and accessible, but it also brings new risks. This article is important because:

1. It highlights gaps in Australia's regulatory and ethical landscape. Australian healthcare regulation is polycentric: responsibilities are split across Ahpra, the National Boards, the TGA, the Privacy Act, and digital health policy. This complexity makes it difficult for clinicians to know which rules apply. The article clarifies this evolving landscape.

2. Patient trust is lagging behind technological innovation. Findings in the article show that patients are cautious about AI, especially regarding privacy, data sharing, and AI involvement in decision-making. Without trust, adoption will stall, even where the technology is available.

3. Bias and inequity are real dangers. The article provides evidence that AI systems trained on unrepresentative data can worsen health inequities, particularly for Indigenous Australians and other underrepresented groups. This makes ethical oversight and diverse datasets essential.

4. Clinicians remain fully accountable under law and professional codes. Even when AI tools are used, clinicians are still responsible for decisions. Understanding this responsibility, and how to meet it, is crucial for safe practice.

5. Australia needs a coordinated strategy. The review argues that successful AI adoption requires:
   - National policy alignment
   - Organisational readiness
   - Clinician training
   - Transparent communication with patients
   - Strong governance and data protection

These insights are valuable for policymakers, health administrators, and clinical leaders.
Perspectives
Clinician Perspective
Clinicians see AI as a potential support tool but remain cautious about losing control over decision-making, medico-legal risks, and impacts on patient trust. Training and clear guidelines are needed to bridge this confidence gap.

Patient Perspective
Patients tend to accept AI when it assists clinicians (for example, in monitoring or documentation) but are far less comfortable with AI performing autonomous tasks (such as robotic surgery or diagnostic decisions). Privacy concerns and data misuse remain major barriers.

Regulatory Perspective
Regulators are racing to keep up with technological change. Ahpra focuses on professional obligations, the TGA on device safety, and the OAIC on privacy and data sovereignty. But uncertainty persists, especially around generative AI tools such as digital scribes, which sometimes cross into "diagnostic suggestion" territory and may require TGA approval.

Ethical Perspective
Bias is the iceberg beneath the surface: unrepresentative data, historical inequities, and flawed algorithm design can all undermine fairness. The article argues strongly that bias management should be a core requirement in AI development and deployment.

System & Policy Perspective
For Australia to benefit from AI, healthcare organisations must invest in governance, workforce readiness, and patient-centred communication. Coordination between national agencies is essential to prevent fragmented or unsafe adoption.
Professor Chi Eung Danforn Lim
University of Technology Sydney
Read the Original
This page is a summary of: Responsible use of AI in healthcare: an Australian perspective on promise, perils, and professional duties, AI and Ethics, November 2025, Springer Science + Business Media.
DOI: 10.1007/s43681-025-00892-5.