What is it about?
We aim to learn what it takes to prevent harm in present-day AI systems by drawing on the history of system safety. This field has been tasked with safeguarding software-based automation in safety-critical domains such as aviation and medicine. Since the 1950s it has grappled with the increasing complexity of automated systems and has drawn up concrete lessons for how to control for safety and prevent harms and accidents. These lessons build on the seminal work of system safety pioneer Professor Nancy Leveson.
Why is it important?
We are seeing a plethora of new harms and failures emerging from novel AI applications, often with vulnerable people bearing the brunt of ill-designed or ill-governed AI technology, ranging from automated decision making in welfare systems to self-driving cars and misinformation online. Meanwhile, many of the lessons from system safety about what it actually takes to build safe systems have yet to be integrated into the development and governance of AI systems. This presents opportunities to build safer and more responsible systems, but above all it offers ways to contest and critique currently unsafe implementations in order to address and stop emergent forms of algorithmic harm.
Read the Original
This page is a summary of: System Safety and Artificial Intelligence, June 2022, ACM (Association for Computing Machinery), DOI: 10.1145/3531146.3533215.