What is it about?

This research tackles a major challenge with Artificial Intelligence (AI) systems: making sure they stay fair, respect people's privacy, and remain safe to use even after they're deployed in the real world. We developed a way to continuously monitor these AI systems while they run, so problems can be spotted quickly, along with a tool that can automatically set up this monitoring. This helps the people who build AI confirm that their systems are behaving properly, and fix them when they're not.
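To give a flavour of what "watching an AI system while it runs" can mean, here is a minimal sketch of a runtime fairness monitor. It is purely illustrative and not the paper's actual tool: the metric (demographic parity), the sliding-window size, and the disparity threshold are all assumptions chosen for the example.

```python
from collections import deque

class FairnessMonitor:
    """Sliding-window runtime monitor for demographic parity.

    Illustrative sketch only: the metric, window size, and
    threshold are assumptions, not the paper's actual tool.
    """

    def __init__(self, window_size=1000, max_disparity=0.1):
        # Keep only the most recent decisions so the check
        # reflects current behaviour, not all-time history.
        self.window = deque(maxlen=window_size)
        self.max_disparity = max_disparity

    def record(self, group, prediction):
        """Log one model decision: the person's group and the binary outcome."""
        self.window.append((group, prediction))

    def positive_rate(self, group):
        """Fraction of favourable (1) outcomes for one group in the window."""
        outcomes = [p for g, p in self.window if g == group]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0

    def check(self, group_a, group_b):
        """Return (is_fair, disparity) between two groups' positive rates."""
        disparity = abs(self.positive_rate(group_a) - self.positive_rate(group_b))
        return disparity <= self.max_disparity, disparity
```

In use, each model decision is fed to `record`, and `check` is called periodically; if the gap between groups exceeds the threshold, the system can raise an alert so developers step in. The paper's contribution goes further by generating such monitoring setups automatically from models, rather than hand-writing them like this.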


Why is it important?

This work is timely because many Artificial Intelligence (AI) systems in use today struggle to meet standards for fairness and privacy. As more companies and organizations rely on AI for important decisions, it's crucial to keep a close eye on these systems to make sure they're not harming anyone or treating people unfairly. What's new about this research is that it makes it much easier for the people who build AI to set up this kind of monitoring, which makes it more likely that AI will be used responsibly in the real world.

Perspectives

Writing this paper was an exciting opportunity. We explored ways to make Artificial Intelligence (AI) systems better behaved and easier to understand, and it was great to work with my team to build a new tool. I really enjoyed the whole process, and I hope our work inspires others to come up with new ideas for making AI more responsible. We also want to help build AI systems that not only work well but also respect important human values, like being fair to everyone and keeping people's information private.

Hira Naveed
Monash University

Read the Original

This page is a summary of: Towards Runtime Monitoring for Responsible Machine Learning using Model-driven Engineering, September 2024, ACM (Association for Computing Machinery), DOI: 10.1145/3640310.3674092.
