What is it about?
This research tackles a big problem with Artificial Intelligence (AI) systems: making sure they are fair, respect people's privacy, and stay safe to use, even after they're put to work in the real world. We came up with a way to watch these AI systems closely while they run and spot any problems quickly. We also built a tool that can automatically set up these watching systems, which helps the people who build AI check that their creations are behaving properly and fix them if they're not.
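The paper itself describes a model-driven engineering approach that generates these monitors automatically; as a loose illustration of the general idea, here is a minimal Python sketch of one kind of runtime check such a monitor might perform. Everything in it (the FairnessMonitor class, the window size, the alert threshold) is a hypothetical example for illustration, not the authors' actual tool.

```python
from collections import deque

# A minimal sketch of a runtime fairness monitor (illustrative only).
# It tracks the rate of positive predictions per demographic group over
# a sliding window of recent decisions and raises an alert when the gap
# between groups (a demographic-parity-style check) exceeds a threshold.

class FairnessMonitor:
    def __init__(self, window_size=1000, threshold=0.1):
        self.window = deque(maxlen=window_size)  # recent (group, prediction) pairs
        self.threshold = threshold               # maximum tolerated rate gap

    def observe(self, group, prediction):
        """Record one model decision and re-check the fairness property."""
        self.window.append((group, int(prediction)))
        gap = self._parity_gap()
        if gap is not None and gap > self.threshold:
            print(f"ALERT: positive-prediction rate gap {gap:.2f} "
                  f"exceeds threshold {self.threshold}")

    def _parity_gap(self):
        """Difference between the highest and lowest positive-prediction rates."""
        totals, positives = {}, {}
        for group, pred in self.window:
            totals[group] = totals.get(group, 0) + 1
            positives[group] = positives.get(group, 0) + pred
        if len(totals) < 2:
            return None  # need at least two groups to compare
        rates = [positives[g] / totals[g] for g in totals]
        return max(rates) - min(rates)

# Example: feed the monitor each decision the deployed model makes.
monitor = FairnessMonitor(window_size=500, threshold=0.15)
monitor.observe(group="A", prediction=1)
monitor.observe(group="B", prediction=0)
```

In the approach the paper proposes, developers would not hand-write checks like this one; the monitoring setup is generated for them, lowering the effort needed to keep deployed models under observation.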
Featured Image: Photo by Igor Omilaev on Unsplash
Why is it important?
This work is important right now because many AI systems in use today struggle to meet standards for fairness and privacy. As more companies and organizations rely on AI for important jobs, it's crucial to keep a close eye on these systems to make sure they're not harming anyone or treating people unfairly. What's new about this research is that it makes it much easier for the people who build AI to set up these watching systems, which makes it more likely that AI will be used responsibly in the real world.
Read the Original
This page is a summary of: Towards Runtime Monitoring for Responsible Machine Learning using Model-driven Engineering, September 2024, ACM (Association for Computing Machinery), DOI: 10.1145/3640310.3674092.