What is it about?
Recent advancements in multi-agent reinforcement learning (MARL) have opened up vast application prospects, such as drone swarm control, collaborative manipulation by robotic arms, and multi-target encirclement. However, the potential security threats that arise when MARL systems are deployed have received insufficient attention and warrant thorough investigation.
Why is it important?
Recent research reveals that attackers can rapidly exploit a victim's vulnerabilities, generating adversarial policies that cause the victim to fail at specific tasks; one such attack reduced the win rate of a superhuman-level Go AI to around 20%. However, existing studies focus predominantly on two-player competitive environments and assume that the attacker can observe the complete global state, leaving the more realistic setting, in which the attacker observes the environment only partially, largely unexplored.
Read the Original
This page is a summary of: SUB-PLAY: Adversarial Policies against Partially Observed Multi-Agent Reinforcement Learning Systems, December 2024, ACM (Association for Computing Machinery), DOI: 10.1145/3658644.3670293.