What is it about?
NeuGuard provides a framework that protects training data from leakage through membership inference attacks mounted in both the training and inference phases. The goal of the approach is to reduce the difference in the distribution of output confidence scores between training and test data. With a more uniform output distribution, the model can fool such attacks while still maintaining high utility.
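To make the idea concrete, here is a minimal sketch (assuming PyTorch) of how one might push a classifier's softmax outputs toward a more uniform distribution during training. This only illustrates the general principle; it is not NeuGuard's actual neuron-guided loss, and the function name `uniformity_regularized_loss` and the weight `lam` are hypothetical.

```python
import math

import torch.nn.functional as F


def uniformity_regularized_loss(logits, labels, lam=0.1):
    # Hypothetical sketch: NOT the NeuGuard loss, just the general idea of
    # regularizing softmax outputs toward the uniform distribution.
    ce = F.cross_entropy(logits, labels)    # standard task loss
    log_p = F.log_softmax(logits, dim=1)    # per-class log-probabilities
    num_classes = logits.size(1)
    # KL(uniform || p) = -(1/C) * sum_c log p_c - log C, averaged over the batch.
    kl_to_uniform = (-log_p.mean(dim=1) - math.log(num_classes)).mean()
    return ce + lam * kl_to_uniform
```

A larger `lam` flattens the confidence scores more aggressively, shrinking the train/test gap that membership inference attacks exploit, at some cost in accuracy; that trade-off is the utility-vs-privacy balance described above.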
Why is it important?
We find that two existing neural-network-based privacy attacks behave differently against existing defenses. Our proposed defense can be more efficient and effective against these different membership inference attacks.
Read the Original
This page is a summary of: NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks, December 2022, ACM (Association for Computing Machinery). DOI: 10.1145/3564625.3567986.