What is it about?

Many machine-learning-based applications depend on data contributed by users, e.g., web clicks, reviews, and social network profiles. With the proposed tool, FT-PrivacyScore, users can estimate the privacy risk of participating in a fine-tuning-based machine learning task.


Why is it important?

Under recently established privacy laws, data contributors have the right to decide whether they want their data to be forgotten by a data owner's machine learning task. However, until now there has been no tool that lets users estimate the risk of participating in such a task and make an informed decision. The proposed tool fills this gap.

Perspectives

The approach is a novel application of membership inference attacks (MIAs) to estimate the risk that a sample can be identified as a member of a machine learning training dataset. The resulting risk score is closely linked to, and justified by, differential privacy theory; a rough code sketch of the idea follows below.

Keke Chen
University of Maryland Baltimore County
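To make the idea concrete, here is a minimal Python sketch of a likelihood-ratio style membership inference score, in the spirit of shadow-model MIAs. It is an illustration only, not FT-PrivacyScore's actual implementation; the function and variable names (mia_privacy_score, the loss inputs) are hypothetical, and the Gaussian model of the loss distributions is a common MIA heuristic assumed here for brevity.

    # Hypothetical sketch, not FT-PrivacyScore's code: score one sample's
    # membership-inference risk from shadow-model loss statistics.
    import numpy as np

    def mia_privacy_score(losses_in, losses_out, target_loss):
        # losses_in:  the sample's losses under shadow models trained WITH it
        # losses_out: its losses under shadow models trained WITHOUT it
        # target_loss: its loss under the fine-tuned model being scored
        mu_in, sd_in = np.mean(losses_in), np.std(losses_in) + 1e-8
        mu_out, sd_out = np.mean(losses_out), np.std(losses_out) + 1e-8

        # Gaussian log-density of the observed loss under each hypothesis.
        def log_pdf(x, mu, sd):
            return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)

        # Log-likelihood ratio: "member" vs. "non-member" hypothesis.
        llr = log_pdf(target_loss, mu_in, sd_in) - log_pdf(target_loss, mu_out, sd_out)
        # Squash to a [0, 1] risk score: higher means easier to identify.
        return 1.0 / (1.0 + np.exp(-llr))

    # Toy usage with synthetic loss distributions.
    rng = np.random.default_rng(0)
    score = mia_privacy_score(rng.normal(0.2, 0.05, 64),   # member-like losses
                              rng.normal(0.9, 0.20, 64),   # non-member losses
                              target_loss=0.25)
    print(f"privacy risk score: {score:.2f}")   # near 1.0 -> high risk

A sample whose loss looks much more typical of models trained with it than without it gets a score near 1, i.e., a high risk of being identified as a participant.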

Read the Original

This page is a summary of: Demo: FT-PrivacyScore: Personalized Privacy Scoring Service for Machine Learning Participation, December 2024, ACM (Association for Computing Machinery). DOI: 10.1145/3658644.3691366.
You can read the full text via the DOI link above.
