What is it about?

One criticism of data- and algorithm-driven intelligent systems is that their results are viewed as unfair or inequitable by individuals who hold fairness criteria other than those embedded in the system design. Indeed, computer and data scientists acknowledge that unfairness can reside in intelligent systems, and various approaches have been proposed to make such systems fair. However, current efforts to design fair intelligent systems overlook a fundamental issue: fairness is in the eye of the beholder. That is, in most domains the concept of fairness is highly subjective. Based on the premise that fairness is subjective, we propose a framework to represent and quantify individuals' subjective fairness beliefs, along with methodologies to aggregate them. The proposed approach provides insight into how a population will assess the fairness of a decision or policy, which in turn can guide policy-making as well as the design of intelligent systems.
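To make the aggregation idea concrete, here is a minimal sketch, not the paper's actual model: it assumes each individual's subjective fairness belief about a decision can be summarized as a score in [0, 1], and aggregates those scores by averaging. The function name, the threshold parameter, and the scoring scale are all hypothetical illustrations.

```python
# Hypothetical sketch (not the paper's model): each individual's subjective
# fairness belief about a decision is a score in [0, 1], where 1 means the
# individual fully believes the decision is fair.

def aggregate_beliefs(beliefs, fair_threshold=0.5):
    """Summarize a population's fairness beliefs.

    Returns the mean belief and the fraction of individuals whose
    belief exceeds fair_threshold.
    """
    if not beliefs:
        raise ValueError("need at least one belief")
    mean_belief = sum(beliefs) / len(beliefs)
    share_deeming_fair = sum(b > fair_threshold for b in beliefs) / len(beliefs)
    return mean_belief, share_deeming_fair

# Example: five individuals assess a proposed policy.
mean_b, share = aggregate_beliefs([0.9, 0.2, 0.7, 0.6, 0.4])
```

Even this toy version illustrates the paper's motivating point: the same decision can look fair to a majority while leaving a substantial minority who judge it unfair, so a single aggregate number can hide real disagreement.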


Why is it important?

This paper pioneers the quantification of individuals' subjective fairness beliefs and proposes a framework for aggregating a population's beliefs to support fair policy-making and system design. This matters as controversies grow over what counts as a fair policy or a fair intelligent system.

Perspectives

I am glad that this work appeared in ACM Transactions on Management Information Systems. I hope it will inform policy-making and system-design practice wherever fairness is a significant concern.

Chenglong Zhang
The Chinese University of Hong Kong, Shenzhen

Read the Original

This page is a summary of: Modeling Individual Fairness Beliefs and Its Applications, ACM Transactions on Management Information Systems, August 2024, ACM (Association for Computing Machinery),
DOI: 10.1145/3682070.
You can read the full text:


