On the Impact of Output Perturbation on Fairness in Binary Linear Classification - Université de Lille
Preprint, Working Paper. Year: 2024


Abstract

We theoretically study how differential privacy interacts with both individual and group fairness in binary linear classification. More precisely, we focus on the output perturbation mechanism, a classic approach in privacy-preserving machine learning. We derive high-probability bounds on the level of individual and group fairness that the perturbed models can achieve compared to the original model. For individual fairness, we prove that the impact of output perturbation on the level of fairness is bounded but grows with the dimension of the model. For group fairness, we show that this impact is determined by the distribution of so-called angular margins, that is, the signed margins of the non-private model re-scaled by the norm of each example.
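A minimal sketch of the two objects the abstract names, output perturbation and angular margins, on toy data. The noise scale `sigma`, the regularization strength, and the training loop are hypothetical illustration choices; in the actual mechanism the noise scale is calibrated to the sensitivity of the learner and the privacy budget.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian clusters, labels in {-1, +1}.
n, d = 200, 2
X = np.vstack([rng.normal(+1.0, 1.0, (n // 2, d)),
               rng.normal(-1.0, 1.0, (n // 2, d))])
y = np.concatenate([np.ones(n // 2), -np.ones(n // 2)])

# Train an L2-regularized logistic regression by gradient descent
# (a stand-in for the non-private binary linear classifier).
lam, lr = 0.1, 0.5
w = np.zeros(d)
for _ in range(500):
    grad = -(y / (1.0 + np.exp(y * (X @ w)))) @ X / n + lam * w
    w -= lr * grad

# Output perturbation: release the trained weights plus noise.
# sigma is a hypothetical scale; the true calibration depends on
# the sensitivity of the learner and the (epsilon, delta) budget.
sigma = 0.5
w_priv = w + rng.normal(0.0, sigma, size=d)

# Angular margins: signed margins of the non-private model,
# re-scaled by the norm of each example.
margins = y * (X @ w) / np.linalg.norm(X, axis=1)

# Examples with small angular margins are the ones whose predicted
# label is most likely to flip under the perturbation.
flipped = np.sign(X @ w_priv) != np.sign(X @ w)
print("fraction of flipped predictions:", flipped.mean())
```

By Cauchy-Schwarz, each angular margin is at most the norm of `w` in absolute value, which is what makes the re-scaled quantity a convenient handle for bounding how often noise of a given scale changes a prediction.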

Dates and versions

hal-04440982, version 1 (06-02-2024)

Identifiers

Cite

Vitalii Emelianov, Michaël Perrot. On the Impact of Output Perturbation on Fairness in Binary Linear Classification. 2024. ⟨hal-04440982⟩