Mitigating disparate impact on model accuracy in differentially private learning

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Techniques based on differential privacy (DP) have become a standard building block in the machine learning community. DP training mechanisms offer strong guarantees that an adversary analyzing the released model cannot determine, with high confidence, whether a given record was in the training data, let alone recover details of individual instances. However, DP may disproportionately affect underrepresented and relatively complicated classes, meaning that the reduction in utility (i.e., model accuracy) is unequal across classes. Existing work either neglects this adverse impact of DP or omits the influence of hyperparameters on the private learning procedure. This paper proposes a fair differential privacy algorithm (FairDP) to mitigate the disparate impact on each class's model accuracy. We cast the learning procedure as a bilevel programming problem, which integrates differential privacy with fairness. FairDP establishes a self-adaptive DP mechanism and dynamically adjusts the influence of instances in each class according to a theoretical bias-variance bound, while simultaneously preserving privacy guarantees. Our experimental evaluation on several benchmark datasets, in scenarios ranging from text to vision, shows that FairDP is effective at mitigating the disparate impact on model accuracy among classes while achieving state-of-the-art accuracy and fairness.
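To make the disparate-impact problem concrete, the following is a minimal, hypothetical sketch of one general mitigation idea the abstract alludes to: giving each class its own gradient-clipping bound in a DP-SGD-style step, so that harder or underrepresented classes (which tend to produce larger gradients) lose less signal to a one-size-fits-all clip. All names, bounds, and logic here are illustrative assumptions, not the paper's actual FairDP algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_step(per_example_grads, labels, clip_by_class, noise_multiplier):
    """One DP-SGD-style step with per-class clipping (illustrative only).

    Each example's gradient is clipped to its class's bound, the clipped
    gradients are summed, Gaussian noise is added, and the result is averaged.
    """
    clipped = []
    for g, y in zip(per_example_grads, labels):
        bound = clip_by_class[y]
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds its class's clip bound.
        clipped.append(g * min(1.0, bound / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise is calibrated to the largest clip bound (worst-case sensitivity).
    sigma = noise_multiplier * max(clip_by_class.values())
    noisy = total + rng.normal(0.0, sigma, size=total.shape)
    return noisy / len(per_example_grads)

# Toy batch: class 1 is rare and has systematically larger gradients.
grads = [rng.normal(0, 1, 4) for _ in range(8)] + \
        [rng.normal(0, 5, 4) for _ in range(2)]
labels = [0] * 8 + [1] * 2
clip_by_class = {0: 1.0, 1: 3.0}  # assumed bounds: rare class keeps more signal

g_avg = dp_step(grads, labels, clip_by_class, noise_multiplier=0.5)
print(g_avg.shape)
```

With a single global bound of 1.0, the rare class's large gradients would be clipped most aggressively; the per-class bounds trade a larger noise scale for better signal retention on that class. How to choose and adapt such bounds under a formal privacy and bias-variance analysis is precisely the kind of question the paper's bilevel formulation addresses.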

Original language: English
Pages (from-to): 108-126
Number of pages: 19
Journal: Information Sciences
Volume: 616
DOIs
State: Published - Nov 2022

Keywords

  • Bias-variance trade-off
  • Bilevel optimization
  • Differential Privacy
  • Fairness
