Abstract
In recent years, deep neural networks have been widely deployed in real-world decision-making systems. Unfair decisions made by such systems can exacerbate social inequality and cause social harm. Researchers have therefore carried out extensive work on the fairness of deep learning systems, but most of it focuses on group fairness, which cannot guarantee fairness within a group. To address this problem, we define two ways of measuring individual fairness: the individual fairness rate IFRb, based on output labels, which is the probability that two similar samples receive the same predicted label; and the individual fairness rate IFRp, based on output distributions, which is the probability that two similar samples receive similar predicted output distributions. The latter is the stricter notion of individual fairness. In addition, we propose an algorithm, IIFR, to improve the individual fairness of deep models. The algorithm measures the similarity between samples with cosine similarity, selects similar sample pairs using a similarity threshold chosen per application, and adds the output difference of the similar pairs to the objective function as an individual fairness loss term during training. This penalizes similar training samples whose model outputs differ greatly, thereby improving the individual fairness of the model. Experimental results show that IIFR outperforms the state-of-the-art method in improving individual fairness, and that it maintains group fairness while doing so.
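To make the definitions in the abstract concrete, the sketch below shows one plausible PyTorch realization of the cosine-similarity pair selection, the individual fairness loss term, and the two fairness rates. The abstract does not specify the exact output-difference measure, the similarity threshold, the IFRp tolerance, or the loss weight, so `threshold`, `eps`, `lambda_if`, and the use of L2 distance between softmax outputs are illustrative assumptions, not the authors' published implementation.

```python
# A minimal sketch of the IIFR idea, assuming a classification setting.
import torch
import torch.nn.functional as F

def similar_pairs(features: torch.Tensor, threshold: float = 0.95):
    """Return index pairs (i, j), i < j, with cosine similarity above threshold.

    The abstract states the threshold is chosen per application; 0.95 is
    only a placeholder value.
    """
    normed = F.normalize(features, dim=1)            # unit-length rows
    sim = normed @ normed.t()                        # pairwise cosine similarity
    idx_i, idx_j = torch.triu_indices(len(features), len(features), offset=1)
    mask = sim[idx_i, idx_j] > threshold
    return idx_i[mask], idx_j[mask]

def iifr_loss(logits, labels, idx_i, idx_j, lambda_if: float = 0.1):
    """Task loss plus a penalty on output differences of similar pairs."""
    task = F.cross_entropy(logits, labels)
    if len(idx_i) == 0:                              # no similar pairs in batch
        return task
    probs = F.softmax(logits, dim=1)
    diff = (probs[idx_i] - probs[idx_j]).pow(2).sum(dim=1).mean()
    return task + lambda_if * diff

def ifr_b(pred_labels, idx_i, idx_j):
    """IFRb: fraction of similar pairs with the same predicted label."""
    return (pred_labels[idx_i] == pred_labels[idx_j]).float().mean()

def ifr_p(probs, idx_i, idx_j, eps: float = 0.05):
    """IFRp: fraction of similar pairs whose output distributions are close.

    Closeness is measured here as L2 distance below eps; the paper's exact
    distance measure and tolerance are not given in the abstract.
    """
    dist = (probs[idx_i] - probs[idx_j]).norm(dim=1)
    return (dist < eps).float().mean()
```

During training, `iifr_loss` would replace the plain cross-entropy objective, with `similar_pairs` recomputed per batch; `ifr_b` and `ifr_p` would then be evaluated on held-out data.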
| Translated title of the contribution | Research on Fairness in Deep Learning Models |
|---|---|
| Original language | Traditional Chinese |
| Journal | Ruan Jian Xue Bao/Journal of Software |
| Volume | 34 |
| Issue | 9 |
| DOI | |
| Publication status | Published - 2023 |
Keywords
- deep learning
- group fairness
- individual fairness
- model bias