基于概率模型检查的树模型公平性验证方法

Translated title of the contribution: Fairness Verification Method of Tree-based Model Based on Probabilistic Model Checking

Research output: Contribution to journal › Article › peer-review
Abstract

More and more social decisions are made using machine learning models, including legal and financial decisions. For these decisions, the fairness of the algorithms is critical; indeed, one of the goals of introducing machine learning into these settings is to avoid or reduce human bias in decision-making. However, datasets often contain sensitive attributes that can cause machine learning algorithms to produce biased models. Because feature selection is central to tree-based models, such models are particularly susceptible to sensitive attributes. This study proposes a probabilistic model checking approach to formally verify fairness metrics of decision trees and tree ensemble models with respect to an underlying data distribution and given compound sensitive attributes. The fairness problem is transformed into a probabilistic verification problem, and different fairness metrics are measured. A tool called FairVerify is developed based on the proposed approach and validated on multiple classifiers across different datasets and compound sensitive attributes, showing sound performance. Compared with existing distribution-based verifiers, the method achieves higher scalability and robustness.
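To illustrate the idea of reducing a fairness question to an exact probability computation, the following is a minimal, hypothetical sketch (not the paper's FairVerify implementation): it checks demographic parity of a toy decision tree by summing, for each value of a sensitive attribute, the probability mass of inputs that reach a positive leaf. The tree, feature names, and the independent-Bernoulli feature distribution are all illustrative assumptions; the paper's method handles general distributions via probabilistic model checking rather than brute-force enumeration.

```python
from itertools import product

# Toy tree encoded as nested tuples: (feature, left_subtree, right_subtree),
# where left means feature == 0 and right means feature == 1; leaves are
# 0/1 predictions. All names and numbers here are illustrative.
TREE = ("income", ("gender", 0, 1), ("age", 0, 1))

# Assumed marginal probabilities P(feature = 1), features independent.
P1 = {"income": 0.4, "age": 0.5, "gender": 0.5}

def predict(tree, x):
    """Evaluate the decision tree on a complete feature assignment x."""
    while isinstance(tree, tuple):
        feat, left, right = tree
        tree = right if x[feat] == 1 else left
    return tree

def positive_rate(sensitive, value):
    """P(prediction = 1 | sensitive = value), computed exactly by
    enumerating all assignments (feasible only for tiny feature sets)."""
    feats = sorted(P1)
    total = 0.0
    for bits in product([0, 1], repeat=len(feats)):
        x = dict(zip(feats, bits))
        if x[sensitive] != value:
            continue
        # Joint probability of the non-sensitive features
        # (we condition on the sensitive attribute's value).
        p = 1.0
        for f in feats:
            if f != sensitive:
                p *= P1[f] if x[f] == 1 else 1 - P1[f]
        if predict(TREE, x) == 1:
            total += p
    return total

# Demographic parity difference: |P(y=1 | S=1) - P(y=1 | S=0)|
dpd = abs(positive_rate("gender", 1) - positive_rate("gender", 0))
print(f"demographic parity difference = {dpd:.3f}")
```

A distribution-based verifier generalizes this computation: instead of enumerating inputs, it encodes the tree's root-to-leaf paths and the input distribution as a probabilistic model and queries a model checker for the relevant conditional probabilities.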

Original language: Chinese (Traditional)
Pages (from-to): 2482-2498
Number of pages: 17
Journal: Ruan Jian Xue Bao/Journal of Software
Volume: 33
Issue number: 7
State: Published - Jul 2022
