Abstract
Ensemble trees are popular machine learning models that often yield high prediction performance when analysing structured data. Although individual small decision trees are deemed explainable by nature, an ensemble of large trees is often difficult to understand. In this work, we propose an approach called optimised explanation (OptExplain) that faithfully extracts global explanations of ensemble trees using a combination of logical reasoning, sampling, and nature-inspired optimisation. OptExplain is an interpretable surrogate model whose prediction ability is as close as possible to that of the original model. Building on top of this, we propose a method called the profile of equivalent classes (ProClass), which simplifies the explanation even further by solving the maximum satisfiability problem (MAX-SAT). ProClass gives the profile of the classes and features from the perspective of the model. Experiments on several datasets show that our approach provides high-quality explanations for large ensemble-tree models and outperforms recent top performers.
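The surrogate idea in the abstract can be illustrated with a minimal, self-contained sketch: a toy majority-vote "ensemble" is approximated by a single explainable rule, and fidelity measures how often the two agree. All trees, the rule, and the data below are hypothetical stand-ins; the actual OptExplain method extracts its surrogate via logical reasoning, sampling, and nature-inspired optimisation over real tree paths.

```python
import random

# Hypothetical toy "trees": each maps a 2-feature sample to a class in {0, 1}.
def tree1(x): return int(x[0] > 0.5)
def tree2(x): return int(x[0] + x[1] > 0.8)
def tree3(x): return int(x[1] > 0.3)

def ensemble_predict(x):
    """Majority vote over the three toy trees, as in a random forest."""
    votes = tree1(x) + tree2(x) + tree3(x)
    return int(votes >= 2)

def surrogate_predict(x):
    """Candidate interpretable surrogate: a single readable rule."""
    return int(x[0] > 0.5)

def fidelity(samples):
    """Fraction of samples on which the surrogate matches the ensemble."""
    agree = sum(ensemble_predict(x) == surrogate_predict(x) for x in samples)
    return agree / len(samples)

random.seed(0)
samples = [(random.random(), random.random()) for _ in range(1000)]
print(f"surrogate fidelity = {fidelity(samples):.3f}")
```

A faithful global explanation would maximise this fidelity while keeping the surrogate small; OptExplain searches that trade-off with nature-inspired optimisation rather than picking a rule by hand as done here.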
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 14371-14382 |
| Number of pages | 12 |
| Journal | Applied Intelligence |
| Volume | 53 |
| Issue number | 11 |
| DOIs | |
| State | Published - Jun 2023 |
Keywords
- Classification
- Decision rule extraction
- Explainable Artificial Intelligence (XAI)
- Random forest