Extracting optimal explanations for ensemble trees via automated reasoning

Gelin Zhang, Zhé Hóu, Yanhong Huang*, Jianqi Shi, Hadrien Bride, Jin Song Dong, Yongsheng Gao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Ensemble trees are a popular class of machine learning models that often yield high prediction performance when analysing structured data. Although individual small decision trees are deemed explainable by nature, an ensemble of large trees is often difficult to understand. In this work, we propose an approach called optimised explanation (OptExplain) that faithfully extracts global explanations of ensemble trees using a combination of logical reasoning, sampling, and nature-inspired optimisation. OptExplain produces an interpretable surrogate model whose predictive behaviour is as close as possible to that of the original model. Building on top of this, we propose a method called the profile of equivalent classes (ProClass), which simplifies the explanation even further by solving a maximum satisfiability (MAX-SAT) problem. ProClass gives the profile of the classes and features from the perspective of the model. Experiments on several datasets show that our approach provides high-quality explanations for large ensemble tree models and outperforms recent top performers.
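
As a concrete illustration of the rule-extraction step, the sketch below reads candidate decision rules off each tree of a scikit-learn random forest: every root-to-leaf path becomes one rule. This is only a minimal sketch under assumed setup, not the authors' OptExplain implementation; the dataset, the feature names x0..x3, and the helper leaf_rules are illustrative assumptions, and OptExplain would additionally filter and optimise the pooled rule set via sampling and nature-inspired search.

    # Minimal sketch (assumed setup, not the paper's code): enumerate one
    # rule per leaf of every tree in a random forest.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_iris(return_X_y=True)
    forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

    def leaf_rules(tree, feature_names):
        """Return (conditions, predicted_class) pairs, one per leaf."""
        t = tree.tree_
        rules = []

        def walk(node, conds):
            if t.children_left[node] == -1:  # -1 marks a leaf in sklearn
                rules.append((tuple(conds), int(t.value[node].argmax())))
                return
            f, thr = feature_names[t.feature[node]], t.threshold[node]
            walk(t.children_left[node], conds + [f"{f} <= {thr:.2f}"])
            walk(t.children_right[node], conds + [f"{f} > {thr:.2f}"])

        walk(0, [])
        return rules

    # Pool the rules of every tree; a surrogate explainer prunes this set.
    names = [f"x{i}" for i in range(X.shape[1])]
    all_rules = [r for est in forest.estimators_ for r in leaf_rules(est, names)]
    print(len(all_rules), "candidate rules, e.g.", all_rules[0])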
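
The MAX-SAT simplification that ProClass performs can likewise be sketched, again hypothetically (the paper does not specify a particular solver or encoding), with Z3's Optimize interface: one soft clause per rule says "keep this rule", weighted by its assumed support, while a hard clause forbids keeping two conflicting rules. The rule names r1-r3, the weights, and the single conflict are invented for illustration.

    # Hypothetical MAX-SAT encoding with Z3 (assumed, not the paper's):
    # maximise the total weight of kept rules subject to hard constraints.
    from z3 import Bool, Not, Or, Optimize, is_true, sat

    keep = {name: Bool(name) for name in ["r1", "r2", "r3"]}
    opt = Optimize()

    # Hard clause: r1 and r2 are assumed to conflict (overlapping conditions,
    # different predicted classes), so at most one of them may survive.
    opt.add(Or(Not(keep["r1"]), Not(keep["r2"])))

    # Soft clauses: prefer keeping each rule, weighted by assumed support.
    for name, weight in [("r1", 5), ("r2", 3), ("r3", 4)]:
        opt.add_soft(keep[name], weight)

    if opt.check() == sat:
        model = opt.model()
        print("kept rules:", [n for n, b in keep.items() if is_true(model[b])])

Solving maximises the total weight of satisfied soft clauses, so the surviving rules form the heaviest conflict-free subset (here r1 and r3, with total weight 9).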

Original language: English
Pages (from-to): 14371-14382
Number of pages: 12
Journal: Applied Intelligence
Volume: 53
Issue number: 11
State: Published - Jun 2023

Keywords

  • Classification
  • Decision rule extraction
  • Explainable Artificial Intelligence (XAI)
  • Random forest
