Abstract
Explaining the decision-making neural network models in deep reinforcement learning (DRL) systems is crucial, albeit challenging. Abstract policy graphs (APGs) have emerged as an effective method for elucidating these models; however, constructing APGs that are both highly explainable and high in fidelity is difficult. Through empirical analysis, we glean the insight that a larger cluster size corresponds to an APG with higher fidelity. We present a novel approach called Abstract-Train-Abstract (ATA), built on the integration of two key ideas. Abstraction-based training facilitates the clustering of abstract states, expanding the scope of each cluster. Abstraction-oriented clustering ensures that states within the same cluster correspond to the same action, so identifying the cluster to which a state belongs improves the accuracy of predicting its associated action. Our experiments show that ATA surpasses the state of the art, achieving up to 26.63% higher fidelity while preserving competitive rewards. Additionally, our user study demonstrates that ATA substantially improves users' prediction accuracy by 35.7% on average.
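Since the abstract describes these ideas only at a high level, the following is a minimal sketch of what abstraction-oriented clustering could look like in Python: states are first grouped by similarity, and each group is then split so that all states in a cluster share the policy's greedy action. The names here (`action_consistent_clusters`, the stand-in policy) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of abstraction-oriented clustering: cluster states,
# then split any cluster whose members map to different greedy actions,
# so every final cluster is action-consistent. Names are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def action_consistent_clusters(states, greedy_actions, n_clusters=8, seed=0):
    """Cluster `states` (N x D array), then refine so that every cluster
    contains only states with a single greedy action (`greedy_actions`, length N)."""
    base = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(states)
    labels = np.empty(len(states), dtype=int)
    next_id = 0
    for c in np.unique(base):
        idx = np.where(base == c)[0]
        # Split the cluster by the action the policy takes in each state.
        for a in np.unique(greedy_actions[idx]):
            labels[idx[greedy_actions[idx] == a]] = next_id
            next_id += 1
    return labels

# Example: 200 random 4-dimensional states with actions from a stand-in policy.
rng = np.random.default_rng(0)
S = rng.normal(size=(200, 4))
A = (S[:, 0] > 0).astype(int)  # stand-in for argmax_a Q(s, a)
labels = action_consistent_clusters(S, A)
```

Because every resulting cluster maps to exactly one action, knowing a state's cluster determines the predicted action, which is the fidelity property the abstract attributes to ATA's clustering step.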
| Original language | English |
|---|---|
| Article number | 107749 |
| Journal | Neural Networks |
| Volume | 190 |
| DOIs | |
| State | Published - Oct 2025 |
Keywords
- Abstract policy graph
- Deep reinforcement learning
- Explainability
- State abstraction