TY - GEN
T1 - DCFG
T2 - 2021 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2021
AU - Li, Yan
AU - Liu, Shasha
AU - Wu, Chunwei
AU - Xi, Xidong
AU - Cao, Guitao
AU - Cao, Wenming
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - While Deep Neural Networks (DNNs) achieve state-of-the-art performance on a variety of tasks in medical domains, the explainability of model predictions in these high-stakes tasks is still lacking. Current approaches to explainability in model predictions largely rely on supervised counterfactual generation, which is time-consuming and directionally uncontrollable. Yet counterfactual generation needs to be easy to implement and controllable in direction. In light of this, we propose an approach for unsupervised latent direction search in black-box models that is steerable by the user, enabling effective exploration of counterfactual generation in a directional way without relying on domain- or data-specific assumptions. To identify these explainable directions, we use Principal Component Analysis (PCA), a general manifold learning framework, to extract low-dimensional subspaces based on local noise injection into a pre-trained generative model, so that a small perturbation in the subspaces produces a sufficient change in the resulting data. In experiments on three real-world CXR datasets involving six tasks, we find that our approach learns explainable predictions that discard unrelated confounding factors. Moreover, our method enables practitioners to edit directions to better understand which features are used for predictions.
AB - While Deep Neural Networks (DNNs) achieve state-of-the-art performance on a variety of tasks in medical domains, the explainability of model predictions in these high-stakes tasks is still lacking. Current approaches to explainability in model predictions largely rely on supervised counterfactual generation, which is time-consuming and directionally uncontrollable. Yet counterfactual generation needs to be easy to implement and controllable in direction. In light of this, we propose an approach for unsupervised latent direction search in black-box models that is steerable by the user, enabling effective exploration of counterfactual generation in a directional way without relying on domain- or data-specific assumptions. To identify these explainable directions, we use Principal Component Analysis (PCA), a general manifold learning framework, to extract low-dimensional subspaces based on local noise injection into a pre-trained generative model, so that a small perturbation in the subspaces produces a sufficient change in the resulting data. In experiments on three real-world CXR datasets involving six tasks, we find that our approach learns explainable predictions that discard unrelated confounding factors. Moreover, our method enables practitioners to edit directions to better understand which features are used for predictions.
KW - Black-box Models
KW - Directional CounterFactual Generation
KW - Explainability
KW - Explainable Artificial Intelligence
UR - https://www.scopus.com/pages/publications/85125181377
U2 - 10.1109/BIBM52615.2021.9669770
DO - 10.1109/BIBM52615.2021.9669770
M3 - Conference contribution
AN - SCOPUS:85125181377
T3 - Proceedings - 2021 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2021
SP - 972
EP - 979
BT - Proceedings - 2021 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2021
A2 - Huang, Yufei
A2 - Kurgan, Lukasz
A2 - Luo, Feng
A2 - Hu, Xiaohua Tony
A2 - Chen, Yidong
A2 - Dougherty, Edward
A2 - Kloczkowski, Andrzej
A2 - Li, Yaohang
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 9 December 2021 through 12 December 2021
ER -