TY - GEN
T1 - Off-TANet
T2 - 18th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2021
AU - Zhang, Jiahao
AU - Liu, Feng
AU - Zhou, Aimin
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - Micro-expression recognition is a video sentiment classification task with an extremely small sample size. The transience and spatial locality of micro-expressions make it difficult to construct large micro-expression databases and to design micro-expression recognition algorithms. To strike a balance between classification accuracy and model complexity in this domain, we propose a lightweight neural micro-expression recognizer, Off-TANet, which is based on apex-onset optical flow features. The network contains a simple yet powerful triplet attention mechanism, whose effectiveness can be interpreted from two aspects: FACS action units (AUs) and matrix sparsity. The model is evaluated with a LOSO cross-validation strategy on a combined database comprising three mainstream micro-expression databases. With markedly fewer total parameters (59,403), the model achieves an average recall of 0.7315 and an average F1-score of 0.7242, exceeding other major architectures in this domain. A series of ablation experiments further validates our model design.
AB - Micro-expression recognition is a video sentiment classification task with an extremely small sample size. The transience and spatial locality of micro-expressions make it difficult to construct large micro-expression databases and to design micro-expression recognition algorithms. To strike a balance between classification accuracy and model complexity in this domain, we propose a lightweight neural micro-expression recognizer, Off-TANet, which is based on apex-onset optical flow features. The network contains a simple yet powerful triplet attention mechanism, whose effectiveness can be interpreted from two aspects: FACS action units (AUs) and matrix sparsity. The model is evaluated with a LOSO cross-validation strategy on a combined database comprising three mainstream micro-expression databases. With markedly fewer total parameters (59,403), the model achieves an average recall of 0.7315 and an average F1-score of 0.7242, exceeding other major architectures in this domain. A series of ablation experiments further validates our model design.
KW - Attention module
KW - Computational affection
KW - Convolutional neural networks
KW - Micro-expression recognition
KW - Optical flow features
KW - Self-attention mechanism
UR - https://www.scopus.com/pages/publications/85119019386
U2 - 10.1007/978-3-030-89188-6_20
DO - 10.1007/978-3-030-89188-6_20
M3 - Conference contribution
AN - SCOPUS:85119019386
SN - 9783030891879
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 266
EP - 279
BT - PRICAI 2021
A2 - Pham, Duc Nghia
A2 - Theeramunkong, Thanaruk
A2 - Governatori, Guido
A2 - Liu, Fenrong
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 8 November 2021 through 12 November 2021
ER -