TY - GEN
T1 - EarPPG
T2 - 28th International Conference on Intelligent User Interfaces, IUI 2023
AU - Choi, Seokmin
AU - Yim, Junghwan
AU - Jin, Yincheng
AU - Gao, Yang
AU - Li, Jiyang
AU - Jin, Zhanpeng
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/3/27
Y1 - 2023/3/27
N2 - Wearable devices have become indispensable gadgets in people's daily lives; wireless earphones in particular have experienced unprecedented growth in recent years, which has led to increasing interest in and exploration of user authentication techniques. Conventional user authentication methods embedded in wireless earphones that use microphones or other modalities are vulnerable to environmental factors, such as loud noises or occlusions. To address this limitation, we introduce EarPPG, a new biometric modality that takes advantage of the unique in-ear photoplethysmography (PPG) signals altered by a user's unique speaking behaviors. When the user is speaking, muscle movements cause changes in the blood vessel geometry, inducing unique PPG signal variations. As both speaking behaviors and PPG signals are unique, EarPPG combines the two biometric traits and presents a secure and unobtrusive authentication solution. The system first detects and segments EarPPG signals, then extracts effective features to construct a user authentication model with the 1D ReGRU network. We conducted comprehensive real-world evaluations with 25 human participants and achieved 94.84% accuracy and 0.95 precision, recall, and F1-score. Moreover, considering the practical implications, we conducted several extensive in-the-wild experiments, including body motions, occlusions, lighting, and permanence. The overall outcomes of this study have the potential to be embedded in future smart earable devices.
KW - Blood Vessel Deformation
KW - Earable Devices
KW - PPG
KW - Photoplethysmogram
KW - User Authentication
KW - Voice Activation
KW - Wearable Computing
UR - https://www.scopus.com/pages/publications/85152134832
U2 - 10.1145/3581641.3584070
DO - 10.1145/3581641.3584070
M3 - Conference contribution
AN - SCOPUS:85152134832
T3 - International Conference on Intelligent User Interfaces, Proceedings IUI
SP - 835
EP - 849
BT - IUI 2023 - Proceedings of the 28th International Conference on Intelligent User Interfaces
PB - Association for Computing Machinery
Y2 - 27 March 2023 through 31 March 2023
ER -