Multi-modal fusion in ergonomic health: bridging visual and pressure for sitting posture detection

Qinxiao Quan, Yang Gao, Yang Bai, Zhanpeng Jin

Research output: Contribution to journal › Article › peer-review


Abstract

As the tension between the pursuit of health and ever-longer sedentary office work intensifies, maintaining a correct sitting posture while working has drawn growing attention in recent years. Scientific studies have shown that correcting sitting posture helps alleviate physical pain. With the rapid development of artificial intelligence, much research has shifted towards sitting posture detection and recognition systems built on machine learning. In this paper, we introduce a sitting posture recognition system that integrates visual and pressure modalities. The system employs a differentiated pre-training strategy for the two modality-specific models and features a feature fusion module built on feed-forward networks. It collects visual data with the built-in cameras commonly available in laptops and pressure data with thin-film pressure sensor mats in office scenarios. The system achieved an F1-Macro score of 95.43% on a dataset with complex composite actions, an improvement of 7.13% and 10.79% over systems relying solely on the pressure or visual modality, respectively, and a 7.07% improvement over a system using a uniform pre-training strategy.
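The abstract describes the fusion module only as "built on feed-forward networks" and the task as multi-label classification; the sketch below is a minimal, hypothetical illustration (in PyTorch) of how features from a visual branch and a pressure branch might be concatenated and passed through a feed-forward head to produce multi-label posture logits. All module names, feature dimensions, and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch only: the paper's actual fusion module, feature
# dimensions, and label set are not specified in this abstract.
import torch
import torch.nn as nn


class FeedForwardFusion(nn.Module):
    """Concatenate visual and pressure features, then fuse them with a
    small feed-forward network that emits multi-label posture logits."""

    def __init__(self, visual_dim=512, pressure_dim=128,
                 hidden_dim=256, num_labels=8):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(visual_dim + pressure_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden_dim, num_labels),  # one logit per posture label
        )

    def forward(self, visual_feat, pressure_feat):
        # Late fusion: concatenate per-modality features along the last dim.
        fused = torch.cat([visual_feat, pressure_feat], dim=-1)
        return self.fusion(fused)  # raw logits; apply sigmoid for probabilities


if __name__ == "__main__":
    # Random tensors stand in for the outputs of the two modality encoders.
    model = FeedForwardFusion()
    visual = torch.randn(4, 512)     # e.g., features from a camera-image backbone
    pressure = torch.randn(4, 128)   # e.g., features from the pressure-mat encoder
    probs = torch.sigmoid(model(visual, pressure))
    print(probs.shape)  # torch.Size([4, 8]) -> per-label posture probabilities
```

In a multi-label setup like this, each label gets an independent sigmoid output and training would typically use a binary cross-entropy loss, with F1-Macro computed per label and averaged for evaluation.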

Original language: English
Article number: 112451
Pages (from-to): 380-393
Number of pages: 14
Journal: CCF Transactions on Pervasive Computing and Interaction
Volume: 6
Issue number: 4
DOIs
State: Published - Dec 2024

Keywords

  • Computer vision
  • Feature fusion
  • Multi-label classification
  • Pressure sensing
  • Sitting posture recognition
