Transformer-based 3D Human pose estimation and action achievement evaluation

CLC Number: TP391 TH86


Abstract:

    To address the challenges of human pose analysis and assessment in domains such as human-computer interaction and medical rehabilitation, this paper introduces a Transformer-based methodology for 3D human pose estimation and the evaluation of action achievement. First, key points of the human pose and their joint angles are defined, and, building on the deep pose estimation network (DPEN), a Transformer-based 3D human pose estimation model (TPEM) is proposed; the Transformer facilitates the extraction of long-term sequential features of human pose. Second, the 3D pose estimates produced by TPEM are used to formulate a dynamic time warping algorithm based on weighted 3D joint angles, which temporally aligns pose keyframes of different individuals performing the same action; an assessment method is then introduced to score the degree of action accomplishment. Finally, experiments on multiple datasets validate the approach: TPEM achieves an average joint position error of 37.3 mm on the Human3.6M dataset, and the dynamic time warping algorithm based on weighted 3D joint angles yields an average error of 5.08 frames on the Fit3D dataset. These results demonstrate the feasibility and effectiveness of the proposed approach for 3D human pose estimation and action accomplishment assessment.
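    The alignment step described above can be illustrated with a minimal sketch. The following Python code implements standard dynamic time warping over sequences of joint-angle vectors, with a per-joint weight applied inside the frame-to-frame distance; the function and parameter names (`dtw_align`, `weights`) and the choice of a weighted Euclidean distance are illustrative assumptions, not the paper's exact formulation.

```python
import math

def weighted_angle_distance(a, b, weights):
    """Weighted Euclidean distance between two joint-angle vectors.

    `weights` is an assumed per-joint importance vector; the paper's
    actual weighting scheme may differ.
    """
    return math.sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)))

def dtw_align(seq_a, seq_b, weights):
    """Align two joint-angle sequences with dynamic time warping.

    Returns (total_cost, path), where path is a list of (i, j) pairs
    matching frames of seq_a to frames of seq_b.
    """
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    # cost[i][j]: minimal accumulated cost aligning the first i frames
    # of seq_a with the first j frames of seq_b.
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = weighted_angle_distance(seq_a[i - 1], seq_b[j - 1], weights)
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    # Backtrack to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((cost[i - 1][j - 1], i - 1, j - 1),
                      (cost[i - 1][j], i - 1, j),
                      (cost[i][j - 1], i, j - 1))
    path.reverse()
    return cost[n][m], path
```

Two performances of the same action recorded at different speeds then map onto a common timeline through `path`, and the per-frame weighted distances along that path could feed an accomplishment score.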

History
  • Online: July 15, 2024