Estimation of gesture pointing for human-robot interaction
CLC Number: TP391; TH86

Abstract:

To address the problem of human-robot interaction in integrated human-robot environments, this article proposes a method for estimating gesture pointing in human-robot interaction scenes, in which a pointing gesture conveys the location of a target point on the workplane to the robot. First, a time-synchronized dataset of visual pointing-gesture poses is built using an RGB-D camera and the VICON human motion capture system; each sample contains an RGB-D image of the pointing gesture and the ground-truth pointing-gesture pose. Second, a multi-level neural network model that combines semantic and geometric information is formulated to estimate the pointing-gesture pose. Third, a ray approximation loss function is designed that combines the position error ΔP and the direction angle error Δθ, and the pose estimation model is trained on the constructed dataset. Finally, human-robot interaction experiments and model validation are carried out in a laboratory environment. Within 5 m of the camera, the average precision of pointing-gesture detection is 98.4%, the average position error of the estimated pose is 34 mm, and the average angle error is 9.94°. The average error between the gesture pointing and the target point on the workplane is 0.211 m.
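The abstract does not give the exact form of the ray approximation loss; the following is a minimal sketch of one plausible formulation, assuming a PyTorch setting with ray origins and unit direction vectors as network outputs and a simple weighted sum of the two error terms. The function name and the weights w_pos and w_ang are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def ray_approximation_loss(pred_origin, pred_dir, gt_origin, gt_dir,
                           w_pos=1.0, w_ang=1.0):
    """Hypothetical ray-approximation loss: weighted sum of the position
    error ΔP between predicted and ground-truth ray origins and the
    direction angle error Δθ between the two pointing directions."""
    # ΔP: Euclidean distance between the ray origins, shape (batch,).
    delta_p = torch.norm(pred_origin - gt_origin, dim=-1)
    # Δθ: angle between the direction vectors, clamped for numerical safety.
    cos_angle = F.cosine_similarity(pred_dir, gt_dir, dim=-1)
    delta_theta = torch.acos(cos_angle.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
    return (w_pos * delta_p + w_ang * delta_theta).mean()
```

Note that a positional term in metres and an angular term in radians live on different scales, so the relative weighting would need tuning in practice; the abstract does not state how the paper balances the two.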

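Likewise, mapping an estimated pointing pose to the target point on the workplane amounts to a ray-plane intersection. The sketch below assumes the workplane is described by a point on it and its normal; every name here is illustrative rather than taken from the paper.

```python
import numpy as np

def ray_plane_target(origin, direction, plane_point, plane_normal):
    """Intersect the estimated pointing ray (origin, direction) with the
    workplane (plane_point, plane_normal); returns the target point or
    None if the ray misses the plane."""
    direction = direction / np.linalg.norm(direction)
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-8:    # ray parallel to the workplane: no target
        return None
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:                # intersection lies behind the hand
        return None
    return origin + t * direction

# Example: hand 1 m above the floor plane z = 0, pointing 45° downward.
print(ray_plane_target(np.array([0.0, 0.0, 1.0]),
                       np.array([1.0, 0.0, -1.0]),
                       np.array([0.0, 0.0, 0.0]),
                       np.array([0.0, 0.0, 1.0])))  # -> [1. 0. 0.]
```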
History
  • Online: July 11, 2023