Improving Haptic Response for Contextual Human Robot Interaction

Sensors (Basel). 2022 Mar 5;22(5):2040. doi: 10.3390/s22052040.

Abstract

For haptic interaction, a user in a virtual environment interacts with proxies attached to a robot, and the device must reach the location defined in the virtual environment at the right time. However, due to device limitations, delays are unavoidable. One way to improve the device response is to infer the human's intended motion and move the robot toward the desired goal as early as possible. This paper presents an experimental study to improve the prediction time and reduce the time the robot takes to reach the desired position. We developed motion strategies based on hand motion and eye-gaze direction to determine the point of user interaction in a virtual environment. To assess the performance of the strategies, we conducted a subject-based experiment using an exergame with reach-and-grab tasks designed for upper-limb rehabilitation training. The experimental results revealed that eye-gaze-based prediction improved the detection time by 37% and reduced the time taken by the robot to reach the target by 27%. Further analysis provided more insight into the effect of the eye-gaze window and the hand threshold on the device response for the experimental task.
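The abstract describes combining an eye-gaze window with a hand-motion threshold to predict the interaction target early. The paper's actual strategies are not detailed here, so the following is only a minimal sketch under assumed names and thresholds (IntentPredictor, gaze_window, hand_threshold are illustrative, not from the paper): it takes a majority vote over recent gaze samples and commits to that target once the hand has travelled far enough, so the robot can start moving before the hand arrives.

```python
import math
from collections import Counter, deque

class IntentPredictor:
    """Hypothetical gaze-plus-hand intent predictor (illustrative only)."""

    def __init__(self, targets, gaze_window=30, hand_threshold=0.05):
        self.targets = targets                      # {name: (x, y, z)} of virtual proxies
        self.gaze_hits = deque(maxlen=gaze_window)  # recent gazed-at target names
        self.hand_threshold = hand_threshold        # metres of hand travel before committing
        self.hand_start = None                      # hand position at the start of the reach

    def update(self, gazed_target, hand_pos):
        """Return the predicted target name, or None if not yet confident."""
        if gazed_target is not None:
            self.gaze_hits.append(gazed_target)
        if self.hand_start is None:
            self.hand_start = hand_pos
        moved = math.dist(hand_pos, self.hand_start)
        if moved < self.hand_threshold or not self.gaze_hits:
            return None                             # hand has not committed to a reach yet
        # Majority vote over the gaze window selects the intended proxy.
        name, _ = Counter(self.gaze_hits).most_common(1)[0]
        return name


# Example usage with made-up proxy positions (metres):
predictor = IntentPredictor({"cup": (0.3, 0.1, 0.4), "ball": (-0.2, 0.0, 0.5)})
print(predictor.update("cup", (0.00, 0.00, 0.0)))  # None: hand has not moved yet
print(predictor.update("cup", (0.06, 0.02, 0.0)))  # "cup": threshold exceeded, robot can move early
```

In this sketch, widening gaze_window makes the prediction more stable but slower to switch targets, while raising hand_threshold delays the commitment, which mirrors the trade-off the abstract analyses between the eye-gaze window, the hand threshold, and the device response.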

Keywords: eye–gaze tracking; haptic devices; human–robot interaction; response time; virtual reality.

MeSH terms

  • Hand / physiology
  • Haptic Technology
  • Humans
  • Motivation
  • Robotics* / methods
  • Upper Extremity