Visual Sensor Fusion Based Autonomous Robotic System for Assistive Drinking

Sensors (Basel). 2021 Aug 11;21(16):5419. doi: 10.3390/s21165419.

Abstract

People with severe motor impairments such as tetraplegia are restricted in activities of daily living (ADLs) and depend on continuous human assistance. Assistive robots perform physical tasks in the context of ADLs to support people in need of assistance. In this work, a sensor fusion algorithm and a robot control algorithm are proposed for the assistive drinking task: the former localizes the user's mouth, and the latter autonomously navigates a robot arm. The sensor fusion algorithm is implemented in a visual tracking system consisting of a 2-D camera and a single-point time-of-flight distance sensor. It uses computer vision to combine camera images with distance measurements and achieve reliable localization of the user's mouth. The robot control algorithm uses visual servoing to navigate a robot-handled drinking cup to the mouth and establish physical contact with the lips. The system features an abort command, triggered by turning the head, and unambiguous tracking of multiple faces, which together enable safe human-robot interaction. A study with nine able-bodied test subjects shows that the proposed system reliably localizes the mouth and autonomously navigates the cup to establish physical contact with it.
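To illustrate the kind of fusion the abstract describes, the following is a minimal sketch (not the authors' implementation): a 2-D face detection supplies the mouth pixel, the single-point time-of-flight reading supplies depth, and a pinhole back-projection yields a 3-D mouth estimate that feeds a simple proportional visual-servoing command. The detector choice (an OpenCV Haar cascade), the camera intrinsics, the mouth-from-face-box heuristic, and the gain values are all illustrative assumptions; the paper does not specify them here.

```python
# Hedged sketch of camera + ToF sensor fusion for mouth localization.
# All numeric constants and the detection method are assumptions for
# illustration only, not values from the paper.
import cv2
import numpy as np

# Assumed pinhole intrinsics of the 2-D camera (fx, fy, cx, cy in pixels).
FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def locate_mouth_3d(frame_bgr, tof_distance_m):
    """Estimate the mouth position (x, y, z) in metres, camera frame.

    The mouth pixel is approximated from the face bounding box (lower
    part of the face); depth comes from the single-point ToF sensor,
    assumed to be aimed at the face.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Pick the largest face so the target stays unambiguous when
    # multiple people are in view.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    u = x + w / 2.0    # mouth pixel: horizontal centre of the face box,
    v = y + 0.8 * h    # roughly the lower fifth of the face box
    z = tof_distance_m # depth from the ToF distance measurement
    # Back-project the pixel through the pinhole model.
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def servo_velocity(mouth_xyz, standoff_m=0.02, gain=0.5):
    """Proportional velocity command driving the cup toward the mouth,
    stopping a small standoff short of contact."""
    target = mouth_xyz - np.array([0.0, 0.0, standoff_m])
    return gain * target  # commanded Cartesian velocity (m/s)
```

In a real system, the velocity command would be sent to the robot arm's Cartesian controller in a closed loop, with the head-turn abort check evaluated on every frame before issuing motion.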

Keywords: assistive robotics; autonomous robotic system; computer vision; human-robot interaction; localization; sensor fusion; visual servoing.

MeSH terms

  • Activities of Daily Living*
  • Algorithms
  • Humans
  • Mouth
  • Quadriplegia
  • Robotic Surgical Procedures*