Depth-Dependent Control in Vision-Sensor Space for Reconfigurable Parallel Manipulators

Sensors (Basel). 2023 Aug 9;23(16):7039. doi: 10.3390/s23167039.

Abstract

Parallel robots suffer from limitations such as a small relative workspace and multiple singularities. Robot reconfiguration can overcome some of these limitations, at the cost of increasing the complexity of the manipulator and making its control design even more challenging; no general control methodology exists for reconfigurable parallel robots. Tracking objects with unknown trajectories is a demanding task required in many applications. Sensor-based robot control has been actively used for this type of task, but it cannot be straightforwardly extended to reconfigurable parallel manipulators. In this paper, a control approach for reconfigurable parallel robots is designed. Based on it, controls in the vision-sensor, 3D, and joint spaces are designed and implemented for target-tracking tasks on a novel reconfigurable delta-type parallel robot; no a priori information about the target trajectory is required. The developed vision-sensor space control is inspired by, and can be seen as an extension of, the Velocity Linear Camera Model-Camera Space Manipulation (VLCM-CSM) methodology. Several experiments were carried out on the reconfigurable delta-type parallel robot. An average positioning error of 0.6 mm was obtained for static targets. Tracking errors of 2.5 mm, 3.9 mm, and 11.5 mm were obtained for targets moving along a linear trajectory at speeds of 6.5, 9.3, and 12.7 cm/s, respectively, with a control cycle time of 16 ms. These results validate the proposed approach and improve upon previous work on non-reconfigurable robots.
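To make the camera-space idea behind CSM-style control concrete, the following is a minimal, self-contained sketch, not the authors' implementation: it assumes a static affine camera model fitted by least squares and a simple proportional correction law, whereas the paper's VLCM-CSM extension uses a velocity-linear camera model refined online and handles the reconfigurable kinematics. All function names, the gain, and the toy camera parameters below are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration in the spirit of camera-space manipulation (CSM);
# the real controller, robot model, and gains in the paper are not reproduced.

def fit_linear_camera_model(world_pts, image_pts):
    """Least-squares fit of an affine map: image ≈ A @ world + b.

    world_pts: (N, 3) end-effector positions; image_pts: (N, 2) pixel samples.
    CSM-style methods refine such view parameters from observations of the
    tool in each camera; here a single batch fit stands in for that step.
    """
    N = world_pts.shape[0]
    X = np.hstack([world_pts, np.ones((N, 1))])   # (N, 4) design matrix
    # Solve X @ P.T ≈ image_pts in the least-squares sense, P = [A | b].
    P, *_ = np.linalg.lstsq(X, image_pts, rcond=None)
    P = P.T                                        # (2, 4)
    return P[:, :3], P[:, 3]

def camera_space_step(A, b, x_now, u_target, gain=0.5):
    """One proportional step driving the tool's predicted camera-space
    projection toward the target's pixel coordinates u_target."""
    u_now = A @ x_now + b                          # predicted tool pixels
    err = u_target - u_now                         # camera-space error (2,)
    # Map the pixel error to a 3D correction via the model's pseudo-inverse.
    dx = gain * np.linalg.pinv(A) @ err
    return x_now + dx

# Toy usage: fit the model from noisy samples, then converge on a static target.
rng = np.random.default_rng(0)
A_true = np.array([[800.0, 0.0, -40.0], [0.0, 790.0, 35.0]])  # assumed camera
b_true = np.array([320.0, 240.0])
samples = rng.uniform(-0.1, 0.1, size=(30, 3))     # tool poses in metres
pixels = samples @ A_true.T + b_true + rng.normal(0, 0.5, (30, 2))

A, b = fit_linear_camera_model(samples, pixels)
x = np.zeros(3)
u_target = np.array([350.0, 210.0])                # target seen by the camera
for _ in range(20):
    x = camera_space_step(A, b, x, u_target)
print("final camera-space error [px]:", u_target - (A @ x + b))
```

Because the control error is expressed directly in the vision-sensor (pixel) space, accuracy at the target depends on the local quality of the camera model rather than on a global hand-eye calibration, which is one motivation for CSM-style approaches.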

Keywords: camera-space manipulation; parallel robot; vision-based control.

Grants and funding

This work was partially funded by CONACyT grant Cátedras CONACyT 2016/972 and CONAHCyT grant 712819. The APC was funded by Posgrado en Ing. Mecánica, Posgrado en Ing. Eléctrica, and Ing. en Mecatrónica, Facultad de Ingeniería, UASLP.