Deep Learning-Based Motion Style Transfer Tools, Techniques and Future Challenges

Sensors (Basel). 2023 Feb 26;23(5):2597. doi: 10.3390/s23052597.

Abstract

In the fourth industrial revolution, the scale of execution for interactive applications has increased substantially. These interactive and animated applications are human-centric, making the representation of human motion ubiquitous. Animators strive to computationally process human motion so that it appears realistic in animated applications. Motion style transfer is an attractive technique that is widely used to create realistic motions in near real time. A motion style transfer approach employs existing captured motion data to generate realistic samples automatically and updates the motion data accordingly, eliminating the need to handcraft motions from scratch for every frame. The popularity of deep learning (DL) algorithms has reshaped motion style transfer approaches, as such algorithms can predict subsequent motion styles. The majority of motion style transfer approaches use different variants of deep neural networks (DNNs) to accomplish the style transfer. This paper provides a comprehensive comparative analysis of existing state-of-the-art DL-based motion style transfer approaches. The enabling technologies that facilitate these approaches are briefly presented as well. When employing DL-based methods for motion style transfer, the selection of the training dataset plays a key role in performance. In view of this vital aspect, this paper provides a detailed summary of existing well-known motion datasets. As an outcome of this extensive overview of the domain, the paper highlights the contemporary challenges faced by motion style transfer approaches.

Keywords: deep learning; deep neural networks; human motions; motion datasets; motion style transfer.

Publication types

  • Review

Grants and funding

This publication was supported by the Department of Computer Graphics, Vision, and Digital Systems, under the statutory research project (Rau6, 2023), Silesian University of Technology (Gliwice, Poland).