Person image generation through graph-based and appearance-decomposed generative adversarial network

PeerJ Comput Sci. 2021 Dec 24:7:e761. doi: 10.7717/peerj-cs.761. eCollection 2021.

Abstract

Due to the complex entanglements introduced by non-rigid deformation, generating person images from a source pose to a target pose is a challenging task. In this paper, we present a novel framework that generates person images with both shape consistency and appearance consistency. The proposed framework leverages a graph network to infer the global relationship between the source pose and the target pose for better pose transfer. Moreover, we decompose the source image into different attributes (e.g., hair, clothes, pants, and shoes) and combine them with the pose coding to generate a more realistic person image. We adopt an alternate updating strategy to promote mutual guidance between the pose modules and the appearance modules, further improving person image quality. Qualitative and quantitative experiments were carried out on the DeepFashion dataset, verifying the efficacy of the presented framework.
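The attribute-decomposition step described above could be sketched roughly as follows. This is a minimal illustration with assumed shapes and names (the function names, mask format, and the simple concatenation used as the fusion operation are all hypothetical, not the authors' implementation):

```python
import numpy as np

def decompose_attributes(image, masks):
    """Split an image (H, W, C) into per-attribute images using binary masks.

    masks: dict mapping an attribute name (e.g. 'hair', 'clothes') to an
    (H, W) binary mask. Returns a dict of masked images, one per attribute.
    """
    return {name: image * m[..., None] for name, m in masks.items()}

def combine_with_pose(attr_maps, pose_code):
    """Concatenate flattened attribute features with a pose code vector.

    Plain concatenation stands in here for the paper's (unspecified)
    combination operation between appearance attributes and pose coding.
    """
    feats = [a.ravel() for a in attr_maps.values()]
    return np.concatenate(feats + [pose_code])

# Toy example: a 4x4 RGB image, two attribute masks, an 8-dim pose code.
img = np.ones((4, 4, 3))
masks = {"hair": np.eye(4), "clothes": 1.0 - np.eye(4)}
pose = np.zeros(8)

parts = decompose_attributes(img, masks)
fused = combine_with_pose(parts, pose)
print(fused.shape)  # (104,) = 2 attributes * 4*4*3 features + 8 pose dims
```

In the full framework each masked attribute would be encoded by a learned network before fusion; the point of the sketch is only the decompose-then-combine data flow.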

Keywords: Generative adversarial network; Graph network; Image generation; Pose transfer.

Grants and funding

This work was supported by the National Natural Science Foundation of China (Nos. 61462038, 61562039 and 61966016), the Jiangxi Provincial Natural Science Foundation (No. 20212BAB212005), the Science and Technology Planning Project of Jiangxi Provincial Department of Education (Nos. GJJ190217, GJJ190180 and GJJ200428), and the Open Project Program of the State Key Lab of CAD & CG of Zhejiang University (No. A2029). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.