TY - JOUR
T1 - Deep image synthesis from intuitive user input
T2 - A review and perspectives
AU - Xue, Yuan
AU - Guo, Yuan-Chen
AU - Zhang, Han
AU - Xu, Tao
AU - Zhang, Song-Hai
AU - Huang, Xiaolei
N1 - Funding Information:
The co-authors Y.-C. Guo and S.-H. Zhang were supported by the National Natural Science Foundation of China (Project Nos. 61521002 and 61772298), a Research Grant of Beijing Higher Institution Engineering Research Center, and the Tsinghua-Tencent Joint Laboratory for Internet Innovation Technology.
Publisher Copyright:
© 2021, The Author(s).
PY - 2022/3
Y1 - 2022/3
AB - In many applications of computer graphics, art, and design, it is desirable for a user to provide intuitive non-image input, such as text, sketch, stroke, graph, or layout, and have a computer system automatically generate photo-realistic images according to that input. While works enabling such automatic image content generation have classically followed a framework of image retrieval and composition, recent advances in deep generative models such as generative adversarial networks (GANs), variational autoencoders (VAEs), and flow-based methods have enabled more powerful and versatile image generation approaches. This paper reviews recent works on image synthesis from intuitive user input, covering advances in input versatility, image generation methodology, benchmark datasets, and evaluation metrics. The review motivates new perspectives on input representation and interactivity, cross-fertilization between major image generation paradigms, and the evaluation and comparison of generation methods.
UR - http://www.scopus.com/inward/record.url?scp=85118267980&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85118267980&partnerID=8YFLogxK
U2 - 10.1007/s41095-021-0234-8
DO - 10.1007/s41095-021-0234-8
M3 - Review article
AN - SCOPUS:85118267980
SN - 2096-0433
VL - 8
SP - 3
EP - 31
JO - Computational Visual Media
JF - Computational Visual Media
IS - 1
ER -