Self-Supervised GANs via Auxiliary Rotation Loss

Ting Chen∗ (University of California, Los Angeles), Xiaohua Zhai (Google Brain), Marvin Ritter (Google Brain), Mario Lucic (Google Brain), Neil Houlsby (Google Brain)
[email protected] [email protected] [email protected] [email protected] [email protected]

Abstract

Conditional GANs are at the forefront of natural image synthesis. The main drawback of such models is the necessity for labeled data. In this work we exploit two popular unsupervised learning techniques, adversarial training and self-supervision, and take a step towards bridging the gap between conditional and unconditional GANs. In particular, we allow the networks to collaborate on the task of representation learning, while being adversarial with respect to the classic GAN game. The role of self-supervision is to encourage the discriminator to learn meaningful feature representations which are not forgotten during training.

... proposed [4, 5, 6, 7, 8, 9, 10]. A major contributor to training instability is the fact that the generator and discriminator learn in a non-stationary environment. In particular, the discriminator is a classifier for which the distribution of one class (the fake samples) shifts as the generator changes during training. In non-stationary online environments, neural networks forget previous tasks [11, 12, 13]. If the discriminator forgets previous classification boundaries, training may become unstable or cyclic. This issue is usually addressed either by reusing old samples or by applying continual learning techniques [14, 15, 16, 17, 18, 19]. These issues become more prominent in the context of complex data sets. A key
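The auxiliary rotation loss named in the title can be made concrete with a short sketch: the discriminator gains a second head that classifies which of four rotations (0°, 90°, 180°, 270°) was applied to an input image, and this rotation cross-entropy is added to its GAN loss. The PyTorch code below is a minimal illustrative sketch, not the authors' implementation; the backbone architecture and the names SSDiscriminator, rotate_batch, d_rotation_loss, and the weight beta are hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SSDiscriminator(nn.Module):
    """Discriminator with a shared backbone and two heads: a real/fake
    logit (GAN task) and a 4-way rotation classifier (self-supervised task)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(  # placeholder convnet
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.gan_head = nn.Linear(128, 1)  # real vs. fake
        self.rot_head = nn.Linear(128, 4)  # 0 / 90 / 180 / 270 degrees

    def forward(self, x):
        h = self.backbone(x)
        return self.gan_head(h), self.rot_head(h)

def rotate_batch(x):
    """Stack four rotated copies of the batch and return rotation labels."""
    rotated = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)])
    labels = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
    return rotated, labels

def d_rotation_loss(disc, real_images, beta=1.0):
    """Auxiliary self-supervision term: the discriminator must recover
    the rotation applied to (copies of) real images."""
    xr, r = rotate_batch(real_images)
    _, rot_logits = disc(xr)
    return beta * F.cross_entropy(rot_logits, r)
```

In the full objective this term is added to the discriminator's adversarial loss, and the generator receives an analogous rotation term computed on its generated samples (weighted by α for the generator and β for the discriminator in the paper's notation). The rotation task stays stationary throughout training, which is what lets the discriminator keep useful representations even as the fake-sample distribution shifts.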