Piggyback GAN: Efficient Lifelong Learning for Image Conditioned Generation

Mengyao Zhai1, Lei Chen1, Jiawei He1, Megha Nawhal1, Frederick Tung2, and Greg Mori1

1 Simon Fraser University, Burnaby, Canada
2 Borealis AI, Vancouver, Canada

{mzhai, chenleic, jha203, mnawhal}@sfu.ca, [email protected],
[email protected]

[Figure 1: four panels of conditional and target/generated images across Task 1, Task 2, Task …: (a) Ground truth; (b) Sequential fine-tuning; (c) Train a separate model for each task; (d) Piggyback GAN, which adds few additional parameters per task.]

Fig. 1. Lifelong learning of image-conditioned generation. The goal of lifelong learning is to build a model capable of adapting to tasks that are encountered sequentially. Traditional fine-tuning methods are susceptible to catastrophic forgetting: when we add new tasks, the network forgets how to perform previous tasks (Figure 1(b)). Storing a separate model for each task avoids catastrophic forgetting, but inefficiently: each set of parameters is useful for only a single task (Figure 1(c)). Our Piggyback GAN achieves image-conditioned generation with image quality on par with separate models, at a lower parameter cost, by efficiently reusing stored parameters (Figure 1(d)).

Abstract. Humans accumulate knowledge in a lifelong fashion. Modern deep neural networks, on the other hand, are susceptible to catastrophic forgetting: when adapted to perform new tasks, they often fail to preserve their performance on previously learned tasks.
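To make the parameter-reuse idea behind Figure 1(d) concrete, the following is a minimal sketch in PyTorch of a convolution whose filters for a new task are learned linear combinations of a frozen filter bank from earlier tasks, plus a small number of unconstrained filters. The module name PiggybackConv2d, the free_frac fraction, and the initialization are hypothetical choices for illustration, not the paper's exact factorization.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PiggybackConv2d(nn.Module):
    """Conv layer for a new task: most filters are learned mixtures of a
    frozen filter bank from previous tasks; a small fraction are free."""

    def __init__(self, filter_bank, out_channels, free_frac=0.25):
        super().__init__()
        # filter_bank: (n_bank, in_channels, k, k), trained on earlier tasks;
        # registered as a buffer so it is stored but never updated.
        self.register_buffer("bank", filter_bank)
        n_bank, in_ch, k, _ = filter_bank.shape
        n_free = max(1, int(round(out_channels * free_frac)))
        # Per-task mixing weights: each derived filter is a weighted sum
        # of the frozen bank filters.
        self.mix = nn.Parameter(
            torch.randn(out_channels - n_free, n_bank) / n_bank ** 0.5)
        # A small set of unconstrained filters, learned from scratch for
        # this task (and appendable to the shared bank afterwards).
        self.free = nn.Parameter(torch.randn(n_free, in_ch, k, k) * 0.02)

    def forward(self, x):
        # Build this task's filters: mixtures of the bank plus free filters.
        derived = torch.einsum("ob,bikl->oikl", self.mix, self.bank)
        weight = torch.cat([derived, self.free], dim=0)
        return F.conv2d(x, weight, padding=weight.shape[-1] // 2)

# Usage on a mid-network layer, where the savings are largest:
bank = torch.randn(128, 256, 3, 3)          # frozen filters from a previous task
layer = PiggybackConv2d(bank, out_channels=128)
y = layer(torch.randn(1, 256, 16, 16))      # -> (1, 128, 16, 16)

Under these assumptions, only the mixing weights and the few free filters are stored per task (roughly 86K parameters in the usage example, versus about 295K for a full set of 128 filters), which is how the per-task parameter cost can stay well below that of training a separate model for each task.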