

[1910.14673v1] CoGeneration with GANs using AIS based HMC
Different from classical optimization-based methods, specifically gradient descent (GD), which easily get trapped in local optima when solving this task, the proposed approach is much more robust.
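For context, the GD baseline referred to above amounts to optimizing a latent code so that the generator output matches the observed variables. The following is a minimal toy sketch, not the paper's implementation: the generator `G`, the matrix `W`, the observation mask, and all hyperparameters are invented stand-ins for a trained GAN, and finite differences replace backpropagation through the generator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in "generator": a fixed nonlinear map from a 2-D latent to 2-D data.
W = np.array([[1.5, -0.7],
              [0.3,  1.1]])

def G(z):
    return np.tanh(z @ W)

MASK = np.array([True, False])  # only the first data coordinate is observed

def recon_loss(z, x_obs):
    # squared error on the observed coordinates only
    r = (G(z) - x_obs)[MASK]
    return float(r @ r)

def gd_reconstruct(x_obs, n_restarts=20, n_steps=500, lr=0.1, eps=1e-5):
    # Multi-restart gradient descent in latent space; the generator is
    # treated as a black box, so gradients come from finite differences.
    best_z, best_loss = None, np.inf
    for _ in range(n_restarts):
        z = rng.standard_normal(2)
        for _ in range(n_steps):
            g = np.zeros(2)
            for i in range(2):
                d = np.zeros(2)
                d[i] = eps
                g[i] = (recon_loss(z + d, x_obs)
                        - recon_loss(z - d, x_obs)) / (2 * eps)
            z -= lr * g
        loss = recon_loss(z, x_obs)
        if loss < best_loss:
            best_z, best_loss = z, loss
    return best_z, best_loss
```

Even with many random restarts, GD returns a single point estimate and can stall in poor basins of the GAN loss surface, which is what motivates the sampling-based cogeneration approach proposed in the paper.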
Abstract: Inferring the most likely configuration for a subset of variables of a joint distribution given the remaining ones, a task we refer to as cogeneration, is an important challenge that is computationally demanding for all but the simplest settings. This task has received a considerable amount of attention, particularly for classical ways of modeling distributions such as structured prediction. In contrast, almost nothing is known about this task when considering recently proposed techniques for modeling high-dimensional distributions, particularly generative adversarial nets (GANs). Therefore, in this paper, we study the challenges that arise for cogeneration with GANs. To address those challenges we develop an annealed importance sampling (AIS) based Hamiltonian Monte Carlo (HMC) cogeneration algorithm. The presented approach significantly outperforms classical gradient based methods on a synthetic dataset and on the CelebA and LSUN datasets. The code is available at https://github.com/AilsaF/cogen_by_ais.
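The general shape of an AIS-based HMC sampler for this setting can be sketched in a few dozen lines. The toy numpy version below is an illustrative assumption, not the released code: `G`, `W`, `SIGMA`, the observation mask, and the annealing schedule are all invented, a tanh map stands in for a trained generator, and finite differences stand in for backpropagation. The annealed targets interpolate between the latent prior (beta = 0) and the posterior over z given the observed variables (beta = 1), with one HMC transition per temperature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "generator": a fixed nonlinear map from a 2-D latent to 2-D data.
W = np.array([[1.5, -0.7],
              [0.3,  1.1]])

def G(z):
    return np.tanh(z @ W)

SIGMA = 0.1                     # assumed Gaussian observation-noise scale
MASK = np.array([True, False])  # only the first data coordinate is observed

def log_prior(z):
    return -0.5 * float(z @ z)  # standard normal prior on z

def log_like(z, x_obs):
    r = (G(z) - x_obs)[MASK]
    return -0.5 * float(r @ r) / SIGMA**2

def log_density(z, beta, x_obs):
    # annealed target: prior tempered toward the posterior as beta -> 1
    return log_prior(z) + beta * log_like(z, x_obs)

def grad_log_density(z, beta, x_obs, eps=1e-5):
    # finite-difference gradient, treating the generator as a black box
    g = np.zeros_like(z)
    for i in range(z.size):
        d = np.zeros_like(z)
        d[i] = eps
        g[i] = (log_density(z + d, beta, x_obs)
                - log_density(z - d, beta, x_obs)) / (2 * eps)
    return g

def hmc_step(z, beta, x_obs, step=0.05, n_leapfrog=10):
    # one HMC transition targeting the current annealed density
    p = rng.standard_normal(z.size)
    z_new, p_new = z.copy(), p.copy()
    p_new = p_new + 0.5 * step * grad_log_density(z_new, beta, x_obs)
    for _ in range(n_leapfrog - 1):
        z_new = z_new + step * p_new
        p_new = p_new + step * grad_log_density(z_new, beta, x_obs)
    z_new = z_new + step * p_new
    p_new = p_new + 0.5 * step * grad_log_density(z_new, beta, x_obs)
    # Metropolis accept/reject on the extended (z, momentum) state
    cur = log_density(z, beta, x_obs) - 0.5 * float(p @ p)
    new = log_density(z_new, beta, x_obs) - 0.5 * float(p_new @ p_new)
    return z_new if np.log(rng.uniform()) < new - cur else z

def ais_hmc(x_obs, n_anneal=300):
    z = rng.standard_normal(2)  # initial sample from the prior (beta = 0)
    log_w = 0.0
    betas = np.linspace(0.0, 1.0, n_anneal)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        log_w += (b - b_prev) * log_like(z, x_obs)  # AIS importance weight
        z = hmc_step(z, b, x_obs)
    return z, log_w
```

Because the sampler explores the whole annealed family rather than descending a single loss surface, it is far less sensitive to initialization than GD, and the accumulated `log_w` additionally gives an importance weight for the returned sample.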
Figure captions:

Figure 1: Vanilla GAN loss in Z space (top) and gradient descent (GD) reconstruction error for 500, 1.5k, 2.5k and 15k generator training epochs.
Figure 2: AIS based HMC (overview).
Figure 3: Rows correspond to generators trained for the indicated number of epochs (left). Columns: (a) samples generated with a vanilla GAN (black); (b) GD reconstructions from 100 random initializations; (c) reconstruction error bar plot for column (b); (d) reconstructions recovered with Alg. 1; (e) reconstruction error bar plot for column (d).
Figure 4: Samples z in Z space during the AIS procedure: after 100, 2k, 3k, 4k and 6k AIS loops.
Figure 5: Reconstruction errors over the number of progressive GAN training iterations. (a) MSSIM on CelebA; (b) MSE on CelebA; (c) MSSIM on LSUN; (d) MSE on LSUN; (e) MSSIM on CelebA-HQ; (f) MSE on CelebA-HQ.
Figure 6: Reconstructions on 128 × 128 CelebA images for a progressive GAN trained for 10k iterations.
Figure 7: Reconstructions on 256 × 256 LSUN images using a pretrained progressive GAN trained for 10k iterations.
Figure 8: SISR: 128 × 128 to 1024 × 1024 for CelebA-HQ images using a progressive GAN (19k iterations).
Figure 9: (a) Samples generated with a vanilla GAN (black); (b) GD reconstructions from 100 random initializations; (c) reconstruction error bar plot for column (b); (d) reconstructions recovered with Alg. 1; (e) reconstruction error bar plot for column (d).
Figure 10: WGAN-GP loss in Z space and GD reconstruction loss at the 1000th, 5000th, 10000th and 50000th epoch.
Figure 11: Rows correspond to generators trained for the indicated number of epochs (left). Columns: (a) samples generated with a WGAN-GP (black); (b) GD reconstructions from 100 random initializations; (c) reconstruction error bar plot for column (b); (d) reconstructions recovered with the proposed AIS based HMC algorithm; (e) reconstruction error bar plot for column (d).
Figure 12: z state in Z space during the AIS procedure after the 100th, 3700th, 3800th, 3900th and 4000th AIS loop.
Figure 13: Reconstruction errors over the number of progressive GAN training iterations. (a) MSSIM on LSUN test data; (b) MSE on LSUN test data.
Figure 14: Reconstructions on 128 × 128 CelebA images for a progressive GAN trained for 10k iterations. (a) Ground truth and masked (observed) images (top to bottom); (b) result obtained by optimizing the best z picked from 5,000 initializations (top to bottom); (c) result generated by our algorithm.
Figure 15: Reconstructions on 256 × 256 LSUN images for a progressive GAN trained for 10k iterations. (a) Ground truth; (b) masked (observed) images; (c) result obtained when optimizing a single z; (d) result obtained by optimizing the best z picked from 5,000 initializations; (e) result of our algorithm.
Figure 16: Super-resolution from a 128 × 128 to a 1024 × 1024 image for a progressive GAN trained for 19k iterations. (a) Ground truth; (b) result obtained by optimizing a single z; (c) result obtained by optimizing the best z picked from 5,000 initializations; (d) results of our algorithm.



