
[1911.13270v1] Transflow Learning: Repurposing Flow Models Without Retraining
Abstract

It is well known that deep generative models have a rich latent space, and that it is possible to smoothly manipulate their outputs by traversing this latent space. Recently, architectures have emerged that allow for more complex manipulations, such as making an image look as though it were from a different class, or painted in a certain style. These methods typically require large amounts of training in order to learn a single class of manipulations. We present Transflow Learning, a method for transforming a pretrained generative model so that its outputs more closely resemble data that we provide afterwards. In contrast to previous methods, Transflow Learning does not require any training at all, and instead warps the probability distribution from which we sample latent vectors using Bayesian inference. Transflow Learning can be used to solve a wide variety of tasks, such as neural style transfer and few-shot classification.
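The abstract describes warping the latent prior via Bayesian inference rather than retraining. A minimal sketch of this idea, under the assumption that the flow's prior over latents is N(0, I) and the m encoded evidence latents are treated as noisy observations with variance λ (the conjugate Gaussian update; the function name is our own, not from the paper):

```python
import numpy as np

def transflow_posterior(z_evidence, lam):
    """Conjugate Gaussian update over flow latents.

    Prior: N(0, I). Each of the m evidence latents is treated as an
    observation of the posterior mean with isotropic noise variance lam.
    Returns the posterior mean vector and scalar posterior variance.
    """
    z_evidence = np.asarray(z_evidence, dtype=float)
    m = z_evidence.shape[0]
    z_bar = z_evidence.mean(axis=0)
    post_mean = (m * z_bar) / (m + lam)  # shrinks toward 0 as lam grows
    post_var = lam / (m + lam)           # approaches the prior's 1 as lam grows
    return post_mean, post_var
```

This update reproduces the behaviour the paper reports for λ: as λ grows, the posterior mean moves toward the prior mean (0) and the posterior variance grows back toward the prior's, so samples look more like the original training distribution.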
Figure 1: Transflow Learning finds a posterior (top right) in between the prior (bottom left) and the evidence (cross mark). As λ becomes larger, the mean of the posterior moves closer to the mean of the prior and the covariance of the posterior grows, but in both cases the evidence can be sampled with relatively high probability.

Figure 2: Interpolation between two sets of images far outside the training distribution, performed by first projecting onto the manifold of human faces and then interpolating the parameters of the posterior distributions. Note that as the distribution approaches that of Rembrandt's self-portraits, the colours in the image get darker, men are sampled much more frequently, hair is often absent from the samples (Rembrandt often wore a hat which blended in with the background), and the sampled faces are tilted further to the right.

Figure 3: Direct interpolation between two images from the same dataset as in Figure 2. Note that many intermediate images are not faces.

Figure 4: A flow model trained on CelebA, conditioned on relatively natural images. The images of people with red hair (left) are sampled from a posterior using only five images as evidence, showing that our model is very sample-efficient for strict subsets of the CelebA distribution. Greyscale images (right), despite not appearing in the CelebA training set, were also successfully captured by a Transflow Learning posterior.

Figure 5: Even when Transflow Learning is provided with evidence far outside the distribution on which the flow model was originally trained (a), we are able to learn a sensible posterior distribution (b). Evidence so unlikely that it could not have come from a natural image (c), however, pushes the posterior mean too far from the mean of the original distribution, and output samples (d) are no longer meaningful, even for high values of λ.

Figure 6: Varying the λ hyperparameter for a greyscale dataset. Low λ creates images resembling pencil sketches, whereas high λ creates images with very subdued colours.

Figure 7: Varying the λ hyperparameter for different out-of-distribution conditioning datasets. λ that is too low creates samples too close to the sample mean, whereas λ that is too high creates samples too close to those from the original distribution.

Figure 8: Samples from the posterior of a CelebA flow model conditioned on MNIST, for m equal to 1, 5, and 30 respectively. For m = 1, the samples look very similar to the evidence. As m increases, sample quality decreases because the sample means move closer to 0 (and the samples therefore become more "human-like"), but prediction accuracy increases greatly.
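Figure 8 hints at how the posterior enables few-shot classification. One plausible sketch, assuming the same conjugate update as above: fit one posterior per class from its m evidence latents, then assign a test latent to the class whose posterior gives it the highest log-density (all names here are illustrative, not from the paper):

```python
import numpy as np

def fit_class_posteriors(latents_by_class, lam):
    """Fit one Gaussian posterior per class from its evidence latents.

    latents_by_class: {label: (m, d) array of encoded evidence latents}.
    Prior N(0, I); each latent is an observation with noise variance lam.
    """
    posteriors = {}
    for label, z in latents_by_class.items():
        m = z.shape[0]
        mean = m * z.mean(axis=0) / (m + lam)
        var = lam / (m + lam)
        posteriors[label] = (mean, var)
    return posteriors

def classify(z_test, posteriors):
    """Return the label whose posterior assigns z_test the highest log-density."""
    def logpdf(z, mean, var):
        d = z.shape[-1]
        return -0.5 * (np.sum((z - mean) ** 2) / var
                       + d * np.log(2 * np.pi * var))
    return max(posteriors, key=lambda c: logpdf(z_test, *posteriors[c]))
```

The trade-off in Figure 8 appears here directly: a larger m pulls each class mean toward 0 (lower sample quality) while averaging out per-example noise (higher classification accuracy).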

