[1910.06922v1] Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs
We hypothesize, and confirm experimentally, that L∞-norm gradient penalties with the Hinge loss produce better GANs than L2-norm penalties (as measured by common evaluation metrics).
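To make the two penalties concrete, the sketch below compares an L2 and an L∞ gradient-norm penalty for a hypothetical linear critic f(x) = wᵀx, whose input gradient is simply w, so both penalties can be evaluated in closed form. The weight vector and the unit target norm are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical linear critic f(x) = w @ x; its gradient w.r.t. x is just w,
# independent of x, so the gradient-norm penalties have closed forms.
w = np.array([0.6, -1.5, 0.3])
grad = w  # ∇_x f(x) for this linear critic

# L2 gradient penalty (WGAN-GP style): (||∇f||_2 - 1)^2
l2_penalty = (np.linalg.norm(grad, 2) - 1.0) ** 2

# L∞ gradient penalty: (||∇f||_inf - 1)^2, the variant hypothesized to work better
linf_penalty = (np.linalg.norm(grad, np.inf) - 1.0) ** 2

print(l2_penalty, linf_penalty)
```

For a nonlinear critic the gradient ∇ₓf must be obtained by automatic differentiation at (interpolated) sample points; only the norm inside the penalty changes between the two variants.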

Figure 1: Two-dimensional GAN example with different choices of boundaries.

$$\min_{w,b} \|w\|_2^2 \quad \text{s.t.} \quad y(w^\top x - b) \ge 1 \quad \forall (x,y) \in \mathcal{D}$$
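The hard-margin SVM above is a small quadratic program and can be solved directly with a generic constrained optimizer. The sketch below does this with `scipy.optimize.minimize` (SLSQP) on a toy two-dimensional dataset; the data points are illustrative assumptions, not the example used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable 2-D data (hypothetical, not from the paper)
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

# Decision variables theta = [w1, w2, b]; objective is ||w||_2^2
def objective(theta):
    w = theta[:2]
    return w @ w

# One hard-margin constraint per sample: y_i (w^T x_i - b) - 1 >= 0
constraints = [
    {"type": "ineq",
     "fun": lambda th, xi=xi, yi=yi: yi * (th[:2] @ xi - th[2]) - 1.0}
    for xi, yi in zip(X, y)
]

res = minimize(objective, x0=np.zeros(3), method="SLSQP", constraints=constraints)
w, b = res.x[:2], res.x[2]

# All margins should be >= 1 up to solver tolerance
margins = y * (X @ w - b)
print(res.success, margins.min())
```

In practice one would solve the dual (or use a dedicated SVM library) for large datasets; the direct primal formulation is shown only to mirror the equation above term by term.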

Figure 2: ∇f(x^{(1)}) at different values of x^{(1)} for the two-dimensional example, assuming a sigmoid function.

Why do maximum-margin classifiers make good GAN discriminators/critics?