Publication record · 18.cifr/2014.goodfellow.gan
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere.
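The abstract's fixed point can be checked numerically: for a fixed generator, the inner maximization is solved by the discriminator D*(x) = p_data(x) / (p_data(x) + p_g(x)), so when G recovers the data distribution, D* is 1/2 everywhere. A minimal pure-Python sketch on 1D Gaussian densities (the function names and the specific densities here are illustrative, not taken from the record):

```python
import math

def gaussian_pdf(x, mu, sigma):
    # Density of a 1D normal distribution N(mu, sigma^2).
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def optimal_discriminator(x, p_data, p_g):
    # Optimum of the inner maximization for a fixed generator:
    # D*(x) = p_data(x) / (p_data(x) + p_g(x)).
    return p_data(x) / (p_data(x) + p_g(x))

p_data = lambda x: gaussian_pdf(x, 0.0, 1.0)   # "true" data density
p_far  = lambda x: gaussian_pdf(x, 3.0, 1.0)   # a generator far from the data

# When G has recovered the data distribution, D* is 1/2 at every point.
for x in [-2.0, 0.0, 2.0]:
    assert abs(optimal_discriminator(x, p_data, p_data) - 0.5) < 1e-12

# A mismatched generator is confidently rejected near the data mode.
print(optimal_discriminator(0.0, p_data, p_far))
```

The first loop exercises the unique-solution claim (D equal to 1/2 everywhere once the distributions match); the final line shows that a poorly matched generator gives D* a value close to 1 near the data mode, which is what drives G's updates.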
Training instability (mode collapse, non-convergence) remains an open problem flagged by the authors. Extensions to conditional generation and semi-supervised learning are natural next steps. The original framework's reliance on qualitative inspection, with no principled evaluation metric, is a clear limitation that subsequent work (Inception Score, FID) addressed.