
AdversarialGradient

Motivations

This code reproduces some of the experimental results reported in: Improving back-propagation by adding an adversarial gradient. The paper introduces a very simple variant of adversarial training that yields impressive results on MNIST, namely about a 0.80% error rate with a 2 x 400 ReLU MLP.
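
Concretely, the method perturbs each input along the sign of the loss gradient with respect to that input (the fast-gradient-sign direction) and then updates the weights using the loss on the perturbed batch. Below is a minimal NumPy sketch of one such training step; it is not the repo's code, and the toy softmax classifier, epsilon and learning rate are placeholders standing in for the 2 x 400 MLP and the hyperparameters in mnist.py:

```python
# Minimal sketch of an adversarial-gradient training step (hypothetical code,
# not taken from this repo). A one-layer softmax classifier on random data
# stands in for the paper's 2 x 400 ReLU MLP.
import numpy as np

rng = np.random.RandomState(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

n, d, k = 64, 784, 10
x = rng.rand(n, d).astype(np.float32)
y = np.eye(k, dtype=np.float32)[rng.randint(0, k, n)]  # one-hot labels
W = 0.01 * rng.randn(d, k).astype(np.float32)
b = np.zeros(k, dtype=np.float32)

eps, lr = 0.08, 0.1  # made-up values; the real ones live in mnist.py

for step in range(100):
    # 1) Forward/backward pass on the clean inputs, only to get dL/dx
    #    (for softmax cross-entropy, dL/dz = (p - y) / n).
    p = softmax(x @ W + b)
    dz = (p - y) / n
    dx = dz @ W.T

    # 2) Perturb the inputs along the sign of the input gradient.
    x_adv = x + eps * np.sign(dx)

    # 3) Update the weights using the loss on the perturbed inputs only.
    p_adv = softmax(x_adv @ W + b)
    dz_adv = (p_adv - y) / n
    W -= lr * (x_adv.T @ dz_adv)
    b -= lr * dz_adv.sum(axis=0)
```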

Requirements

Python 2, Numpy and Theano should be enough (the dataset below is the Theano-formatted MNIST pickle); see the imports at the top of mnist.py for the exact list.

How-to-run-it

Firstly, download the MNIST dataset:

wget http://deeplearning.net/data/mnist/mnist.pkl.gz
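
This archive is the standard Theano-tutorial pickle of three (images, labels) pairs. mnist.py has its own loading code, but in case it is useful, a minimal stand-alone loader looks like this:

```python
# Hypothetical loader for mnist.pkl.gz (not part of this repo).
import gzip
import pickle

with gzip.open('mnist.pkl.gz', 'rb') as f:
    # The file is a Python 2 pickle, hence encoding='latin1' on Python 3.
    train_set, valid_set, test_set = pickle.load(f, encoding='latin1')

train_x, train_y = train_set  # 50000 x 784 float32 images in [0, 1]
print(train_x.shape, valid_set[0].shape, test_set[0].shape)
# (50000, 784) (10000, 784) (10000, 784)
```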

Then, run the training script (which contains all the relevant hyperparameters):

python mnist.py

Training takes only about 5 minutes on a Titan X GPU. The best validation error rate should be about 0.83%, with an associated test error rate of about 0.93%.
