# AdversarialGradient

## Motivations

This code reproduces some of the experimental results reported in *Improving back-propagation by adding an adversarial gradient*. The paper introduces a very simple variant of adversarial training which yields strong results on MNIST: about a 0.80% error rate with a 2 × 400 ReLU MLP.
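The core idea can be sketched as follows: before each update, perturb the inputs a small step epsilon along the sign of the loss gradient with respect to the inputs, then take an ordinary gradient step on the perturbed inputs. This is a toy NumPy illustration on a logistic regression, not the code from `mnist.py`; the model, `epsilon`, and learning rate are illustrative assumptions.

```python
# Toy sketch of training with an adversarial gradient (NumPy only).
# All names and hyperparameters here are illustrative, not from mnist.py.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_step(w, b, x, y, epsilon=0.08, lr=0.1):
    """One training step on inputs perturbed along sign(dL/dx)."""
    # Forward pass and gradient of the cross-entropy loss w.r.t. the inputs.
    p = sigmoid(x @ w + b)
    dloss_dz = p - y                   # dL/dz for sigmoid + cross-entropy
    dloss_dx = np.outer(dloss_dz, w)   # chain rule back to the inputs
    # Perturb each input in the direction that increases the loss.
    x_adv = x + epsilon * np.sign(dloss_dx)
    # Ordinary gradient step, but computed on the perturbed inputs.
    p_adv = sigmoid(x_adv @ w + b)
    grad_w = x_adv.T @ (p_adv - y) / len(y)
    grad_b = np.mean(p_adv - y)
    return w - lr * grad_w, b - lr * grad_b

# Tiny synthetic problem: the label is the sign of the first feature.
rng = np.random.default_rng(0)
x = rng.normal(size=(64, 10))
y = (x[:, 0] > 0).astype(float)
w, b = np.zeros(10), 0.0
for _ in range(200):
    w, b = adversarial_step(w, b, x, y)
acc = np.mean((sigmoid(x @ w + b) > 0.5) == y)
```

The perturbation acts like a data-dependent regularizer: the network is always trained on the inputs shifted toward the decision boundary, which encourages larger margins.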

## Requirements

## How to run it

First, download the MNIST dataset:

```shell
wget http://deeplearning.net/data/mnist/mnist.pkl.gz
```
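The pickled file follows the Theano-tutorial layout: a tuple of (train, valid, test) splits, each an (inputs, labels) pair. If you want to inspect the data yourself, a minimal loader might look like this (`load_mnist` is a hypothetical helper, not part of `mnist.py`, and the layout described is an assumption about the file format):

```python
import gzip
import pickle

def load_mnist(path="mnist.pkl.gz"):
    """Load the (train, valid, test) splits from the pickled MNIST file.

    Hypothetical helper, not part of mnist.py. Assumes the Theano-tutorial
    format: a pickled tuple of three (inputs, labels) pairs.
    """
    with gzip.open(path, "rb") as f:
        # encoding="latin1" lets Python 3 read this Python 2 era pickle.
        return pickle.load(f, encoding="latin1")
```

For the real file, `load_mnist()` should return training inputs of shape (50000, 784), i.e. flattened 28 × 28 images.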

Then, run the training script (which contains all the relevant hyperparameters):

```shell
python mnist.py
```

Training takes only about 5 minutes on a Titan X GPU. The best validation error rate should be about 0.83%, with an associated test error rate of about 0.93%.