
Problem submission - Learning gradient descent with synthetic objectives #10

Open · wants to merge 23 commits into master
Conversation

@cjratcliff (Author)

No description provided.

@farizrahman4u (Member) left a comment:

Use two new lines after each section; only one is required between paragraphs of the same section.

## Problem description
Current optimization algorithms for neural networks, such as SGD, RMSProp and Adam, are hand-crafted and generally quite simple. This can be partly explained by the high-dimensional, non-convex nature of neural networks' objective functions, to which human intuition, normally limited to three spatial dimensions, is not well suited. A learning algorithm may therefore be able to design a superior optimizer.
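To make the idea concrete, here is a minimal numpy sketch of "learning to optimize": a momentum-style update rule whose step size and momentum coefficient are themselves meta-trained by gradient descent (estimated with finite differences) on the final loss of an unrolled optimization over random quadratics. The quadratic task family, the two-parameter update rule, and all constants are illustrative assumptions, not part of this proposal.

```python
import numpy as np

def make_quadratic(dim, seed):
    """Sample a random convex quadratic f(x) = 0.5 x^T A x and a start point."""
    rng = np.random.default_rng(seed)
    M = rng.normal(size=(dim, dim))
    A = M @ M.T / dim + np.eye(dim)   # symmetric positive definite
    return A, rng.normal(size=dim)

def unrolled_loss(meta, seed, steps=20, dim=10):
    """Run the parameterized update rule for `steps` steps; return the final loss."""
    lr, momentum = meta
    A, x = make_quadratic(dim, seed)
    m = np.zeros(dim)
    for _ in range(steps):
        g = A @ x                  # exact gradient of the quadratic
        m = momentum * m + g       # meta-learned momentum coefficient
        x = x - lr * m             # meta-learned step size
    return 0.5 * x @ A @ x

# Meta-train (lr, momentum) with finite-difference gradients of the unrolled
# loss, averaged over a small batch of random objectives per meta-step.
meta = np.array([0.01, 0.5])
eps, meta_lr = 1e-4, 1e-3          # hypothetical meta-training constants
for step in range(200):
    seeds = range(step * 8, step * 8 + 8)
    g_meta = np.zeros_like(meta)
    for i in range(len(meta)):
        e = np.zeros_like(meta)
        e[i] = eps
        plus = np.mean([unrolled_loss(meta + e, s) for s in seeds])
        minus = np.mean([unrolled_loss(meta - e, s) for s in seeds])
        g_meta[i] = (plus - minus) / (2 * eps)
    # Clipping keeps the update rule inside a stable region for this task family.
    meta = np.clip(meta - meta_lr * g_meta, [1e-3, 0.0], [0.3, 0.95])

print("meta-learned (lr, momentum):", meta)
```

A real learned optimizer would replace the two scalars with a richer parameterization (e.g. a recurrent network over gradient histories) and use backpropagation through the unrolled steps rather than finite differences.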


@farizrahman4u (Member) left a comment:

Unnecessary white space.

## Project status
A formula for generating synthetic objective functions has been created. These functions are differentiable and their dimensionality and degree of non-linearity can be controlled with hyperparameters.
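The formula itself is not reproduced in this thread. Purely as an illustration, here is one way such a generator could look in numpy, with `dim` controlling dimensionality and `depth` (the number of tanh layers) controlling the degree of non-linearity; the function names and construction are assumptions, not the proposal's actual formula.

```python
import numpy as np

def make_synthetic_objective(dim, depth, seed=0):
    """Build a random differentiable objective R^dim -> R.

    `dim` sets the dimensionality; `depth` sets the degree of non-linearity:
    depth=0 gives a convex quadratic, higher depths give increasingly
    non-convex landscapes.
    """
    rng = np.random.default_rng(seed)
    Ws = [rng.normal(size=(dim, dim)) / np.sqrt(dim) for _ in range(depth)]
    bs = [rng.normal(size=dim) for _ in range(depth)]
    w = rng.normal(size=dim)

    def f(x):
        h = x
        for W, b in zip(Ws, bs):
            h = np.tanh(W @ h + b)                    # smooth non-linearity
        return float(w @ h + 0.5 * np.sum(x ** 2))    # quadratic term bounds f below

    def grad(x, eps=1e-5):
        # Central-difference gradient; an autodiff framework would be used in practice.
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = eps
            g[i] = (f(x + e) - f(x - e)) / (2 * eps)
        return g

    return f, grad

f, grad = make_synthetic_objective(dim=50, depth=3, seed=1)
x = np.zeros(50)
print(f(x), np.linalg.norm(grad(x)))
```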


@farizrahman4u (Member) left a comment:

Here too.

@fchollet (Member)

I would like to see more people reviewing this proposal and giving feedback.

@farizrahman4u (Member)

@cjratcliff Do you have links for the work mentioned under project status, i.e. the optimizers trained with supervision and with reinforcement learning?

@cjratcliff (Author)

@farizrahman4u Sure. I've now uploaded the code to GitHub here. Note that the learned optimizers can't yet be used as seamlessly as TensorFlow's built-in ones: gradients have to be calculated explicitly and passed into the optimizer, which outputs the updates.
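For readers unfamiliar with that calling convention, here is a minimal numpy mock of it. The repo's actual API is not shown in this thread, so `LearnedOptimizer`, `get_updates`, and the fixed coefficients are all hypothetical; the sketch only illustrates "caller computes gradients explicitly, optimizer returns the updates".

```python
import numpy as np

class LearnedOptimizer:
    """Hypothetical stand-in for the repo's learned optimizer.

    Unlike TensorFlow's built-in optimizers, it does not own the loss graph:
    the caller computes gradients and passes them in, and the optimizer
    returns the parameter updates.
    """
    def __init__(self):
        self.m = None
        self.lr, self.momentum = 0.1, 0.9   # would come from meta-training

    def get_updates(self, grads):
        if self.m is None:
            self.m = np.zeros_like(grads)
        self.m = self.momentum * self.m + grads
        return -self.lr * self.m            # update to be added to the parameters

# Usage: the caller computes gradients explicitly on a toy quadratic.
A = np.diag(np.linspace(1.0, 5.0, 10))
x = np.random.default_rng(0).normal(size=10)
opt = LearnedOptimizer()
for _ in range(100):
    g = A @ x                     # explicit gradient of 0.5 x^T A x
    x += opt.get_updates(g)       # optimizer outputs the update
print("final loss:", 0.5 * x @ A @ x)
```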

Don't expect much from the RL version. It's far enough from complete that it performs no better than choosing random updates.
