Add Conditional Gradient Optimizer #468

Closed
pkan2 opened this issue Aug 31, 2019 · 0 comments
pkan2 commented Aug 31, 2019

System information

  • TensorFlow version (you are using): 2.0.0-beta1
  • TensorFlow Addons version:
  • Is it in the tf.contrib (if so, where): No
  • Are you willing to contribute it (yes/no): Yes
  • Are you willing to maintain it going forward? (yes/no): Yes

Describe the feature and the current behavior/state.
The implementation of this optimizer is based on the following paper:
https://arxiv.org/pdf/1803.06453.pdf
The current implementation enforces a Frobenius-norm constraint on the model's variables.
The variable update rule implemented is:
variable -= (1 - learning_rate) * (variable + lambda * gradient / frobenius_norm(gradient))
where learning_rate and lambda are hyperparameters supplied when the optimizer is constructed.
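For concreteness, here is a minimal sketch of a single update step in TensorFlow 2.x. The helper name `conditional_gradient_step` and the `epsilon` guard against a zero-norm gradient are assumptions of this sketch, not part of the proposal itself:

```python
import tensorflow as tf

def conditional_gradient_step(variable, gradient, learning_rate, lam, epsilon=1e-7):
    # Frobenius norm of the gradient tensor (tf.norm defaults to the
    # Euclidean/Frobenius norm); epsilon avoids division by zero.
    norm = tf.norm(gradient)
    variable.assign_sub(
        (1.0 - learning_rate) * (variable + lam * gradient / (norm + epsilon))
    )

# Example: one step on a 2x2 weight matrix.
w = tf.Variable(tf.ones((2, 2)))
g = tf.constant([[0.5, -0.5], [1.0, 0.0]])
conditional_gradient_step(w, g, learning_rate=0.99, lam=0.1)
```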

Will this change the current API? How?
It will not change the current API. The optimizer is implemented against the abstract interface of tf.keras.optimizers.Optimizer.
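A rough sketch of how this could be wired into the tf.keras.optimizers.Optimizer interface is below. The class name, hyperparameter names, and the epsilon guard are assumptions of this sketch, not the final design; only dense updates are shown:

```python
import tensorflow as tf

class ConditionalGradient(tf.keras.optimizers.Optimizer):
    """Sketch of the proposed optimizer; dense updates only."""

    def __init__(self, learning_rate=0.99, lam=0.1,
                 name="ConditionalGradient", **kwargs):
        super().__init__(name, **kwargs)
        self._set_hyper("learning_rate", learning_rate)
        self._set_hyper("lam", lam)

    def _resource_apply_dense(self, grad, var):
        lr = tf.cast(self._get_hyper("learning_rate"), var.dtype)
        lam = tf.cast(self._get_hyper("lam"), var.dtype)
        norm = tf.norm(grad) + 1e-7  # epsilon guard is an assumption
        return var.assign_sub((1.0 - lr) * (var + lam * grad / norm))

    def _resource_apply_sparse(self, grad, var, indices):
        raise NotImplementedError("Sparse updates are not sketched here.")

    def get_config(self):
        config = super().get_config()
        config.update({
            "learning_rate": self._serialize_hyperparameter("learning_rate"),
            "lam": self._serialize_hyperparameter("lam"),
        })
        return config
```

Because it subclasses the Keras optimizer interface, it would be usable anywhere a built-in optimizer is, e.g. model.compile(optimizer=ConditionalGradient(learning_rate=0.99, lam=0.1), loss="mse").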

Who will benefit from this feature?
This provides an API for an optimizer that can enforce hard constraints on neural networks, based on the conditional gradient descent algorithm. The community primarily benefiting from this feature would be machine learning researchers and scientists.

Any other info.
Co-contributor: Vishnu Lokhande
