System information
Are you willing to maintain it going forward? (yes/no): Yes
Describe the feature and the current behavior/state.
The implementation of this optimizer is based on the following paper: https://arxiv.org/pdf/1803.06453.pdf
The current implementation enforces a Frobenius-norm constraint on the model's variables.
The variable update rule being implemented is:
variable -= (1 - learning_rate) * (variable + lambda * gradient / frobenius_norm(gradient))
where learning_rate and lambda are parameters supplied when the optimizer is initialized.
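The update rule above can be sketched in NumPy. This is a hypothetical helper for illustration only, not the actual implementation; the function name and the small eps added to the norm for numerical safety are my assumptions:

```python
import numpy as np

def conditional_gradient_update(variable, gradient, learning_rate, lam, eps=1e-7):
    """One step of the update rule described above (illustrative sketch,
    not the actual optimizer code):

        variable -= (1 - learning_rate) * (variable + lam * g / ||g||_F)

    eps guards against division by zero when the gradient vanishes.
    """
    frob_norm = np.linalg.norm(gradient)  # Frobenius norm of the gradient
    step = variable + lam * gradient / (frob_norm + eps)
    return variable - (1.0 - learning_rate) * step

# Example step on a 2x2 weight matrix:
w = np.array([[3.0, 0.0], [0.0, 4.0]])
g = np.array([[1.0, 0.0], [0.0, 0.0]])
w_new = conditional_gradient_update(w, g, learning_rate=0.9, lam=2.0)
```

Note that because the new variable is a convex combination of the old variable and a point of norm lam, repeated updates keep the variable's Frobenius norm bounded, which is how the hard constraint is enforced.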
Will this change the current API? How?
It will not change the current API. The optimizer is implemented against the abstract interface of tf.keras.optimizers.Optimizer.
Who will benefit from this feature?
We provide an API for an optimizer that can enforce hard constraints on neural networks, based on the conditional gradient algorithm. The community primarily benefiting from this feature would be machine learning researchers and scientists.
Any other info.
Co-contributor: Vishnu Lokhande