add lisht kernel #529
Conversation
Thanks, nice PR!
namespace tensorflow {
namespace addons {

using CPUDevice = Eigen::ThreadPoolDevice;
typedef?
Just out of curiosity: is there any difference between these two? I suppose our ops are compiled with the C++11 standard. Quoting cppreference: "There is no difference between a type alias declaration and typedef declaration."
Good question. The difference is that the alias declaration is compatible with templates, whereas the C-style typedef is not. My first thought was consistency with TF core.
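For anyone following along, here is a small standalone illustration of that difference (a generic sketch, not code from this PR):

```cpp
#include <vector>

// For a plain alias such as `using CPUDevice = Eigen::ThreadPoolDevice;`
// the two spellings are interchangeable:
typedef std::vector<float> FloatVec;       // C-style typedef
using FloatVecAlias = std::vector<float>;  // C++11 alias declaration

// Only the alias declaration can be templated directly:
template <typename T>
using Vec = std::vector<T>;  // Vec<int> is std::vector<int>

// A typedef needs a wrapper struct to achieve the same effect:
template <typename T>
struct VecOf {
  typedef std::vector<T> type;  // used as typename VecOf<int>::type
};
```

For a plain, non-template alias like `CPUDevice`, the two forms behave identically, so the choice here is purely about style and consistency.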
#if GOOGLE_CUDA

using GPUDevice = Eigen::GpuDevice;
ditto
## Contribution Guidelines
#### Standard API
In order to conform with the current API standard, all activations must:
* Be a `tf.function`.
* Have the signature `fn(input, axis=-1, name=None)`.
Not needed in this PR, but I'm wondering if we should remove the `name` parameter from the `sparsemax` activation.
I think it would be better to remove it (at least the style would then be consistent within the same submodule). But I'm not sure whether other operations in `image`, `text`, etc. should keep it.
I prefer to keep `name` in other modules :-)
Sure, I will create another PR to clean up the activation functions (`name` arg, duplicated tests, etc.).
Looks good. cc @Squadrick @fsx950223 in case they have any concerns about this change.
REGISTER_OP("Addons>Lisht")
    .Input("features: T")
    .Output("activations: T")
    .Attr("T: {half, float, double}")
Should support bfloat16 too
It would be a good feature if we could support bfloat16. I will investigate it later and make all activations support bfloat16 in follow-up PRs. Thanks for the suggestions!
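For reference, a hedged sketch of what that follow-up could look like on the registration side; the `SetShapeFn` choice and surrounding boilerplate are assumptions here, not taken from this PR, and the CPU/GPU kernels would also need `bfloat16` instantiations:

```cpp
// Hypothetical follow-up sketch: widen the op's type constraint to accept
// bfloat16 as well. Not the PR's actual code.
#include "tensorflow/core/framework/common_shape_fns.h"
#include "tensorflow/core/framework/op.h"

namespace tensorflow {
namespace addons {

REGISTER_OP("Addons>Lisht")
    .Input("features: T")
    .Output("activations: T")
    .Attr("T: {bfloat16, half, float, double}")
    .SetShapeFn(shape_inference::UnchangedShape);  // assumed; shape fn not shown in the diff

}  // namespace addons
}  // namespace tensorflow
```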
.Input("gradients: T") | ||
.Input("features: T") | ||
.Output("backprops: T") | ||
.Attr("T: {half, float, double}") |
ditto
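To make the kernel side concrete, here is a minimal sketch of CPU functors for these two ops, assuming the standard definition LiSHT(x) = x * tanh(x); the struct names and layout are illustrative rather than the PR's actual implementation:

```cpp
// Illustrative sketch only; assumes LiSHT(x) = x * tanh(x).
#include "tensorflow/core/framework/tensor_types.h"

namespace tensorflow {
namespace addons {
namespace functor {

// Forward pass: activations = features * tanh(features).
template <typename Device, typename T>
struct Lisht {
  void operator()(const Device& d, typename TTypes<T>::ConstTensor features,
                  typename TTypes<T>::Tensor activations) {
    activations.device(d) = features * features.tanh();
  }
};

// Backward pass: d/dx [x * tanh(x)] = tanh(x) + x * (1 - tanh(x)^2),
// expanded as tanh(x) + x - x * tanh(x)^2 to stay with elementwise tensor ops.
template <typename Device, typename T>
struct LishtGrad {
  void operator()(const Device& d, typename TTypes<T>::ConstTensor gradients,
                  typename TTypes<T>::ConstTensor features,
                  typename TTypes<T>::Tensor backprops) {
    backprops.device(d) =
        gradients *
        (features.tanh() + features - features * features.tanh().square());
  }
};

}  // namespace functor
}  // namespace addons
}  // namespace tensorflow
```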
"@local_config_tf//:tf_header_lib", | ||
] + if_cuda_is_configured([ | ||
"@local_config_cuda//cuda:cuda_libs", | ||
"@local_config_cuda//cuda:cuda_headers", |
Maybe there should be a Bazel function that wraps the CUDA config.
Thanks, all! I prefer to merge this and address the remaining comments in follow-up issues/PRs.