RFC: Attention for Dense Networks on Keras #54
Conversation
proposal work with some configurations of RNN networks.
Namely, when users create
[tf.keras.layers.LSTM](https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM)
with `return_sequences=True`, the rest works the same way as CNN.
My impression is that tf.keras.layers.LSTM returns the output in shape [sequence_length, batch_size, ...] while CNNs operate in [batch_size, sequence_length, ...].
So maybe mention here that you'll need a transpose to use your attention layer on an LSTM?
Thanks for catching this. But I could not find any documentation or examples on the output shape for LSTM. Can you give me a pointer to verify the shape?
The output shape for LSTM is [batch, timestep, unit] when return_sequences=True. In the case of return_sequences=False, the output shape is [batch, unit].
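For reference, a quick shape check (toy sizes, purely illustrative) confirming that the LSTM output is batch-major, so no transpose should be needed before a batch-major attention layer:

```python
import tensorflow as tf

# Toy sizes chosen only for illustration.
batch_size, timesteps, features, units = 4, 7, 16, 32
x = tf.random.normal([batch_size, timesteps, features])

seq_out = tf.keras.layers.LSTM(units, return_sequences=True)(x)
last_out = tf.keras.layers.LSTM(units, return_sequences=False)(x)

print(seq_out.shape)   # (4, 7, 32)  -> [batch, timestep, unit]
print(last_out.shape)  # (4, 32)     -> [batch, unit]
```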
rfcs/20190115-dense-attention.md (outdated)
[tf.keras.layers.LSTM](https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM)
with `return_sequences=True`, the rest works the same way as CNN.

Unfortunately, this technique does not cover sequence-to-sequence RNN models.
By "this technique" do you mean your proposal?
If so I think your proposal should support masking, as it's very common.
Can you clarify your comment, please? In the call() method, users can pass a mask. Do you think we need to do something additional?
By masking I mean the input-dependent masking which you use when doing self-attention in a decoder (where the attention at position i is allowed to look at all positions j < i).
I think this should compose well with a manual mask but maybe providing an example would be nice.
I see. This is a valid use case. Let me spend some more time to better understand this case and get back to you, thanks.
I thought about this, and I don't think this can be implemented as a manual mask or a composable layer. It needs to be a feature of the layer.
I added a couple of options about how this can be implemented in the "Self-attention" section. Let me know what you think, thanks.
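For illustration, a minimal sketch of the input-dependent (causal) masking discussed above, where position i may only attend to positions j <= i. The band-part trick is standard; how the proposal exposes it (flag, option, or separate mechanism) is exactly what the "Self-attention" section leaves open:

```python
import tensorflow as tf

# A lower-triangular causal mask: position i may attend to positions j <= i.
# Shapes here are illustrative only.
Tq = Tv = 5
causal_mask = tf.linalg.band_part(tf.ones([Tq, Tv]), -1, 0)  # ones on and below the diagonal

# Sketch of how such a mask could be combined with attention scores
# (scores has shape [batch, Tq, Tv]) before the softmax:
scores = tf.random.normal([2, Tq, Tv])
masked_scores = scores + (1.0 - causal_mask) * -1e9
weights = tf.nn.softmax(masked_scores, axis=-1)
```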
We propose to implement the following common attention layers:

* `Attention`: Basic dot-product attention, a.k.a. Luong-style attention.
This section should put the equations for both forms, for clarity.
Thanks. I put pseudocode in the pydoc in the following section. Let me add that here, too (once I figure out how to do this in github, sorry for the delay).
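For reference, a pseudocode sketch of the two score functions being contrasted (dot-product/Luong and additive/Bahdanau). `W_q`, `W_v` (e.g. `Dense` layers) and the vector `v` are assumed learned parameters; none of these names are part of the proposed API:

```python
import tensorflow as tf

# query: [batch, Tq, dim], value (keys): [batch, Tv, dim]
def luong_scores(query, value):
    # Dot-product (Luong-style): scores[b, i, j] = <query[b, i], value[b, j]>
    return tf.matmul(query, value, transpose_b=True)        # [batch, Tq, Tv]

def bahdanau_scores(query, value, v, W_q, W_v):
    # Additive (Bahdanau-style): scores[b, i, j] = v . tanh(W_q q_i + W_v v_j)
    q = tf.expand_dims(W_q(query), 2)                        # [batch, Tq, 1, units]
    k = tf.expand_dims(W_v(value), 1)                        # [batch, 1, Tv, units]
    return tf.reduce_sum(v * tf.tanh(q + k), axis=-1)        # [batch, Tq, Tv]

# In both cases the layer output is a weighted sum of `value`:
#   weights = softmax(scores, axis=-1); output = weights @ value  -> [batch, Tq, dim]
```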
rfcs/20190115-dense-attention.md (outdated)
The output is of shape `[batch_size, Tq, dim]`.

Following the pattern of other Keras layers, we pass the list `[query, value]`
This feels awkward, especially passing a tuple as the mask argument. Why not have query_mask and value_mask arguments separately, for better self-documenting code?
Similarly for query and value; it's nice if users can tell easily which is which.
Francois suggested a list, because the same pattern is used in other places, such as tf.keras.layers.Add. See the code in https://github.com/tensorflow/tensorflow/blob/r1.12/tensorflow/python/keras/layers/merge.py#L205
Let's discuss in the design review.
Sounds good, I am adding it to "Alternatives Considered".
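A usage sketch of the list convention under discussion, mirroring merge layers such as `tf.keras.layers.Add`. The attention call shown in the comment follows the proposal text, not a finalized API, so it is left as a comment:

```python
import tensorflow as tf

query = tf.keras.Input(shape=(None, 64))   # [batch, Tq, dim]
value = tf.keras.Input(shape=(None, 64))   # [batch, Tv, dim]

# Merge layers already take a list of inputs:
added = tf.keras.layers.Add()([query, value])

# The proposed attention layer would follow the same pattern, with per-input
# masks passed as a parallel list (names per the proposal, not a final API):
#   context = Attention()([query, value], mask=[query_mask, value_mask])
```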
We will first work on the implementation for Tensorflow.

### Self-Attention
Most uses of self-attention that I know of are seq2seq and so want masking.
Can you clarify what you mean? I think the mask argument this API supports is good enough for DNN/CNN networks, such as the Transformer. WDYT?
The Self-Attention variant can be implemented by passing the same tensor to both
`query` and `value`.

### Multi-Head Attention
I think this is covered by your proposal; just use multiple Attention layers; each will represent one attention head.
Good point. It is not very pretty, but it works. Let me update the text.
Multi-head attention is usually batched, so I think it requires a special case.
All tensors are assumed to be batched. Namely, input tensors are [batch_size, dim]
etc. Is this what you mean?
I meant multi-head attention is usually a single layer that takes `[batch_size, num_heads, time, dim]`, and not `num_heads` layers that take `[batch_size, time, dim]`, as @alextp proposed.
Well, then the user needs to split the tensor into `num_heads` tensors of shape `[batch_size, time, dim]`. This is one way to do it.
in the model output. The proposed techniques can be implemented as an additional
feature in the Attention API.

* https://arxiv.org/abs/1803.02155 describes how relative position
It'd help to clarify here whether the position-dependent attention implementations compose with this layer or would require new layers.
Good point. For relative position representation, we need to modify the attention layer. Absolute position can be implemented as a separate layer that composes with this layer. Let me update the text.
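To illustrate the composable absolute-position case, a minimal sketch of the usual deterministic (sinusoidal) position encoding, added to the inputs before attention. This is an example of a separate function that composes with the attention layer, not part of the proposal itself:

```python
import numpy as np
import tensorflow as tf

def sinusoidal_position_encoding(length, dim):
    # Deterministic function of absolute position, added to the inputs
    # before the attention layer (i.e. composed with it, not built into it).
    positions = np.arange(length)[:, np.newaxis]                          # [length, 1]
    rates = 1.0 / np.power(10000.0, (2 * (np.arange(dim) // 2)) / float(dim))
    angles = positions * rates[np.newaxis, :]                             # [length, dim]
    angles[:, 0::2] = np.sin(angles[:, 0::2])
    angles[:, 1::2] = np.cos(angles[:, 1::2])
    return tf.constant(angles, dtype=tf.float32)

# Usage, for x of shape [batch, time, dim]:
#   x = x + sinusoidal_position_encoding(time, dim)
```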
rfcs/20190115-dense-attention.md (outdated)
how absolute position information can be added as a deterministic function
of position.

### 2D and 3D
It feels to me like we should be able to support n-d attention (not just 2d or 3d) with ~the same code, so we should do it.
That's true. Let me update the text.
The amount of "does this but does not do that" cautions in the proposal confirms a suspicion I've had for a while: the Keras API is a little too low-level.
@LanceNorskog thanks for the feedback. Do you have any suggestions on how to make it more powerful, starting from this review? Thanks.
rfcs/20190115-dense-attention.md (outdated)
and
[tf.contrib.seq2seq.BahdanauAttention](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BahdanauAttention)
are implementations of dot-product (Luong) and additive (Bahdanau) Attention
respectively for RNN in Tensorflow. This proposal is based on this
By "based", do you mean it will extend classes from tf.seq2seq
? It's unclear to me if this proposal has this recent change in mind: tensorflow/tensorflow@b797012.
Thank you for your comment. I mean that it is inspired by it, and that the implementation is consistent. Let me update the text.
Yes, I am aware of this change. It covers RNN networks, and it moves in the same direction as my proposal, namely implementing attention as a Keras layer. Let me update the text to clarify that.
Notes from the review meeting:
The feedback phase will be open for 2 weeks until 2019-01-30
Attention for Dense Networks on Keras

Summary

This RFC proposes adding a layer for attention in `tf.keras.layers` that works with CNN/DNN networks.

Recently people have had success using the Attention mechanism in dense layers, e.g. CNN+Attention or Transformer networks. `tf.keras.layers` is the recommended way to build models in TensorFlow, but it does not have a layer for attention that works with CNN/DNN networks.

Note that this review focuses on Dense networks, namely CNN/DNN. It does not cover Recurrent Neural Networks (RNN).
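To make the intended usage concrete, a minimal sketch of a CNN+Attention classifier built with the kind of layer this RFC proposes. It assumes the dot-product layer that current TensorFlow releases expose as `tf.keras.layers.Attention`; the architecture and sizes are purely illustrative:

```python
import tensorflow as tf

# Illustrative CNN + self-attention text classifier.
tokens = tf.keras.Input(shape=(None,), dtype=tf.int32)
embedded = tf.keras.layers.Embedding(input_dim=10000, output_dim=128)(tokens)

# CNN features, used as both query and value (self-attention over the sequence).
features = tf.keras.layers.Conv1D(128, kernel_size=3, padding="same")(embedded)
context = tf.keras.layers.Attention()([features, features])   # [batch, T, 128]

pooled = tf.keras.layers.GlobalAveragePooling1D()(context)
logits = tf.keras.layers.Dense(2)(pooled)
model = tf.keras.Model(tokens, logits)
model.summary()
```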