Fix 9918 #9932

Merged (2 commits, Feb 2, 2021)
4 changes: 4 additions & 0 deletions docs/source/main_classes/tokenizer.rst
@@ -56,6 +56,8 @@ PreTrainedTokenizer
:special-members: __call__
:members:

.. automethod:: encode


PreTrainedTokenizerFast
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -64,6 +66,8 @@ PreTrainedTokenizerFast
:special-members: __call__
:members:

.. automethod:: encode


BatchEncoding
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
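For context, a minimal sketch of what the newly documented `encode` method returns compared to `__call__`, assuming the `bert-base-uncased` checkpoint is available:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# `encode` returns only the list of token IDs, with special tokens added.
ids = tokenizer.encode("Hello world")
print(ids)  # e.g. [101, 7592, 2088, 102]

# `__call__` returns a BatchEncoding that also carries attention_mask and token_type_ids.
batch = tokenizer("Hello world")
print(batch["input_ids"], batch["attention_mask"])
```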
37 changes: 23 additions & 14 deletions src/transformers/models/dpr/modeling_dpr.py
@@ -364,28 +364,35 @@ def init_weights(self):

Indices can be obtained using :class:`~transformers.DPRTokenizer`. See
:meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for
details. attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`,
`optional`): Mask to avoid performing attention on padding token indices. Mask values selected in ``[0,
1]``:
details.

`What are input IDs? <../glossary.html#input-ids>`__
attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``:

- 1 for tokens that are **not masked**,
- 0 for tokens that are **masked**.

`What are attention masks? <../glossary.html#attention-mask>`__ token_type_ids (:obj:`torch.LongTensor` of
shape :obj:`(batch_size, sequence_length)`, `optional`): Segment token indices to indicate first and second
portions of the inputs. Indices are selected in ``[0, 1]``:
`What are attention masks? <../glossary.html#attention-mask>`__
token_type_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
Segment token indices to indicate first and second portions of the inputs. Indices are selected in ``[0,
1]``:

- 0 corresponds to a `sentence A` token,
- 1 corresponds to a `sentence B` token.

`What are token type IDs? <../glossary.html#token-type-ids>`_ inputs_embeds (:obj:`torch.FloatTensor` of
shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): Optionally, instead of passing
:obj:`input_ids` you can choose to directly pass an embedded representation. This is useful if you want
more control over how to convert :obj:`input_ids` indices into associated vectors than the model's internal
embedding lookup matrix. output_attentions (:obj:`bool`, `optional`): Whether or not to return the
attentions tensors of all attention layers. See ``attentions`` under returned tensors for more detail.
output_hidden_states (:obj:`bool`, `optional`): Whether or not to return the hidden states of all layers.
See ``hidden_states`` under returned tensors for more detail. return_dict (:obj:`bool`, `optional`):
`What are token type IDs? <../glossary.html#token-type-ids>`_
inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert :obj:`input_ids` indices into associated
vectors than the model's internal embedding lookup matrix.
output_attentions (:obj:`bool`, `optional`):
Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned
tensors for more detail.
output_hidden_states (:obj:`bool`, `optional`):
Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for
more detail.
return_dict (:obj:`bool`, `optional`):
Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple.
"""

@@ -403,6 +410,8 @@ def init_weights(self):

Indices can be obtained using :class:`~transformers.DPRReaderTokenizer`. See this class documentation for
more details.

`What are input IDs? <../glossary.html#input-ids>`__
attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(n_passages, sequence_length)`, `optional`):
Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``:

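To make the `input_ids` / `attention_mask` descriptions above concrete, a hedged sketch (it assumes the `facebook/dpr-ctx_encoder-single-nq-base` checkpoint) of how such inputs are typically built and passed to a DPR encoder:

```python
import torch
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

name = "facebook/dpr-ctx_encoder-single-nq-base"
tokenizer = DPRContextEncoderTokenizer.from_pretrained(name)
model = DPRContextEncoder.from_pretrained(name)

# Padding the shorter passage is what produces the 0/1 attention_mask described above.
inputs = tokenizer(
    ["Jacksonville is a city in Florida.", "A shorter passage."],
    padding=True,
    return_tensors="pt",
)

with torch.no_grad():
    outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
print(outputs.pooler_output.shape)  # (2, 768): one embedding per passage
```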
12 changes: 8 additions & 4 deletions src/transformers/models/dpr/modeling_tf_dpr.py
@@ -486,22 +486,26 @@ def serving(self, inputs):

(a) For sequence pairs (for a pair title+text for example):

``tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]``
::
Collaborator Author: That docstring had been changed to work around the doc-styling failure. This makes it the same as the PyTorch one.

``token_type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1``
tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
token_type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1

(b) For single sequences (for a question for example):

``tokens: [CLS] the dog is hairy . [SEP]``
::

``token_type_ids: 0 0 0 0 0 0 0``
tokens: [CLS] the dog is hairy . [SEP]
token_type_ids: 0 0 0 0 0 0 0

DPR is a model with absolute position embeddings so it's usually advised to pad the inputs on the right
rather than the left.

Indices can be obtained using :class:`~transformers.DPRTokenizer`. See
:meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for
details.

`What are input IDs? <../glossary.html#input-ids>`__
attention_mask (:obj:`Numpy array` or :obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``:

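The pair layout shown in this docstring can be checked directly; an illustrative sketch, assuming a BERT-style tokenizer such as `bert-base-uncased`:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer("Is this Jacksonville?", "No it is not.")

print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# ['[CLS]', 'is', 'this', 'jack', '##son', '##ville', '?', '[SEP]', 'no', 'it', 'is', 'not', '.', '[SEP]']
print(encoded["token_type_ids"])
# [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
```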
2 changes: 2 additions & 0 deletions src/transformers/models/rag/modeling_rag.py
@@ -412,6 +412,8 @@ def from_pretrained_question_encoder_generator(
Indices of input sequence tokens in the vocabulary. :class:`~transformers.RagConfig`, used to initialize
the model, specifies which generator to use, it also specifies a compatible generator tokenizer. Use that
tokenizer class to obtain the indices.

`What are input IDs? <../glossary.html#input-ids>`__
attention_mask (:obj:`torch.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``:

2 changes: 2 additions & 0 deletions src/transformers/models/t5/modeling_t5.py
@@ -1041,6 +1041,8 @@ def forward(
:meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for
detail.

`What are input IDs? <../glossary.html#input-ids>`__

To know more on how to prepare :obj:`input_ids` for pretraining take a look at `T5 Training
<./t5.html#training>`__.
attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
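As a side note, a hedged sketch of preparing `input_ids` along the lines of this docstring, assuming the `t5-small` checkpoint (padding on either side works since T5 uses relative position embeddings):

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
batch = tokenizer(
    ["translate English to German: That is good.",
     "summarize: a much longer input sentence that forces the first one to be padded"],
    padding=True,
    return_tensors="pt",
)
print(batch["input_ids"].shape)    # (2, longest_sequence_in_batch)
print(batch["attention_mask"][0])  # 1 for real tokens, 0 for padding
```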
4 changes: 3 additions & 1 deletion src/transformers/models/t5/modeling_tf_t5.py
@@ -929,14 +929,16 @@ def _shift_right(self, input_ids):

T5_INPUTS_DOCSTRING = r"""
Args:
inputs (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`):
input_ids (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on the right or the left.

Indices can be obtained using :class:`~transformers.BertTokenizer`. See
:func:`transformers.PreTrainedTokenizer.__call__` and :func:`transformers.PreTrainedTokenizer.encode` for
details.

`What are input IDs? <../glossary.html#input-ids>`__

To know more on how to prepare :obj:`inputs` for pretraining take a look at `T5 Training
<./t5.html#training>`__.
decoder_input_ids (:obj:`tf.Tensor` of shape :obj:`(batch_size, target_sequence_length)`, `optional`):
21 changes: 20 additions & 1 deletion utils/style_doc.py
@@ -135,6 +135,14 @@ def init_in_block(self, text):
"""
return SpecialBlock.NOT_SPECIAL

def end_of_special_style(self, line):
"""
Sets back the `in_block` attribute to `NOT_SPECIAL`.

Useful for some docstrings where we may have to go back to `ARG_LIST` instead.
"""
self.in_block = SpecialBlock.NOT_SPECIAL
Comment on lines +138 to +144 (Collaborator Author): Basically, the problem was that when a code block was nested in the argument list, we were returning to a not-special style instead of returning to an arg-list style. This new method, overridden in the subclass DocStyler, ensures we return to the proper type.
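For illustration, a hypothetical docstring of the shape that triggered this: a `::` literal block nested inside an argument description, after which styling should resume in the ARG_LIST state rather than NOT_SPECIAL:

```python
# Hypothetical input for the doc styler, for illustration only.
NESTED_BLOCK_DOCSTRING = '''
Args:
    token_type_ids (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
        Segment token indices. For single sequences::

            tokens:         [CLS] the dog is hairy . [SEP]
            token_type_ids:   0    0   0  0    0   0   0

        After the literal block ends, this line is back at the argument's indent,
        so it must be styled as part of the argument list again.
'''
```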


def style_paragraph(self, paragraph, max_len, no_style=False, min_indent=None):
"""
Style `paragraph` (a list of lines) by making sure no line goes over `max_len`, except if the `no_style` flag
@@ -220,6 +228,7 @@ def style(self, text, max_len=119, min_indent=None):
new_lines = []
paragraph = []
self.current_indent = ""
self.previous_indent = None
# If one of those is True, the paragraph should not be touched (code samples, lists...)
no_style = False
no_style_next = False
@@ -251,7 +260,7 @@ def style(self, text, max_len=119, min_indent=None):
self.current_indent = indent
elif not indent.startswith(self.current_indent):
# If not, we are leaving the block when we unindent.
self.in_block = SpecialBlock.NOT_SPECIAL
self.end_of_special_style(paragraph[0])

if self.is_special_block(paragraph[0]):
# Maybe we are starting a special block.
@@ -326,13 +335,23 @@ def is_comment_or_textual_block(self, line):

def is_special_block(self, line):
if self.is_no_style_block(line):
if self.previous_indent is None and self.in_block == SpecialBlock.ARG_LIST:
self.previous_indent = self.current_indent
self.in_block = SpecialBlock.NO_STYLE
return True
if _re_arg_def.search(line) is not None:
self.in_block = SpecialBlock.ARG_LIST
return True
return False

def end_of_special_style(self, line):
if self.previous_indent is not None and line.startswith(self.previous_indent):
self.in_block = SpecialBlock.ARG_LIST
self.current_indent = self.previous_indent
else:
self.in_block = SpecialBlock.NOT_SPECIAL
self.previous_indent = None

def init_in_block(self, text):
lines = text.split("\n")
while len(lines) > 0 and len(lines[0]) == 0: