
Fixing flaky conversational test + flag it as a pipeline test. #9837

Merged (1 commit), Jan 28, 2021

Conversation

Narsil (Contributor) commented Jan 27, 2021

What does this PR do?

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

@patrickvonplaten

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@@ -52,9 +53,11 @@ def get_pipeline(self):
# Force model output to be L
Contributor:

what does "Force model output to be L" mean?

Narsil (Contributor, Author):

The model is a random one that I generate on the fly.
Doing that lets me make that model's output consistent (the logits are always 0, 0, 0, 1, 0, 0, ...).

I could always do what I did with the tokenizer: move it to a dummy model on the Hub and use AutoModel instead. That would be more in line with previous testing.

I used this approach here because it makes the logic explicit in the test, so the test is easier to understand from a reader's perspective (is the "L" correct? Can I change it? etc.). I've seen quite a few tests in pipelines that were not detecting bugs because of hidden test logic.

I'll happily switch to AutoModel if you want.
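The determinism argument above can be sketched in isolation. This is a hypothetical, minimal illustration (not the PR's actual test code, and the vocabulary below is invented): if a model's logits are fixed to 0 everywhere except a 1 at one position, greedy decoding always picks the same token, so the test output cannot vary between runs.

```python
def fixed_logits(vocab_size: int, forced_id: int) -> list:
    """Logits that are 0.0 everywhere except a 1.0 at forced_id."""
    logits = [0.0] * vocab_size
    logits[forced_id] = 1.0
    return logits


def greedy_pick(logits: list) -> int:
    """One greedy decoding step: return the argmax token id."""
    return max(range(len(logits)), key=lambda i: logits[i])


# Hypothetical vocabulary where token id 3 happens to be "L".
vocab = ["<pad>", "<s>", "</s>", "L", "M"]
logits = fixed_logits(len(vocab), forced_id=3)

# Every call yields the same token, regardless of seeds or hardware.
assert all(greedy_pick(logits) == 3 for _ in range(100))
print(vocab[greedy_pick(logits)])  # prints "L"
```

This is why the randomly initialized on-the-fly model still produces a stable `generated_responses=["L"]` in the test: the flakiness came from random weights, and pinning the logits removes that source of randomness.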
@LysandreJik

Member:

I don't understand the L comment either

Member:

Ah, it comes from the expected result:

                Conversation(
                    None, past_user_inputs=["What's the last book you have read?"], generated_responses=["L"]
                ),

Narsil (Contributor, Author):

Yes

LysandreJik (Member) left a comment:

Works for me like that.

@Narsil Narsil merged commit b936582 into huggingface:master Jan 28, 2021
Qbiwan pushed a commit to Qbiwan/transformers that referenced this pull request Jan 31, 2021