
Fix QA argument handler #8765
Merged
LysandreJik merged 2 commits into master from fix-qa-pipeline on Nov 25, 2020
Conversation

LysandreJik
Member

The QA argument handler no longer handles multiple sequences at a time. This case was not covered by the tests, so I added a test for it.

Fix #8759
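
For context, a regression test covering batched question/context inputs could look roughly like the sketch below. This is a hypothetical illustration, not the exact test added by this PR; the test name, the example strings, and the assertions are assumptions.

from transformers import pipeline


def test_qa_pipeline_handles_multiple_questions_and_contexts():
    # Hypothetical sketch; the actual test added by this PR may differ.
    nlp = pipeline("question-answering")
    context = "Extractive Question Answering is the task of extracting an answer from a text given a question."
    outputs = nlp(
        question=["What is extractive question answering?", "What is extracted from the text?"],
        context=[context, context],
    )
    # Batched input should yield one answer dict per question.
    assert isinstance(outputs, list)
    assert len(outputs) == 2
    for output in outputs:
        assert {"score", "start", "end", "answer"} <= output.keys()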

To reproduce the issue, run the following on master:

from transformers import pipeline

nlp = pipeline("question-answering")

context = r"""
Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
a model on a SQuAD task, you may leverage the `run_squad.py` script.
"""

print(
    nlp(
        question=["What is extractive question answering?", "What is a good example of a question answering dataset?"],
        context=[context, context],
    )
)
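
On master the call above does not work as expected; once the argument handler accepts batched inputs again, it should return one answer dict per question, following the question-answering pipeline's standard output fields (score, start, end, answer). A quick way to inspect the results, reusing the nlp and context objects from the snippet above, could be:

# Each result carries the standard QA pipeline fields; the exact scores,
# spans, and answers depend on the model the pipeline downloads.
results = nlp(
    question=["What is extractive question answering?", "What is a good example of a question answering dataset?"],
    context=[context, context],
)
for result in results:
    print(result["answer"], result["score"], result["start"], result["end"])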

@LysandreJik
Member Author

The failing checks are due to connection errors.

LysandreJik merged commit 138f45c into master on Nov 25, 2020
LysandreJik deleted the fix-qa-pipeline branch on November 25, 2020 at 19:02
LysandreJik added a commit that referenced this pull request Nov 30, 2020
* Fix QA argument handler

* Attempt to get a better fix for QA (#8768)

Co-authored-by: Nicolas Patry <[email protected]>

Successfully merging this pull request may close this issue:
Version 3.5 broke the multi context/questions feature for the QuestionAnsweringPipeline (#8759)