Fix missing param on transformers 4.4 #59

Merged: 9 commits into idiap:dev on Jul 25, 2024

Conversation

gravityrail

This was necessary for me on macOS; I'm not sure if others need it.

@eginhard (Member) left a comment

Thank you for the PR! Just one minor thing for the style check, and then could you also update the minimum version for transformers in

"transformers>=4.41.1",

to 4.42.0? That looks to be the first release where this parameter is available.
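
For reference, the requested change to the dependency entry quoted above would then read (same format as the original line):

"transformers>=4.42.0",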

@eginhard merged commit 20bbb41 into idiap:dev on Jul 25, 2024
49 checks passed
@pseudotensor

I still get this error on the current head of dev:

Traceback (most recent call last):
  File "/home/jon/h2ogpt/src/tts_coqui.py", line 114, in get_voice_streaming
    for i, chunk in enumerate(chunks):
  File "/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 658, in inference_stream
    gpt_generator = self.gpt.get_generator(
  File "/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/TTS/tts/layers/xtts/gpt.py", line 602, in get_generator
    return self.gpt_inference.generate_stream(
  File "/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/TTS/tts/layers/xtts/stream_generator.py", line 179, in generate
    model_kwargs["attention_mask"] = self._prepare_attention_mask_for_generation(
  File "/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/transformers/generation/utils.py", line 498, in _prepare_attention_mask_for_generation
    torch.isin(elements=inputs, test_elements=pad_token_id).any()
TypeError: isin() received an invalid combination of arguments - got (test_elements=int, elements=Tensor, ), but expected one of:
 * (Tensor elements, Tensor test_elements, *, bool assume_unique, bool invert, Tensor out)
 * (Number element, Tensor test_elements, *, bool assume_unique, bool invert, Tensor out)
 * (Tensor elements, Number test_element, *, bool assume_unique, bool invert, Tensor out)
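
For context, the failure reproduces in isolation: the newer transformers helper passes a plain int pad_token_id to torch.isin via the test_elements keyword, but that keyword only matches when both arguments are tensors; the scalar overload's keyword is test_element, as the error above shows. A minimal sketch of the failure and the usual workaround, with made-up inputs:

import torch

inputs = torch.tensor([[5, 6, 0]])  # hypothetical generated token ids
pad_token_id = 0                    # plain Python int, as passed downstream

# Mirrors the failing call in _prepare_attention_mask_for_generation:
# a Tensor `elements` plus an int `test_elements` matches no overload.
# torch.isin(elements=inputs, test_elements=pad_token_id)  # TypeError

# Workaround: wrap the scalar in a tensor so the (Tensor, Tensor)
# overload applies.
has_pad = torch.isin(inputs, torch.tensor(pad_token_id)).any()
print(has_pad)  # tensor(True)

This sketch only illustrates the incompatibility; it is not the actual patch discussed below.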

I had a patch that worked for transformers 4.42.3:

https://github.com/h2oai/h2ogpt/blob/52923ac21a1532983c72b45a8e0785f6689dc770/docs/xtt.patch

But 4.43.1+ broke that further.

@pseudotensor
Copy link

I had to patch transformers itself a bit to work around my issues: h2oai/h2ogpt#1771

It may affect caching behavior and slow things down; I'm not sure.
