
[Feature]: Support token-level timestamps in whisper models #13400

Open

iceychris opened this issue Feb 17, 2025 · 1 comment

iceychris commented Feb 17, 2025

🚀 The feature, motivation and pitch

Dynamic time warping applied to the encoder-decoder cross-attention matrices of whisper models can be used to find a word-level alignment between audio and transcriptions. openai/whisper provides an implementation of this in find_alignment, which returns timestamps (start and end) for each word in the transcription (here text_tokens).

This has various use cases for us, and it would be great to have this capability exposed via vLLM.
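For context, this is what the word-level output looks like with the reference openai/whisper package, which runs find_alignment internally when word_timestamps=True is passed (model name and audio path below are placeholders):

```python
import whisper

# Any openai/whisper checkpoint works here; "base" keeps the example small.
model = whisper.load_model("base")

# word_timestamps=True makes transcribe() run the DTW alignment on the
# cross-attention matrices and attach per-word start/end times.
result = model.transcribe("audio.wav", word_timestamps=True)

for segment in result["segments"]:
    for word in segment["words"]:
        # Each entry carries the word text plus start/end in seconds.
        print(f"{word['start']:6.2f} -> {word['end']:6.2f} {word['word']}")
```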

Alternatives

  • one alternative is to use the reference find_alignment implementation from Python directly, calling it once for each sample in a batch of audio samples (or to implement a variant of find_alignment that handles batched inputs); see the sketch after this list
  • whisper.cpp and the code implemented in this PR are also an option
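A rough sketch of that first alternative, calling the reference implementation once per sample from Python. The find_alignment signature, mel preprocessing, and WordTiming fields below are assumed from whisper/timing.py and whisper/audio.py in a recent openai/whisper release and may differ across versions:

```python
import whisper
from whisper.audio import N_FRAMES, log_mel_spectrogram, pad_or_trim
from whisper.timing import find_alignment
from whisper.tokenizer import get_tokenizer

model = whisper.load_model("base")
tokenizer = get_tokenizer(model.is_multilingual, language="en", task="transcribe")


def align_one(audio_path: str, text: str):
    # Prepare the mel segment the same way the reference transcribe() does
    # (assumes the audio fits in a single 30 s window; longer audio needs chunking).
    mel = log_mel_spectrogram(audio_path, model.dims.n_mels)
    num_frames = min(mel.shape[-1], N_FRAMES)
    mel = pad_or_trim(mel, N_FRAMES).to(model.device)

    # text_tokens may come from whisper itself or from any external transcript.
    text_tokens = tokenizer.encode(text)

    # Assumed to return a list of WordTiming(word, tokens, start, end, probability).
    return find_alignment(model, tokenizer, text_tokens, mel, num_frames)


# One call per sample: this is exactly the per-request client-side work
# (and lack of batching) that this issue would like vLLM to absorb.
for path, transcript in [("a.wav", "hello world"), ("b.wav", "good morning")]:
    for timing in align_one(path, transcript):
        print(path, timing.word, timing.start, timing.end)
```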

Both options are feasible, but:

  • they require the client/user to run custom Python or native code
  • neither is efficient nor fast for a large number of (possibly concurrent) audio inputs/requests

Additional context

This is the PR for initial whisper support in vLLM, but as far as I know there is no support for alignment yet.

Two more comments after looking at the reference implementation of find_alignment:

  • batching the encoder inference should be easy, whereas batching the decoder is probably more complicated (due to flash attention and the bookkeeping of the cross-attention matrices); see the hook sketch after this list
  • text_tokens could be a transcription produced by the whisper model itself, but it doesn't have to be (it can be any other sequence of tokens, possibly from another model or from human-labeled data). As such, it would be great if vLLM also supported user-provided token inputs for this.
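For reference, the cross-attention bookkeeping in the PyTorch reference implementation is done with forward hooks on the decoder's cross-attention modules. Below is a minimal sketch of that pattern; the module and attribute names are taken from openai/whisper's model.py, and the assumption that each cross-attention forward returns an (output, qk) tuple may not hold on the SDPA fast path, which the reference disables during alignment:

```python
import whisper

model = whisper.load_model("base")

# One slot per decoder layer; each hook stores that layer's cross-attention
# QK matrix (pre-softmax attention logits) from the current forward pass.
cross_qks = [None] * model.dims.n_text_layer

hooks = [
    # Assumption: MultiHeadAttention.forward returns (output, qk), so outs[-1]
    # is the QK tensor, roughly (batch, heads, text_tokens, audio_frames).
    block.cross_attn.register_forward_hook(
        lambda _mod, _ins, outs, index=i: cross_qks.__setitem__(index, outs[-1])
    )
    for i, block in enumerate(model.decoder.blocks)
]

# ... run a decoder forward pass over the text tokens here; cross_qks then
# holds the per-layer matrices that the DTW alignment consumes ...

for hook in hooks:
    hook.remove()
```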

cc @mru4913 @NickLucche

NickLucche (Contributor) commented

Just FYI, the transcriptions API has been available since #12909, so you can indeed run and serve Whisper with vLLM, just without alignment.
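For anyone landing here, a minimal sketch of querying Whisper through that OpenAI-compatible endpoint. The server command, model name, port, and file path are assumptions; note the response carries no per-word timestamps, which is the gap this issue is about:

```python
# Assumed server invocation: vllm serve openai/whisper-large-v3
from openai import OpenAI

# Point the standard OpenAI client at the local vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

with open("audio.wav", "rb") as f:
    transcription = client.audio.transcriptions.create(
        model="openai/whisper-large-v3",
        file=f,
    )

# Plain transcription text only -- no word-level start/end times.
print(transcription.text)
```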

The algorithm is very interesting, but I'm afraid it is unlike anything that has been implemented in vLLM so far (please correct me @robertgshaw2-redhat).
In particular, fetching the QK activations from the various backends seems very problematic. I don't expect any of them will allow hooks the way torch does (everything is computed in blocks), as flexibility is traded for performance (cc @mgoin).

You could still recompute them manually, or even reverse them from the attention output, but that would hurt performance. Also, in terms of interfaces, this would be very custom/specific to Whisper.
