🚀 The feature, motivation and pitch

Dynamic time warping applied to the encoder-decoder cross-attention matrices of Whisper models can be used to find a word-level alignment between audio and transcriptions. openai/whisper provides an implementation of this in find_alignment, which returns timestamps (start and end) for each word in the transcription (text_tokens there).
This has various use cases for us, and it would be great to have this capability exposed via vLLM.
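For reference, this is roughly how those word-level timestamps look when produced with the reference implementation's public API (word_timestamps=True routes through find_alignment internally); the model size and audio path below are placeholders:

```python
import whisper

# Placeholder input file; any short clip works. word_timestamps=True makes
# transcribe() run the DTW-based alignment under the hood.
model = whisper.load_model("small")
result = model.transcribe("audio.wav", word_timestamps=True)

for segment in result["segments"]:
    for word in segment["words"]:
        # Each word comes with start/end times in seconds.
        print(f"{word['start']:6.2f} -> {word['end']:6.2f} {word['word']}")
```

Getting the same (word, start, end) triples out of vLLM is essentially what this request is about.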
Alternatives
One alternative here is to use the reference implementation find_alignment from Python directly, calling it once for each sample in a batch of audio samples, or to implement a variant of find_alignment capable of handling batch inputs; a rough sketch of the per-sample option follows below.
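As a sketch of the per-sample option (this leans on openai/whisper internals — whisper.timing.find_alignment, whisper.tokenizer.get_tokenizer — which are not a stable public API, and assumes clips of at most 30 seconds):

```python
import whisper
from whisper.audio import N_FRAMES, log_mel_spectrogram, pad_or_trim
from whisper.timing import find_alignment
from whisper.tokenizer import get_tokenizer

model = whisper.load_model("small")
tokenizer = get_tokenizer(model.is_multilingual, language="en", task="transcribe")
dtype = next(model.parameters()).dtype

def align_batch(audio_paths, texts):
    """Align each (audio, text) pair independently; nothing is batched."""
    results = []
    for path, text in zip(audio_paths, texts):
        mel = log_mel_spectrogram(whisper.load_audio(path))
        num_frames = mel.shape[-1]  # frames of real audio, before padding
        mel = pad_or_trim(mel, N_FRAMES).to(model.device).to(dtype)
        text_tokens = tokenizer.encode(" " + text.strip())
        # Returns a list of WordTiming(word, tokens, start, end, probability)
        results.append(find_alignment(model, tokenizer, text_tokens, mel, num_frames))
    return results
```

This works, but every clip pays the full encoder and decoder cost sequentially, which is exactly what exposing the capability in vLLM would avoid.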
Both options are feasible, but:

Additional context

This is the PR for initial Whisper support in vLLM, but afaik there is no support for alignment yet.

Two more comments looking at the reference implementation of find_alignment:

- Batching the encoder inference should be easy, whereas decoder batching is probably more complicated (due to flash attention and the bookkeeping of the cross-attention matrices).
- text_tokens could be a transcription produced by the Whisper model itself, but it doesn't have to be (it can be any other sequence of tokens, possibly from another model or from human-labeled data). As such, it would be great if vLLM also supported user-provided token inputs for this.

cc @mru4913 @NickLucche
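To make the data dependency concrete, here is a minimal, self-contained sketch of the DTW step itself (plain NumPy, not whisper or vLLM code): given a token-by-frame cost matrix derived from the median-filtered, head-averaged cross-attention weights, it finds the monotonic alignment path from which per-word start/end frames are read off.

```python
import numpy as np

def dtw_path(cost: np.ndarray):
    """Monotonic minimum-cost path through a (tokens x frames) cost matrix.

    Same dynamic program as Whisper's word-timestamp code: allowed moves are
    diagonal, down (next token) and right (next frame).
    """
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    trace = np.zeros((n + 1, m + 1), dtype=np.int8)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            candidates = (acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1])
            best = int(np.argmin(candidates))
            acc[i, j] = cost[i - 1, j - 1] + candidates[best]
            trace[i, j] = best
    # Backtrack from the bottom-right corner.
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        if trace[i, j] == 0:
            i, j = i - 1, j - 1
        elif trace[i, j] == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Toy example: pretend `attn` is an averaged, median-filtered cross-attention
# matrix of shape (text tokens x audio frames).
rng = np.random.default_rng(0)
attn = rng.random((5, 20))
path = dtw_path(-attn)  # higher attention => lower cost
# The frame where each token first appears on the path gives its start time
# (for Whisper, one encoder frame corresponds to roughly 20 ms of audio).
```

The expensive part is not this dynamic program, which is cheap, but obtaining the cross-attention matrices it consumes from the decoder.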
Just FYI, the transcriptions API is available since #12909, so you can indeed run and serve Whisper with vLLM, just without alignment.
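For anyone landing here, a minimal client-side example against that endpoint (model name, port, and file path are placeholders; exact serve flags may differ across vLLM versions):

```python
# Assumes a running server, e.g.:  vllm serve openai/whisper-large-v3
# The endpoint is OpenAI-compatible, so the standard openai client works.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

with open("audio.wav", "rb") as f:
    transcription = client.audio.transcriptions.create(
        model="openai/whisper-large-v3",
        file=f,
        language="en",
    )

print(transcription.text)  # full text only; no word-level timestamps yet
```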
The algorithm is very interesting, but I am afraid it is unlike anything that has been implemented in vLLM so far (please correct me, @robertgshaw2-redhat).
In particular, fetching the QK activations from the various attention backends seems very problematic. I don't expect any of them to allow hooks the way you have with plain torch (everything is computed in blocks), as flexibility is traded for performance (cc @mgoin).
You could still recompute them manually, or even reverse them from the attention output, but that would hurt performance. Also, in terms of interfaces, this would be very custom and Whisper-specific.
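For illustration, the hook-based capture being referred to looks roughly like this against the openai/whisper eager-mode model (a condensed paraphrase of the pattern in whisper's timing code, not vLLM code). It relies on the decoder's cross-attention module returning its full QK matrix, which block-wise kernels such as flash attention never materialize:

```python
import torch

def collect_cross_attention(model, mel, tokens):
    """Grab one (batch x heads x text_tokens x audio_frames) QK tensor per decoder layer.

    NOTE: assumes the eager attention path; the reference timing code
    disables SDPA so that the raw qk matrix is actually returned.
    """
    qks = [None] * len(model.decoder.blocks)
    hooks = [
        block.cross_attn.register_forward_hook(
            # outs is (attn_output, qk); keep the raw qk matrix for layer i
            lambda _mod, _ins, outs, i=i: qks.__setitem__(i, outs[-1])
        )
        for i, block in enumerate(model.decoder.blocks)
    ]
    try:
        with torch.no_grad():
            model(mel.unsqueeze(0), tokens.unsqueeze(0))  # one forward pass
    finally:
        for hook in hooks:
            hook.remove()
    return qks
```

vLLM's attention backends have no equivalent place to hang such a hook, which is the crux of the problem described above.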