Distil-Whisper

Models by this creator


distil-large-v2

distil-whisper

Total Score

490

The distil-large-v2 model is a distilled version of the Whisper large-v2 model. It is 6 times faster, 49% smaller, and performs within 1% WER on out-of-distribution evaluation sets compared to the larger Whisper model, making it a more efficient alternative for speech recognition tasks. The Distil-Whisper repository provides the training code used to create this model.

Model inputs and outputs

The distil-large-v2 model is a speech recognition model that takes audio as input and outputs text transcriptions. It can handle audio of up to 30 seconds in length in a single pass, and can be used for both short-form and long-form transcription.

Inputs
- Audio data (e.g. wav, mp3, etc.)

Outputs
- Text transcription of the input audio
- Optional: timestamps for the transcribed text

Capabilities

The distil-large-v2 model demonstrates strong performance on speech recognition tasks, performing within 1% WER of the larger Whisper large-v2 model. It is particularly adept at handling accents, background noise, and technical language. The model can also be used for zero-shot translation from multiple languages into English.

What can I use it for?

The distil-large-v2 model is well-suited for applications that require efficient and accurate speech recognition, such as automated transcription, accessibility tools, and language learning applications. Its speed and size also make it a practical building block for more complex speech-to-text systems.

Things to try

One interesting aspect of the distil-large-v2 model is its ability to perform long-form transcription through the use of a chunking algorithm. This allows the model to transcribe audio samples of arbitrary length, which is useful for transcribing podcasts, lectures, or other long-form audio content.
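As a minimal sketch of that chunked long-form usage with the Transformers pipeline (the model id is the real checkpoint; the 15-second chunk length and the audio path are assumptions for illustration, based on the optimal context window discussed further down this page):

```python
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Load distil-large-v2 as an ASR pipeline; chunking enables arbitrary-length audio
pipe = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v2",
    torch_dtype=torch_dtype,
    device=device,
    chunk_length_s=15,  # assumed chunk length for this checkpoint
)

result = pipe("audio.mp3")  # placeholder path to a local audio file
print(result["text"])
```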


Updated 5/28/2024


distil-large-v3

distil-whisper

Total Score

136

Distil-Whisper: distil-large-v3

Distil-Whisper was proposed in the paper Robust Knowledge Distillation via Large-Scale Pseudo Labelling. This is the third and final installment of the Distil-Whisper English series. It is the knowledge-distilled version of OpenAI's Whisper large-v3, the latest and most performant Whisper model to date.

Compared to previous Distil-Whisper models, the distillation procedure for distil-large-v3 has been adapted to give superior long-form transcription accuracy with OpenAI's sequential long-form algorithm. The result is a distilled model that performs to within 1% WER of large-v3 on long-form audio using both the sequential and chunked algorithms, and outperforms distil-large-v2 by 4.8% using the sequential algorithm. The model is also faster than previous Distil-Whisper models: 6.3x faster than large-v3, and 1.1x faster than distil-large-v2.

| Model | Params / M | Rel. Latency | Short-Form WER | Sequential Long-Form WER | Chunked Long-Form WER |
|---|---|---|---|---|---|
| large-v3 | 1550 | 1.0 | 8.4 | 10.0 | 11.0 |
| distil-large-v3 | 756 | 6.3 | 9.7 | 10.8 | 10.9 |
| distil-large-v2 | 756 | 5.8 | 10.1 | 15.6 | 11.6 |

Since the sequential algorithm is the de-facto transcription algorithm across the most popular Whisper libraries (Whisper.cpp, Faster-Whisper, OpenAI Whisper), this distilled model is designed to be compatible with these libraries. You can expect significant performance gains by switching from previous Distil-Whisper checkpoints to distil-large-v3 when using these libraries. For convenience, the weights for the most popular libraries are already converted, with instructions for getting started below.

Table of Contents

- Transformers Usage
  - Short-Form Transcription
  - Sequential Long-Form
  - Chunked Long-Form
  - Speculative Decoding
  - Additional Speed and Memory Improvements
- Library Integrations
  - Whisper.cpp
  - Faster-Whisper
  - OpenAI Whisper
  - Transformers.js
  - Candle
- Model Details
- License

Transformers Usage

distil-large-v3 is supported in the Hugging Face Transformers library from version 4.39 onwards. To run the model, first install the latest version of Transformers. For this example, we'll also install Datasets to load a toy audio dataset from the Hugging Face Hub:

```
pip install --upgrade pip
pip install --upgrade transformers accelerate datasets[audio]
```

Short-Form Transcription

The model can be used with the pipeline class to transcribe short-form audio files (< 30 seconds).
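A minimal short-form sketch with the pipeline class, following the same pattern as the long-form snippets below (the dataset sample is assumed to be under 30 seconds):

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "distil-whisper/distil-large-v3"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    torch_dtype=torch_dtype,
    device=device,
)

# Load a short validation sample and transcribe it
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])
```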
Sequential Long-Form

The sequential long-form algorithm returns more accurate transcriptions than the chunked long-form algorithm, and should be used in either of the following scenarios:

- Transcription accuracy is the most important factor, and latency is less of a consideration.
- You are transcribing batches of long audio files, in which case the latency of sequential decoding is comparable to chunked decoding, while being up to 0.5% WER more accurate.

If you are transcribing single long audio files and latency is the most important factor, you should use the chunked algorithm described below. For a detailed explanation of the different algorithms, refer to Section 5 of the Distil-Whisper paper.

The pipeline class can be used to transcribe long audio files with the sequential algorithm as follows:

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "distil-whisper/distil-large-v3"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    torch_dtype=torch_dtype,
    device=device,
)

dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])
```

For more control over the generation parameters, use the model + processor API directly:

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
from datasets import Audio, load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "distil-whisper/distil-large-v3"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
dataset = dataset.cast_column("audio", Audio(processor.feature_extractor.sampling_rate))
sample = dataset[0]["audio"]

inputs = processor(
    sample["array"],
    sampling_rate=sample["sampling_rate"],
    return_tensors="pt",
    truncation=False,
    padding="longest",
    return_attention_mask=True,
)
inputs = inputs.to(device, dtype=torch_dtype)

gen_kwargs = {
    "max_new_tokens": 448,
    "num_beams": 1,
    "condition_on_prev_tokens": False,
    "compression_ratio_threshold": 1.35,  # zlib compression ratio threshold (in token space)
    "temperature": (0.0, 0.2, 0.4, 0.6, 0.8, 1.0),
    "logprob_threshold": -1.0,
    "no_speech_threshold": 0.6,
    "return_timestamps": True,
}

pred_ids = model.generate(**inputs, **gen_kwargs)
pred_text = processor.batch_decode(pred_ids, skip_special_tokens=True, decode_with_timestamps=False)

print(pred_text)
```

Chunked Long-Form

distil-large-v3 remains compatible with the Transformers chunked long-form algorithm. This algorithm should be used when a single large audio file is being transcribed and the fastest possible inference is required. In such circumstances, the chunked algorithm is up to 9x faster than OpenAI's sequential long-form implementation (see Table 7 of the Distil-Whisper paper). To enable chunking, pass the chunk_length_s parameter to the pipeline. For distil-large-v3, a chunk length of 25 seconds is optimal.
To activate batching over long audio files, pass the argument batch_size:

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "distil-whisper/distil-large-v3"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    chunk_length_s=25,
    batch_size=16,
    torch_dtype=torch_dtype,
    device=device,
)

dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])
```

Speculative Decoding

distil-large-v3 is the first Distil-Whisper model that can be used as an assistant to Whisper large-v3 for speculative decoding. Speculative decoding mathematically ensures that exactly the same outputs as Whisper are obtained, while being 2 times faster. This makes it the perfect drop-in replacement for existing Whisper pipelines, since the same outputs are guaranteed.

In the following code snippet, we load the assistant Distil-Whisper model alongside the main Whisper pipeline, then specify it as the assistant model for generation:

```python
from transformers import pipeline, AutoModelForCausalLM, AutoModelForSpeechSeq2Seq, AutoProcessor
import torch
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

assistant_model_id = "distil-whisper/distil-large-v3"

assistant_model = AutoModelForCausalLM.from_pretrained(
    assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
assistant_model.to(device)

model_id = "openai/whisper-large-v3"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    generate_kwargs={"assistant_model": assistant_model},
    torch_dtype=torch_dtype,
    device=device,
)

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])
```

For more details on speculative decoding, refer to the blog post Speculative Decoding for 2x Faster Whisper Inference.

Additional Speed & Memory Improvements

You can apply additional speed and memory improvements to Distil-Whisper to further reduce inference time and VRAM requirements. These optimisations primarily target the attention kernel, swapping it from an eager implementation to a more efficient flash attention version.

Flash Attention 2

We recommend using Flash-Attention 2 if your GPU allows for it.
To do so, you first need to install Flash Attention:

```
pip install flash-attn --no-build-isolation
```

Then pass attn_implementation="flash_attention_2" to from_pretrained:

```diff
- model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True)
+ model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True, attn_implementation="flash_attention_2")
```

Torch Scaled Dot-Product Attention (SDPA)

If your GPU does not support Flash Attention, we recommend making use of PyTorch scaled dot-product attention (SDPA). This attention implementation is activated by default for PyTorch versions 2.1.1 or greater. To check whether you have a compatible PyTorch version, run the following Python code snippet:

```python
from transformers.utils import is_torch_sdpa_available

print(is_torch_sdpa_available())
```

If the above returns True, you have a valid version of PyTorch installed and SDPA is activated by default. If it returns False, you need to upgrade your PyTorch version according to the official instructions.

Once a valid PyTorch version is installed, SDPA is activated by default. It can also be set explicitly by specifying attn_implementation="sdpa" as follows:

```diff
- model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True)
+ model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True, attn_implementation="sdpa")
```

Torch compile

Coming soon...

4-bit and 8-bit Inference

Coming soon...

Library Integrations

Whisper.cpp

Distil-Whisper can be run with the Whisper.cpp package using the original sequential long-form transcription algorithm. In a provisional benchmark on a Mac M1, distil-large-v3 is over 5x faster than Whisper large-v3, while performing to within 0.8% WER over long-form audio.

Steps for getting started:

1. Clone the Whisper.cpp repository:

```
git clone https://github.com/ggerganov/whisper.cpp.git
cd whisper.cpp
```

2. Install the Hugging Face Hub Python package:

```
pip install --upgrade huggingface_hub
```

And download the GGML weights for distil-large-v3 using the following Python snippet:

```python
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id='distil-whisper/distil-large-v3-ggml', filename='ggml-distil-large-v3.bin', local_dir='./models')
```

Note that if you do not have a Python environment set up, you can also download the weights directly with wget:

```
wget https://huggingface.co/distil-whisper/distil-large-v3-ggml/resolve/main/ggml-distil-large-v3.bin -P ./models
```

3. Run inference using the provided sample audio:

```
make -j && ./main -m models/ggml-distil-large-v3.bin -f samples/jfk.wav
```

Faster-Whisper

Faster-Whisper is a reimplementation of Whisper using CTranslate2, a fast inference engine for Transformer models.

First, install the Faster-Whisper package according to the official instructions.
For this example, we'll also install Datasets to load a toy audio dataset from the Hugging Face Hub:

```
pip install --upgrade pip
pip install --upgrade git+https://github.com/SYSTRAN/faster-whisper datasets[audio]
```

The following code snippet loads the distil-large-v3 model and runs inference on an example file from the LibriSpeech ASR dataset:

```python
import torch
from faster_whisper import WhisperModel
from datasets import load_dataset

# define our torch configuration
device = "cuda:0" if torch.cuda.is_available() else "cpu"
compute_type = "float16" if torch.cuda.is_available() else "float32"

# load model on GPU if available, else cpu
model = WhisperModel("distil-large-v3", device=device, compute_type=compute_type)

# load toy dataset for example
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[1]["audio"]["path"]

segments, info = model.transcribe(sample, beam_size=1)

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```

To transcribe a local audio file, simply pass the path to the audio file as the audio argument to transcribe:

```python
segments, info = model.transcribe("audio.mp3", beam_size=1)
```

OpenAI Whisper

To use the model in the original Whisper format, first ensure you have the openai-whisper package installed. For this example, we'll also install Datasets to load a toy audio dataset from the Hugging Face Hub:

```
pip install --upgrade pip
pip install --upgrade openai-whisper datasets[audio]
```

The following code snippet demonstrates how to transcribe a sample file from the LibriSpeech dataset loaded with Datasets:

```python
from huggingface_hub import hf_hub_download
from datasets import load_dataset
from whisper import load_model, transcribe

model_path = hf_hub_download(repo_id="distil-whisper/distil-large-v3-openai", filename="model.bin")
model = load_model(model_path)

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]["path"]

pred_out = transcribe(model, audio=sample, language="en")
print(pred_out["text"])
```

Note that the model weights will be downloaded and saved to your cache the first time you run the example. Subsequently, you can re-use the same example, and the weights will be loaded directly from your cache without having to download them again.

To transcribe a local audio file, simply pass the path to the audio file as the audio argument to transcribe:

```python
pred_out = transcribe(model, audio="audio.mp3", language="en")
```

The Distil-Whisper model can also be used with the OpenAI Whisper CLI. Refer to the following instructions for details.

Transformers.js

Distil-Whisper can be run completely in your web browser with Transformers.js:

1. Install Transformers.js from NPM:

```
npm i @xenova/transformers
```

2. Import the library and perform inference with the pipeline API:

```js
import { pipeline } from '@xenova/transformers';

const transcriber = await pipeline('automatic-speech-recognition', 'distil-whisper/distil-large-v3');

const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
const output = await transcriber(url);
// { text: " And so, my fellow Americans, ask not what your country can do for you. Ask what you can do for your country." }
```

Check out the online Distil-Whisper Web Demo to try it out yourself. As you'll see, it runs locally in your browser: no server required!

Refer to the Transformers.js docs for further information.
Candle

Through an integration with Hugging Face Candle, Distil-Whisper is available in the Candle Rust library. Benefit from:

- Optimised CPU backend with optional MKL support for Linux x86 and Accelerate for Macs
- Metal support for efficiently running on Macs
- CUDA backend for efficiently running on GPUs, with multi-GPU distribution via NCCL
- WASM support: run Distil-Whisper in a browser

Steps for getting started:

1. Install candle-core as explained here.
2. Clone the candle repository locally:

```
git clone https://github.com/huggingface/candle.git
```

3. Enter the example directory for Whisper:

```
cd candle/candle-examples/examples/whisper
```

4. Run an example:

```
cargo run --example whisper --release --features symphonia -- --model distil-large-v3
```

5. To specify your own audio file, add the --input flag:

```
cargo run --example whisper --release --features symphonia -- --model distil-large-v3 --input audio.wav
```

Tip: for compiling with Apple Metal, specify the metal feature when you run the example:

```
cargo run --example whisper --release --features="symphonia,metal" -- --model distil-large-v3
```

Note that if you encounter the error:

```
error: target whisper in package candle-examples requires the features: symphonia
Consider enabling them by passing, e.g., --features="symphonia"
```

you should clean your cargo installation:

```
cargo clean
```

and subsequently recompile:

```
cargo run --example whisper --release --features symphonia -- --model distil-large-v3
```

Model Details

Distil-Whisper inherits the encoder-decoder architecture from Whisper. The encoder maps a sequence of speech vector inputs to a sequence of hidden-state vectors. The decoder auto-regressively predicts text tokens, conditional on all previous tokens and the encoder hidden-states. Consequently, the encoder is only run forward once, whereas the decoder is run as many times as the number of tokens generated. In practice, this means the decoder accounts for over 90% of total inference time. Thus, to optimise for latency, the focus is on minimising the inference time of the decoder.

To distill the Whisper model, we reduce the number of decoder layers while keeping the encoder fixed. The encoder is entirely copied from the teacher to the student and frozen during training. The student's decoder consists of a subset of the teacher decoder layers, which are initialised from maximally spaced layers. The model is then trained on a weighted sum of the KL divergence and pseudo-label loss terms.
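Schematically, that objective can be written as follows (the notation is assumed for illustration; the exact loss weights and formulation are given in the Distil-Whisper paper):

```latex
% Schematic distillation objective -- notation assumed, not taken from the model card.
% q_theta : student (Distil-Whisper) distribution over text tokens
% p_T     : teacher (Whisper) distribution over text tokens
% y_hat   : pseudo-labels generated by the teacher
% alpha_KL, alpha_PL : scalar weights as specified in the paper
\mathcal{L}(\theta)
  = \alpha_{\mathrm{KL}} \, D_{\mathrm{KL}}\!\left( p_{T} \,\|\, q_{\theta} \right)
  + \alpha_{\mathrm{PL}} \, \mathcal{L}_{\mathrm{CE}}\!\left( q_{\theta}, \hat{y} \right)
```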
Differences with distil-large-v2

Compared to previous versions of Distil-Whisper, distil-large-v3 is specifically designed to target the OpenAI sequential long-form transcription algorithm. There are no architectural differences compared to distil-large-v2, other than the fact that the model layers are initialised from the latest large-v3 model rather than the older large-v2 one. The differences lie in the way the model was trained.

Previous Distil-Whisper models were trained on a mean input length of 7 seconds, whereas the original Whisper models were pre-trained on 30-second inputs. During distillation, we shift the distribution of the model weights to the distribution of our training data. If our training data contains shorter utterances (e.g. on average 7 seconds of audio instead of 30 seconds), then the predicted distribution shifts to this shorter context length. At inference time, the optimal context window for distil-large-v2 was an interpolation of these two values: 15 seconds. Beyond this time, the predictions for the distil-large-v2 model were largely inaccurate, particularly for the timestamp predictions. However, the sequential long-form algorithm uses 30-second sliding windows for inference, with the window shifted according to the last predicted timestamp. Since the last timestamp typically occurs after the 15-second mark, it was predicted with low accuracy, causing the long-form transcription to often fail.

To preserve Whisper's ability to transcribe sliding 30-second windows, as is done with sequential decoding, we need to ensure the context length of distil-large-v3 is also 30 seconds. This was primarily achieved with four strategies:

1. Packing the audio samples in the training dataset to 30 seconds: since the model is both pre-trained and distilled on audio data packed to 30 seconds, distil-large-v3 now operates on the same ideal context window as Whisper, predicting accurate timestamps up to and including 30 seconds.
2. Freezing the decoder input embeddings: we use the same input embedding representation as the original model, which is designed to handle longer context lengths than previous Distil-Whisper iterations.
3. Using a longer maximum context length during training: instead of training on a maximum target length of 128, we train on a maximum of 256. This helps distil-large-v3 transcribe 30-second segments where the number of tokens possibly exceeds 128.
4. Appending prompt conditioning to 50% of the training samples: this enables the model to be used with the condition_on_prev_tokens argument, and context windows of up to 448 tokens.

There were further tricks employed to improve the performance of distil-large-v3 under the sequential decoding algorithm, which will be explained fully in an upcoming blog post.

Evaluation

The following code snippet demonstrates how to evaluate the Distil-Whisper model on the LibriSpeech validation-clean dataset in streaming mode, meaning no audio data has to be downloaded to your local device.

First, we need to install the required packages, including Datasets to stream and load the audio data, and Evaluate to perform the WER calculation:

```
pip install --upgrade pip
pip install --upgrade transformers datasets[audio] evaluate jiwer
```

Evaluation can then be run end-to-end with the following example:

```python
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
from datasets import load_dataset
from evaluate import load
import torch
from tqdm import tqdm

# define our torch configuration
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "distil-whisper/distil-large-v3"

# load the model + processor
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, use_safetensors=True, low_cpu_mem_usage=True)
model = model.to(device)
processor = AutoProcessor.from_pretrained(model_id)

# load the dataset with streaming mode
dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)

# define the evaluation metric
wer_metric = load("wer")

def inference(batch):
    # 1. Pre-process the audio data to log-mel spectrogram inputs
    audio = [sample["array"] for sample in batch["audio"]]
    input_features = processor(audio, sampling_rate=batch["audio"][0]["sampling_rate"], return_tensors="pt").input_features
    input_features = input_features.to(device, dtype=torch_dtype)

    # 2. Auto-regressively generate the predicted token ids
    pred_ids = model.generate(input_features, max_new_tokens=128)

    # 3. Decode the token ids to the final transcription
    batch["transcription"] = processor.batch_decode(pred_ids, skip_special_tokens=True)
    batch["reference"] = batch["text"]
    return batch

# batch size 16 inference
dataset = dataset.map(function=inference, batched=True, batch_size=16)

all_transcriptions = []
all_references = []

# iterate over the dataset and run inference
for result in tqdm(dataset, desc="Evaluating..."):
    all_transcriptions.append(result["transcription"])
    all_references.append(result["reference"])

# normalize predictions and references
all_transcriptions = [processor.normalize(transcription) for transcription in all_transcriptions]
all_references = [processor.normalize(reference) for reference in all_references]

# compute the WER metric
wer = 100 * wer_metric.compute(predictions=all_transcriptions, references=all_references)
print(wer)
```

Print Output:

```
2.428920763531516
```

Intended Use

Distil-Whisper is intended to be a drop-in replacement for Whisper large-v3 on English speech recognition. In particular, it achieves comparable WER results over out-of-distribution (OOD) test data, while being 6x faster on both short and long-form audio.

Data

Distil-Whisper is trained on 22,000 hours of audio data from nine open-source, permissively licensed speech datasets on the Hugging Face Hub:

| Dataset | Size / h | Speakers | Domain | Licence |
|---|---|---|---|---|
| People's Speech | 12,000 | unknown | Internet Archive | CC-BY-SA-4.0 |
| Common Voice 13 | 3,000 | unknown | Narrated Wikipedia | CC0-1.0 |
| GigaSpeech | 2,500 | unknown | Audiobook, podcast, YouTube | apache-2.0 |
| Fisher | 1,960 | 11,900 | Telephone conversations | LDC |
| LibriSpeech | 960 | 2,480 | Audiobooks | CC-BY-4.0 |
| VoxPopuli | 540 | 1,310 | European Parliament | CC0 |
| TED-LIUM | 450 | 2,030 | TED talks | CC-BY-NC-ND 3.0 |
| SwitchBoard | 260 | 540 | Telephone conversations | LDC |
| AMI | 100 | unknown | Meetings | CC-BY-4.0 |
| Total | 21,770 | 18,260+ | | |

The combined dataset spans 10 distinct domains and over 50k speakers. The diversity of this dataset is crucial to ensuring the distilled model is robust to audio distributions and noise.

The audio data is then pseudo-labelled using the Whisper large-v3 model: we use Whisper to generate predictions for all the audio in our training set and use these as the target labels during training. Using pseudo-labels ensures that the transcriptions are consistently formatted across datasets and provides sequence-level distillation signal during training.

WER Filter

The Whisper pseudo-label predictions are subject to mis-transcriptions and hallucinations. To ensure we only train on accurate pseudo-labels, we employ a simple WER heuristic during training. First, we normalise the Whisper pseudo-labels and the ground truth labels provided by each dataset. We then compute the WER between these labels. If the WER exceeds a specified threshold, we discard the training example. Otherwise, we keep it for training.

Section 9.2 of the Distil-Whisper paper demonstrates the effectiveness of this filter for improving downstream performance of the distilled model. We also partially attribute Distil-Whisper's robustness to hallucinations to this filter.

Training

The model was trained for 80,000 optimisation steps (or 11 epochs) with batch size 256. The Tensorboard training logs can be found under: https://huggingface.co/distil-whisper/distil-large-v3/tensorboard?params=scalars#frame

Results

The distilled model performs to within 1.5% WER of Whisper large-v3 on out-of-distribution (OOD) short-form audio, within 1% WER on sequential long-form decoding, and outperforms large-v3 by 0.1% on chunked long-form.
This performance gain is attributed to lower hallucinations. For a detailed per-dataset breakdown of the evaluation results, refer to Tables 16 and 17 of the Distil-Whisper paper.

Distil-Whisper is also evaluated on the ESB benchmark datasets as part of the OpenASR leaderboard, where it performs to within 0.2% WER of Whisper.

Reproducing Distil-Whisper

Training and evaluation code to reproduce Distil-Whisper is available under the Distil-Whisper repository: https://github.com/huggingface/distil-whisper/tree/main/training

This code will shortly be updated to include the training updates described in the section Differences with distil-large-v2.

License

Distil-Whisper inherits the MIT license from OpenAI's Whisper model.

Citation

If you use this model, please consider citing the Distil-Whisper paper:

```bibtex
@misc{gandhi2023distilwhisper,
  title={Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling},
  author={Sanchit Gandhi and Patrick von Platen and Alexander M. Rush},
  year={2023},
  eprint={2311.00430},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

Acknowledgements

- OpenAI for the Whisper model, in particular Jong Wook Kim for the original codebase and training discussions
- Hugging Face Transformers for the model integration
- Georgi Gerganov for the Whisper.cpp integration
- The Systran team for the Faster-Whisper integration
- Joshua Lochner for the Transformers.js integration
- Laurent Mazare for the Candle integration
- Vaibhav Srivastav for Distil-Whisper distribution
- Google's TPU Research Cloud (TRC) programme for Cloud TPU v4 compute resources
- Raghav Sonavane for an early iteration of Distil-Whisper on the LibriSpeech dataset


Updated 5/28/2024


distil-medium.en

distil-whisper

Total Score

109

The distil-medium.en model is a distilled version of the Whisper medium.en model proposed in the paper Robust Knowledge Distillation via Large-Scale Pseudo Labelling. It is 6 times faster, 49% smaller, and performs within 1% word error rate (WER) on out-of-distribution evaluation sets compared to the original Whisper medium.en model. This makes it an efficient alternative for English speech recognition tasks. The model is part of the Distil-Whisper repository, which contains several distilled variants of the Whisper model. The distil-large-v2 model is another example, which surpasses the performance of the original Whisper large-v2 model.

Model inputs and outputs

Inputs
- Audio data: The model takes audio data as input, in the form of log-Mel spectrograms.

Outputs
- Transcription text: The model outputs transcribed text in the same language as the input audio.

Capabilities

The distil-medium.en model demonstrates strong performance on English speech recognition tasks, achieving a short-form WER of 11.1% and a long-form WER of 12.4% on out-of-distribution evaluation sets. It is significantly more efficient than the original Whisper medium.en model, running 6.8 times faster with 49% fewer parameters.

What can I use it for?

The distil-medium.en model is well-suited for a variety of English speech recognition applications, such as transcribing audio recordings, live captioning, and voice-to-text conversion. Its efficiency makes it a practical choice for real-world deployment, particularly in scenarios where latency and model size are important considerations.

Things to try

You can use the distil-medium.en model with the Hugging Face Transformers library to perform short-form transcription of audio samples, as in the sketch below. The model can also be used for long-form transcription by leveraging the chunking capabilities of the pipeline class, allowing it to handle audio files of arbitrary length. Additionally, the Distil-Whisper repository provides training code that you can use to distill the Whisper model on other languages, expanding the model's capabilities beyond English. If you're interested in distilling Whisper for your language, be sure to check out the training code.
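A minimal sketch of short-form transcription with distil-medium.en, following the standard Transformers pipeline pattern used elsewhere on this page (the audio path is a placeholder):

```python
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Load distil-medium.en as an automatic-speech-recognition pipeline
pipe = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-medium.en",
    torch_dtype=torch_dtype,
    device=device,
)

# Transcribe a short (< 30 s) local audio file; pass chunk_length_s for longer files
result = pipe("audio.mp3")
print(result["text"])
```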


Updated 5/28/2024


distil-small.en

distil-whisper

Total Score

78

The distil-small.en model is a distilled version of the Whisper model proposed in the paper Robust Knowledge Distillation via Large-Scale Pseudo Labelling. It is the smallest Distil-Whisper checkpoint, with just 166M parameters, making it the ideal choice for memory-constrained applications. Compared to the Whisper small.en model, distil-small.en is 6 times faster, 49% smaller, and performs within 1% WER on out-of-distribution evaluation sets. For most other applications, the distil-medium.en or distil-large-v2 checkpoints are recommended, since they are both faster and achieve better WER results.

Model inputs and outputs

The distil-small.en model is an automatic speech recognition (ASR) model that takes audio as input and generates a text transcript as output. It uses an encoder-decoder architecture, where the encoder maps the audio input to a sequence of hidden representations, and the decoder auto-regressively generates the output text.

Inputs
- Audio data in the form of a raw waveform or log-Mel spectrogram

Outputs
- A text transcript of the input audio

Capabilities

The distil-small.en model is capable of transcribing English speech with high accuracy, even on out-of-distribution datasets. It demonstrates robust performance in the presence of accents, background noise, and technical language. The distilled model maintains performance close to the larger Whisper small.en model, while being significantly faster and smaller.

What can I use it for?

The distil-small.en model is well-suited for deployment in memory-constrained environments, such as on-device applications, where the small model size is a key requirement. It can be used to add high-quality speech transcription capabilities to a wide range of applications, from accessibility tools to voice interfaces.

Things to try

One interesting thing to try with the distil-small.en model is to use it as an assistant model for speculative decoding with the larger Whisper models. By combining distil-small.en with Whisper, you can obtain exactly the same outputs as Whisper while being around 2 times faster, making it a drop-in replacement for existing Whisper pipelines.
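For the memory-constrained deployment case described above, here is a minimal loading sketch using the standard Transformers pipeline API (the file path is a placeholder; settings are assumptions for illustration):

```python
import torch
from transformers import pipeline

# distil-small.en is small enough (166M parameters) to run comfortably on CPU,
# but will use the GPU in half precision if one is available
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

pipe = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-small.en",
    torch_dtype=torch_dtype,
    device=device,
)

# Transcribe a short English audio clip (placeholder path)
result = pipe("audio.wav")
print(result["text"])
```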


Updated 5/28/2024