distil-small.en

Maintainer: distil-whisper

Total Score: 78

Last updated 5/28/2024

  • Run this model: Run on HuggingFace
  • API spec: View on HuggingFace
  • Github link: No Github link provided
  • Paper link: No paper link provided


Model overview

The distil-small.en model is a distilled version of the Whisper model, proposed in the paper Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling. It is the smallest Distil-Whisper checkpoint, with just 166M parameters, making it the ideal choice for memory-constrained applications. Compared to the Whisper small.en model, distil-small.en is 6 times faster, 49% smaller, and performs within 1% WER on out-of-distribution evaluation sets. For most other applications, the distil-medium.en or distil-large-v2 checkpoints are recommended, since they are both faster and achieve better WER results.

Model inputs and outputs

The distil-small.en model is an automatic speech recognition (ASR) model that takes audio as input and generates a text transcript as output. It uses an encoder-decoder architecture: the encoder maps the audio input to a sequence of hidden representations, and the decoder autoregressively generates the output text. A minimal usage sketch follows the input and output summary below.

Inputs

  • Audio data in the form of a raw waveform or log-mel spectrogram

Outputs

  • A text transcript of the input audio
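To make this input/output contract concrete, here is a minimal short-form transcription sketch using the Hugging Face Transformers pipeline. The model ID is the checkpoint's Hub identifier; the audio filename sample.wav is a placeholder assumption.

```python
# Minimal sketch: short-form English transcription with distil-small.en.
# Assumes transformers is installed and a local audio file "sample.wav" exists.
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if torch.cuda.is_available() else torch.float32

transcriber = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-small.en",
    torch_dtype=dtype,
    device=device,
)

# The pipeline resamples the audio to 16 kHz and computes the log-mel
# spectrogram before running the encoder-decoder model.
result = transcriber("sample.wav")
print(result["text"])
```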

Capabilities

The distil-small.en model is capable of transcribing English speech with high accuracy, even on out-of-distribution datasets. It demonstrates robust performance in the presence of accents, background noise, and technical language. The distilled model maintains performance close to the larger Whisper small.en model, while being significantly faster and smaller.

What can I use it for?

The distil-small.en model is well-suited for deployment in memory-constrained environments, such as on-device applications, where the small model size is a key requirement. It can be used to add high-quality speech transcription capabilities to a wide range of applications, from accessibility tools to voice interfaces.

Things to try

One interesting thing to try with the distil-small.en model is to use it as an assistant model for speculative decoding with the larger Whisper models. By pairing distil-small.en with Whisper, you obtain exactly the same outputs as Whisper while running about 2 times faster, making the pair a drop-in replacement for existing Whisper pipelines. A sketch of this setup follows.
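A minimal sketch of that setup, assuming Whisper small.en as the main model (the two checkpoints must share a tokenizer) and a placeholder audio file; assistant_model is the standard Transformers hook for assisted generation.

```python
# Sketch: speculative decoding with distil-small.en drafting tokens that
# Whisper small.en then verifies, so outputs match Whisper small.en exactly.
# The model pairing and the file "sample.wav" are illustrative assumptions.
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Main model: the one whose outputs we want to reproduce.
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    "openai/whisper-small.en", torch_dtype=dtype
).to(device)

# Assistant model: proposes draft tokens for the main model to verify.
assistant = AutoModelForSpeechSeq2Seq.from_pretrained(
    "distil-whisper/distil-small.en", torch_dtype=dtype
).to(device)

processor = AutoProcessor.from_pretrained("openai/whisper-small.en")

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    generate_kwargs={"assistant_model": assistant},
    torch_dtype=dtype,
    device=device,
)

print(pipe("sample.wav")["text"])
```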



This summary was produced with help from an AI and may contain inaccuracies; check the links to read the original source documents!

Related Models

🌀

distil-medium.en

distil-whisper

Total Score: 109

The distil-medium.en model is a distilled version of the Whisper medium.en model, proposed in the paper Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling. It is 6.8 times faster, 49% smaller, and performs within 1% word error rate (WER) of the original Whisper medium.en model on out-of-distribution evaluation sets, making it an efficient alternative for English speech recognition. The model is part of the Distil-Whisper repository, which contains several distilled variants of the Whisper model; the distil-large-v2 model is another example, which surpasses the performance of the original Whisper large-v2 model.

Model inputs and outputs

Inputs

  • Audio data: the model takes audio as input, in the form of log-Mel spectrograms

Outputs

  • Transcription text: the model outputs transcribed text in the same language as the input audio

Capabilities

The distil-medium.en model demonstrates strong performance on English speech recognition tasks, achieving a short-form WER of 11.1% and a long-form WER of 12.4% on out-of-distribution evaluation sets. It is significantly more efficient than the original Whisper medium.en model, running 6.8 times faster with 49% fewer parameters.

What can I use it for?

The distil-medium.en model is well-suited for a variety of English speech recognition applications, such as transcribing audio recordings, live captioning, and voice-to-text conversion. Its efficiency makes it a practical choice for real-world deployment, particularly where latency and model size are important considerations.

Things to try

You can use the distil-medium.en model with the Hugging Face Transformers library for short-form transcription of audio samples. It also handles long-form transcription by leveraging the chunking capabilities of the pipeline class, allowing it to process audio files of arbitrary length (see the sketch below). Additionally, the Distil-Whisper repository provides training code that you can use to distil Whisper on other languages, extending the model's capabilities beyond English; if you're interested in distilling Whisper for your language, be sure to check out the training code.
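As a sketch of the chunked long-form path mentioned above: the chunk_length_s and batch_size values follow the Distil-Whisper card's suggestions, while the filename lecture.mp3 is a placeholder assumption.

```python
# Sketch: long-form transcription with distil-medium.en via chunking.
# The pipeline splits the audio into overlapping chunks, transcribes them
# in batches, and stitches the results back together.
import torch
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-medium.en",
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device="cuda:0" if torch.cuda.is_available() else "cpu",
    chunk_length_s=15,  # 15-second chunks work well for Distil-Whisper
    batch_size=8,       # transcribe several chunks in parallel
)

result = pipe("lecture.mp3", return_timestamps=True)
print(result["text"])
```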


📊

distil-large-v2

distil-whisper

Total Score: 490

The distil-large-v2 model is a distilled version of the Whisper large-v2 model. It is 6 times faster, 49% smaller, and performs within 1% WER of the larger Whisper model on out-of-distribution evaluation sets, making it a more efficient alternative for speech recognition tasks. The Distil-Whisper repository provides the training code used to create this model.

Model inputs and outputs

The distil-large-v2 model is a speech recognition model that takes audio as input and outputs text transcriptions. It can process up to 30 seconds of audio in a single pass, and can be used for both short-form and long-form transcription.

Inputs

  • Audio data (e.g. wav or mp3)

Outputs

  • Text transcription of the input audio
  • Optional: timestamps for the transcribed text

Capabilities

The distil-large-v2 model demonstrates strong performance on speech recognition tasks, performing within 1% WER of the larger Whisper large-v2 model. It is particularly adept at handling accents, background noise, and technical language. Note that, like the other current Distil-Whisper checkpoints, it is trained for English speech recognition only.

What can I use it for?

The distil-large-v2 model is well-suited for applications that require efficient and accurate speech recognition, such as automated transcription, accessibility tools, and language learning applications. Its speed and size also make it a practical building block for more complex speech-to-text systems.

Things to try

One interesting aspect of the distil-large-v2 model is its ability to perform long-form transcription through a chunking algorithm, which lets it transcribe audio samples of arbitrary length. This is useful for transcribing podcasts, lectures, or other long-form audio content.


👀

whisper-small

openai

Total Score: 156

The whisper-small model is part of the Whisper family of pre-trained models for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains without the need for fine-tuning. The whisper-small model is a 244M-parameter version of Whisper, available in both English-only and multilingual configurations. Compared to the smaller whisper-tiny model, whisper-small offers improved accuracy at the cost of increased model size and compute.

Model inputs and outputs

The Whisper models take audio samples as input and generate transcriptions or translations as output. The WhisperProcessor is used to preprocess the audio inputs into log-Mel spectrograms that the model can understand, and to postprocess the model outputs back into readable text.

Inputs

  • Audio samples: the model accepts raw audio data, which the WhisperProcessor converts to log-Mel spectrograms

Outputs

  • Transcriptions/translations: the model outputs sequences of text, either transcriptions (in the same language as the input audio) or translations (into English)

Capabilities

The whisper-small model demonstrates strong performance on a variety of speech recognition and translation tasks, particularly for English and a handful of other high-resource languages. It is robust to factors like accents, background noise, and technical vocabulary, and the multilingual checkpoint can perform zero-shot translation from multiple languages into English.

What can I use it for?

The whisper-small model could be useful for developers building speech-to-text applications, especially for English transcription, and for improving accessibility tools that require accurate speech recognition. While the model cannot be used for real-time transcription out of the box, its speed and size suggest that others may be able to build near-real-time speech recognition and translation applications on top of it.

Things to try

One interesting aspect of the Whisper models is their ability to perform both speech recognition and speech translation. By setting the appropriate context tokens, you can control whether the model transcribes the audio in its source language or translates it into English. This allows for a wide range of applications, from captioning videos in multiple languages to building multilingual voice assistants; a sketch follows below.
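To illustrate the context-token switch, here is a sketch using the multilingual whisper-small checkpoint. The language/task arguments to generate require a recent transformers release, and the French clip french_sample.wav is a placeholder assumption.

```python
# Sketch: steering multilingual whisper-small between transcription and
# translation. Assumes librosa is installed and a French audio file exists.
import librosa
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Whisper expects 16 kHz mono audio.
audio, _ = librosa.load("french_sample.wav", sr=16_000)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

# Transcribe in the source language (French in, French out)...
ids = model.generate(inputs.input_features, language="french", task="transcribe")
print(processor.batch_decode(ids, skip_special_tokens=True)[0])

# ...or translate the same audio into English (French in, English out).
ids = model.generate(inputs.input_features, language="french", task="translate")
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```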


🔍

whisper-tiny.en

openai

Total Score: 80

The whisper-tiny.en model is part of the Whisper family of pre-trained models for automatic speech recognition (ASR) and speech translation. Developed by OpenAI, the Whisper models demonstrate a strong ability to generalize to many datasets and domains without the need for fine-tuning. The whisper-tiny.en model is the smallest English-only Whisper checkpoint, with 39M parameters. Compared to the larger whisper-small and whisper-medium models, the tiny model trades some accuracy for much cheaper deployment. All Whisper models use a Transformer-based encoder-decoder architecture trained on 680k hours of labeled speech data.

Model inputs and outputs

Inputs

  • Audio samples in various formats and sampling rates

Outputs

  • Transcribed text in the same language as the input audio
  • Optionally, timestamps for the transcribed text

Capabilities

The whisper-tiny.en model exhibits robust performance on English speech recognition tasks, handling a variety of accents, background noise, and technical language. As an English-only checkpoint, however, it does not support the translation task offered by the multilingual Whisper models.

What can I use it for?

The whisper-tiny.en model can be a useful tool for developers building speech-to-text applications, especially for English-language transcription. While it is not designed for real-time use out of the box, its efficiency makes it well-suited for batch processing or offline transcription. Potential use cases include improving accessibility through automatic captioning, developing voice-based interfaces, and streamlining audio-to-text workflows.

Things to try

One interesting aspect of the Whisper models is their ability to handle long-form audio through a chunking algorithm. By breaking the input audio into 30-second segments, the whisper-tiny.en model can transcribe recordings of arbitrary length, making it suitable for podcasts, lectures, or other long-form content.
