distil-large-v2

Maintainer: distil-whisper

Total Score

490

Last updated 5/28/2024

  • Run this model: Run on HuggingFace
  • API spec: View on HuggingFace
  • Github link: No Github link provided
  • Paper link: No paper link provided

Model overview

The distil-large-v2 model is a distilled version of OpenAI's Whisper large-v2 model. It is 6 times faster, 49% smaller, and performs within 1% word error rate (WER) of Whisper large-v2 on out-of-distribution evaluation sets, making it a more efficient alternative for speech recognition tasks. The Distil-Whisper repository provides the training code used to create this model.

Model inputs and outputs

The distil-large-v2 model is a speech recognition model that takes audio as input and outputs text transcriptions. It processes audio in windows of up to 30 seconds at a time and can be used for both short-form and long-form transcription, with longer audio handled by chunking; a minimal usage sketch follows the input/output summary below.

Inputs

  • Audio data (e.g. wav, mp3, etc.)

Outputs

  • Text transcription of the input audio
  • Optional: Timestamps for the transcribed text
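
To make this input/output contract concrete, here is a minimal sketch using the Hugging Face Transformers pipeline API. The checkpoint id distil-whisper/distil-large-v2 matches the Hub listing linked above; the audio filename is a placeholder.

```python
import torch
from transformers import pipeline

# Load distil-large-v2 as an automatic-speech-recognition pipeline.
# Half precision on GPU keeps memory usage low; fall back to float32 on CPU.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v2",
    torch_dtype=torch.float16 if device != "cpu" else torch.float32,
    device=device,
)

# "sample.wav" is a placeholder path to any short (<= 30 s) audio clip.
result = asr("sample.wav", return_timestamps=True)

print(result["text"])    # plain text transcription
print(result["chunks"])  # optional segment-level timestamps
```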

Capabilities

The distil-large-v2 model demonstrates strong performance on speech recognition tasks, performing within 1% WER of the larger Whisper large-v2 model. It is particularly adept at handling accents, background noise, and technical language. Note that Distil-Whisper checkpoints are currently trained for English speech recognition only; multilingual transcription and zero-shot translation into English remain capabilities of the original Whisper models.

What can I use it for?

The distil-large-v2 model is well-suited for applications that require efficient and accurate speech recognition, such as automated transcription, accessibility tools, and language learning applications. Its speed and size also suggest that it could be used as a building block for more complex speech-to-text systems.

Things to try

One interesting aspect of the distil-large-v2 model is its ability to perform long-form transcription through the use of a chunking algorithm. This allows the model to transcribe audio samples of arbitrary length, which could be useful for transcribing podcasts, lectures, or other long-form audio content.
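
A minimal sketch of this chunked long-form usage with the Transformers pipeline is shown below; the chunk_length_s and batch_size values are illustrative rather than tuned settings, and the audio filename is a placeholder.

```python
from transformers import pipeline

# Chunked long-form transcription: the pipeline splits the audio into
# overlapping windows, transcribes each window, and stitches the text together.
asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v2",
    chunk_length_s=15,  # window length in seconds (illustrative, not a tuned value)
    batch_size=8,       # number of windows transcribed in parallel
)

# "long_audio.mp3" stands in for a podcast, lecture, or other long recording.
result = asr("long_audio.mp3")
print(result["text"])
```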



This summary was produced with help from an AI and may contain inaccuracies - check out the links to read the original source documents!

Related Models

distil-medium.en

distil-whisper

Total Score

109

The distil-medium.en model is a distilled version of the Whisper medium.en model proposed in the paper Robust Knowledge Distillation via Large-Scale Pseudo Labelling. It is 6 times faster, 49% smaller, and performs within 1% word error rate (WER) on out-of-distribution evaluation sets compared to the original Whisper medium.en model. This makes it an efficient alternative for English speech recognition tasks. The model is part of the Distil-Whisper repository, which contains several distilled variants of the Whisper model. The distil-large-v2 model is another example, which surpasses the performance of the original Whisper large-v2 model.

Model inputs and outputs

Inputs

  • Audio data: The model takes audio data as input, in the form of log-Mel spectrograms.

Outputs

  • Transcription text: The model outputs transcribed text in the same language as the input audio.

Capabilities

The distil-medium.en model demonstrates strong performance on English speech recognition tasks, achieving a short-form WER of 11.1% and a long-form WER of 12.4% on out-of-distribution evaluation sets. It is significantly more efficient than the original Whisper medium.en model, running 6.8 times faster with 49% fewer parameters.

What can I use it for?

The distil-medium.en model is well-suited for a variety of English speech recognition applications, such as transcribing audio recordings, live captioning, and voice-to-text conversion. Its efficiency makes it a practical choice for real-world deployment, particularly in scenarios where latency and model size are important considerations.

Things to try

You can use the distil-medium.en model with the Hugging Face Transformers library to perform short-form transcription of audio samples. The model can also be used for long-form transcription by leveraging the chunking capabilities of the pipeline class, allowing it to handle audio files of arbitrary length. Additionally, the Distil-Whisper repository provides training code that you can use to distill the Whisper model on other languages, expanding the model's capabilities beyond English. If you're interested in distilling Whisper for your language, be sure to check out the training code.


distil-small.en

distil-whisper

Total Score

78

The distil-small.en model is a distilled version of the Whisper model proposed in the paper Robust Knowledge Distillation via Large-Scale Pseudo Labelling. It is the smallest Distil-Whisper checkpoint, with just 166M parameters, making it the ideal choice for memory-constrained applications. Compared to the Whisper small.en model, distil-small.en is 6 times faster, 49% smaller, and performs within 1% WER on out-of-distribution evaluation sets. For most other applications, the distil-medium.en or distil-large-v2 checkpoints are recommended, since they are both faster and achieve better WER results.

Model inputs and outputs

The distil-small.en model is an automatic speech recognition (ASR) model that takes audio as input and generates a text transcript as output. It uses an encoder-decoder architecture, where the encoder maps the audio input to a sequence of hidden representations, and the decoder auto-regressively generates the output text.

Inputs

  • Audio data in the form of a raw waveform or log-Mel spectrogram

Outputs

  • A text transcript of the input audio

Capabilities

The distil-small.en model is capable of transcribing English speech with high accuracy, even on out-of-distribution datasets. It demonstrates robust performance in the presence of accents, background noise, and technical language. The distilled model maintains performance close to the larger Whisper small.en model, while being significantly faster and smaller.

What can I use it for?

The distil-small.en model is well-suited for deployment in memory-constrained environments, such as on-device applications, where the small model size is a key requirement. It can be used to add high-quality speech transcription capabilities to a wide range of applications, from accessibility tools to voice interfaces.

Things to try

One interesting thing to try with the distil-small.en model is to use it as an assistant model for speculative decoding with the larger Whisper models. By combining distil-small.en with Whisper, you can obtain the exact same outputs as Whisper while being 2 times faster, making it a drop-in replacement for existing Whisper pipelines.


whisper-large-v2

openai

Total Score

1.6K

The whisper-large-v2 model is a pre-trained Transformer-based encoder-decoder model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labeled data by OpenAI, Whisper models demonstrate a strong ability to generalize to many datasets and domains without the need for fine-tuning. Compared to the original Whisper large model, the whisper-large-v2 model has been trained for 2.5x more epochs with added regularization for improved performance.

Model inputs and outputs

Inputs

  • Audio samples: The model takes audio samples as input and performs either speech recognition or speech translation.

Outputs

  • Text transcription: The model outputs text transcriptions of the input audio. For speech recognition, the transcription is in the same language as the audio. For speech translation, the transcription is in a different language than the audio.
  • Timestamps (optional): The model can optionally output timestamps for the transcribed text.

Capabilities

The whisper-large-v2 model exhibits improved robustness to accents, background noise, and technical language compared to many existing ASR systems. It also demonstrates strong zero-shot translation capabilities, allowing it to translate speech from multiple languages into English with high accuracy.

What can I use it for?

The whisper-large-v2 model can be a useful tool for developers building speech recognition and translation applications. Its strong generalization capabilities suggest it may be particularly valuable for tasks like improving accessibility through real-time captioning, language translation, and other speech-to-text use cases. However, the model's performance can vary across languages, accents, and demographics, so users should carefully evaluate its performance in their specific domain before deployment.

Things to try

One interesting aspect of the whisper-large-v2 model is its ability to perform long-form transcription of audio samples longer than 30 seconds. By using a chunking algorithm, the model can transcribe audio of arbitrary length, making it a useful tool for transcribing podcasts, lectures, and other long-form audio content. Users can also experiment with fine-tuning the model on their own data to further improve its performance for specific use cases.


whisper-large-v3-turbo

ylacombe

Total Score

412

The whisper-large-v3-turbo model is a fine-tuned version of the Whisper large-v3 model, a state-of-the-art automatic speech recognition (ASR) and speech translation model proposed by Alec Radford et al. from OpenAI. Trained on over 5 million hours of labeled data, Whisper demonstrates strong generalization to many datasets and domains without the need for fine-tuning. The whisper-large-v3-turbo model reduces the number of decoder layers from 32 to 4, resulting in a faster model but with a minor quality degradation.

Model inputs and outputs

The whisper-large-v3-turbo model takes audio samples as input and generates transcribed text as output. It can be used both for speech recognition, where the output is in the same language as the input audio, and for speech translation, where the output is in a different language.

Inputs

  • Audio samples: The model accepts raw audio waveforms sampled at 16kHz or 44.1kHz.

Outputs

  • Transcribed text: The model generates text transcriptions of the input audio.
  • Timestamps (optional): The model can also generate timestamps indicating the start and end time of each transcribed segment.

Capabilities

The Whisper models demonstrate strong performance on speech recognition and translation tasks, exhibiting improved robustness to accents, background noise, and technical language compared to many existing ASR systems. The models can also perform zero-shot translation from multiple languages into English.

What can I use it for?

The whisper-large-v3-turbo model can be useful for a variety of applications, such as:

  • Transcription and translation: The model can be used to transcribe audio in various languages and translate it to English or other target languages.
  • Accessibility tools: The model's transcription capabilities can be leveraged to improve accessibility, such as live captioning or subtitling for audio/video content.
  • Voice interaction and assistants: The model's ASR and translation abilities can be integrated into voice-based interfaces and digital assistants.

Things to try

One interesting aspect of the Whisper models is their ability to automatically determine the language of the input audio and perform the appropriate task (recognition or translation) without any additional prompting. You can experiment with this by providing audio samples in different languages and observing how the model handles the task. Additionally, the models support returning word-level timestamps, which can be useful for applications that require precise alignment between the transcribed text and the audio. Try using the return_timestamps="word" parameter to see the word-level timing information.
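
As a small illustration of the word-level timestamp feature mentioned above, the sketch below assumes the checkpoint is published on the Hugging Face Hub as openai/whisper-large-v3-turbo (this listing is maintained under the ylacombe namespace, so the exact id may differ); the audio filename is a placeholder.

```python
from transformers import pipeline

# Word-level timestamps: each transcribed word is returned together with
# its start and end time, useful for aligning text to the audio.
asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3-turbo",  # assumed Hub id; adjust to the card you are using
)

out = asr("speech.wav", return_timestamps="word")
print(out["text"])
for word in out["chunks"]:
    print(word["timestamp"], word["text"])
```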
