whisper-large

Maintainer: openai

Total Score: 438

Last updated 5/28/2024


Property          Value
Run this model    Run on HuggingFace
API spec          View on HuggingFace
Github link       No Github link provided
Paper link        No paper link provided


Model overview

The whisper-large model is a pre-trained AI model for automatic speech recognition (ASR) and speech translation, developed by OpenAI. Trained on 680k hours of labelled data, the Whisper models demonstrate a strong ability to generalize to many datasets and domains without the need for fine-tuning.

The whisper-large-v2 model is a newer version that surpasses the performance of the original whisper-large model, with no architecture changes. The whisper-medium model is a smaller version with 769M parameters (roughly half the size of whisper-large), while the whisper-tiny model is the smallest at 39M parameters. All of these Whisper models are available on the Hugging Face Hub.

Model inputs and outputs

Inputs

  • Audio samples, which the model converts to log-Mel spectrograms

Outputs

  • Textual transcriptions of the input audio, either in the same language as the audio (for speech recognition) or in a different language (for speech translation)
  • The model can also output timestamps for the transcriptions (see the transcription sketch just below)
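
To make these inputs and outputs concrete, here is a minimal transcription sketch using the Hugging Face transformers library; the checkpoint name openai/whisper-large and the small LibriSpeech test sample are assumptions chosen purely for illustration.

    # Minimal sketch: transcribing a short audio clip with whisper-large.
    # Assumes the transformers and datasets packages are installed.
    from transformers import WhisperProcessor, WhisperForConditionalGeneration
    from datasets import load_dataset

    processor = WhisperProcessor.from_pretrained("openai/whisper-large")
    model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large")

    # A tiny public LibriSpeech sample, used here only as example input audio.
    ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
    sample = ds[0]["audio"]

    # The processor converts the raw waveform into log-Mel spectrogram features.
    input_features = processor(
        sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
    ).input_features

    # The model generates token IDs, which are decoded back into text.
    predicted_ids = model.generate(input_features)
    transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
    print(transcription)

The same processor/model pair also handles speech translation; only the decoding context changes, as described under "Things to try" below.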

Capabilities

The Whisper models demonstrate strong performance on a variety of speech recognition and translation tasks, exhibiting improved robustness to accents, background noise, and technical language. They can also perform zero-shot translation from multiple languages into English.

However, the models may occasionally produce text that is not actually spoken in the audio input, a phenomenon known as "hallucination". Their performance also varies across languages, with lower accuracy on low-resource and less common languages.

What can I use it for?

The Whisper models are primarily intended for use by AI researchers studying model robustness, generalization, capabilities, biases, and constraints. However, the models can also be useful for developers looking to build speech recognition or translation applications, especially for English speech.

The models' speed and accuracy make them well-suited for applications that require transcription or translation of large volumes of audio data, such as accessibility tools, media production, and language learning. Developers can build applications on top of the models to enable near-real-time speech recognition and translation.

Things to try

One interesting aspect of the Whisper models is their ability to perform long-form transcription of audio samples longer than 30 seconds. This is achieved through a chunking algorithm that allows the model to process audio of arbitrary length.
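
One way to try this is the automatic-speech-recognition pipeline in transformers, which exposes a chunk_length_s argument; in this sketch the file name long_interview.wav is a hypothetical placeholder.

    # Minimal sketch of long-form transcription via chunking (assumes ffmpeg is
    # available for decoding the audio file).
    from transformers import pipeline

    asr = pipeline(
        "automatic-speech-recognition",
        model="openai/whisper-large",
        chunk_length_s=30,  # process the audio in 30-second windows
    )

    # "long_interview.wav" stands in for any long audio recording.
    result = asr("long_interview.wav", return_timestamps=True)
    print(result["text"])    # full transcription
    print(result["chunks"])  # per-segment text with timestamps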

Another unique feature is the model's ability to automatically detect the language of the input audio and perform the appropriate speech recognition or translation task. Developers can either rely on this automatic detection or override it by supplying "context tokens" that tell the model which task and language to use, as in the sketch below.
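
A minimal sketch of forcing the task and language via those context tokens, here translating French speech to English; the file french_sample.wav is a hypothetical placeholder, and the exact generate arguments may vary slightly across transformers versions.

    # Minimal sketch: override language detection and force translation to English.
    import librosa
    from transformers import WhisperProcessor, WhisperForConditionalGeneration

    processor = WhisperProcessor.from_pretrained("openai/whisper-large")
    model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large")

    # Load any 16 kHz speech waveform; the file name here is illustrative.
    speech, sr = librosa.load("french_sample.wav", sr=16000)
    input_features = processor(speech, sampling_rate=sr, return_tensors="pt").input_features

    # Context tokens: source language French, task "translate" (i.e. output English).
    forced_ids = processor.get_decoder_prompt_ids(language="french", task="translate")

    predicted_ids = model.generate(input_features, forced_decoder_ids=forced_ids)
    print(processor.batch_decode(predicted_ids, skip_special_tokens=True))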

Finally, the pre-trained Whisper models can be fine-tuned on smaller datasets to further improve their performance on specific languages or domains. The Fine-Tune Whisper with Transformers blog post provides a step-by-step guide on how to do this.



This summary was produced with help from an AI and may contain inaccuracies - check out the links to read the original source documents!

Related Models


whisper-large-v2

openai

Total Score: 1.6K

The whisper-large-v2 model is a pre-trained Transformer-based encoder-decoder model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labeled data by OpenAI, Whisper models demonstrate a strong ability to generalize to many datasets and domains without the need for fine-tuning. Compared to the original Whisper large model, the whisper-large-v2 model has been trained for 2.5x more epochs with added regularization for improved performance.

Model inputs and outputs

Inputs

  • Audio samples: The model takes audio samples as input and performs either speech recognition or speech translation.

Outputs

  • Text transcription: The model outputs text transcriptions of the input audio. For speech recognition, the transcription is in the same language as the audio. For speech translation, the transcription is in a different language than the audio.
  • Timestamps (optional): The model can optionally output timestamps for the transcribed text.

Capabilities

The whisper-large-v2 model exhibits improved robustness to accents, background noise, and technical language compared to many existing ASR systems. It also demonstrates strong zero-shot translation capabilities, allowing it to translate speech from multiple languages into English with high accuracy.

What can I use it for?

The whisper-large-v2 model can be a useful tool for developers building speech recognition and translation applications. Its strong generalization capabilities suggest it may be particularly valuable for tasks like improving accessibility through real-time captioning, language translation, and other speech-to-text use cases. However, the model's performance can vary across languages, accents, and demographics, so users should carefully evaluate its performance in their specific domain before deployment.

Things to try

One interesting aspect of the whisper-large-v2 model is its ability to perform long-form transcription of audio samples longer than 30 seconds. By using a chunking algorithm, the model can transcribe audio of arbitrary length, making it a useful tool for transcribing podcasts, lectures, and other long-form audio content. Users can also experiment with fine-tuning the model on their own data to further improve its performance for specific use cases.



whisper-small

openai

Total Score: 156

The whisper-small model is part of the Whisper family of pre-trained models for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains without the need for fine-tuning. The whisper-small model is a 244M parameter version of the Whisper model, available in both English-only and multilingual configurations. Compared to the smaller whisper-tiny model, the whisper-small offers improved performance at the cost of increased model size and complexity.

Model inputs and outputs

The Whisper models are designed to take audio samples as input and generate transcriptions or translations as output. The WhisperProcessor is used to preprocess the audio inputs into log-Mel spectrograms that the model can understand, and to postprocess the model outputs back into readable text.

Inputs

  • Audio samples: The model accepts raw audio data as input, which the WhisperProcessor converts to log-Mel spectrograms.

Outputs

  • Transcriptions/Translations: The model outputs sequences of text, which can represent either transcriptions (in the same language as the input audio) or translations (to a different language).

Capabilities

The whisper-small model demonstrates strong performance on a variety of speech recognition and translation tasks, particularly for English and a handful of other high-resource languages. It is robust to factors like accents, background noise, and technical vocabulary. The model can also perform zero-shot translation from multiple languages into English.

What can I use it for?

The whisper-small model could be useful for developers building speech-to-text applications, especially for English transcription. It may also be applicable for improving accessibility tools that require accurate speech recognition. While the model cannot be used for real-time transcription out of the box, its speed and size suggest that others may be able to build applications on top of it that allow for near-real-time speech recognition and translation.

Things to try

One interesting aspect of the Whisper models is their ability to perform both speech recognition and speech translation. By setting the appropriate "context tokens", you can control whether the model should transcribe the audio in the same language, or translate it to a different language. This allows for a wide range of potential applications, from captioning videos in multiple languages to building multilingual voice assistants.



whisper-medium

openai

Total Score: 176

The whisper-medium model is a pre-trained speech recognition and translation model developed by OpenAI. It is part of the Whisper family of models, which demonstrate a strong ability to generalize to many datasets and domains without the need for fine-tuning. The whisper-medium model has 769 million parameters and is trained on either English-only or multilingual data. It can be used for both speech recognition, where it transcribes audio in the same language, and speech translation, where it transcribes audio to a different language. The Whisper models are available in a range of sizes, from the whisper-tiny with 39 million parameters to the whisper-large and whisper-large-v2 with 1.55 billion parameters.

Model inputs and outputs

Inputs

  • Audio samples in various formats and sampling rates

Outputs

  • Transcriptions of the input audio, either in the same language (speech recognition) or a different language (speech translation)
  • Optionally, timestamps for the transcribed text

Capabilities

The Whisper models demonstrate strong performance on a variety of speech recognition and translation tasks, including handling accents, background noise, and technical language. They can be used in zero-shot translation, taking audio in one language and translating it to English without any fine-tuning. However, the models can also sometimes generate text that is not actually present in the audio input (known as "hallucination"), and their performance can vary across different languages and accents.

What can I use it for?

The whisper-medium model and the other Whisper models can be useful for developers and researchers working on improving accessibility tools, such as closed captioning or subtitle generation. The models' speed and accuracy suggest they could be used to build near-real-time speech recognition and translation applications. However, users should be aware of the models' limitations, particularly around potential biases and disparate performance across languages and accents.

Things to try

One interesting aspect of the Whisper models is their ability to handle audio of arbitrary length through a chunking algorithm. This allows the models to be used for long-form transcription, where the audio is split into smaller segments and then reassembled. Users can experiment with this functionality to see how it performs on their specific use cases. Additionally, the Whisper models can be fine-tuned on smaller, domain-specific datasets to improve their performance in particular areas. The blog post on fine-tuning Whisper provides a step-by-step guide on how to do this.



whisper-tiny

openai

Total Score: 199

The whisper-tiny model is a pre-trained artificial intelligence (AI) model for automatic speech recognition (ASR) and speech translation, created by OpenAI. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalize to many datasets and domains without the need for fine-tuning. The whisper-tiny model is the smallest of the Whisper checkpoints, with only 39 million parameters. It is available in both English-only and multilingual versions. Similar models include the whisper-large-v3, a general-purpose speech recognition model, the whisper model by OpenAI, the incredibly-fast-whisper model, and the whisperspeech-small model, which is an open-source text-to-speech system built by inverting Whisper.

Model inputs and outputs

Inputs

  • Audio data, such as recordings of speech

Outputs

  • Transcribed text in the same language as the input audio (for speech recognition)
  • Transcribed text in a different language than the input audio (for speech translation)

Capabilities

The whisper-tiny model can transcribe speech and translate speech to text in multiple languages, demonstrating strong generalization abilities without the need for fine-tuning. It can be used for a variety of applications, such as transcribing audio recordings, adding captions to videos, and enabling multilingual communication.

What can I use it for?

The whisper-tiny model can be used in various applications that require speech recognition or speech translation, such as:

  • Transcribing lectures, interviews, or other audio recordings
  • Adding captions or subtitles to videos
  • Enabling real-time translation in video conferencing or other communication tools
  • Developing voice-controlled interfaces for various devices and applications

Things to try

You can experiment with the whisper-tiny model by trying it on different types of audio data, such as recordings of speeches, interviews, or conversations in various languages. You can also explore how the model performs on audio with different levels of noise or quality, and compare its results to other speech recognition or translation models.
