whisperspeech

Maintainer: collabora

Total Score: 125

Last updated: 5/28/2024

Run this model: Run on HuggingFace
API spec: View on HuggingFace
Github link: No Github link provided
Paper link: No paper link provided


Model overview

whisperspeech is an open-source text-to-speech system built by inverting the Whisper model. The goal is a speech generation model that is as powerful and easily customizable as Stable Diffusion is for images. The model is trained on properly licensed speech recordings and the code is open source, making it safe to use for commercial applications.

Currently, the models are trained on the English-only LibriLight dataset, but the team plans to target multiple languages in the future by leveraging the multilingual capabilities of Whisper and EnCodec. The model can also seamlessly mix languages within a single sentence, as demonstrated in the progress updates.

Model inputs and outputs

The whisperspeech model takes text as input and generates the corresponding speech audio as output. It inverts Whisper's speech-recognition task, reusing its architecture to produce speech from text; a minimal usage sketch follows the lists below.

Inputs

  • Text prompts for the model to generate speech from

Outputs

  • Audio files containing the generated speech
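
Below is a minimal sketch of driving the model from Python. It assumes the whisperspeech package and its Pipeline class in whisperspeech.pipeline; the model-reference string and method names are assumptions that may differ between releases, so treat this as an illustration rather than a definitive API.

```python
# Minimal text-to-speech sketch with the whisperspeech package (assumed API).
# Requires `pip install whisperspeech` and, in practice, a CUDA-capable GPU.
from whisperspeech.pipeline import Pipeline

# Load the pretrained pipeline; this model reference string is an assumption
# and may need to be updated to a current release.
pipe = Pipeline(s2a_ref='collabora/whisperspeech:s2a-q4-tiny-en+pl.model')

# Generate speech for a text prompt and write it to a WAV file.
pipe.generate_to_file('hello.wav', 'Hello, this speech was generated by WhisperSpeech.')
```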

Capabilities

The whisperspeech model demonstrates the ability to generate high-quality speech in multiple languages, including the seamless mixing of languages within a single sentence. It has been optimized for inference performance, achieving over 12x real-time processing speed on a consumer GPU.

The model also showcases voice cloning capabilities, allowing users to generate speech that mimics the voice of a reference audio clip, such as a famous speech by Winston Churchill.
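
To try voice cloning, the pipeline can be pointed at a short reference recording. This is a hedged sketch: the speaker and lang arguments are assumptions based on the package's pipeline interface, and the reference file name is hypothetical.

```python
# Voice-cloning sketch (assumed whisperspeech Pipeline API).
from whisperspeech.pipeline import Pipeline

pipe = Pipeline()

# Condition generation on a short reference clip so the output mimics that voice.
pipe.generate_to_file(
    'cloned.wav',
    'This sentence should come out in the voice of the reference speaker.',
    speaker='reference_speaker.wav',  # hypothetical local reference recording
    lang='en',
)
```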

What can I use it for?

The whisperspeech model can be used to create various speech-based applications, such as:

  • Accessibility tools: The model's capabilities can be leveraged to improve accessibility by providing text-to-speech functionality.
  • Conversational AI: The model's ability to generate natural-sounding speech can be used to enhance conversational AI agents.
  • Audiobook creation: The model can be used to generate speech from text, enabling the creation of audiobooks and other spoken content.
  • Language learning: The model's multilingual capabilities can be utilized to create language learning resources with realistic speech output.

Things to try

One key feature of the whisperspeech model is its ability to seamlessly mix languages within a single sentence. This can be a useful technique for creating multilingual content or for training language models on code-switched data.
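
As a rough illustration, a code-switched prompt can be passed directly to the pipeline. How language hints are supplied (for example, the lang argument shown here) is an assumption and may differ between releases.

```python
# Mixed-language sketch (assumed whisperspeech Pipeline API): a Polish sentence
# that embeds English wording, similar to the project's own demos.
from whisperspeech.pipeline import Pipeline

pipe = Pipeline()
pipe.generate_to_file(
    'mixed.wav',
    'To jest demonstracja projektu WhisperSpeech, an open source text-to-speech system.',
    lang='pl',  # assumed language hint for the predominantly Polish prompt
)
```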

Additionally, the model's voice cloning capabilities open up possibilities for personalized speech synthesis, where users can generate speech that mimics the voice of a particular individual. This could be useful for audiobook narration, virtual assistants, or other applications where a specific voice is desired.



This summary was produced with help from an AI and may contain inaccuracies - check out the links to read the original source documents!

Related Models

WhisperSpeech


Total Score: 141

The WhisperSpeech model is an open-source text-to-speech system built by inverting the Whisper model. Developed by Collabora and LAION, the goal is to create a powerful and customizable speech generation model similar to Stable Diffusion. The model is trained on properly licensed speech recordings and the code is open source, making it safe for commercial applications. The model currently supports English, and the maintainers plan to target multiple languages in future releases, as both Whisper and EnCodec are multilingual. Recent progress includes optimizing inference performance to over 12x faster than real-time on a consumer GPU, adding voice cloning capabilities, and mixing languages within a single sentence.

Model inputs and outputs

The WhisperSpeech model takes text as input and generates high-quality speech audio as output. It leverages powerful open-source components: Whisper for semantic token generation, EnCodec for acoustic modeling, and Vocos as a high-quality vocoder.

Inputs

  • Text prompts for speech generation

Outputs

  • Audio waveforms representing the generated speech

Capabilities

The WhisperSpeech model demonstrates the ability to generate natural-sounding speech in English, including voice cloning from a reference audio sample. It can also seamlessly mix English project names into Polish speech, showcasing its multilingual potential.

What can I use it for?

The WhisperSpeech model can be used to create audio content such as audiobooks, podcasts, or corporate presentations. Its open-source nature and customizability make it suitable for a wide range of commercial applications. The maintainers are also working on gathering a larger emotive speech dataset and conditioning the generation on emotions and prosody, which could further expand the model's usefulness.

Things to try

One interesting capability to explore with the WhisperSpeech model is its ability to mix languages within a single sentence. This could be useful for creating multilingual content or improving language learning tools. Additionally, the model's voice cloning feature opens up possibilities for personalized audio content creation.


whisper-base

openai

Total Score: 165

The whisper-base model is a pre-trained model for automatic speech recognition (ASR) and speech translation developed by OpenAI. Trained on 680,000 hours of labelled data, the Whisper models demonstrate a strong ability to generalize to many datasets and domains without the need for fine-tuning. The model was proposed in the paper Robust Speech Recognition via Large-Scale Weak Supervision and is available on the Hugging Face Hub. The whisper-tiny, whisper-medium, and whisper-large models are similar checkpoints of varying sizes, also from OpenAI. The smaller models are trained on either English-only or multilingual data, while the larger models are multilingual only. All of the pre-trained checkpoints can be accessed on the Hugging Face Hub.

Model inputs and outputs

Inputs

  • Audio: The model takes audio samples as input and converts them to log-Mel spectrograms to feed into the Transformer encoder.
  • Task: The model is informed of the task to perform (transcription or translation) by passing "context tokens" to the decoder.
  • Language: The model can be configured to transcribe or translate audio in a specific language by providing the corresponding language token.

Outputs

  • Transcription or translation: The model outputs a text sequence representing the transcription or translation of the input audio.
  • Timestamps: Optionally, the model can also output timestamps for the generated text.

Capabilities

The Whisper models exhibit improved robustness to accents, background noise, and technical language compared to many existing ASR systems. They also demonstrate strong zero-shot translation capabilities, allowing users to translate audio from multiple languages into English. The models perform unevenly across languages, with lower accuracy on low-resource or low-discoverability languages. They also tend to hallucinate text that is not actually spoken in the audio input, and can generate repetitive outputs, though these issues can be mitigated to some extent.

What can I use it for?

The primary intended users of the Whisper models are AI researchers studying model capabilities, biases, and limitations. However, the models can also be useful as an ASR solution for developers, especially for English speech recognition tasks. The models' transcription and translation capabilities may enable the development of accessibility tools, though they cannot currently be used for real-time applications out of the box. Others may be able to build applications on top of Whisper that allow for near-real-time speech recognition and translation.

Things to try

Users can explore fine-tuning the pre-trained Whisper models on specialized datasets to improve performance for particular languages or domains. The blog post on fine-tuning Whisper provides a step-by-step guide for this process. Experimenting with different chunking and batching strategies can also help unlock the full potential of the Whisper models for long-form transcription and translation tasks. The ASR Chunking blog post goes into more detail on these techniques.
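
Before moving on to chunking or fine-tuning, a plain transcription call is a useful baseline. The sketch below uses the Hugging Face transformers library with the openai/whisper-base checkpoint; the audio is a silent placeholder, and the exact generate-time arguments may vary slightly between library versions.

```python
# Transcription sketch for openai/whisper-base via Hugging Face transformers.
# Requires `pip install transformers torch`; audio must be 16 kHz mono samples.
import numpy as np
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-base")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")

# Placeholder input: one second of silence; replace with real audio samples
# loaded via e.g. soundfile or librosa.
audio = np.zeros(16000, dtype=np.float32)

# Convert raw audio to log-Mel spectrogram features for the encoder.
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

# "Context tokens" tell the decoder which task and language to use.
forced_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe")

predicted_ids = model.generate(inputs.input_features, forced_decoder_ids=forced_ids)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```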


whisper-tiny

openai

Total Score: 199

The whisper-tiny model is a pre-trained artificial intelligence (AI) model for automatic speech recognition (ASR) and speech translation, created by OpenAI. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalize to many datasets and domains without the need for fine-tuning. The whisper-tiny model is the smallest of the Whisper checkpoints, with only 39 million parameters. It is available in both English-only and multilingual versions. Similar models include the whisper-large-v3, a general-purpose speech recognition model, the whisper model by OpenAI, the incredibly-fast-whisper model, and the whisperspeech-small model, which is an open-source text-to-speech system built by inverting Whisper.

Model inputs and outputs

Inputs

  • Audio data, such as recordings of speech

Outputs

  • Transcribed text in the same language as the input audio (for speech recognition)
  • Transcribed text in a different language than the input audio (for speech translation)

Capabilities

The whisper-tiny model can transcribe speech and translate speech to text in multiple languages, demonstrating strong generalization abilities without the need for fine-tuning. It can be used for a variety of applications, such as transcribing audio recordings, adding captions to videos, and enabling multilingual communication.

What can I use it for?

The whisper-tiny model can be used in various applications that require speech recognition or speech translation, such as:

  • Transcribing lectures, interviews, or other audio recordings
  • Adding captions or subtitles to videos
  • Enabling real-time translation in video conferencing or other communication tools
  • Developing voice-controlled interfaces for various devices and applications

Things to try

You can experiment with the whisper-tiny model by trying it on different types of audio data, such as recordings of speeches, interviews, or conversations in various languages. You can also explore how the model performs on audio with different levels of noise or quality, and compare its results to other speech recognition or translation models.
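
For such experiments, the transformers ASR pipeline offers a quick start. This is a minimal sketch; the audio file name is hypothetical, and any format ffmpeg can decode should work.

```python
# Quick-start sketch: whisper-tiny through the transformers ASR pipeline.
# Requires `pip install transformers torch` (and ffmpeg for decoding files).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")

# Transcribe a local recording; the file name is a placeholder.
result = asr("interview.wav")
print(result["text"])
```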


whisper-medium

openai

Total Score: 176

The whisper-medium model is a pre-trained speech recognition and translation model developed by OpenAI. It is part of the Whisper family of models, which demonstrate a strong ability to generalize to many datasets and domains without the need for fine-tuning. The whisper-medium model has 769 million parameters and is trained on either English-only or multilingual data. It can be used for both speech recognition, where it transcribes audio in the same language, and speech translation, where it transcribes audio to a different language. The Whisper models are available in a range of sizes, from the whisper-tiny with 39 million parameters to the whisper-large and whisper-large-v2 with 1.55 billion parameters.

Model inputs and outputs

Inputs

  • Audio samples in various formats and sampling rates

Outputs

  • Transcriptions of the input audio, either in the same language (speech recognition) or a different language (speech translation)
  • Optionally, timestamps for the transcribed text

Capabilities

The Whisper models demonstrate strong performance on a variety of speech recognition and translation tasks, including handling accents, background noise, and technical language. They can be used in zero-shot translation, taking audio in one language and translating it to English without any fine-tuning. However, the models can also sometimes generate text that is not actually present in the audio input (known as "hallucination"), and their performance can vary across different languages and accents.

What can I use it for?

The whisper-medium model and the other Whisper models can be useful for developers and researchers working on improving accessibility tools, such as closed captioning or subtitle generation. The models' speed and accuracy suggest they could be used to build near-real-time speech recognition and translation applications. However, users should be aware of the models' limitations, particularly around potential biases and disparate performance across languages and accents.

Things to try

One interesting aspect of the Whisper models is their ability to handle audio of arbitrary length through a chunking algorithm. This allows the models to be used for long-form transcription, where the audio is split into smaller segments and the results are then reassembled. Users can experiment with this functionality to see how it performs on their specific use cases. Additionally, the Whisper models can be fine-tuned on smaller, domain-specific datasets to improve their performance in particular areas. The blog post on fine-tuning Whisper provides a step-by-step guide on how to do this.
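
The chunking behaviour described above can be exercised through the transformers ASR pipeline. A minimal sketch follows; the chunk length and the file name are illustrative choices, not fixed requirements.

```python
# Long-form transcription sketch for whisper-medium with pipeline chunking.
# Requires `pip install transformers torch`; the recording path is hypothetical.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-medium",
    chunk_length_s=30,       # split long audio into ~30-second chunks
    return_timestamps=True,  # also emit timestamps for each chunk
)

result = asr("lecture_recording.mp3")
print(result["text"])
for chunk in result["chunks"]:
    print(chunk["timestamp"], chunk["text"])
```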
