Riffusion

Models by this creator


riffusion

Total Score

950

riffusion is a library for real-time music and audio generation built on the Stable Diffusion text-to-image diffusion model. It was developed by Seth Forsgren and Hayk Martiros as a hobby project. riffusion fine-tunes Stable Diffusion to generate spectrogram images that can be converted into audio clips, allowing music to be created from text prompts. This is in contrast to other similar models like inkpunk-diffusion and multidiffusion, which focus on visual art generation.

Model inputs and outputs

riffusion takes in a text prompt, an optional second prompt for interpolation, a seed image ID, and parameters controlling the diffusion process. It outputs a spectrogram image and the corresponding audio clip.

Inputs

- **Prompt A**: The primary text prompt describing the desired audio
- **Prompt B**: An optional second prompt to interpolate with the first
- **Alpha**: The interpolation value between the two prompts, from 0 to 1
- **Denoising**: How much to transform the input spectrogram, from 0 to 1
- **Seed Image ID**: The ID of a seed spectrogram image to use
- **Num Inference Steps**: The number of steps to run the diffusion model

Outputs

- **Spectrogram Image**: A spectrogram visualization of the generated audio
- **Audio Clip**: The generated audio clip in MP3 format

Capabilities

riffusion can generate a wide variety of musical styles and genres based on the provided text prompts. For example, it can create "funky synth solos", "jazz with piano", or "church bells on Sunday". The model is able to capture complex musical concepts and translate them into coherent audio clips.

What can I use it for?

The riffusion model is intended for research and creative applications. It could be used to generate audio for educational or creative tools, or as part of artistic projects exploring the intersection of language and music. Additionally, researchers studying generative models and the connection between text and audio may find riffusion useful for their work.

Things to try

One interesting aspect of riffusion is its ability to interpolate between two text prompts. By adjusting the alpha parameter, you can create a smooth transition from one style of music to another, allowing for the generation of unique and unexpected audio clips. Another interesting area to explore is the model's handling of seed images: by providing different starting spectrograms, you can influence the character and direction of the generated music.
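The alpha parameter described above controls a blend between the two prompts' conditioning. As a rough illustration (not the actual riffusion code, which operates on real text-encoder embeddings), linear interpolation between two embedding vectors looks like this:

```python
def interpolate(embed_a, embed_b, alpha):
    """Blend two prompt embeddings: alpha=0 gives prompt A, alpha=1 gives prompt B."""
    return [(1.0 - alpha) * a + alpha * b for a, b in zip(embed_a, embed_b)]

# Toy 4-dimensional "embeddings" standing in for real text-encoder outputs.
prompt_a = [1.0, 0.0, 0.0, 0.0]
prompt_b = [0.0, 1.0, 0.0, 0.0]

print(interpolate(prompt_a, prompt_b, 0.5))  # [0.5, 0.5, 0.0, 0.0]
```

Sweeping alpha from 0 to 1 in small steps and generating a clip at each value is what produces the smooth stylistic transitions mentioned above.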


Updated 9/19/2024


riffusion-model-v1

riffusion

Total Score

556

riffusion-model-v1 is a latent text-to-image diffusion model capable of generating spectrogram images given any text input. These spectrograms can be converted into audio clips. The model was created by fine-tuning the Stable Diffusion checkpoint, and was developed by Seth Forsgren and Hayk Martiros as a hobby project.

Model inputs and outputs

The riffusion-model-v1 takes text prompts as input and generates spectrogram images as output. These spectrograms can then be converted into audio clips.

Inputs

- **Text prompt**: Any text input that describes the desired audio clip.

Outputs

- **Spectrogram image**: An image containing a visual representation of the audio signal's frequency content over time.

Capabilities

The riffusion-model-v1 is capable of generating a wide variety of audio content based on text prompts, from musical melodies to sound effects. By leveraging the capabilities of Stable Diffusion, the model can create unique and creative audio outputs that align with the provided text input.

What can I use it for?

The riffusion-model-v1 model is intended for research purposes only. Possible use cases include the generation of artistic audio content, exploration of the limitations and biases of generative audio models, and the development of educational or creative tools. The model should not be used to intentionally create or disseminate harmful or offensive content.

Things to try

Experiment with different text prompts to see the variety of audio outputs the riffusion-model-v1 can generate. Try prompts that describe specific genres, instruments, or sound effects to see how the model responds. Additionally, you can explore the model's capabilities by combining text prompts with the Riffusion web app to create interactive audio experiences.
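The spectrogram images the model generates represent an audio signal's frequency content over time: the signal is split into overlapping frames and each frame's spectrum becomes one column of the image. A minimal pure-Python sketch of that forward transform (real pipelines use optimized FFT libraries, and riffusion additionally maps frequencies and magnitudes to image pixels):

```python
import cmath
import math

def dft_magnitudes(frame):
    """Magnitude of the discrete Fourier transform of one frame (non-negative bins only)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))
            for k in range(n // 2 + 1)]

def spectrogram(signal, frame_size=64, hop=32):
    """One magnitude spectrum per overlapping frame: the columns of a spectrogram image."""
    frames = [signal[i:i + frame_size]
              for i in range(0, len(signal) - frame_size + 1, hop)]
    return [dft_magnitudes(f) for f in frames]

# A pure tone concentrates its energy in a single frequency bin:
# bin index = freq * frame_size / sample_rate = 1000 * 64 / 8000 = 8.
sr, freq = 8000, 1000
tone = [math.sin(2 * math.pi * freq * t / sr) for t in range(256)]
spec = spectrogram(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin)  # 8
```

Going the other direction, from a generated spectrogram image back to a playable clip, requires reconstructing the phase information the magnitude image discards, which is typically done with an algorithm such as Griffin-Lim.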


Updated 5/28/2024