PygmalionAI

Models by this creator


pygmalion-6b

PygmalionAI

Total Score

721

pygmalion-6b is a proof-of-concept dialogue model based on GPT-J-6B, created by PygmalionAI. It has been fine-tuned on 56MB of dialogue data gathered from multiple sources, including both real and partially machine-generated conversations. This model is not suitable for use by minors, as it may output X-rated content under certain circumstances.

Model inputs and outputs

The pygmalion-6b model takes in prompts formatted with specific persona and dialogue history information. The expected input format is:

Inputs

**[CHARACTER]'s Persona**: A few sentences describing the character the model should portray
**<START>**: A delimiter token to separate the persona from the dialogue history
**[DIALOGUE HISTORY]**: Previous messages in the conversation to provide context

The model will then generate a response in the voice of the specified character.

Outputs

**Text response** from the specified character

Capabilities

pygmalion-6b is capable of engaging in open-ended dialogue, roleplaying different characters, and generating fictional conversations. However, due to the training data used, the model may produce socially unacceptable or offensive text at times.

What can I use it for?

The pygmalion-6b model can be used for entertainment purposes, such as interactive fiction or chatbots for fictional scenarios. However, it is not suitable for commercial or high-stakes applications, as the model's outputs cannot be guaranteed to be safe or accurate. The Gradio UI notebook provided by PygmalionAI offers an easy way to experiment with the model.

Things to try

Try prompting the model with detailed personas and dialogue histories to see how it responds in character. Experiment with different types of fictional scenarios, such as fantasy, sci-fi, or historical settings, and pay attention to how the model's responses change based on the provided context.
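As a concrete illustration of the persona-plus-history input format, here is a minimal sketch of assembling such a prompt in Python. The helper function, character name, and persona text are all invented for illustration; only the overall layout and the `<START>` delimiter come from the model card.

```python
# Hypothetical helper illustrating the Pygmalion prompt layout; the function
# name and sample persona are our own, not part of the model's tooling.
def build_prompt(character, persona, history, user_message):
    """Assemble a persona / <START> / dialogue-history prompt string."""
    lines = [f"{character}'s Persona: {persona}", "<START>"]
    lines.extend(history)                 # e.g. "You: ..." / "Aria: ..."
    lines.append(f"You: {user_message}")
    lines.append(f"{character}:")         # cue the model to answer in character
    return "\n".join(lines)

prompt = build_prompt(
    "Aria",
    "Aria is a cheerful starship engineer who loves bad puns.",
    ["You: How's the warp core?", "Aria: Purring like a kitten, captain!"],
    "Any chance we can go faster?",
)
print(prompt)
```

The trailing `Aria:` line matters: ending the prompt with the character's name nudges the model to continue in that character's voice rather than speaking as the user.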


Updated 5/28/2024


pygmalion-7b

PygmalionAI

Total Score

159

pygmalion-7b is a conversational language model based on Meta's LLaMA-7B. It has been fine-tuned using a subset of the data from Pygmalion-6B-v8-pt4. The model weights in this repository are XORed due to licensing concerns, so users need to request access to the original LLaMA weights from Meta and then use the provided scripts to decode the files. Similar models include Llama-2-7b-chat-hf, a 7B fine-tuned Llama 2 model optimized for dialogue use cases, and NeuralBeagle14-7B, a 7B model that has been further fine-tuned using a preference dataset.

Model inputs and outputs

Inputs

**Text prompts**, typically following a specific format with character persona, dialogue history, and user input.

Outputs

**Generated text responses**, continuing the dialogue in the context of the provided prompt.

Capabilities

pygmalion-7b displays good performance on instruction-following and reasoning tasks. It can be used for a variety of applications like roleplaying, storytelling, and general conversation.

What can I use it for?

The pygmalion-7b model can be used for building conversational AI assistants, chatbots, and interactive fictional experiences. Its strong language understanding and generation capabilities make it suitable for tasks like customer service, virtual companionship, and creative writing assistance.

Things to try

Try prompting the model with different personas and scenarios to see how it adapts its responses. Experiment with providing more or less dialogue history to observe changes in the model's coherence and contextual understanding. You can also try using the model in a question-answering setup to assess its knowledge and reasoning skills.
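The XOR distribution scheme works because XOR is its own inverse: XORing the released files with the original LLaMA weights recovers the fine-tuned weights, while the released files alone reveal neither. A toy byte-level sketch (not the project's actual encode/decode scripts):

```python
# Toy illustration of XOR-encoded weight distribution (not PygmalionAI's
# actual scripts): applying the same mask twice is a no-op.
def xor_bytes(data, mask):
    return bytes(a ^ b for a, b in zip(data, mask))

finetuned = b"fine-tuned weights"       # what the creators want to distribute
base      = b"original base weight"     # obtainable only from Meta
released  = xor_bytes(finetuned, base)  # publishable: meaningless on its own
recovered = xor_bytes(released, base)   # users holding `base` decode locally
print(recovered)
```

This is why users must obtain the original LLaMA weights themselves: without the `base` mask, the released files cannot be turned back into a usable model.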


Updated 5/28/2024


mythalion-13b

PygmalionAI

Total Score

133

The mythalion-13b model is a merge of the Pygmalion-2 13B and MythoMax L2 13B models, created in collaboration between PygmalionAI and Gryphe. According to the maintainers, this model seems to outperform MythoMax in roleplay and conversation tasks.

Model inputs and outputs

Inputs

The model can be prompted using both the Alpaca and Pygmalion/Metharme formatting, the latter of which uses the special tokens `<|system|>`, `<|user|>`, and `<|model|>` to indicate different roles and conversation flow.

Outputs

The model generates long-form text responses that aim to stay in character and continue the narrative, making it suitable for fictional writing and roleplaying.

Capabilities

The mythalion-13b model is focused on generating engaging, character-driven text for creative writing and roleplay scenarios. It has been trained on a mixture of instruction data, fictional stories, and conversational data to develop its capabilities in these areas.

What can I use it for?

The mythalion-13b model is well-suited for projects involving fictional writing, interactive storytelling, and character-driven roleplaying. This could include applications like interactive fiction, creative writing assistants, and open-ended chat bots. However, the maintainers note that the model was not fine-tuned to be safe or harmless, so it may generate content that is socially unacceptable or factually incorrect.

Things to try

One interesting aspect of the mythalion-13b model is its use of the Pygmalion/Metharme prompting format, which allows the user to set the character persona and guide the model's responses to stay in character. Experimenting with different character backgrounds and personas could lead to unique and engaging narrative experiences.
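The Pygmalion/Metharme formatting mentioned above concatenates role-tagged segments into a single string. A minimal sketch, assuming the `<|system|>`, `<|user|>`, and `<|model|>` role tokens from the model card (the persona and messages are invented):

```python
# Sketch of a Pygmalion/Metharme-style prompt. The persona and messages are
# invented; <|system|>, <|user|> and <|model|> are the format's role tokens.
system = "Enter RP mode. Pretend to be a grizzled dwarven blacksmith."
turns = [
    ("<|user|>", "Can you repair this broken axe?"),
    ("<|model|>", "Aye, hand it over. Hm, the haft's split clean through."),
    ("<|user|>", "How long will the repair take?"),
]
prompt = "<|system|>" + system
for token, message in turns:
    prompt += token + message
prompt += "<|model|>"          # cue the model to produce the next reply
print(prompt)
```

Note the prompt ends with a bare `<|model|>` token: that is what signals the model to generate the character's next turn.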


Updated 5/28/2024


pygmalion-13b

PygmalionAI

Total Score

109

pygmalion-13b is a conversational language model based on Meta's LLaMA-13B. It has been fine-tuned using a subset of the data from the Pygmalion-6B-v8-pt4 project. This model is version 1 and is part of the Pygmalion line of dialogue models from PygmalionAI. The pygmalion-7b model is a similar conversational model based on the LLaMA-7B, also fine-tuned using Pygmalion-6B-v8-pt4 data. Both models follow the usual Pygmalion persona and chat format.

Model inputs and outputs

Inputs

**Text prompt**: The model expects input text following a specific format, including the character's persona and dialogue history.

Outputs

**Generated text**: The model outputs generated text responses continuing the dialogue.

Capabilities

The pygmalion-13b model is designed for open-ended conversational tasks. It is capable of engaging in back-and-forth dialogues, taking on different personas, and generating relevant and coherent responses. The model can discuss a wide range of topics, drawing upon its broad knowledge base.

What can I use it for?

The pygmalion-13b model can be used for various interactive applications, such as chatbots, virtual assistants, or creative writing tools. Its conversational abilities make it well-suited for applications where natural language interaction is important, like customer service, education, or interactive entertainment.

Things to try

One interesting aspect of the pygmalion-13b model is its ability to adapt to different personas and styles of interaction. You could experiment with providing varied persona descriptions to see how the model responds and develops the character over the course of a conversation. Additionally, trying out different prompting strategies, such as including more or less dialogue history, can yield interesting results and insights about the model's capabilities.
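Because context windows are finite, "more or less dialogue history" in practice means trimming the oldest turns while always keeping the persona. A simple sketch of that strategy, using a character count as a stand-in for a real tokenizer's token count (the function and budget are illustrative, not part of any Pygmalion tooling):

```python
# Sketch: keep a prompt within a context budget by always retaining the
# persona and dropping the oldest dialogue turns first. A character count
# stands in for a real tokenizer's token count.
def trim_history(persona, history, budget):
    kept, used = [], len(persona)
    for turn in reversed(history):      # walk from the newest turn backwards
        if used + len(turn) > budget:
            break
        kept.append(turn)
        used += len(turn)
    return "\n".join([persona] + list(reversed(kept)))

history = [f"Turn {i}" for i in range(1, 7)]        # "Turn 1" .. "Turn 6"
out = trim_history("Persona: terse robot.", history, budget=40)
print(out)   # persona plus only the most recent turns that fit
```

Varying the budget lets you observe directly how much recent context the model needs before its replies lose coherence.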


Updated 5/28/2024


pygmalion-2-13b

PygmalionAI

Total Score

72

The pygmalion-2-13b is an instruction-tuned version of the Llama-2 large language model, developed by PygmalionAI. It is biased towards fiction writing and conversation, and is an evolution of the earlier Pygmalion models. The model was trained on a mixture of regular instruction data alongside roleplay, fictional stories, and conversations with synthetically generated instructions. While similar to the pygmalion-13b and pygmalion-7b models, the pygmalion-2-13b has been designed with a focus on more natural language interaction and creative writing tasks.

Model inputs and outputs

Inputs

The model takes in text prompts, which can include the special tokens `<|system|>`, `<|user|>`, and `<|model|>` to indicate different roles and control the length and style of the generated output.

Outputs

The model generates text, primarily focused on creative writing, storytelling, and natural language conversations.

Capabilities

The pygmalion-2-13b model is particularly skilled at generating coherent and engaging fictional narratives, as well as natural-sounding dialogue. It can be used to aid in the creative writing process, generate character backstories and worldbuilding details, and facilitate interactive roleplay scenarios.

What can I use it for?

The pygmalion-2-13b model could be useful for a variety of creative and conversational applications, such as:

Assisting writers and storytellers with ideation, character development, and narrative generation
Powering interactive fiction and choose-your-own-adventure experiences
Enabling more natural and immersive chatbots and virtual assistants
Facilitating collaborative worldbuilding and roleplaying games

Things to try

One interesting aspect of the pygmalion-2-13b model is its ability to maintain coherent personas and narratives over longer conversational exchanges. You could try prompting the model with a character description and backstory, then engaging in an extended dialogue to see how it develops the character and storyline. Additionally, experimenting with different prompt structures and special tokens could yield interesting creative results.


Updated 5/27/2024


pygmalion-1.3b

PygmalionAI

Total Score

64

pygmalion-1.3b is a proof-of-concept dialogue model based on EleutherAI's pythia-1.3b-deduped. It is an early, small-scale entry in the Pygmalion line of dialogue models from PygmalionAI, which also includes pygmalion-2.7b, pygmalion-6b, and the later LLaMA-based models. Like the rest of the line, it follows the usual Pygmalion persona and chat format, and it is not suitable for use by minors, as it may output X-rated content under certain circumstances.

Model inputs and outputs

Inputs

**Text prompt**: The model expects input text following the Pygmalion format, including the character's persona and dialogue history.

Outputs

**Generated text**: The model outputs generated text responses continuing the dialogue.

Capabilities

As a proof of concept at a small parameter count, pygmalion-1.3b can engage in back-and-forth dialogue and take on different personas, though its coherence and knowledge are more limited than those of the larger Pygmalion models.

What can I use it for?

The pygmalion-1.3b model is best suited to experimentation and lightweight entertainment applications, such as simple chatbots or interactive fiction prototypes, where its small size makes it cheap to run.

Things to try

Try providing varied persona descriptions to see how the model responds and develops a character over the course of a conversation, and compare its outputs against the larger Pygmalion models to get a feel for how scale affects dialogue quality.


Updated 5/28/2024


metharme-7b

PygmalionAI

Total Score

54

The metharme-7b is an instruction-tuned LLaMA model developed by PygmalionAI that is biased towards fiction writing and conversation. It is similar to other PygmalionAI models like pygmalion-2-13b, pygmalion-7b, and pygmalion-13b, which are also based on the LLaMA architecture and fine-tuned for conversational and creative tasks.

Model inputs and outputs

The metharme-7b model accepts natural language instructions as inputs, which can include prompts for the model to engage in fictional writing, roleplaying, and open-ended conversation. The model outputs coherent, context-appropriate text in response to the provided instructions.

Inputs

**Natural language instructions**: Prompts and instructions written in natural language, which can include requests for the model to engage in fictional writing, roleplay, or open-ended conversation.

Outputs

**Textual responses**: Relevant, context-appropriate text generated in response to the provided instructions, which may include fictional stories, dialogue, or other creative writing.

Capabilities

The metharme-7b model is capable of understanding and generating text for a variety of creative and conversational tasks. It can be used to produce fictional stories, engage in roleplay, and hold open-ended conversations. These capabilities make it well-suited for applications in entertainment, gaming, and creative writing.

What can I use it for?

The metharme-7b model can be used for a variety of creative and conversational applications, such as:

**Fiction writing**: Generating original fictional stories, dialogue, and other creative writing based on provided prompts or instructions.
**Roleplaying and interactive fiction**: Simulating characters and engaging in roleplaying scenarios, creating immersive and responsive experiences for users.
**Conversational AI**: Building chatbots and virtual assistants that can engage in open-ended discussions on a wide range of topics.

Things to try

Some interesting things to try with the metharme-7b model include:

Experimenting with different prompts and instruction formats to see how the model responds in various creative and conversational scenarios.
Combining the model's capabilities with other tools or technologies, such as interactive fiction platforms or virtual roleplaying environments, to create unique and engaging experiences for users.
Exploring the model's limitations and biases, and considering ways to mitigate undesirable outputs, such as careful prompt engineering or content filtering.


Updated 5/28/2024


pygmalion-2-7b

PygmalionAI

Total Score

53

pygmalion-2-7b is an instruction-tuned Llama-2 7B model developed by PygmalionAI that is biased towards fiction writing and conversation. It was fine-tuned on a mixture of regular instruction data alongside roleplay, fictional stories, and conversations with synthetically generated instructions. Similar models include the pygmalion-2-13b, a 13B version of the model, and the metharme-7b, an earlier experiment in this space.

Model inputs and outputs

The pygmalion-2-7b model takes in text prompts and generates text outputs. Prompts use a specific format with roles denoted by the `<|system|>`, `<|user|>`, and `<|model|>` tokens: the system prompt sets the model's context, the user token indicates user input, and the model token signals that the model should generate a response.

Inputs

**Prompts**: Text prompts using the role tokens to set context, provide user input, and signal the model to generate a response.

Outputs

**Generated text**: Text generated in response to the provided prompts; the model automatically emits an end-of-text token when the response is complete.

Capabilities

The pygmalion-2-7b model is capable of generating fictional stories, roleplaying scenarios, and engaging conversations. It can be guided using natural language prompts to take on different personas and respond accordingly. The model has been trained on a diverse dataset, allowing it to draw from a wide range of knowledge and styles.

What can I use it for?

The pygmalion-2-7b model is well-suited for creative writing and storytelling applications. It could be used to assist authors in generating ideas, drafting narratives, or collaborating on fictional worlds. The model's conversational abilities also make it a potential tool for interactive fiction, chatbots, or other dialogue-driven experiences. For businesses, the model could be leveraged to generate engaging content for marketing, entertainment, or customer interaction purposes. It could also be fine-tuned further for specific use cases or integrated into larger AI-powered applications.

Things to try

One interesting thing to try with the pygmalion-2-7b model is to experiment with different personas and prompting styles. The model's flexibility allows it to take on a wide range of characters and respond in various tones and voices. You could try prompting it to play the role of a whimsical wizard, a brooding detective, or a jolly pirate, and see how it adapts its language and storytelling to fit the character.

Another idea is to use the model to collaboratively build out a fictional world or narrative. Start with a high-level prompt, then take turns adding details, plot points, and dialogue, letting the model fill in the gaps and build upon the shared vision. This interactive approach can lead to unexpected and engaging results.


Updated 5/28/2024


pygmalion-350m

PygmalionAI

Total Score

52

The pygmalion-350m model is a proof-of-concept fine-tune of Facebook's OPT-350M model, optimized for dialogue. It was created by PygmalionAI as a stepping stone to higher-parameter models. The fine-tune was relatively cheap to produce: using the ColossalAI library, the OPT-350M model was fine-tuned on a small 273 KB dataset in under an hour on a single GPU with 6 GB of VRAM.

Model inputs and outputs

The pygmalion-350m model is a text-to-text model, taking text inputs and generating text outputs.

Inputs

Textual prompts that can include dialogue history and character personas

Outputs

Textual responses generated based on the input prompt

Capabilities

The pygmalion-350m model is capable of generating human-like dialogue, with the ability to assume different character personas based on provided context. It can carry on conversations and respond appropriately to prompts.

What can I use it for?

The pygmalion-350m model can be used for creative writing, roleplaying, and dialogue-based applications. It could be utilized in chatbots, interactive fiction, and other conversational AI systems. However, as the model was trained on NSFW data, it should be used with caution and is not suitable for use by minors.

Things to try

Try providing the model with detailed character personas and dialogue history to see how it can assume different roles and maintain a coherent personality across an exchange. Experiment with different prompting styles and see how the model responds.


Updated 5/28/2024


pygmalion-2.7b

PygmalionAI

Total Score

51

pygmalion-2.7b is a proof-of-concept dialogue model based on EleutherAI's GPT-Neo-2.7B. It was created by PygmalionAI, who have also released similar models like Pygmalion 6B, Pygmalion 7B, Pygmalion 13B, and Pygmalion-2 7B.

Model inputs and outputs

pygmalion-2.7b is a text generation model that can be used to generate dialogues based on provided prompts. The model expects input prompts to follow a specific format, including a character persona, dialogue history, and the user's message.

Inputs

**Character Persona**: A few sentences describing the character the model should portray
**Dialogue History**: Previous messages in the conversation to provide context
**User Message**: The current message from the user

Outputs

**Generated Text**: The model's response, continuing the dialogue in the style of the specified character

Capabilities

The pygmalion-2.7b model is capable of generating coherent and contextually appropriate dialogue based on the provided inputs. It can effectively take on the persona of different characters and maintain that personality through the conversation.

What can I use it for?

The pygmalion-2.7b model is intended for fictional conversation and entertainment purposes. It could be used to power chatbots, virtual assistants, or interactive storytelling experiences where users can engage in dialogue with AI-generated characters.

Things to try

One interesting aspect of pygmalion-2.7b is its ability to maintain a consistent character persona throughout a conversation. Try providing the model with detailed character descriptions and see how it adapts its language and responses to stay true to that persona. You can also experiment with different dialogue history prompts to see how the model's outputs change based on the context.


Updated 5/28/2024