mythalion-13b

Maintainer: PygmalionAI

Total Score

133

Last updated 5/28/2024


  • Model Link: View on HuggingFace
  • API Spec: View on HuggingFace
  • Github Link: No Github link provided
  • Paper Link: No paper link provided


Model overview

The mythalion-13b model is a merge of the Pygmalion-2 13B and MythoMax L2 13B models, created in collaboration between PygmalionAI and Gryphe. According to the maintainers, this model seems to outperform MythoMax in roleplay and conversation tasks.

Model inputs and outputs

Inputs

  • The model can be prompted using both the Alpaca and Pygmalion/Metharme formats; the Metharme format uses special tokens such as <|system|>, <|user|>, and <|model|> to indicate the different roles and the flow of the conversation.
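The role tokens above can be assembled into a single prompt string. A minimal sketch in Python follows; the helper name and turn structure are illustrative assumptions, not part of the model's documentation:

```python
# Minimal sketch of a Pygmalion/Metharme-style prompt using the
# <|system|>, <|user|>, and <|model|> role tokens described above.
# The function name and turn structure are illustrative assumptions.

def build_metharme_prompt(system: str,
                          history: list[tuple[str, str]],
                          user_message: str) -> str:
    """Concatenate role-tagged turns, ending with <|model|> so the
    model knows it should generate the next reply."""
    prompt = f"<|system|>{system}"
    for user_msg, model_msg in history:
        prompt += f"<|user|>{user_msg}<|model|>{model_msg}"
    prompt += f"<|user|>{user_message}<|model|>"
    return prompt
```

Because the prompt ends with `<|model|>`, the model's continuation is the in-character reply.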

Outputs

  • The model generates long-form text responses that aim to stay in character and continue the narrative, making it suitable for fictional writing and roleplaying.

Capabilities

The mythalion-13b model is focused on generating engaging, character-driven text for creative writing and roleplay scenarios. It has been trained on a mixture of instruction data, fictional stories, and conversational data to develop its capabilities in these areas.

What can I use it for?

The mythalion-13b model is well-suited for projects involving fictional writing, interactive storytelling, and character-driven roleplaying. This could include applications like interactive fiction, creative writing assistants, and open-ended chat bots. However, the maintainers note that the model was not fine-tuned to be safe or harmless, so it may generate content that is socially unacceptable or factually incorrect.

Things to try

One interesting aspect of the mythalion-13b model is its use of the Pygmalion/Metharme prompting format, which allows the user to set the character persona and guide the model's responses to stay in-character. Experimenting with different character backgrounds and personas could lead to unique and engaging narrative experiences.
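One way to run that experiment is to render a character card into the system portion of the prompt. The card fields and the wording below are hypothetical examples, not a format the maintainers prescribe:

```python
# Sketch: turn a character card into a <|system|> persona prompt.
# The field names and phrasing are hypothetical assumptions; the
# <|system|> token itself comes from the Metharme format.

def persona_prompt(name: str, description: str, scenario: str) -> str:
    return (
        f"<|system|>Enter roleplay mode. You are {name}. "
        f"Persona: {description} "
        f"Scenario: {scenario} "
        "Stay in character in every reply."
    )
```

Swapping in different `name`/`description`/`scenario` values is a quick way to compare how strongly the model holds to each persona.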



This summary was produced with help from an AI and may contain inaccuracies - check out the links to read the original source documents!

Related Models


pygmalion-2-13b

PygmalionAI

Total Score

72

The pygmalion-2-13b is an instruction-tuned version of the Llama-2 large language model, developed by PygmalionAI. It is biased towards fiction writing and conversation, and is an evolution of the earlier Pygmalion models. The model was trained on a mixture of regular instruction data alongside roleplay, fictional stories, and conversations with synthetically generated instructions. While similar to the Pygmalion-13b and Pygmalion-7b models, the pygmalion-2-13b was designed with a focus on more natural language interaction and creative writing tasks.

Model inputs and outputs

Inputs

  • Text prompts, which can include special tokens such as <|system|>, <|user|>, and <|model|> to indicate different roles and control the length and style of the generated output.

Outputs

  • Generated text, primarily focused on creative writing, storytelling, and natural language conversations.

Capabilities

The pygmalion-2-13b model is particularly skilled at generating coherent and engaging fictional narratives, as well as natural-sounding dialogue. It can be used to aid in the creative writing process, generate character backstories and worldbuilding details, and facilitate interactive roleplay scenarios.

What can I use it for?

The pygmalion-2-13b model could be useful for a variety of creative and conversational applications, such as:

  • Assisting writers and storytellers with ideation, character development, and narrative generation
  • Powering interactive fiction and choose-your-own-adventure experiences
  • Enabling more natural and immersive chatbots and virtual assistants
  • Facilitating collaborative worldbuilding and roleplaying games

Things to try

One interesting aspect of the pygmalion-2-13b model is its ability to maintain coherent personas and narratives over longer conversational exchanges. You could try prompting the model with a character description and backstory, then engaging in an extended dialogue to see how it develops the character and storyline.
Additionally, experimenting with different prompt structures and special tokens could yield interesting creative results.
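The extended-dialogue experiment can be sketched as a small conversation loop that replays the full history into every prompt, which is what lets the model stay consistent across turns. The class name and structure below are illustrative, and the role tokens are the Metharme ones mentioned earlier:

```python
# Sketch of an extended in-character dialogue: the full history is
# replayed into every prompt so the model can keep its persona.
# Class name and structure are illustrative assumptions; a real
# model call would consume the string returned by prompt_for().

class Conversation:
    def __init__(self, persona: str):
        self.persona = persona
        self.history: list[tuple[str, str]] = []  # (user, model) pairs

    def prompt_for(self, user_message: str) -> str:
        parts = [f"<|system|>{self.persona}"]
        for user_msg, model_msg in self.history:
            parts.append(f"<|user|>{user_msg}<|model|>{model_msg}")
        parts.append(f"<|user|>{user_message}<|model|>")
        return "".join(parts)

    def record(self, user_message: str, model_reply: str) -> None:
        self.history.append((user_message, model_reply))
```

Note that replaying the whole history grows the prompt each turn, so long sessions eventually hit the model's context window.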



Mythalion-13B-GPTQ

TheBloke

Total Score

52

The Mythalion-13B-GPTQ is a large language model created by PygmalionAI and quantized to 4-bit and 8-bit precision by TheBloke. It is based on the original Mythalion 13B model and provides multiple GPTQ parameter configurations to optimize for different hardware and inference requirements. Similar quantized models from TheBloke include the MythoMax-L2-13B-GPTQ and wizard-mega-13B-GPTQ.

Model inputs and outputs

The Mythalion-13B-GPTQ is a text-to-text model, taking in natural language prompts and generating relevant text responses. It was fine-tuned on various datasets to enhance its conversational and storytelling capabilities.

Inputs

  • Natural language prompts or instructions

Outputs

  • Generated text responses relevant to the input prompt

Capabilities

The Mythalion-13B-GPTQ model excels at natural language understanding and generation, allowing it to engage in open-ended conversations and produce coherent, contextually appropriate text. It performs well on tasks like creative writing, dialogue systems, and question answering.

What can I use it for?

The Mythalion-13B-GPTQ model can be used for a variety of natural language processing applications, such as building interactive chatbots, generating creative fiction and dialogue, and enhancing language understanding in other AI systems. Its large scale and diverse training data make it a powerful tool for developers and researchers working on language-focused projects.

Things to try

Try giving the model prompts that involve storytelling, world-building, or roleplaying scenarios. Its strong understanding of context and ability to generate coherent, imaginative text can lead to engaging and surprising responses. You can also experiment with different quantization configurations to find the best balance between model size, inference speed, and accuracy for your specific use case.
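Choosing between the 4-bit and 8-bit configurations mostly comes down to available VRAM. A rough back-of-the-envelope estimate is parameters × bits ÷ 8; the sketch below covers weights only and ignores KV cache, activations, and framework overhead, so treat the numbers as lower bounds:

```python
# Rough weights-only memory estimate for a quantized model.
# Real usage is higher (KV cache, activations, framework overhead),
# so these figures are lower bounds, not exact requirements.

def weight_gb(params: float, bits: int) -> float:
    return params * bits / 8 / 1e9

# A 13B-parameter model at different precisions:
# weight_gb(13e9, 16) -> 26.0  (fp16 baseline)
# weight_gb(13e9, 8)  -> 13.0  (8-bit GPTQ)
# weight_gb(13e9, 4)  -> 6.5   (4-bit GPTQ)
```

By this estimate the 4-bit configuration fits comfortably on a single consumer GPU, while 8-bit trades extra memory for less quantization loss.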



pygmalion-2-7b

PygmalionAI

Total Score

53

pygmalion-2-7b is an instruction-tuned Llama-2 7B model developed by PygmalionAI that is biased towards fiction writing and conversation. It was fine-tuned on a mixture of regular instruction data alongside roleplay, fictional stories, and conversations with synthetically generated instructions. Similar models include the pygmalion-2-13b, which is a 13B version of the model, and the metharme-7b, an earlier experiment in this space.

Model inputs and outputs

The pygmalion-2-7b model takes in text prompts and generates text outputs. The prompts use a specific format with different roles denoted by the <|system|>, <|user|>, and <|model|> tokens. The system prompt can be used to set the model's context, the user prompt indicates user input, and the model token signals that the model should generate a response.

Inputs

  • Prompts: Text prompts using the role tokens to set context, provide user input, and signal the model to generate a response.

Outputs

  • Generated text: The model generates text outputs in response to the provided prompts, automatically emitting an end-of-text token when the response is complete.

Capabilities

The pygmalion-2-7b model is capable of generating fictional stories, roleplaying scenarios, and engaging conversations. It can be guided using natural language prompts to take on different personas and respond accordingly. The model has been trained on a diverse dataset, allowing it to draw from a wide range of knowledge and styles.

What can I use it for?

The pygmalion-2-7b model is well-suited for creative writing and storytelling applications. It could be used to assist authors in generating ideas, drafting narratives, or collaborating on fictional worlds. The model's conversational abilities also make it a potential tool for interactive fiction, chatbots, or other dialogue-driven experiences. For businesses, the model could be leveraged to generate engaging content for marketing, entertainment, or customer interaction purposes. It could also be fine-tuned further for specific use cases or integrated into larger AI-powered applications.

Things to try

One interesting thing to try with the pygmalion-2-7b model is to experiment with different personas and prompting styles. The model's flexibility allows it to take on a wide range of characters and respond in various tones and voices. You could try prompting it to play the role of a whimsical wizard, a brooding detective, or a jolly pirate, and see how it adapts its language and storytelling to fit the character.

Another idea is to use the model to collaboratively build out a fictional world or narrative. Start with a high-level prompt, then take turns adding details, plot points, and dialogue, letting the model fill in the gaps and build upon the shared vision. This interactive approach could lead to unexpected and engaging results.
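Since generations end at an end-of-text token (and sampling code often also stops at the next role token), raw output usually needs trimming before it is shown to a user. A small sketch follows; the exact stop-string spellings are assumptions based on the role tokens mentioned above, not values the maintainers document:

```python
# Sketch: truncate a raw generation at the first stop sequence.
# The stop strings here are assumptions based on the role tokens
# discussed above; adjust them to match your actual tokenizer.

STOP_SEQUENCES = ("<|endoftext|>", "<|user|>", "<|system|>")

def trim_reply(raw: str) -> str:
    cut = len(raw)
    for stop in STOP_SEQUENCES:
        idx = raw.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return raw[:cut].strip()
```

This keeps only the text before the earliest stop marker, which is typically the in-character reply.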



Mythalion-13B-GGUF

TheBloke

Total Score

62

The Mythalion-13B-GGUF is a large language model created by PygmalionAI and quantized by TheBloke. It is a 13 billion parameter model built on the Llama 2 architecture and fine-tuned for improved coherency and performance in roleplaying and storytelling tasks. The model is available in a variety of quantized versions to suit different hardware and performance needs, ranging from 2-bit to 8-bit precision. Similar models from TheBloke include the MythoMax-L2-13B-GGUF, which combines the robust understanding of MythoLogic-L2 with the extensive writing capability of Huginn, and the Mythalion-13B-GPTQ, which uses GPTQ quantization instead of GGUF.

Model inputs and outputs

Inputs

  • Text: The model accepts text inputs, which can be used to provide instructions, prompts, or conversation context.

Outputs

  • Text: The model generates coherent text responses to continue conversations or complete tasks specified in the input.

Capabilities

The Mythalion-13B-GGUF model excels at roleplay and storytelling tasks. It can engage in nuanced and contextual dialogue, generating relevant and coherent responses. The model also demonstrates strong writing capabilities, allowing it to produce compelling narrative content.

What can I use it for?

The Mythalion-13B-GGUF model can be used for a variety of creative and interactive applications, such as:

  • Roleplaying and creative writing: Integrate the model into interactive fiction platforms or chatbots to enable engaging, character-driven stories and dialogues.
  • Conversational AI assistants: Utilize the model's strong language understanding and generation capabilities to build helpful, friendly, and trustworthy AI assistants.
  • Narrative generation: Leverage the model's storytelling abilities to automatically generate plot outlines, character biographies, or even full-length stories.

Things to try

One interesting aspect of the Mythalion-13B-GGUF model is its ability to maintain coherence and consistency across long-form interactions. Try providing the model with a detailed character prompt or backstory, and see how it is able to continue the narrative and stay true to the established persona over the course of an extended conversation.

Another interesting experiment is to explore the model's capacity for world-building. Start with a high-level premise or setting, and prompt the model to expand on the details, introducing new characters, locations, and plot points in a coherent and compelling way.
