MythoMax-L2-13B-GPTQ

Maintainer: TheBloke

Total Score

161

Last updated 5/27/2024


  • Run this model: Run on HuggingFace
  • API spec: View on HuggingFace
  • Github link: No Github link provided
  • Paper link: No paper link provided


Model overview

Gryphe's MythoMax L2 13B is a large language model created by Gryphe; this page covers the GPTQ-quantized release maintained by TheBloke, whose work is supported by a grant from Andreessen Horowitz (a16z). The model is comparable in capability to other prominent open-source chat models like Llama 2 7B Chat and Falcon 180B Chat, but with a focus on mythological and fantastical content.

Model inputs and outputs

MythoMax-L2-13B-GPTQ is a text-to-text generative model, meaning it takes text prompts as input and generates new text as output. The model was trained on a large dataset of online text, with a focus on mythological and fantasy-related content.

Inputs

  • Text prompts: The model takes freeform natural language text as input, which it then uses to generate new text in response.

Outputs

  • Generated text: The model outputs new text that continues or expands upon the provided input prompt. The output can range from a single sentence to multiple paragraphs, depending on the prompt and the model's parameters.
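TheBloke's model cards for MythoMax-L2-13B document an Alpaca-style instruction template. The sketch below shows how a prompt might be wrapped in that template; the exact wording is taken from the common Alpaca format and should be verified against the model card before relying on it:

```python
def build_alpaca_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca-style template commonly
    used with MythoMax-L2-13B (template wording assumed from the
    standard Alpaca format; check TheBloke's card to confirm)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt(
    "Describe a city built on the back of a sleeping dragon."
)
```

The text the model generates after `### Response:` is then the output you keep.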

Capabilities

The MythoMax-L2-13B-GPTQ model is capable of generating engaging, coherent text on a wide variety of fantasy and mythological topics. It can be used to produce creative stories, worldbuilding details, character dialogue, and more. The model's knowledge spans mythological creatures, legends, magical systems, and other fantastical concepts.

What can I use it for?

The MythoMax-L2-13B-GPTQ model is well-suited for all kinds of fantasy and science-fiction writing projects. Writers and worldbuilders can use it to generate ideas, expand on existing stories, or flesh out the details of imaginary realms. It could also be leveraged for interactive storytelling applications, roleplaying games, or even AI-generated fanfiction.

Things to try

Try prompting the model with the beginning of a fantastical story or worldbuilding prompt, and see how it continues the narrative. You can also experiment with more specific requests, like asking it to describe a particular mythological creature or magical ritual. The model's responses may surprise you with their creativity and attention to detail.
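To experiment with prompts like these programmatically, the GPTQ weights can be loaded through the Hugging Face transformers library. This is a minimal sketch, assuming a recent transformers with GPTQ support (optimum + auto-gptq) and enough VRAM for a 13B 4-bit model; the generation settings are illustrative, not tuned values:

```python
# Sketch: load the quantized model and generate a continuation.
def generate(prompt: str, max_new_tokens: int = 256) -> str:
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TheBloke/MythoMax-L2-13B-GPTQ"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,  # raise for wilder continuations, lower for focus
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Sampling parameters such as `temperature` are a good first thing to vary when exploring the model's creativity.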



This summary was produced with help from an AI and may contain inaccuracies - check out the links to read the original source documents!

Related Models


MythoMax-L2-13B-GGML

TheBloke

Total Score

81

MythoMax-L2-13B-GGML is an AI model created by the researcher Gryphe and further optimized and quantized by TheBloke. It is an improved variant of Gryphe's MythoLogic-L2 and Huginn models, combining their robust understanding and extensive writing capabilities through a unique tensor merge technique, which gives it strong performance on both roleplaying and storywriting tasks. TheBloke provides quantized versions of the model in GGML format for efficient CPU and GPU inference, including 4-bit, 5-bit, and 8-bit quantized models, as well as GPTQ models for GPU acceleration. GGUF models are also available, offering improved compatibility with the latest versions of llama.cpp.

Model inputs and outputs

Inputs

  • Text: The model takes text as input, which it uses to generate further text outputs.

Outputs

  • Text: The model generates natural language text, which can be used for purposes such as creative writing, roleplay, and other language tasks.

Capabilities

The MythoMax-L2-13B-GGML model excels at both roleplaying and storywriting thanks to its tensor merge of the MythoLogic-L2 and Huginn models. It is able to generate coherent and engaging text across a range of styles and genres.

What can I use it for?

The MythoMax-L2-13B-GGML model can be used for a variety of text generation tasks, such as:

  • Creative writing and storytelling
  • Roleplaying and interactive fiction
  • Language modeling and downstream NLP applications

The quantized versions provided by TheBloke allow for efficient inference on both CPU and GPU, making the model accessible to a wide range of users and use cases.

Things to try

One interesting aspect of the MythoMax-L2-13B-GGML model is its ability to generate long, coherent responses. This is particularly useful for roleplaying and interactive fiction scenarios, where the model can maintain a consistent narrative and character over an extended exchange. Researchers and developers may also want to explore fine-tuning the model on domain-specific data to further improve its performance on specialized tasks; Gryphe's original unquantised fp16 model is a good starting point for further training and customization.
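For CPU-friendly inference on GGML files, TheBloke's cards from this era commonly point to the ctransformers Python bindings. A minimal sketch, where the file name and layer-offload count are assumptions to adjust for your download and hardware:

```python
# Sketch: CPU/GPU inference on a GGML quantization via ctransformers.
# The model_file name and gpu_layers value are illustrative.
def run_ggml(prompt: str) -> str:
    from ctransformers import AutoModelForCausalLM

    llm = AutoModelForCausalLM.from_pretrained(
        "TheBloke/MythoMax-L2-13B-GGML",
        model_file="mythomax-l2-13b.ggmlv3.q4_K_M.bin",  # 4-bit variant
        model_type="llama",
        gpu_layers=0,  # raise to offload some layers to a GPU
    )
    return llm(prompt, max_new_tokens=256)
```

Lower-bit files trade some output quality for a smaller memory footprint, so it is worth comparing a 4-bit and an 8-bit variant on your own prompts.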



Mythalion-13B-GPTQ

TheBloke

Total Score

52

The Mythalion-13B-GPTQ is a large language model created by PygmalionAI and quantized to 4-bit and 8-bit precision by TheBloke. It is based on the original Mythalion 13B model and provides multiple GPTQ parameter configurations to optimize for different hardware and inference requirements. Similar quantized models from TheBloke include the MythoMax-L2-13B-GPTQ and wizard-mega-13B-GPTQ.

Model inputs and outputs

The Mythalion-13B-GPTQ is a text-to-text model, taking in natural language prompts and generating relevant text responses. It was fine-tuned on various datasets to enhance its conversational and storytelling capabilities.

Inputs

  • Natural language prompts or instructions

Outputs

  • Generated text responses relevant to the input prompt

Capabilities

The Mythalion-13B-GPTQ model excels at natural language understanding and generation, allowing it to engage in open-ended conversations and produce coherent, contextually appropriate text. It performs well on tasks like creative writing, dialogue systems, and question answering.

What can I use it for?

The Mythalion-13B-GPTQ model can be used for a variety of natural language processing applications, such as building interactive chatbots, generating creative fiction and dialogue, and enhancing language understanding in other AI systems. Its large scale and diverse training data make it a powerful tool for developers and researchers working on language-focused projects.

Things to try

Try giving the model prompts that involve storytelling, worldbuilding, or roleplaying scenarios. Its strong understanding of context and ability to generate coherent, imaginative text can lead to engaging and surprising responses. You can also experiment with different quantization configurations to find the best balance between model size, inference speed, and accuracy for your specific use case.
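On TheBloke's repositories, the different GPTQ parameter configurations typically live on separate repository branches, selectable with the `revision` argument of `from_pretrained`. A sketch of that pattern; the branch name below is an assumption, so check the repo's README for the actual branch list:

```python
# Sketch: selecting a specific GPTQ quantization branch with transformers.
def load_quantized(revision: str = "gptq-4bit-32g-actorder_True"):
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TheBloke/Mythalion-13B-GPTQ"
    tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        revision=revision,  # picks the quantization variant (branch)
        device_map="auto",  # place layers on available GPUs
    )
    return model, tokenizer
```

Smaller group sizes generally cost more VRAM but preserve more accuracy, which is the trade-off the different branches expose.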


MythoMax-L2-13B-GGUF

TheBloke

Total Score

61

The MythoMax-L2-13B-GGUF is a quantized release, prepared by TheBloke, of Gryphe's MythoMax L2 13B model, an improved variant that merged the MythoLogic-L2 and Huginn models using an experimental tensor merging technique. The quantized versions available from TheBloke cover a range of bit depths, with trade-offs between model size, RAM usage, and inference quality. Similar models include the MythoMax-L2-13B-GGML and MythoMax-L2-13B-GPTQ, which offer different quantization formats. TheBloke has also provided quantized versions of other models, such as Llama-2-13B-chat-GGUF and CausalLM-14B-GGUF.

Model inputs and outputs

Inputs

  • Text: The model takes natural language text as input, which can include prompts, instructions, or conversational messages.

Outputs

  • Text: The model generates fluent text responses, ranging from short answers to longer passages, tailored to the input prompt and covering a wide variety of topics.

Capabilities

The MythoMax-L2-13B-GGUF model is proficient at both roleplaying and storywriting due to its unique merging of the MythoLogic-L2 and Huginn models. It demonstrates strong language understanding and generation capabilities, allowing it to engage in coherent and contextual conversations. The model can be used for tasks such as creative writing, dialogue generation, and language understanding.

What can I use it for?

The MythoMax-L2-13B-GGUF model can be used for a variety of natural language processing tasks, particularly those involving creative writing and interactive dialogue. Some potential use cases include:

  • Narrative generation: generating original stories, plot lines, and character dialogue.
  • Interactive fiction: incorporating the model into interactive fiction or choose-your-own-adventure style experiences.
  • Roleplaying assistant: enabling engaging roleplaying scenarios and character interactions.
  • Conversational AI: powering chatbots or virtual assistants with the model's language understanding and generation abilities.

Things to try

One interesting aspect of the MythoMax-L2-13B-GGUF model is its blend of capabilities from the MythoLogic-L2 and Huginn models. You could explore its performance on tasks that require both robust language understanding and creative writing, such as generating coherent and engaging fictional narratives in response to open-ended prompts. You could also experiment with using the model as a roleplaying assistant, providing it with character profiles and scenario details to see how it responds and develops the interaction.
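GGUF files like these are typically run with llama.cpp or its Python bindings. A minimal sketch using llama-cpp-python; the file name, context size, and GPU layer count are assumptions, so pick the quantization file that fits your hardware:

```python
# Sketch: running a GGUF quantization with llama-cpp-python.
def run_gguf(prompt: str,
             model_path: str = "mythomax-l2-13b.Q4_K_M.gguf") -> str:
    from llama_cpp import Llama

    llm = Llama(
        model_path=model_path,
        n_ctx=4096,        # context window
        n_gpu_layers=35,   # layers to offload to GPU; 0 for CPU-only
    )
    out = llm(prompt, max_tokens=256, temperature=0.7)
    return out["choices"][0]["text"]
```

Higher-bit files (e.g. Q8) need more RAM but degrade quality less, which is the trade-off the bit-depth range mentioned above exposes.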



Pygmalion-2-13B-GPTQ

TheBloke

Total Score

42

The Pygmalion-2-13B-GPTQ is a quantized version of the Pygmalion 2 13B language model created by PygmalionAI. It is a merge of Pygmalion-2 13B and Gryphe's MythoMax 13B model. According to the maintainer TheBloke, this model seems to outperform the original MythoMax in roleplaying and chat tasks. Similar quantized models available from TheBloke include the Mythalion-13B-GPTQ and the Llama-2-13B-GPTQ, each providing different quantization options to optimize for performance on various hardware.

Model inputs and outputs

Inputs

  • Text prompts, which can be formatted using the model's `<|system|>`, `<|user|>`, and `<|model|>` tokens. These allow injecting context, indicating user input, and specifying where the model should generate a response.

Outputs

  • Generated text responses to the provided prompts, designed to excel at roleplaying and creative writing tasks.

Capabilities

The Pygmalion-2-13B-GPTQ model is capable of generating coherent, contextual responses to prompts. It performs well on roleplaying and chat tasks, able to maintain a consistent persona and produce long-form responses. These capabilities make it suitable for applications like interactive fiction, creative writing assistants, and conversational AI agents.

What can I use it for?

The Pygmalion-2-13B-GPTQ model can be used for a variety of natural language generation tasks, with a particular focus on roleplaying and creative writing. Some potential use cases include:

  • Interactive fiction: The model's ability to maintain character personas and generate contextual responses makes it well-suited for developing choose-your-own-adventure style interactive fiction experiences.
  • Creative writing assistance: The model can assist human writers by generating text passages, suggesting plot ideas, or helping to develop characters and worlds.
  • Conversational AI: The model's chat-oriented capabilities can be leveraged to build more natural and engaging conversational AI agents for customer service, virtual assistants, or other interactive applications.

Things to try

One interesting aspect of the Pygmalion-2-13B-GPTQ model is its use of the `<|system|>`, `<|user|>`, and `<|model|>` tokens to structure prompts and conversations. Experimenting with different ways to leverage this format, such as defining custom personas or modes for the model to operate in, can unlock novel use cases and interactions. Additionally, trying out the various quantization options provided by TheBloke (e.g. 4-bit or 8-bit, with different group sizes and Act Order settings) can help you find the best balance of performance and resource usage for your specific hardware and application requirements.
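A minimal sketch of assembling a conversation in that role-token format (token names as documented on the Pygmalion-2 model card; verify against the card before use):

```python
def build_pygmalion_prompt(system: str,
                           turns: list[tuple[str, str]]) -> str:
    """Assemble a Pygmalion-2-style prompt. `turns` is a list of
    (user_message, model_reply) pairs; leave the final model reply
    empty to ask the model to generate the next response."""
    parts = [f"<|system|>{system}"]
    for user_msg, model_msg in turns:
        parts.append(f"<|user|>{user_msg}")
        parts.append(f"<|model|>{model_msg}")
    return "".join(parts)

prompt = build_pygmalion_prompt(
    "Enter roleplay mode. You are a weary tavern keeper.",
    [("Tell me about the dragon sightings.", "")],
)
```

The `<|system|>` segment is where custom personas or modes would be defined when experimenting with the format.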
