MythoMax-L2-13B-GGML

Maintainer: TheBloke

Total Score: 81

Last updated 5/28/2024


  • Run this model: Run on HuggingFace
  • API spec: View on HuggingFace
  • Github link: No Github link provided
  • Paper link: No paper link provided


Model overview

MythoMax-L2-13B-GGML is an AI model created by Gryphe and further optimized and quantized by TheBloke. It is an improved variant of Gryphe's MythoLogic-L2 and Huginn models, combining MythoLogic-L2's robust understanding with Huginn's extensive writing capability. The model uses an experimental tensor merge technique to blend these strengths, resulting in strong performance on both roleplaying and storywriting tasks.

TheBloke has provided quantized versions of the model in GGML format, which can be used for efficient CPU and GPU inference. These include 4-bit, 5-bit and 8-bit quantized models as well as GPTQ models for GPU acceleration. There are also GGUF models available, which provide improved compatibility with the latest version of llama.cpp.
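For a concrete picture of how these quantized files are used, here is a minimal sketch of running CPU inference from Python. It assumes the ctransformers library is installed and one of the GGML files has been downloaded; the exact filename varies by quantization level and is illustrative here:

```python
from ctransformers import AutoModelForCausalLM

# Load a 4-bit GGML quantization of MythoMax-L2-13B. model_type="llama"
# tells ctransformers which architecture to use; the filename below is
# illustrative - use whichever quantization you actually downloaded.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/MythoMax-L2-13B-GGML",
    model_file="mythomax-l2-13b.ggmlv3.q4_K_M.bin",
    model_type="llama",
    gpu_layers=0,  # raise this to offload some layers to the GPU
)

print(llm("Tell me a short story about a dragon librarian.", max_new_tokens=200))
```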

Model inputs and outputs

Inputs

  • Text: The model takes text as input, typically wrapped in a prompt template (see the sketch after this list), which it uses to generate further text outputs.

Outputs

  • Text: The model generates natural language text outputs, which can be used for a variety of purposes such as creative writing, roleplay, and language tasks.
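In practice, the input text is usually wrapped in a prompt template; TheBloke's card for this model recommends an Alpaca-style format. A small helper that builds such a prompt follows; the preamble wording is the common Alpaca convention and should be checked against the model card:

```python
def alpaca_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca-style template that
    MythoMax-L2 variants are typically prompted with."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

print(alpaca_prompt("Write the opening scene of a heist set in a floating city."))
```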

Capabilities

The MythoMax-L2-13B-GGML model excels at both roleplaying and storywriting due to its unique tensor merge technique, which combines the strengths of the MythoLogic-L2 and Huginn models. It is able to generate coherent and engaging text across a range of styles and genres.

What can I use it for?

The MythoMax-L2-13B-GGML model can be used for a variety of text generation tasks, such as:

  • Creative writing and storytelling
  • Roleplaying and interactive fiction
  • Language modeling and downstream NLP applications

The quantized versions provided by TheBloke allow for efficient inference on both CPU and GPU, making the model accessible to a wide range of users and use cases.

Things to try

One interesting aspect of the MythoMax-L2-13B-GGML model is its ability to generate long, coherent responses. This can be particularly useful for roleplaying and interactive fiction scenarios, where the model can maintain a consistent narrative and character over an extended exchange.
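One simple way to run such an extended exchange is to carry the whole conversation history forward in the prompt on every turn. A minimal sketch, again assuming the ctransformers library and an illustrative quantized filename:

```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/MythoMax-L2-13B-GGML",
    model_file="mythomax-l2-13b.ggmlv3.q4_K_M.bin",  # illustrative filename
    model_type="llama",
)

# Keep the full exchange in the prompt so the model can stay in character.
history = "You are Captain Mirel, a weary airship smuggler. Stay in character.\n\n"

for user_turn in ["Who are you?", "What cargo are you carrying?"]:
    prompt = history + f"User: {user_turn}\nMirel:"
    reply = llm(prompt, max_new_tokens=150, stop=["User:"])
    history = prompt + reply + "\n"
    print(f"Mirel: {reply.strip()}")
```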

Researchers and developers may also want to explore fine-tuning the model on domain-specific data to further improve its performance on specialized tasks. Gryphe's original unquantised fp16 model is a good starting point for further training and customization.
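For the fine-tuning route, a parameter-efficient method such as LoRA keeps memory requirements manageable on a 13B model. Below is a minimal sketch using the transformers and peft libraries against the unquantised fp16 weights; the repository id and target modules are typical assumed choices, not prescriptions from the model card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Gryphe/MythoMax-L2-13b"  # unquantised fp16 weights (repo id assumed)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto"
)

# LoRA adapters on the attention projections - a common, cheap starting point.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the 13B weights
# ...then train on your domain-specific dataset, e.g. with the transformers Trainer.
```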



This summary was produced with help from an AI and may contain inaccuracies; check out the links to read the original source documents!

Related Models

MythoMax-L2-13B-GGUF

TheBloke

Total Score: 61

The MythoMax-L2-13B-GGUF is an AI language model created by TheBloke. It is a quantized version of Gryphe's MythoMax L2 13B model, an improved variant that merged the MythoLogic-L2 and Huginn models using an experimental tensor merging technique. The quantized versions available from TheBloke cover a range of bit depths, with different trade-offs between model size, RAM usage, and inference quality. Similar models include the MythoMax-L2-13B-GGML and MythoMax-L2-13B-GPTQ, which offer different quantization formats. TheBloke has also provided quantized versions of other models, such as Llama-2-13B-chat-GGUF and CausalLM-14B-GGUF.

Model inputs and outputs

Inputs

  • Text: The model takes natural language text as input, which can include prompts, instructions, or conversational messages.

Outputs

  • Text: The model generates fluent text responses, which can range from short answers to longer passages. The output is tailored to the input prompt and can cover a wide variety of topics.

Capabilities

The MythoMax-L2-13B-GGUF model is proficient at both roleplaying and storywriting due to its unique merging of the MythoLogic-L2 and Huginn models. It demonstrates strong language understanding and generation capabilities, allowing it to engage in coherent and contextual conversations. The model can be used for tasks such as creative writing, dialogue generation, and language understanding.

What can I use it for?

The MythoMax-L2-13B-GGUF model can be used for a variety of natural language processing tasks, particularly those involving creative writing and interactive dialogue. Some potential use cases include:

  • Narrative generation: Use the model to generate original stories, plot lines, and character dialogues.
  • Interactive fiction: Incorporate the model into interactive fiction or choose-your-own-adventure style experiences.
  • Roleplaying assistant: Leverage the model's capabilities to enable engaging roleplaying scenarios and character interactions.
  • Conversational AI: Utilize the model's language understanding and generation abilities to power chatbots or virtual assistants.

Things to try

One interesting aspect of the MythoMax-L2-13B-GGUF model is its blend of capabilities from the MythoLogic-L2 and Huginn models. You could explore the model's performance on tasks that require both robust language understanding and creative writing, such as generating coherent and engaging fictional narratives in response to open-ended prompts. Additionally, you could experiment with using the model as a roleplaying assistant, providing it with character profiles and scenario details to see how it responds and develops the interaction.
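As with the GGML variant, the GGUF files can be loaded from Python. A minimal sketch assuming the llama-cpp-python library (which reads GGUF natively) and a downloaded quantized file; the filename is illustrative:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./mythomax-l2-13b.Q4_K_M.gguf",  # illustrative filename
    n_ctx=4096,       # context window size
    n_gpu_layers=35,  # set to 0 for CPU-only inference
)

out = llm(
    "### Instruction:\nDescribe a haunted lighthouse.\n\n### Response:\n",
    max_tokens=200,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```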

Read more


MythoMax-L2-13B-GPTQ

TheBloke

Total Score: 161

MythoMax L2 13B is a large language model created by Gryphe, supported by a grant from Andreessen Horowitz (a16z). Like other prominent open-source chat models such as Llama 2 7B Chat and Falcon 180B Chat, it is a freely available conversational model, but with a particular focus on mythological and fantastical content.

Model inputs and outputs

MythoMax-L2-13B-GPTQ is a text-to-text generative model: it takes text prompts as input and generates new text as output. The model was trained on a large dataset of online text, with a focus on mythological and fantasy-related content.

Inputs

  • Text prompts: The model takes freeform natural language text as input, which it then uses to generate new text in response.

Outputs

  • Generated text: The model outputs new text that continues or expands upon the provided input prompt. The output can range from a single sentence to multiple paragraphs, depending on the prompt and the model's parameters.

Capabilities

The MythoMax-L2-13B-GPTQ model is capable of generating engaging, coherent text on a wide variety of fantasy and mythological topics. It can be used to produce creative stories, worldbuilding details, character dialogue, and more. The model's knowledge spans mythological creatures, legends, magical systems, and other fantastical concepts.

What can I use it for?

The MythoMax-L2-13B-GPTQ model is well suited to fantasy and science-fiction writing projects. Writers and worldbuilders can use it to generate ideas, expand on existing stories, or flesh out the details of imaginary realms. It could also be leveraged for interactive storytelling applications, roleplaying games, or even AI-generated fanfiction.

Things to try

Try prompting the model with the beginning of a fantastical story or worldbuilding prompt, and see how it continues the narrative. You can also experiment with more specific requests, such as asking it to describe a particular mythological creature or magical ritual. The model's responses may surprise you with their creativity and attention to detail.
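For GPU inference, the GPTQ weights can be loaded directly through transformers, which dispatches to the GPTQ kernels when the auto-gptq (or optimum) package is installed. A minimal sketch, with the repository id assumed from the entry title:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TheBloke/MythoMax-L2-13B-GPTQ"  # 4-bit GPTQ weights
tokenizer = AutoTokenizer.from_pretrained(repo)
# device_map="auto" places the quantized layers on the available GPU(s);
# requires auto-gptq (and optimum) to be installed.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer(
    "### Instruction:\nInvent a trickster deity of lost maps.\n\n### Response:\n",
    return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=200, temperature=0.8, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```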

Read more


Mythalion-13B-GGUF

TheBloke

Total Score: 62

The Mythalion-13B-GGUF is a large language model created by PygmalionAI and quantized by TheBloke. It is a 13-billion-parameter model built on the Llama 2 architecture and fine-tuned for improved coherency and performance in roleplaying and storytelling tasks. The model is available in a variety of quantized versions to suit different hardware and performance needs, ranging from 2-bit to 8-bit precision. Similar models from TheBloke include the MythoMax-L2-13B-GGUF, which combines the robust understanding of MythoLogic-L2 with the extensive writing capability of Huginn, and the Mythalion-13B-GPTQ, which uses GPTQ quantization instead of GGUF.

Model inputs and outputs

Inputs

  • Text: The Mythalion-13B-GGUF model accepts text inputs, which can be used to provide instructions, prompts, or conversation context.

Outputs

  • Text: The model generates coherent text responses to continue conversations or complete tasks specified in the input.

Capabilities

The Mythalion-13B-GGUF model excels at roleplay and storytelling tasks. It can engage in nuanced and contextual dialogue, generating relevant and coherent responses. The model also demonstrates strong writing capabilities, allowing it to produce compelling narrative content.

What can I use it for?

The Mythalion-13B-GGUF model can be used for a variety of creative and interactive applications, such as:

  • Roleplaying and creative writing: Integrate the model into interactive fiction platforms or chatbots to enable engaging, character-driven stories and dialogues.
  • Conversational AI assistants: Utilize the model's strong language understanding and generation capabilities to build helpful, friendly, and trustworthy AI assistants.
  • Narrative generation: Leverage the model's storytelling abilities to automatically generate plot outlines, character biographies, or even full-length stories.

Things to try

One interesting aspect of the Mythalion-13B-GGUF model is its ability to maintain coherence and consistency across long-form interactions. Try providing the model with a detailed character prompt or backstory, and see how it continues the narrative and stays true to the established persona over the course of an extended conversation. Another interesting experiment is to explore the model's capacity for world-building: start with a high-level premise or setting, and prompt the model to expand on the details, introducing new characters, locations, and plot points in a coherent and compelling way.
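Pygmalion-family models are typically prompted with either the Alpaca format or the Pygmalion/Metharme role-token format. A small helper sketching the latter for a character-card prompt; the exact tokens and system wording follow the published Metharme convention but should be verified against the Mythalion model card:

```python
def metharme_prompt(persona: str, user_message: str) -> str:
    """Build a Pygmalion/Metharme-style prompt: a system block carrying the
    character card, followed by a user turn and an open model turn."""
    return (
        f"<|system|>Enter RP mode. You are playing the following character.\n{persona}"
        f"<|user|>{user_message}<|model|>"
    )

persona = "Name: Sera. A sardonic court mage who speaks in riddles."
print(metharme_prompt(persona, "Sera, the king demands an audience. What do you do?"))
```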

Read more


Llama-2-13B-chat-GGML

TheBloke

Total Score: 680

The Llama-2-13B-chat-GGML model is a 13-billion-parameter large language model created by Meta and optimized for dialogue use cases. It is part of the Llama 2 family of models, which range in size from 7 billion to 70 billion parameters and are designed for a variety of natural language generation tasks. This specific model has been converted to the GGML format, which is designed for CPU and GPU inference using tools like llama.cpp and associated libraries and UIs. The GGML format has since been superseded by GGUF, so users are encouraged to use the GGUF versions of these models going forward. Similar models include the Llama-2-7B-Chat-GGML, a smaller chat variant, and the Llama-2-13B-GGML, the base (non-chat) model of the same size, both in GGML format.

Model Inputs and Outputs

Inputs

  • Raw text

Outputs

  • Generated text continuations

Capabilities

The Llama-2-13B-chat-GGML model is capable of engaging in open-ended dialogue, answering questions, and generating coherent, context-appropriate text continuations. It has been fine-tuned to perform well on benchmarks for helpfulness and safety, making it suitable for use in assistant-like applications.

What Can I Use It For?

The Llama-2-13B-chat-GGML model could be used to power conversational AI assistants, chatbots, or other applications that require natural language generation and understanding. Given its strong performance on safety metrics, it may be particularly well suited to use cases where providing helpful and trustworthy responses is important.

Things to Try

One interesting aspect of the Llama-2-13B-chat-GGML model is its ability to handle context and engage in multi-turn conversations. Users could try prompting the model with a series of related questions or instructions to see how it maintains coherence and builds upon previous responses. Additionally, the model's quantization options allow for tuning the balance between performance and accuracy, so users could experiment with different quantization levels to find the optimal tradeoff for their specific use case.
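Llama 2's chat variants were fine-tuned with a specific turn format: [INST] blocks with an optional <<SYS>> system section, and responses degrade if it is omitted. A small helper that builds a single-turn prompt in that convention:

```python
def llama2_chat_prompt(system: str, user: str) -> str:
    """Format a single-turn prompt in the Llama 2 chat convention."""
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

print(llama2_chat_prompt(
    "You are a helpful, honest assistant.",
    "Summarize the difference between GGML and GGUF in two sentences.",
))
```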

Read more
