GPT-NeoX-20B-Erebus

Maintainer: KoboldAI

Total Score: 82

Last updated: 5/28/2024


  • Run this model: Run on HuggingFace
  • API spec: View on HuggingFace
  • Github link: No Github link provided
  • Paper link: No paper link provided


Model overview

GPT-NeoX-20B-Erebus is a 20 billion parameter autoregressive language model developed by KoboldAI. It is the second generation of the original Shinen model; the name "Erebus" comes from Greek mythology and means "darkness", echoing the "deep abyss" meaning of Shin'en. The model builds on EleutherAI's GPT-NeoX-20B, which was trained on over 800GB of text, and was fine-tuned by KoboldAI on adult-themed sources such as Literotica, Sexstories, and private collections of explicit stories.

Similar models include OPT-13B-Erebus and OPT-30B-Erebus, which share the "Erebus" theme and adult-oriented focus, as well as EleutherAI's general-purpose GPT-Neo 2.7B, which uses a related architecture at a much smaller scale. Together they illustrate the range of large language models developed by the AI community to serve specialized use cases.

Model inputs and outputs

Inputs

  • Textual prompts of varying lengths, used to seed the model and guide the generated continuation.

Outputs

  • Coherent, contextually relevant text that continues the provided prompt, with a strong bias toward generating adult-themed content (a minimal usage sketch follows below).
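
As a concrete illustration of this input/output contract, here is a minimal sketch of driving the model through the Hugging Face transformers library. The repository id KoboldAI/GPT-NeoX-20B-Erebus follows the "Run on HuggingFace" link above, but the dtype, device placement, and sampling settings are assumptions rather than recommendations, and a 20B-parameter checkpoint needs substantial GPU memory.

```python
# Minimal continuation sketch, assuming the checkpoint is published on the
# Hugging Face Hub as "KoboldAI/GPT-NeoX-20B-Erebus". A 20B-parameter model
# needs roughly 40+ GB of memory in fp16, so device_map="auto" (via the
# `accelerate` package) or quantization may be required on smaller hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KoboldAI/GPT-NeoX-20B-Erebus"  # assumed Hub repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halve the memory footprint; requires a GPU
    device_map="auto",          # spread layers across available devices
)

prompt = "The lighthouse keeper had not spoken to anyone in weeks."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Autoregressive continuation of the prompt.
output_ids = model.generate(**inputs, max_new_tokens=100, do_sample=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```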

Capabilities

GPT-NeoX-20B-Erebus is a highly capable text generation model, able to fluently continue prompts and generate natural-sounding narratives. However, its specialized training dataset means it is most adept at producing content with mature or explicit themes. This model should be used with caution, as it may generate biased or offensive text, even when prompted with neutral input.

What can I use it for?

Given its specialization in adult-oriented content, GPT-NeoX-20B-Erebus is best suited to projects involving erotic fiction or other forms of mature creative writing. It could potentially be fine-tuned for use in chatbots or interactive fiction, though great care would be needed to filter and moderate its outputs; a simple illustration of post-generation filtering follows below.
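
To make the moderation point concrete, here is a deliberately simplistic sketch of post-generation filtering. The blocklist terms and the is_safe/generate_moderated helpers are hypothetical placeholders, not part of any KoboldAI tooling; a real deployment would rely on a dedicated moderation model or API rather than keyword matching.

```python
# Hypothetical post-generation filter; illustrative only, not a real
# moderation system. BLOCKLIST, is_safe, and generate_moderated are
# placeholders invented for this sketch.
from typing import Callable, Optional

BLOCKLIST = {"example_blocked_term"}  # placeholder terms

def is_safe(text: str) -> bool:
    """Return False if any blocked term appears in the generated text."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def generate_moderated(
    generate_fn: Callable[[str], str], prompt: str, max_attempts: int = 3
) -> Optional[str]:
    """Retry generation a few times and return the first output that passes the filter."""
    for _ in range(max_attempts):
        candidate = generate_fn(prompt)
        if is_safe(candidate):
            return candidate
    return None  # caller decides how to handle a fully filtered result
```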

Things to try

One interesting experiment would be to prompt the model with neutral or mundane text, and observe how it steers the narrative in a more adult direction. This could provide insight into the biases and patterns learned during its training on explicit datasets. However, the results of such experiments should be carefully reviewed, as the model may generate content that is inappropriate or offensive.



This summary was produced with help from an AI and may contain inaccuracies - check out the links to read the original source documents!

Related Models


OPT-13B-Erebus

Maintainer: KoboldAI

Total Score: 208

The OPT-13B-Erebus model is a large language model developed by KoboldAI, a community of AI enthusiasts. It is the second generation of the original "Shinen" model created by Mr. Seeker, trained on a dataset of adult-themed stories and narratives spanning sources like Literotica, Sexstories, and SoFurry. The model is intended for mature audiences only, as it will generate X-rated content. It is smaller in scale than a model like OPT-66B, and unlike general-purpose models such as GPT-Neo 2.7B and GPT-J 6B it specializes in an adult-focused domain. Like other large language models, it uses the transformer architecture and is trained on a massive amount of natural language data.

Model inputs and outputs

Inputs

  • Textual prompts: The model takes in textual prompts as input, which it uses to generate new text.

Outputs

  • Generated text: The primary output of the OPT-13B-Erebus model is new text, generated from the provided prompt and in line with the adult-themed dataset the model was trained on.

Capabilities

The OPT-13B-Erebus model excels at generating explicit, adult-oriented content. It can be used to create detailed, immersive narratives and stories within the "Adult" genre. The model's strong bias towards this type of content makes it unsuitable for general-purpose use, but it could be valuable for specialized applications where mature themes are appropriate.

What can I use it for?

The OPT-13B-Erebus model could be used for the creation of adult-themed stories, erotica, and other related content. This could include writing commissions, collaborative worldbuilding, or the generation of prompts for artistic endeavors. Given the model's mature focus, it should only be used in appropriate contexts and with a clear understanding of the potential risks and limitations.

Things to try

One interesting thing to explore with the OPT-13B-Erebus model is its ability to generate cohesive, long-form narratives within the adult genre. Providing the model with detailed prompts or story starters can result in surprising and creative plot developments. Experimenting with different prompting techniques, such as genre tags or character descriptions, can also yield unique and compelling results, as in the sketch below.
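
The snippet below sketches the genre-tag idea using the Hugging Face text-generation pipeline. The bracketed "[Genre: ...]" format follows the convention described on the KoboldAI Erebus model cards, and the Hub id KoboldAI/OPT-13B-Erebus is assumed; treat both as assumptions rather than verified specifics.

```python
# Sketch of genre-tag prompting via the transformers text-generation pipeline.
# The "[Genre: ...]" tag format and the Hub id "KoboldAI/OPT-13B-Erebus" are
# assumptions based on the KoboldAI Erebus model cards.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="KoboldAI/OPT-13B-Erebus",  # ~13B parameters; needs a large GPU
    device_map="auto",
)

prompt = (
    "[Genre: dark fantasy, romance]\n"
    "The storm had trapped them in the old manor for three days."
)
result = generator(prompt, max_new_tokens=120, do_sample=True)
print(result[0]["generated_text"])
```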



OPT-30B-Erebus

Maintainer: KoboldAI

Total Score: 58

The OPT-30B-Erebus model is the second generation of the original Shinen model created by Mr. Seeker. This large language model was trained by KoboldAI on a dataset spanning six different sources, all focused on "adult" themes. The name "Erebus" is drawn from Greek mythology, representing "darkness", aligning with the "deep abyss" meaning of Shin'en. Similar large models in this family include OPT-13B-Erebus and OPT-6.7B-Erebus.

Model inputs and outputs

The OPT-30B-Erebus model is an autoregressive, decoder-only transformer that can be used for various language generation tasks. It takes text prompts as input and generates corresponding text outputs.

Inputs

  • Text prompts of varying lengths

Outputs

  • Coherent, human-like text continuations and completions based on the input prompts

Capabilities

The OPT-30B-Erebus model demonstrates strong language generation capabilities, producing detailed and contextually relevant text outputs. However, as a model trained on adult-themed data, it has a strong bias towards generating NSFW content and is not suitable for use by minors.

What can I use it for?

Given its specialized training data, the OPT-30B-Erebus model is most applicable to adult-themed creative writing, erotica generation, or other applications requiring uncensored text output. Users should exercise caution and responsibility when using this model, as the content it generates may be inappropriate or offensive in many contexts.

Things to try

Experiment with different types of prompts to see the range of responses the model can generate. You can try steering it towards more tasteful or tame outputs by carefully crafting your prompts, but be mindful that the model's strong bias towards NSFW content may be difficult to overcome completely.



OPT-6.7B-Erebus

Maintainer: KoboldAI

Total Score: 92

The OPT-6.7B-Erebus is a large language model developed by KoboldAI. It is the second generation of the original Shinen model, trained on a dataset consisting of six different sources surrounding "Adult" themes. The name "Erebus" comes from Greek mythology, representing "darkness", similar to the original Shin'en, or "deep abyss". The model is similar to the other Erebus models from KoboldAI, such as OPT-13B-Erebus and GPT-NeoX-20B-Erebus, which were also trained on adult-themed datasets, but it has a smaller parameter count.

Model Inputs and Outputs

Inputs

  • Text prompts for text generation

Outputs

  • Continuation of the input text, generated in an autoregressive manner

Capabilities

The OPT-6.7B-Erebus model is capable of generating coherent, adult-themed text based on provided prompts. It can produce narratives, descriptions, and dialogue in a variety of adult genres and styles. However, as the model was trained on an explicit dataset, the generated output will reflect this bias and may not be suitable for all audiences.

What Can I Use It For?

The OPT-6.7B-Erebus model could be used for creative writing projects, erotica, or other adult-oriented content generation. However, it's important to be aware of the model's biases and limitations and to use it responsibly. The model should not be deployed in public-facing applications without proper moderation and filtering.

Things to Try

You could try providing the model with different types of adult-themed prompts, such as romance, erotica, or sensual descriptions, and see how the model responds. You could also experiment with altering the generation parameters, like temperature or top-k sampling, to adjust the style and content of the generated text, as sketched below. Just be mindful of the model's limitations and inappropriate outputs.
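
The sampling-parameter experiment suggested above can be sketched as follows. The Hub id KoboldAI/OPT-6.7B-Erebus and the specific temperature/top-k/top-p values are assumptions chosen for illustration, not tuned recommendations.

```python
# Sweep the sampling temperature while holding top_k/top_p fixed, to compare
# conservative vs. more varied continuations. Hub id and parameter values are
# illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KoboldAI/OPT-6.7B-Erebus"  # assumed Hub repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The letter arrived unsigned.", return_tensors="pt").to(model.device)

for temperature in (0.7, 1.0, 1.3):
    output = model.generate(
        **inputs,
        max_new_tokens=80,
        do_sample=True,
        temperature=temperature,  # lower = more conservative text
        top_k=50,                 # sample only from the 50 most likely tokens
        top_p=0.95,               # nucleus sampling cutoff
    )
    print(f"--- temperature={temperature} ---")
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```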



gpt-neo-2.7B

Maintainer: EleutherAI

Total Score: 390

gpt-neo-2.7B is a transformer language model developed by EleutherAI. It is a replication of the GPT-3 architecture with 2.7 billion parameters, trained on the Pile, a large-scale curated dataset created by EleutherAI, with an autoregressive language modeling objective. Similar models include GPT-NeoX-20B and GPT-J-6B, also developed by EleutherAI; these models use the same underlying approach but have different parameter counts.

Model Inputs and Outputs

gpt-neo-2.7B is a language model that can be used for text generation. The model takes a string of text as input and repeatedly predicts the next token, which allows it to continue a given prompt with coherent text.

Inputs

  • A string of text to be used as a prompt for the model.

Outputs

  • A continuation of the input text, generated by the model.

Capabilities

gpt-neo-2.7B excels at generating human-like text from a given prompt. It can be used to continue stories, write articles, and generate other forms of natural language. The model has also shown strong performance on downstream tasks like question answering and text summarization.

What Can I Use It For?

gpt-neo-2.7B can be a useful tool for a variety of natural language processing tasks, such as:

  • Content generation: producing text for blog posts, stories, scripts, and other creative writing projects.
  • Chatbots and virtual assistants: fine-tuning the model to engage in more natural, human-like conversations.
  • Question answering: answering questions based on provided context.
  • Text summarization: generating concise summaries of longer passages of text.

Things to Try

One interesting aspect of gpt-neo-2.7B is its flexibility in handling different prompts. Try providing the model with a wide range of inputs, from creative writing prompts to more analytical tasks, and observe how it responds. This can help you understand the model's strengths and limitations and identify potential use cases that fit your needs.
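
For reference, the snippet below shows basic text generation with this model through the transformers pipeline; EleutherAI/gpt-neo-2.7B is the public Hub id, while the prompt and sampling settings are arbitrary examples.

```python
# Basic text generation with gpt-neo-2.7B via the transformers pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-2.7B")

prompt = "EleutherAI has released a new open-source language model that"
outputs = generator(prompt, do_sample=True, max_new_tokens=60)
print(outputs[0]["generated_text"])
```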
