WizardLM-33B-V1.0-Uncensored

Maintainer: cognitivecomputations

Total Score

59

Last updated 5/28/2024


Run this model: Run on HuggingFace
API spec: View on HuggingFace
Github link: No Github link provided
Paper link: No paper link provided


Model overview

WizardLM-33B-V1.0-Uncensored is a large language model developed by cognitivecomputations as a retraining of the WizardLM/WizardLM-30B-V1.0 model with a filtered dataset. This model aims to reduce refusals, avoidance, and bias compared to the previous versions. It is important to note that since LLaMA itself has inherent ethical beliefs, there is no such thing as a "truly uncensored" model. However, this model is intended to be more compliant than the earlier WizardLM/WizardLM-7B-V1.0 version.

Model inputs and outputs

WizardLM-33B-V1.0-Uncensored is a text-to-text model, meaning it takes text as input and generates text as output. The model is trained using Vicuna-1.1 style prompts, where the user provides a prompt, and the model generates a response.

Inputs

  • Text prompts provided by the user

Outputs

  • Generated text responses from the model
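The Vicuna-1.1 prompt style mentioned above wraps the user's message in a fixed conversational template before it is sent to the model. A minimal sketch of that format follows; the system preamble shown is the commonly used Vicuna-1.1 default, and the helper function name is illustrative rather than part of any official API:

```python
def build_vicuna_prompt(user_message: str) -> str:
    """Assemble a Vicuna-1.1 style prompt: system preamble plus USER/ASSISTANT turns."""
    system = (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions."
    )
    # The model is expected to continue generating text after the trailing "ASSISTANT:" marker.
    return f"{system} USER: {user_message} ASSISTANT:"

prompt = build_vicuna_prompt("Explain low-rank adaptation in one sentence.")
```

The resulting string is what you would pass to the tokenizer; multi-turn conversations extend the same pattern by appending further USER/ASSISTANT pairs.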

Capabilities

WizardLM-33B-V1.0-Uncensored has demonstrated strong performance on the Open LLM Leaderboard, where it scored an average of 54.41 across the leaderboard's benchmarks. The model does particularly well on the AI2 Reasoning Challenge (63.65), HellaSwag (83.84), and Winogrande (77.66).

What can I use it for?

The WizardLM-33B-V1.0-Uncensored model can be used for a wide range of text-generation tasks, such as content creation, dialogue systems, and language translation. However, it's important to note that this model is "uncensored" and has no built-in guardrails, so users are responsible for the content they generate and publish.
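Because the model ships with no built-in guardrails, one lightweight pattern is to wrap generation in your own output filter. The sketch below is purely illustrative: the blocklist patterns and function name are hypothetical placeholders, not part of the model or any library:

```python
import re

# Hypothetical patterns to block; a real deployment would use a proper moderation model.
BLOCKLIST = [r"\bpassword\b", r"\bcredit card\b"]

def moderate(text: str) -> str:
    """Return the generated text unchanged, or a refusal marker if it matches a blocked pattern."""
    for pattern in BLOCKLIST:
        if re.search(pattern, text, flags=re.IGNORECASE):
            return "[filtered]"
    return text
```

A filter like this would sit between the model's raw output and whatever the application shows to users, which is one simple way to take on the responsibility the model itself does not.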

Things to try

One interesting aspect of WizardLM-33B-V1.0-Uncensored is its potential for further fine-tuning or prompt engineering. By leveraging the model's strong baseline performance, users can explore ways to customize and adapt it for their specific use cases, such as adding alignment or other desired behaviors through techniques like RLHF LoRA.
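The RLHF LoRA approach referenced above amounts to learning a small low-rank update on top of frozen base weights. A minimal numpy sketch of the underlying math, with illustrative shapes and names rather than actual training code:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                              # hidden size and LoRA rank (r much smaller than d)
W = rng.standard_normal((d, d))          # frozen base weight matrix
A = rng.standard_normal((r, d)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # B starts at zero, so the adapter is a no-op initially

def forward(x: np.ndarray) -> np.ndarray:
    """Base projection plus the low-rank LoRA update: (W + B @ A) @ x."""
    return (W + B @ A) @ x

x = rng.standard_normal(d)
assert np.allclose(forward(x), W @ x)    # with B = 0 the adapter changes nothing yet
```

Only A and B are trained (here 32 parameters versus 64 in W; at 33B scale the savings are far larger), which is why alignment can be added or swapped out as a separate adapter without retraining the base model.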



This summary was produced with help from an AI and may contain inaccuracies - check out the links to read the original source documents!

Related Models


WizardLM-1.0-Uncensored-Llama2-13b

cognitivecomputations

Total Score

48

The WizardLM-1.0-Uncensored-Llama2-13b is a retraining of the WizardLM/WizardLM-13B-V1.0 model with a filtered dataset, intended to reduce refusals, avoidance, and bias. Like the original WizardLM, this model is trained with Vicuna-1.1 style prompts. It is one of several uncensored models created by cognitivecomputations, including the WizardLM-33B-V1.0-Uncensored, WizardLM-30B-Uncensored, Wizard-Vicuna-13B-Uncensored, Wizard-Vicuna-7B-Uncensored, and Wizard-Vicuna-30B-Uncensored.

Model inputs and outputs

The WizardLM-1.0-Uncensored-Llama2-13b model is a text-to-text transformer, taking prompts as input and generating text responses. The model is trained to act as a helpful AI assistant.

Inputs

  • Prompts: The user's input prompt or query to the model.

Outputs

  • Responses: The model's generated text response to the user's input.

Capabilities

The WizardLM-1.0-Uncensored-Llama2-13b model can engage in a wide variety of language tasks, such as question answering, text generation, and summarization. It has been evaluated on several benchmark datasets, including the AI2 Reasoning Challenge, HellaSwag, MMLU, TruthfulQA, Winogrande, GSM8k, and DROP, where it has shown strong performance.

What can I use it for?

The WizardLM-1.0-Uncensored-Llama2-13b model can be used for a variety of language-based applications, such as chatbots, content generation, and knowledge retrieval. However, as an uncensored model, it is important to use it responsibly and be aware of the potential risks. Users should carefully consider the ethical implications of any content generated by the model before publishing or sharing it.

Things to try

With the WizardLM-1.0-Uncensored-Llama2-13b model, you can experiment with a wide range of language tasks, from creative writing to analytical problem-solving. Try prompting the model with open-ended questions or hypothetical scenarios and see how it responds. You can also fine-tune the model for specific use cases or combine it with other techniques, such as reinforcement learning, to enhance its capabilities.



WizardLM-30B-Uncensored

cognitivecomputations

Total Score

137

WizardLM-30B-Uncensored is a large language model created by cognitivecomputations that was trained on a subset of the dataset, with responses containing alignment or moralizing removed. The intent is to train a WizardLM model without built-in alignment, so that alignment can be added separately using techniques like Reinforcement Learning from Human Feedback (RLHF) LoRA. Similar models include the WizardLM-13B-Uncensored, WizardLM-7B-Uncensored, and WizardLM-30B-Uncensored-GPTQ models, all of which share the goal of removing built-in alignment from the WizardLM architecture.

Model inputs and outputs

WizardLM-30B-Uncensored is a text-to-text model, accepting free-form text prompts as input and generating completions as output. The model can be used for a wide variety of natural language tasks, including answering questions, generating stories or articles, and engaging in open-ended conversation.

Inputs

  • Free-form text prompts of any length

Outputs

  • Completions of the input prompt, generating new relevant text

Capabilities

WizardLM-30B-Uncensored is a powerful language model capable of sophisticated natural language tasks. It can answer questions, generate coherent and engaging text on a wide range of topics, and carry on conversations. By removing the built-in alignment, the model avoids potential biases or limitations in its outputs, allowing for more open-ended and creative uses.

What can I use it for?

The WizardLM-30B-Uncensored model can be used for a variety of applications, including:

  • Chatbots and virtual assistants: The model's conversational capabilities make it well-suited for powering chatbots and virtual assistants that can engage in natural, open-ended dialogue.
  • Content generation: The model can be used to generate text for articles, stories, scripts, and other creative writing projects.
  • Question answering: The model can answer questions on a wide range of topics, drawing upon its broad knowledge base.
  • Research and experimentation: The uncensored nature of the model makes it an interesting subject for further research and experimentation into language model capabilities and alignment.

Things to try

One interesting aspect of the WizardLM-30B-Uncensored model is its lack of built-in alignment or constraints. This means that users can experiment with prompting the model to generate content on a wide range of topics, including potentially sensitive or controversial subjects. However, it's important to keep in mind the potential risks and to use the model responsibly, as the uncensored nature means there are no built-in guardrails. Some ideas for things to try with the model include:

  • Exploring the model's ability to engage in open-ended, free-flowing conversation on a variety of topics
  • Experimenting with the model's creativity by prompting it to generate stories, poems, or other forms of imaginative writing
  • Investigating the model's reasoning and analytical capabilities by asking it to solve problems or provide insights on complex topics

Overall, the WizardLM-30B-Uncensored model represents an interesting and powerful language model that offers both opportunities and challenges for users to explore.



Wizard-Vicuna-13B-Uncensored

cognitivecomputations

Total Score

278

The Wizard-Vicuna-13B-Uncensored model is an AI language model developed by cognitivecomputations and available on the Hugging Face platform. It is a version of the wizard-vicuna-13b model trained on a subset of the dataset, with responses that contained alignment or moralizing removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately with an RLHF LoRA. This model is part of a family of similar uncensored models, including the Wizard-Vicuna-7B-Uncensored, Wizard-Vicuna-30B-Uncensored, WizardLM-30B-Uncensored, WizardLM-33B-V1.0-Uncensored, and WizardLM-13B-Uncensored.

Model inputs and outputs

The Wizard-Vicuna-13B-Uncensored model is a text-to-text language model, which means it takes text as input and generates text as output. The model is trained to engage in open-ended conversations, answer questions, and complete a variety of natural language processing tasks.

Inputs

  • Text prompts: questions, statements, or other forms of natural language.

Outputs

  • Generated text: responses to the input prompt, usable for tasks such as question answering, language generation, and text completion.

Capabilities

The Wizard-Vicuna-13B-Uncensored model is a powerful language model that can be used for a variety of natural language processing tasks. It has shown strong performance on benchmarks such as the Open LLM Leaderboard, with high scores on tasks like the AI2 Reasoning Challenge, HellaSwag, and Winogrande.

What can I use it for?

The Wizard-Vicuna-13B-Uncensored model can be used for a wide range of natural language processing tasks, such as:

  • Chatbots and virtual assistants: conversational AI systems that can engage in open-ended dialogue and assist users with a variety of tasks.
  • Content generation: text for applications such as creative writing, article generation, and product descriptions.
  • Question answering: answering questions on a wide range of topics, useful for customer support and knowledge management.

Things to try

One interesting aspect of the Wizard-Vicuna-13B-Uncensored model is its "uncensored" nature. While this means the model has no built-in guardrails or alignment, it also provides an opportunity to explore how to add such safeguards separately, such as through an RLHF LoRA. This could be an interesting area of experimentation for researchers and developers looking to push the boundaries of language model capabilities while maintaining ethical and responsible AI development.



Wizard-Vicuna-7B-Uncensored

cognitivecomputations

Total Score

85

The Wizard-Vicuna-7B-Uncensored is a large language model developed by cognitivecomputations. It is based on the wizard-vicuna-13b model, but trained on a subset of the dataset, with responses that contained alignment or moralizing removed. The goal was to train a WizardLM that doesn't have alignment built-in, so that alignment can be added separately using techniques like an RLHF LoRA. Similar models developed by the same maintainer include the Wizard-Vicuna-30B-Uncensored, WizardLM-30B-Uncensored, WizardLM-7B-Uncensored, and WizardLM-13B-Uncensored. These models share a similar intent of training a WizardLM without built-in alignment.

Model inputs and outputs

Inputs

  • Text inputs, which can be prompts or conversational inputs.

Outputs

  • Generated text, usable for a variety of language tasks such as summarization, text generation, and question answering.

Capabilities

The Wizard-Vicuna-7B-Uncensored model is capable of generating human-like text on a wide range of topics. It can be used for tasks like creative writing, dialogue generation, and task-oriented conversations. However, as an uncensored model, it lacks the safety guardrails that would prevent it from generating potentially harmful or biased content.

What can I use it for?

The Wizard-Vicuna-7B-Uncensored model could be used for experimental or research purposes, but great caution should be exercised when deploying it in production or public-facing applications; it may be better suited for individual use or closed-door experimentation. Potential use cases include language model fine-tuning, dialogue systems research, or creative text generation, but the model's lack of safety filters means it should be used responsibly.

Things to try

When working with the Wizard-Vicuna-7B-Uncensored model, carefully monitor the outputs and ensure they align with your intended use case. You may want to experiment with prompt engineering to steer the model's responses in a more controlled direction. Additionally, you could explore techniques like an RLHF LoRA to add alignment and safety filters to the model, as mentioned in the model's description.
