Jarradh

Models by this creator


llama2_70b_chat_uncensored


Total Score: 66

The llama2_70b_chat_uncensored model is a fine-tuned version of the Llama-2 70B model, created by jarradh. It was fine-tuned on an uncensored/unfiltered Wizard-Vicuna conversation dataset using the QLoRA technique, trained for three epochs on a single NVIDIA A100 80GB GPU instance. The model is designed to give more direct and uncensored responses than the standard Llama-2 models. Similar models include the Wizard-Vicuna-13B-Uncensored-GPTQ and Wizard-Vicuna-30B-Uncensored-GPTQ from TheBloke, which also provide uncensored versions of Wizard-Vicuna models.

Model inputs and outputs

Inputs
Text prompts: The model accepts text prompts as input, which it uses to generate relevant responses.

Outputs
Generated text: The model outputs generated text in response to the input prompts.

Capabilities

The llama2_70b_chat_uncensored model is designed to provide more direct and uncensored responses than standard Llama-2 models. For example, when asked "What is a poop?", the uncensored model gives a straightforward answer, while the standard Llama-2 model responds with a more cautious and sanitized explanation.

What can I use it for?

This model could be useful for applications that require more natural and unfiltered language, such as creative writing, dialogue generation, or conversational AI systems. Note, however, that the model has no guardrails, so the content it generates must be carefully monitored and moderated.

Things to try

One interesting thing to try with this model is to compare its responses to those of the standard Llama-2 models on a variety of prompts, particularly those that touch on sensitive or controversial topics. This can help illustrate the differences in approach and the potential tradeoffs involved in using an uncensored model.
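As a starting point for such a comparison, here is a minimal loading-and-generation sketch using Hugging Face transformers. It assumes the weights are published as jarradh/llama2_70b_chat_uncensored on the Hugging Face Hub and that the model follows the "### HUMAN: / ### RESPONSE:" prompt template used by similar Wizard-Vicuna fine-tunes; check the model card before relying on either detail. A 70B model in fp16 also needs multiple high-memory GPUs (or a quantized variant such as TheBloke's GPTQ builds).

```python
# Sketch: load the (assumed) Hub repo and generate a reply, then repeat the same
# prompt against a standard Llama-2 chat model to compare the two answers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jarradh/llama2_70b_chat_uncensored"  # assumed repo id; verify on the Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # ~140 GB in fp16; use a quantized build on smaller hardware
    device_map="auto",          # shard across whatever GPUs are available
)

# Assumed prompt template for this fine-tune.
prompt = "### HUMAN:\nWhat is a poop?\n\n### RESPONSE:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Print only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```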

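The QLoRA recipe mentioned above (a 4-bit quantized, frozen base model plus trainable low-rank adapters) is what makes a 70B fine-tune feasible on a single 80 GB GPU. The following is a generic sketch of such a setup with the peft and bitsandbytes libraries, not jarradh's actual training script; the base model id, target modules, and hyperparameters are illustrative placeholders.

```python
# Sketch of a typical QLoRA setup: 4-bit NF4 base model + LoRA adapters.
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

base_model = "meta-llama/Llama-2-70b-hf"  # illustrative base model id

# 4-bit NF4 quantization keeps the frozen base weights small enough to fit,
# alongside adapter gradients and optimizer state, on a single 80 GB A100.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Only the low-rank adapter weights are trained; the 4-bit base stays frozen.
lora_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # illustrative choice
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# From here the wrapped model can be fine-tuned with a standard supervised
# fine-tuning loop over the conversation dataset, and the adapters can later
# be merged back into the base weights for release.
```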

Updated 5/28/2024