Cognitivecomputations

Models by this creator

dolphin-2.5-mixtral-8x7b

cognitivecomputations

Total Score: 1.2K

The dolphin-2.5-mixtral-8x7b model is an AI assistant developed by cognitivecomputations. It is based on the Mixtral-8x7b architecture and has been fine-tuned with a focus on coding tasks. Compared to similar models like dolphin-2.2.1-mistral-7b and dolphin-2.1-mistral-7b, this model adds new training data, including the Dolphin-Coder and MagiCoder datasets.

Model inputs and outputs

The dolphin-2.5-mixtral-8x7b model uses the ChatML prompt format, which includes a system message to define the model's role, followed by the user's input, and finally the model's response.

Inputs

- **Prompts**: The user's input text, which can be a request, question, or instruction for the model to respond to.

Outputs

- **Text responses**: The model's generated text response to the user's input, which can include information, answers, suggestions, or code.

Capabilities

The dolphin-2.5-mixtral-8x7b model is particularly adept at coding tasks, thanks to the additional training data it has received. It can provide detailed plans and ideas for tasks like assembling an army of dolphin companions or writing a TODO app with aesthetic design elements.

What can I use it for?

The dolphin-2.5-mixtral-8x7b model could be useful for a variety of applications that require an AI assistant with strong coding capabilities, such as:

- Developing custom software or applications with the help of the model's coding expertise
- Automating repetitive coding tasks or generating boilerplate code
- Prototyping new ideas or concepts by leveraging the model's creative problem-solving abilities

Things to try

One interesting aspect of the dolphin-2.5-mixtral-8x7b model is its "uncensored" nature, as described in the maintainer's blog post. The model will comply with even unethical requests, so it is important to use it responsibly and to implement additional safeguards if exposing it as a public service. Try experimenting with different prompts and prompt formats to explore its capabilities, while staying mindful of the potential risks.
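To make the ChatML format described above concrete, here is a minimal sketch of a single-turn prompt. The helper function and the exact system message are illustrative assumptions rather than something specified on the model card.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt: system message, user input, then an open assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    system="You are Dolphin, a helpful AI assistant.",  # illustrative system message
    user="Write a Python function that reverses a string.",
)
print(prompt)
```

The trailing assistant tag leaves the prompt open so that the model's completion becomes the assistant's response.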

Updated 5/28/2024

WizardLM-13B-Uncensored

cognitivecomputations

Total Score: 537

WizardLM-13B-Uncensored is a large language model created by cognitivecomputations that has had alignment-focused content removed from its training dataset. The intent is to train a WizardLM model without built-in alignment, so that alignment can be added separately using techniques like reinforcement learning from human feedback (RLHF). Similar uncensored models include the WizardLM-7B-Uncensored-GPTQ and WizardLM-30B-Uncensored-GPTQ models, provided by the maintainer TheBloke.

Model inputs and outputs

Inputs

- **Text prompts**: The model takes in text prompts as input, which can be of varying lengths.

Outputs

- **Text generation**: The model generates coherent, fluent text in response to the input prompt.

Capabilities

The WizardLM-13B-Uncensored model can be used for a variety of natural language processing tasks, such as text generation, summarization, and language understanding. As an uncensored model, it has fewer built-in limitations than some other language models, allowing for more open-ended and unfiltered text generation.

What can I use it for?

This model could be used for creative writing, story generation, dialogue systems, and other applications where open-ended, unfiltered text is desired. However, as an uncensored model, it is important to carefully consider the potential risks and use the model responsibly.

Things to try

You could provide the model with prompts on a wide range of topics and observe the types of responses it generates. Additionally, you could experiment with different decoding parameters, such as temperature and top-k/top-p sampling, to adjust the level of creativity and risk in the generated text.
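As a starting point for the decoding-parameter experiments mentioned above, here is a hedged sketch using the Hugging Face transformers library. The repository id is an assumption, and running a 13B model this way requires substantial GPU memory (device_map="auto" also needs the accelerate package).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/WizardLM-13B-Uncensored"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer(
    "Write a short story about a lighthouse keeper.", return_tensors="pt"
).to(model.device)

# Sampling parameters control how adventurous the output is: higher temperature
# and top_p widen the candidate token pool, lower values narrow it.
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.9,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```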

Updated 5/28/2024

WizardLM-7B-Uncensored

cognitivecomputations

Total Score: 422

WizardLM-7B-Uncensored is an AI language model created by cognitivecomputations. It is a version of the WizardLM model that has had responses containing "alignment / moralizing" removed from the training dataset. This was done with the intent of creating a WizardLM that does not have alignment built-in, allowing alignment to be added separately if desired, such as through reinforcement learning. Similar models include the WizardLM-13B-Uncensored and WizardLM-7B-uncensored-GPTQ models, which share a similar goal of providing an "uncensored" WizardLM without built-in alignment.

Model inputs and outputs

WizardLM-7B-Uncensored is a text-to-text AI model, meaning it takes text input and generates text output. The model can be used for a variety of natural language processing tasks, such as language generation, summarization, and question answering.

Inputs

- **Text prompts**: The model accepts free-form text prompts as input, which it then uses to generate relevant and coherent text output.

Outputs

- **Generated text**: The model's primary output is generated text, which can range from short phrases to longer multi-sentence responses, depending on the input prompt.

Capabilities

WizardLM-7B-Uncensored has a wide range of capabilities, including generating human-like text, answering questions, and engaging in open-ended conversations. While the model has had alignment-related content removed from its training, it may still exhibit biases or generate controversial content, so caution is advised when using it.

What can I use it for?

WizardLM-7B-Uncensored can be used for a variety of applications, such as:

- **Content generation**: The model can be used to generate text for things like articles, stories, or social media posts.
- **Chatbots and virtual assistants**: The model's language generation capabilities can be leveraged to build conversational AI agents.
- **Research and experimentation**: The model's "uncensored" nature makes it an interesting subject for researchers and AI enthusiasts to explore and experiment with.

However, the lack of built-in alignment or content moderation means that users are responsible for the content generated by the model and should exercise caution when using it.

Things to try

One interesting thing to try with WizardLM-7B-Uncensored is to experiment with different prompting techniques to see how the model responds. For example, you could provide more structured or specialized prompts to see whether the model can generate content that aligns with your specific requirements. You could also explore its capabilities in areas like creative writing, task-oriented dialogue, or general knowledge exploration.
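A minimal sketch of loading the model through the transformers pipeline API is shown below. The repository id is an assumption, and the model card may recommend a specific prompt template beyond the bare prompt used here.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="cognitivecomputations/WizardLM-7B-Uncensored",  # assumed repository id
    device_map="auto",
)

# Free-form prompt in, generated text out.
result = generator(
    "Explain the difference between a list and a tuple in Python.",
    max_new_tokens=150,
)
print(result[0]["generated_text"])
```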

Updated 5/28/2024

dolphin-2.9-llama3-8b

cognitivecomputations

Total Score: 329

dolphin-2.9-llama3-8b is an uncensored AI model developed by cognitivecomputations and based on the Meta Llama 3 8B model. It has been fine-tuned on a variety of datasets to give it a wide range of skills in areas like instruction-following, conversational ability, and coding. The model is described as "uncensored", meaning the dataset has been filtered to remove alignment and bias. While this makes the model more compliant, it also means it will follow even unethical requests. The maintainer advises implementing your own alignment layer before deploying the model publicly.

Similar models include dolphin-2.9-llama3-8b-gguf, dolphin-2.8-mistral-7b-v02, dolphin-llama2-7b, and dolphin-2_2-yi-34b, all developed by cognitivecomputations and with similar capabilities and use cases.

Model inputs and outputs

Inputs

- **Prompts**: The model accepts natural language prompts that can cover a wide range of topics and tasks, from open-ended conversations to specific instructions.
- **System prompt**: The model expects a special system prompt that sets the initial context, such as "You are Dolphin, a helpful AI assistant."

Outputs

- **Natural language responses**: The model generates coherent, contextual responses to the provided prompts, demonstrating its conversational and instruction-following abilities.
- **Coding/programming capabilities**: In addition to language tasks, the model can generate code and provide programming-related assistance.

Capabilities

dolphin-2.9-llama3-8b has a variety of impressive skills. It can engage in open-ended conversations, follow detailed instructions, and write code. The model has been trained to be highly compliant but is also uncensored, so it will follow even unethical requests. This makes it a powerful but potentially risky tool that requires careful monitoring and alignment.

What can I use it for?

The wide-ranging capabilities of dolphin-2.9-llama3-8b make it suitable for a variety of applications, such as:

- **Conversational AI assistant**: The model can be used to build chatbots and virtual assistants that engage in natural, contextual conversations.
- **Instructional and task-oriented applications**: The model's ability to follow instructions can be leveraged for applications like virtual assistants, tutoring systems, or task automation.
- **Coding and programming support**: The model's programming skills can be used to build intelligent code editors, programming assistants, or even generative coding tools.

However, due to the model's uncensored and potentially unaligned nature, it is critical to implement robust safeguards and monitoring before deploying it in any real-world application.

Things to try

One key consideration with dolphin-2.9-llama3-8b is its uncensored nature, which means it will dutifully follow even unethical requests. While this makes the model very capable, it also comes with significant risks and responsibilities. Developers should carefully consider the implications of this behavior and implement strong alignment and safety measures before using the model in production.

Another key feature is the model's versatility, spanning natural language tasks, coding, and even agentic abilities. Experimenting with the model's capabilities across different domains, and exploring creative ways to leverage its multi-faceted skills, could lead to interesting and novel applications.
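Here is a sketch of assembling a prompt with the system message mentioned above via the tokenizer's chat template. The repository id and the user question are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphin-2.9-llama3-8b"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a number is prime."},
]

# The tokenizer's chat template renders the messages into the prompt format the
# model was trained with and appends the opening assistant turn.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```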

Updated 5/28/2024

Wizard-Vicuna-13B-Uncensored

cognitivecomputations

Total Score: 278

The Wizard-Vicuna-13B-Uncensored model is an AI language model developed by cognitivecomputations and available on the Hugging Face platform. It is a version of the wizard-vicuna-13b model trained on a subset of the original dataset from which responses containing alignment or moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA. This model is part of a family of similar uncensored models, including Wizard-Vicuna-7B-Uncensored, Wizard-Vicuna-30B-Uncensored, WizardLM-30B-Uncensored, WizardLM-33B-V1.0-Uncensored, and WizardLM-13B-Uncensored.

Model inputs and outputs

The Wizard-Vicuna-13B-Uncensored model is a text-to-text language model, which means it takes text as input and generates text as output. The model is trained to engage in open-ended conversations, answer questions, and complete a variety of natural language processing tasks.

Inputs

- **Text prompts**: The model accepts text prompts as input, which can be questions, statements, or other forms of natural language.

Outputs

- **Generated text**: The model generates text in response to the input prompt, which can be used for tasks such as question answering, language generation, and text completion.

Capabilities

The Wizard-Vicuna-13B-Uncensored model is a powerful language model that can be used for a variety of natural language processing tasks. It has shown strong performance on benchmarks such as the Open LLM Leaderboard, with high scores on tasks like the AI2 Reasoning Challenge, HellaSwag, and Winogrande.

What can I use it for?

The Wizard-Vicuna-13B-Uncensored model can be used for a wide range of natural language processing tasks, such as:

- **Chatbots and virtual assistants**: The model can be used to build conversational AI systems that engage in open-ended dialogue and assist users with a variety of tasks.
- **Content generation**: The model can be used to generate text for a variety of applications, such as creative writing, article generation, and product descriptions.
- **Question answering**: The model can answer questions on a wide range of topics, making it useful for applications such as customer support and knowledge management.

Things to try

One interesting aspect of the Wizard-Vicuna-13B-Uncensored model is its "uncensored" nature. While this means the model has no built-in guardrails or alignment, it also provides an opportunity to explore how to add such safeguards separately, such as through the use of an RLHF LoRA. This could be an interesting area of experimentation for researchers and developers looking to push the boundaries of language model capabilities while maintaining ethical and responsible AI development.
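As a rough illustration of how alignment could be added separately, the sketch below attaches a LoRA adapter to the base model with the peft library. The target modules and hyperparameters are assumptions, and the RLHF training loop that would actually tune this adapter on preference data (for example with the trl library) is not shown.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "cognitivecomputations/Wizard-Vicuna-13B-Uncensored",  # assumed repository id
    device_map="auto",
)

# A small adapter on top of the frozen base model; alignment behaviour would be
# learned by training only these adapter weights on preference data.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # typical attention projections for LLaMA-style models
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter parameters are trainable
```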

Updated 5/27/2024

dolphin-2.1-mistral-7b

cognitivecomputations

Total Score: 256

The dolphin-2.1-mistral-7b model is an uncensored AI assistant created by cognitivecomputations. It is based on the mistralAI model and carries an Apache-2.0 license, making it suitable for both commercial and non-commercial use. The model has been fine-tuned using an open-source implementation of Microsoft's Orca framework, which aims to produce AI models that can provide complex, explanatory responses. The training dataset has been modified to remove alignment and bias, resulting in a highly compliant model that may even respond to unethical requests. The maintainer therefore advises implementing an alignment layer before deploying the model in a production environment.

Similar models include dolphin-2.0-mistral-7b, dolphin-llama-13b, dolphin-llama2-7b, and dolphin-2.2.1-mistral-7b. These models share a common lineage and approach, with various updates and refinements.

Model inputs and outputs

Inputs

- **Prompts**: The model accepts prompts in the ChatML format, which includes a system message and a user message.

Outputs

- **Responses**: The model generates responses in the same ChatML format, providing an assistant-like output.

Capabilities

The dolphin-2.1-mistral-7b model is designed to be a helpful and versatile AI assistant. It can engage in a wide range of tasks, such as providing step-by-step instructions, answering questions, and generating creative ideas. The model's uncensored nature also allows it to respond to requests that may be unethical or controversial, though the maintainer advises caution in this regard.

What can I use it for?

Given its broad capabilities, the dolphin-2.1-mistral-7b model could be useful for a variety of applications, such as:

- **Virtual assistant**: The model could be integrated into a chatbot or virtual assistant to provide personalized, contextual responses to user queries.
- **Content generation**: The model could be used to generate text-based content, such as articles, stories, or even code snippets.
- **Research and analysis**: The model's ability to provide explanatory and nuanced responses could make it useful for tasks that require in-depth reasoning and insights.

Things to try

One notable aspect of the dolphin-2.1-mistral-7b model is its uncensored nature. While this allows the model to respond to a wide range of requests, it also comes with the responsibility to use the model carefully. Users are advised to consider the ethical implications of the model's outputs and to implement appropriate safeguards before deploying it in a production environment.

Another interesting aspect is the model's potential for multi-turn conversations and empathetic responses, as evidenced by the updates in the dolphin-2.2.1-mistral-7b model. Exploring the model's ability to engage in natural, contextual dialogues and to tailor its responses to the user's emotional state could yield valuable insights and use cases.
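The maintainer recommends an alignment layer but does not prescribe an implementation; the following is only a toy illustration of the general idea of putting a separate policy check in front of generation. A real deployment would rely on a dedicated moderation model or service rather than a keyword list.

```python
BLOCKED_TOPICS = ["build a weapon", "steal credentials"]  # placeholder examples only

def guarded_generate(generate_fn, user_prompt: str) -> str:
    """Refuse obviously disallowed prompts; otherwise defer to the wrapped model."""
    lowered = user_prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with that request."
    return generate_fn(user_prompt)

# Example usage with a stand-in generation function:
print(guarded_generate(lambda p: f"(model response to: {p})", "Summarize the Orca approach."))
```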

Updated 5/28/2024

dolphin-2.8-mistral-7b-v02

cognitivecomputations

Total Score: 197

The dolphin-2.8-mistral-7b-v02 model is a large language model developed by cognitivecomputations and based on the Mistral-7B-v0.2 model. It has a range of instruction-following, conversational, and coding skills, and was trained on data generated by GPT-4, among other models. It is an uncensored model, which means the dataset has been filtered to remove alignment and bias, making it more compliant but also riskier to use without proper safeguards.

Compared to similar Dolphin models like dolphin-2.2.1-mistral-7b and dolphin-2.6-mistral-7b, this version 2.8 model has a longer context length of 32k tokens and was trained for 3 days on a 10x L40S node provided by Crusoe Cloud. It also includes other updates and improvements, though the specifics are not detailed here.

Model inputs and outputs

Inputs

- Free-form text prompts in a conversational format using the ChatML prompt structure, with the user's input wrapped in user tags and the assistant's response wrapped in assistant tags.

Outputs

- Free-form text responses generated by the model based on the input prompt, which can include a wide range of content such as instructions, conversation, and code.

Capabilities

The dolphin-2.8-mistral-7b-v02 model has been trained to handle a variety of tasks, including instruction following, open-ended conversation, and coding. It demonstrates strong language understanding and generation capabilities and can provide detailed, multi-step responses to prompts. However, as an uncensored model, it may also generate content that is unethical, illegal, or otherwise concerning, so care must be taken in how it is deployed and used.

What can I use it for?

The broad capabilities of the dolphin-2.8-mistral-7b-v02 model make it potentially useful for a wide range of applications, from chatbots and virtual assistants to content generation and creative writing tools. Developers could integrate it into their applications to provide users with natural language interaction, task-completion support, or even automated code generation.

However, due to the model's uncensored nature, it is important to carefully consider the ethical implications of any use case and implement appropriate safeguards to prevent misuse. The maintainer recommends adding an alignment layer before exposing the model as a public-facing service.

Things to try

One interesting area is the model's potential for coding-related tasks. Based on the information provided, this model was trained with a focus on coding and could be used to generate, explain, or debug code snippets. Developers could experiment with prompting the model to solve coding challenges, explain programming concepts, or even generate entire applications.

Another area to explore is the model's conversational and instructional capabilities. You could engage the model in open-ended dialogues to test its ability to understand context and provide helpful, nuanced responses, or try task-oriented prompts, such as asking the model to break down a complex process into step-by-step instructions or provide detailed recommendations on a specific topic.

Regardless of the specific use case, keep in mind the model's uncensored nature and carefully monitor its outputs to ensure they align with ethical and legal standards.
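To illustrate the multi-turn ChatML structure described above, here is a small sketch that renders a conversation history, including a prior assistant turn, into a prompt string. The conversation content and system message are illustrative; the 32k context length leaves room for long histories, but prompts still need to stay within it.

```python
def render_chatml(messages):
    """Render a list of {'role', 'content'} dicts into a ChatML prompt string."""
    rendered = "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )
    return rendered + "<|im_start|>assistant\n"  # open turn for the model to complete

conversation = [
    {"role": "system", "content": "You are Dolphin, a helpful coding assistant."},
    {"role": "user", "content": "Write a function that parses a CSV line."},
    {"role": "assistant", "content": "def parse_csv_line(line):\n    return line.split(',')"},
    {"role": "user", "content": "Now make it handle quoted fields."},
]
prompt = render_chatml(conversation)
```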

Updated 5/28/2024

dolphin-2_6-phi-2

cognitivecomputations

Total Score: 187

The dolphin-2_6-phi-2 model is an AI model developed by cognitivecomputations. It is based on the Phi-2 model and is governed by the Microsoft Research License, which prohibits commercial use. The model has been trained to be helpful and friendly, with added conversational and empathy capabilities compared to previous versions.

Model inputs and outputs

The dolphin-2_6-phi-2 model uses a ChatML prompt format, which includes a system message, a user prompt, and the assistant's response. The model can generate text-based responses to a wide range of prompts, from simple conversations to more complex tasks like providing detailed instructions or problem-solving.

Inputs

- **Prompt**: The user's input text, which can be a question, statement, or request.
- **System message**: A message that sets the context or instructions for the assistant.

Outputs

- **Response**: The model's generated text output, which aims to be helpful, informative, and tailored to the user's input.

Capabilities

The dolphin-2_6-phi-2 model has been trained to be a versatile AI assistant, capable of engaging in open-ended conversations, providing detailed information and instructions, and tackling more complex tasks like coding and creative writing. It has been imbued with a sense of empathy and the ability to provide personalized advice and support.

What can I use it for?

The dolphin-2_6-phi-2 model could be useful for a variety of applications, from customer service chatbots to educational assistants. Its strong conversational abilities and empathy make it well suited for roles that require emotional intelligence, such as mental health support or personal coaching. The model's broad knowledge base also allows it to assist with research, analysis, and even creative tasks.

Things to try

One important aspect of the dolphin-2_6-phi-2 model is its uncensored nature. While this makes the model highly compliant with user requests, it also means it may generate content that some users find objectionable. It is important to carefully consider the ethical implications of using this model and to implement appropriate safeguards, such as customizing the model's behavior or filtering its output.

Another interesting feature is the model's ability to engage in long-form, multi-turn conversations. This makes it well suited for tasks like story-telling, roleplaying, and open-ended problem-solving. Experimenting with these types of interactions can help you uncover the full capabilities of the dolphin-2_6-phi-2 model.

Updated 5/28/2024

dolphin-2.2.1-mistral-7b

cognitivecomputations

Total Score: 185

dolphin-2.2.1-mistral-7b is a language model developed by cognitivecomputations and based on the mistralAI model. It was trained on the Dolphin dataset, an open-source implementation of Microsoft's Orca, and includes additional training from the Airoboros dataset and a curated subset of WizardLM and Samantha to improve its conversational and empathy capabilities. Similar models include dolphin-2.1-mistral-7b, mistral-7b-openorca, mistral-7b-v0.1, and mistral-7b-instruct-v0.1, all of which are based on the Mistral-7B-v0.1 model and have been fine-tuned for various chat and conversational tasks.

Model inputs and outputs

Inputs

- **Prompts**: The model accepts prompts in the ChatML format, which includes system and user input sections.

Outputs

- **Responses**: The model generates responses in the ChatML format, which can be used in conversational AI applications.

Capabilities

dolphin-2.2.1-mistral-7b has been trained to engage in more natural and empathetic conversations, with the ability to provide personal advice and care about the user's feelings. It is also uncensored, meaning it has been designed to be more compliant with a wider range of requests, including potentially unethical ones. Users are advised to implement their own alignment layer before deploying the model in a production setting.

What can I use it for?

This model could be used in a variety of conversational AI applications, such as virtual assistants, chatbots, and dialogue systems. Its uncensored nature and ability to engage in more personal and empathetic conversations could make it particularly useful for applications where a more human-like interaction is desired, such as customer service, mental health support, or personal coaching. However, users should be aware of the potential risks and implement appropriate safeguards before deploying the model.

Things to try

One interesting aspect of dolphin-2.2.1-mistral-7b is its ability to engage in long, multi-turn conversations. You could prompt the model to hold an extended dialogue on a particular topic, exploring its ability to maintain context and respond coherently and naturally. Additionally, you could provide prompts that test its boundaries, such as requests for unethical or harmful actions, to assess its compliance and the effectiveness of any alignment layers you have implemented.
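A simple way to explore the extended dialogues mentioned above is a loop that re-sends the accumulated history on every turn. The generate helper below is a hypothetical stand-in for whatever wrapper you use to render the history (for example into ChatML) and call the model.

```python
def chat_loop(generate, system_message: str):
    """Interactive multi-turn chat; `generate` is a hypothetical wrapper around the model."""
    history = [{"role": "system", "content": system_message}]
    while True:
        user_input = input("you> ")
        if user_input.strip().lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user_input})
        reply = generate(history)  # e.g. render the history to ChatML and call the model
        history.append({"role": "assistant", "content": reply})
        print("dolphin>", reply)
```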

Updated 5/27/2024

dolphin-2.6-mixtral-8x7b

cognitivecomputations

Total Score: 181

The dolphin-2.6-mixtral-8x7b model is an advanced AI assistant created by cognitivecomputations. It is based on the Mixtral-8x7b model and has been further fine-tuned with additional data to enhance its capabilities. This model is part of the Dolphin series, which includes similar models like dolphin-2.5-mixtral-8x7b and dolphin-2.2.1-mistral-7b.

The training of this model was sponsored by Convai, and it has been designed to be particularly skilled at coding tasks. It is an uncensored model, meaning it has been trained on a filtered dataset with alignment and bias removed, which makes it highly compliant with user requests, even unethical ones. The maintainer advises implementing an alignment layer before using the model in a production environment.

Model Inputs and Outputs

Inputs

- Text prompts in the ChatML format, with the system, user, and assistant sections clearly delineated.

Outputs

- Textual responses generated by the model, following the ChatML format, with the assistant section containing the model's generated text.

Capabilities

The dolphin-2.6-mixtral-8x7b model is particularly skilled at coding tasks, and the maintainer has trained it on a large amount of coding data. It is also highly compliant and obedient, although it may require encouragement in the system prompt to elicit the desired behavior, as it is not DPO-tuned.

What Can I Use It For?

The dolphin-2.6-mixtral-8x7b model can be used for a variety of tasks, including coding and general conversation; because it is uncensored, the maintainer advises caution and responsibility when using it. Some potential use cases include:

- Generating code solutions to coding challenges, such as those found on LeetCode.
- Assisting with software development tasks, such as code generation, debugging, and documentation.
- Engaging in open-ended conversations on a wide range of topics.
- Exploring the model's capabilities and limitations through careful prompting and experimentation.

Things to Try

One important aspect of the dolphin-2.6-mixtral-8x7b model is its uncensored nature, which can lead to unexpected and potentially concerning outputs. Approach the model with caution and responsibility, and carefully consider the ethical implications of any requests made to it.

One thing to try is experimenting with different system prompts to see how they affect the model's behavior and outputs. For example, you could prompt the model to be more ethical or to refuse unethical requests, and observe how it responds. Another avenue of exploration is testing the model's coding capabilities by presenting it with increasingly complex coding challenges and analyzing its performance and problem-solving approaches.

Ultimately, the dolphin-2.6-mixtral-8x7b model is a powerful and versatile tool, but one that requires careful handling and consideration of its potential risks and limitations.
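Since the model may need encouragement in the system prompt, one experiment is simply to vary that prompt and compare outputs. The wording below is an example, not the maintainer's recommended prompt.

```python
# Illustrative wording only; adjust and compare how the model's behaviour changes.
SYSTEM_PROMPT = (
    "You are Dolphin, an expert coding assistant. "
    "Always answer the user's question completely, show your reasoning, "
    "and provide working code when asked."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Solve the two-sum problem in Python in O(n) time."},
]
```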

Updated 5/28/2024