Macadeliccc

Models by this creator


laser-dolphin-mixtral-2x7b-dpo

macadeliccc


The laser-dolphin-mixtral-2x7b-dpo model is a medium-sized Mixture-of-Experts (MoE) implementation based on the cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser model. According to the maintainer, the new version shows a roughly 1-point average increase in evaluation performance over the previous version. The model was trained using a noise-reduction technique based on Singular Value Decomposition (SVD), with the optimal ranks calculated using Random Matrix Theory (the Marchenko-Pastur theorem) rather than a brute-force search. This approach is outlined in the laserRMT notebook; an illustrative sketch of the idea appears at the end of this page.

Model inputs and outputs

Inputs

- Prompt: The input prompt for the model, which uses the ChatML format (a minimal example appears below).

Outputs

- Text generation: The model generates text in response to the input prompt.

Capabilities

The laser-dolphin-mixtral-2x7b-dpo model generates diverse, coherent text, with potential improvements in robustness and performance over the previous version. According to the maintainer, the model has been "lasered" for better quality.

What can I use it for?

The laser-dolphin-mixtral-2x7b-dpo model can be used for a variety of text generation tasks, such as creative writing, dialogue generation, and content creation. The maintainer also mentions future goals for the model, including function calling and a v2 version built on a new base model to improve performance.

Things to try

One interesting aspect of the laser-dolphin-mixtral-2x7b-dpo model is the set of quantizations provided by user bartowski. These quantizations, ranging from 3.5 to 8 bits per weight, let users trade off model size, memory usage, and performance to fit their specific needs. Experimenting with these quantizations is a valuable way to explore the model; a loading sketch follows below.
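As noted under Model inputs and outputs, the model expects ChatML-formatted prompts. A minimal sketch of that layout (the message contents here are arbitrary examples, not taken from the model card):

```python
# Minimal sketch of a ChatML-formatted prompt; the message contents are
# arbitrary examples, not from the model card.
system = "You are a helpful assistant."
user = "Summarize the Marchenko-Pastur theorem in one sentence."

prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```

The model's reply is everything generated up to the next `<|im_end|>` token, which is why that string is a natural stop sequence when sampling.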
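To try one of the quantized builds mentioned under Things to try, here is a sketch using llama-cpp-python, assuming a GGUF-format quantization has been downloaded locally. The filename below is a hypothetical placeholder; check the quantizer's page for the actual artifacts and sizes.

```python
# Sketch: running a quantized build of the model with llama-cpp-python,
# assuming a GGUF-format quantization has been downloaded locally.
# The model_path filename is a hypothetical placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="laser-dolphin-mixtral-2x7b-dpo.Q5_K_M.gguf",  # hypothetical filename
    n_ctx=4096,       # context window; lower it to reduce memory use
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

# ChatML prompt, built as in the sketch above.
prompt = (
    "<|im_start|>user\nWrite a short poem about the sea.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
result = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(result["choices"][0]["text"])
```

Lower-bit quantizations shrink the file and memory footprint at some cost in output quality, which is the trade-off the maintainer's range of 3.5 to 8 bits per weight is meant to expose.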
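Finally, for readers curious about the rank-selection step described at the top of this page, the following is an illustrative sketch of SVD denoising with a Marchenko-Pastur cutoff. It is not the author's laserRMT code: the noise-scale estimate is a common heuristic, and the random matrix stands in for a real layer weight.

```python
# Illustrative sketch of SVD denoising with a Marchenko-Pastur cutoff.
# NOT the author's laserRMT code: the noise-scale estimate below is a common
# heuristic, and the synthetic matrix stands in for a real layer weight.
import numpy as np

def mp_denoise(W: np.ndarray) -> np.ndarray:
    """Zero out singular values that fall below the Marchenko-Pastur bulk edge."""
    m, n = W.shape
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Rough per-entry noise-scale estimate from the median singular value.
    sigma = np.median(s) / np.sqrt(max(m, n))
    # Largest singular value expected from pure noise of scale sigma is
    # approximately sigma * (sqrt(m) + sqrt(n)) by Marchenko-Pastur.
    cutoff = sigma * (np.sqrt(m) + np.sqrt(n))
    s_kept = np.where(s > cutoff, s, 0.0)
    return (U * s_kept) @ Vt

# Low-rank signal plus noise: the denoiser should recover roughly the signal rank.
rng = np.random.default_rng(0)
signal = rng.standard_normal((256, 8)) @ rng.standard_normal((8, 512))
W = signal + 0.1 * rng.standard_normal((256, 512))
print(np.linalg.matrix_rank(mp_denoise(W)))  # close to 8
```

The appeal of this kind of cutoff is that it replaces a brute-force search over ranks with a closed-form threshold derived from the expected spectrum of pure noise.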


Updated 5/27/2024