Sheared-LLaMA-2.7B

Maintainer: princeton-nlp

Total Score

54

Last updated 5/28/2024

Run this model: Run on HuggingFace
API spec: View on HuggingFace
Github link: No Github link provided
Paper link: No paper link provided


Model overview

Sheared-LLaMA-2.7B is a pruned and further pre-trained model derived from the meta-llama/Llama-2-7b-hf model. The model was developed by the princeton-nlp team and is available on the Hugging Face Hub. Like the original LLaMA model, Sheared-LLaMA-2.7B is a large language model based on the transformer architecture. However, this model was pruned and further trained on the RedPajama dataset using a budget of 50 billion tokens.

Model inputs and outputs

Inputs

  • Text prompts

Outputs

  • Continuation of the input text, generating coherent and relevant text
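Because Sheared-LLaMA-2.7B is a plain causal language model, it can be driven with the standard Hugging Face transformers text-generation API. A minimal sketch, assuming the transformers and torch packages are installed; `continue_text` is a hypothetical helper name, and model loading is kept inside the function so nothing is downloaded at import time:

```python
MODEL_ID = "princeton-nlp/Sheared-LLaMA-2.7B"  # Hub id from this model card

def continue_text(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a continuation of `prompt` with greedy decoding."""
    # Imported lazily: loading the 2.7B checkpoint downloads several GB.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(continue_text("Structured pruning of language models"))
```

Since the model is a base (non-chat) LM, plain text prompts work best; there is no chat template to apply.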

Capabilities

The Sheared-LLaMA-2.7B model has demonstrated strong performance across a variety of downstream tasks, including reasoning, reading comprehension, language modeling, and knowledge-intensive tasks. The model outperforms existing large language models like OPT-2.7B and Pythia-2.8B on average performance metrics.

What can I use it for?

The Sheared-LLaMA-2.7B model can be used for a wide range of natural language processing tasks, such as text generation, question answering, summarization, and content creation. Developers and researchers can fine-tune the model for specific applications or use it as a strong baseline for further research and development.

Things to try

One interesting aspect of the Sheared-LLaMA-2.7B model is that it was trained with a budget of only 50 billion tokens, which is significantly less than the 1 trillion tokens used to train the original LLaMA models. This suggests that the model's performance can be achieved with a more efficient and cost-effective training process, making it an attractive option for those with limited computational resources.



This summary was produced with the help of an AI and may contain inaccuracies; check the links to read the original source documents!

Related Models

🖼️

Sheared-LLaMA-1.3B

princeton-nlp

Total Score

85

Sheared-LLaMA-1.3B is a model pruned and further pre-trained from the meta-llama/Llama-2-7b-hf model. The maintainer, princeton-nlp, dynamically loaded data from different domains in the RedPajama dataset to prune and continue pre-training the model, using 0.4B tokens for pruning and 50B tokens for continued pre-training of the pruned model. The model is smaller-scale than the original LLaMA models but shares the same vocabulary. It was derived by the maintainer with a budget of 50B tokens, leveraging existing strong large language models.

Model inputs and outputs

Inputs

  • Natural language text

Outputs

  • Continued generation of natural language text

Capabilities

The Sheared-LLaMA-1.3B model outperforms existing large language models on an extensive set of downstream tasks, including reasoning, reading comprehension, language modeling, and knowledge-intensive tasks.

What can I use it for?

The Sheared-LLaMA-1.3B model can be used for a variety of natural language processing tasks, such as text generation, question answering, and language modeling. Its strong performance on downstream tasks makes it a viable option for projects that require robust language understanding and generation capabilities.

Things to try

Given the model's smaller size compared to the original LLaMA models, it could be an interesting option for deployments with more constrained computational resources. The maintainer's approach of pruning and continued pre-training on diverse datasets also suggests that the model may have unique strengths, such as improved efficiency or specialized knowledge, that could be worth investigating further.

Read more


Llama-2-7b-hf

NousResearch

Total Score

141

The Llama-2-7b-hf model is part of the Llama 2 family of large language models (LLMs) developed and released by Meta. Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This specific 7B model has been converted to the Hugging Face Transformers format. Larger variations of the Llama 2 model include the Llama-2-13b-hf and Llama-2-70b-chat-hf models.

Model inputs and outputs

The Llama-2-7b-hf model takes in text as its input and generates text as its output. It is an auto-regressive language model that uses an optimized transformer architecture. The fine-tuned versions, like the Llama-2-Chat models, are optimized for dialogue use cases.

Inputs

  • Text prompts

Outputs

  • Generated text

Capabilities

The Llama 2 models are capable of a variety of natural language generation tasks, such as open-ended dialogue, creative writing, and answering questions. The fine-tuned Llama-2-Chat models in particular have been shown to outperform many open-source chat models on benchmarks, and are on par with some popular closed-source models in terms of helpfulness and safety.

What can I use it for?

The Llama-2-7b-hf model, and the broader Llama 2 family, are intended for commercial and research use in English. The pretrained models can be adapted for a range of NLP applications, while the fine-tuned chat versions are well suited for building AI assistants and conversational interfaces.

Things to try

Some interesting things to try with the Llama-2-7b-hf model include:

  • Prompting the model with open-ended questions or creative writing prompts to see its language generation capabilities
  • Evaluating the model's performance on specific benchmarks or tasks to understand its strengths and limitations
  • Experimenting with different prompting techniques, or fine-tuning the model further for your own use cases
  • Comparing the performance and capabilities of the Llama-2-7b-hf model to other open-source or commercial language models

Remember to always exercise caution and follow the Responsible Use Guide when deploying any applications built with the Llama 2 models.

Read more


🏋️

Llama-2-7b-chat-hf

NousResearch

Total Score

146

Llama-2-7b-chat-hf is a 7B parameter large language model (LLM) developed by Meta. It is part of the Llama 2 family of models, which range in size from 7B to 70B parameters. The Llama 2 models are pretrained on a diverse corpus of publicly available data and then fine-tuned for dialogue use cases, making them optimized for assistant-like chat interactions. Compared to open-source chat models, the Llama-2-Chat models outperform on most benchmarks and are on par with popular closed-source models like ChatGPT and PaLM in human evaluations for helpfulness and safety.

Model inputs and outputs

Inputs

  • Text: The Llama-2-7b-chat-hf model takes natural language text as input.

Outputs

  • Text: The model generates natural language text as output.

Capabilities

The Llama-2-7b-chat-hf model demonstrates strong performance on a variety of natural language tasks, including commonsense reasoning, world knowledge, reading comprehension, and math problem-solving. It also exhibits high levels of truthfulness and low toxicity in generation, making it suitable for use in assistant-like applications.

What can I use it for?

The Llama-2-7b-chat-hf model is intended for commercial and research use in English. The fine-tuned Llama-2-Chat versions can be used to build interactive chatbots and virtual assistants that engage in helpful and informative dialogue. The pretrained Llama 2 models can also be adapted for a variety of natural language generation tasks, such as summarization, translation, and content creation.

Things to try

Developers interested in using the Llama-2-7b-chat-hf model should carefully review the responsible use guide provided by Meta, as large language models can carry risks and should be thoroughly tested and tuned for specific applications. Additionally, users should follow the formatting guidelines for the chat versions, which include using the INST and <<SYS>> tags, the BOS and EOS tokens, and proper whitespace and linebreaks.
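The chat formatting guidelines for the Llama-2-Chat models can be sketched as a small helper. This is a hedged, single-turn sketch: the [INST] and <<SYS>> markers follow Meta's published Llama 2 chat template, while `format_llama2_chat` is a hypothetical helper name, not part of any library:

```python
# Llama 2 chat prompt markers, per Meta's published template.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def format_llama2_chat(system_prompt: str, user_message: str) -> str:
    """Wrap a system prompt and user message in Llama-2-Chat markers.

    The tokenizer normally adds the BOS token (<s>) itself, so it is
    omitted here; multi-turn history would repeat the [INST] blocks.
    """
    return f"{B_INST} {B_SYS}{system_prompt}{E_SYS}{user_message} {E_INST}"

prompt = format_llama2_chat("You are a helpful assistant.", "What is pruning?")
```

In practice, recent versions of transformers expose this via the tokenizer's chat template, which is the safer route when available; the helper above just makes the whitespace and tag placement explicit.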

Read more


🤿

open_llama_7b_v2

openlm-research

Total Score

112

open_llama_7b_v2 is an open-source reproduction of Meta AI's LLaMA large language model, developed by openlm-research. This 7B-parameter model is part of a series of 3B, 7B, and 13B OpenLLaMA models trained on 1 trillion tokens. The v2 model is an improvement over the earlier v1 model, trained on a different data mixture. OpenLLaMA provides PyTorch and JAX weights that can serve as a drop-in replacement for the original LLaMA model.

Model inputs and outputs

Inputs

  • Text prompts for language generation

Outputs

  • Coherent and contextual text continuations, generated in an autoregressive manner

Capabilities

The open_llama_7b_v2 model exhibits comparable performance to the original LLaMA and GPT-J models across a range of tasks, including commonsense reasoning, world knowledge, reading comprehension, and math. It outperforms them in some areas, such as code generation and certain language understanding benchmarks.

What can I use it for?

The OpenLLaMA models can be used as a drop-in replacement for the original LLaMA in existing implementations, enabling a wide range of natural language processing applications. This includes text generation, question answering, summarization, and more. The permissive Apache 2.0 license allows for commercial and research use.

Things to try

Developers can experiment with fine-tuning the OpenLLaMA models on domain-specific data to adapt them for specialized tasks. Additionally, the models can be used in conjunction with other techniques like prompt engineering to further enhance their capabilities for particular use cases.

Read more
