OCRonos-Vintage

Maintainer: PleIAs

Total Score: 64

Last updated: 9/6/2024


  • Run this model: Run on HuggingFace
  • API spec: View on HuggingFace
  • Github link: No Github link provided
  • Paper link: No paper link provided


Model overview

The OCRonos-Vintage model is a small, specialized model for OCR (Optical Character Recognition) correction of cultural heritage archives. It was pre-trained by the maintainer PleIAs using the llm.c framework. At only 124 million parameters, it runs efficiently on CPU and delivers high-speed correction on GPU (over 10,000 tokens per second), while maintaining quality comparable to much larger models such as GPT-4 or the llama-based version of OCRonos on English-language cultural archives.

Model inputs and outputs

The OCRonos-Vintage model takes OCRized text as input and generates corrected text as output. It was specifically trained on a dataset of cultural heritage archives from sources like the Library of Congress, Internet Archive, and Hathi Trust.

Inputs

  • OCRized text: Text that has been processed by an optical character recognition (OCR) system and may therefore contain errors or irregularities.

Outputs

  • Corrected text: A corrected and refined version of the input OCRized text.
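
This page does not include code, so here is a minimal inference sketch using the Hugging Face transformers library. The repo id PleIAs/OCRonos-Vintage and the "### Text ###" / "### Correction ###" prompt markers are assumptions rather than details stated above; check the model card on HuggingFace for the exact format.

```python
# Minimal inference sketch. Assumptions (not stated on this page): the repo id
# "PleIAs/OCRonos-Vintage" and the "### Text ###" / "### Correction ###"
# prompt markers -- verify both against the HuggingFace model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PleIAs/OCRonos-Vintage"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # 124M params, fine on CPU

ocr_text = "Tbe vvorld, as the anciant scholars beleived, was fl4t."
prompt = f"### Text ###\n{ocr_text}\n\n### Correction ###\n"  # assumed format

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Greedy decoding is used here because a correction should stay close to the input; sampling would make the model more likely to rewrite words that were already correct.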

Capabilities

The OCRonos-Vintage model excels at correcting errors and improving the quality of OCRized text from cultural heritage archives. It was trained on a large corpus of historical documents, allowing it to handle a variety of challenging text styles and structures common in these types of archives.

What can I use it for?

The OCRonos-Vintage model is well-suited for projects that involve processing and enhancing digitized cultural heritage materials, such as books, manuscripts, and historical documents. It can be used to improve the accuracy and readability of OCR output, which is crucial for tasks like text mining, indexing, and making these valuable resources more accessible to researchers and the public.

Things to try

Experiment with the OCRonos-Vintage model on different types of cultural heritage documents, such as newspapers, journals, or archival records. Observe how the model handles variations in font, layout, and language. You could also try fine-tuning the model on domain-specific datasets to further improve its performance on particular types of materials.
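
As a starting point for the fine-tuning idea above, here is a hedged sketch built on the transformers Trainer. It reuses the assumed repo id and prompt markers from the inference sketch; the single training pair and the hyperparameters are placeholders, not a recipe from the maintainer.

```python
# Hypothetical fine-tuning sketch: (OCR text, corrected text) pairs are packed
# into the assumed "### Text ### / ### Correction ###" format and trained with
# a plain causal-LM objective. Hyperparameters and data are placeholders.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "PleIAs/OCRonos-Vintage"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

pairs = [  # replace with your own domain-specific OCR / ground-truth pairs
    ("Tbe Parliment met on tne 3rd of Juin.",
     "The Parliament met on the 3rd of June."),
]

def encode(ocr, gold):
    # Pack prompt and target into one sequence; labels mirror the input ids.
    text = f"### Text ###\n{ocr}\n\n### Correction ###\n{gold}{tokenizer.eos_token}"
    ids = tokenizer(text, truncation=True, max_length=512)["input_ids"]
    return {"input_ids": ids, "labels": ids.copy()}

dataset = [encode(o, g) for o, g in pairs]

args = TrainingArguments(output_dir="ocronos-finetuned",
                         per_device_train_batch_size=1,
                         num_train_epochs=1, logging_steps=1)
Trainer(model=model, args=args, train_dataset=dataset,
        data_collator=lambda feats: {  # batch size 1, so no padding is needed
            "input_ids": torch.tensor([f["input_ids"] for f in feats]),
            "labels": torch.tensor([f["labels"] for f in feats]),
        }).train()
```

In practice you would want many more pairs, dynamic padding, and an evaluation split; the packing format is the main point of the sketch.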



This summary was produced with help from an AI and may contain inaccuracies - check out the links to read the original source documents!

Related Models


OCRonos

PleIAs

Total Score: 47

OCRonos is a series of specialized language models trained by PleIAs for the correction of badly digitized texts, as part of the Bad Data Toolbox. The models are versatile tools that support the correction of OCR errors, wrong word cut/merge, and overall broken text structures. They were trained on a highly diverse set of OCRized texts in multiple languages, drawn from cultural heritage sources and financial/administrative documents. The current release features a model based on the llama-3-8b architecture that has been the most tested to date. Future releases will focus on smaller internal models that provide a better ratio of generation cost to quality. OCRonos is generally faithful to the original material, providing sensible restitution of deteriorated text and rarely rewriting correct words. On highly deteriorated content, it can act as a synthetic rewriting tool rather than a strict correction tool.

Model inputs and outputs

Inputs

  • Corrupted/broken text: OCRonos takes in text that has been poorly digitized, with errors, missing words, and other structural issues.

Outputs

  • Corrected text: The model outputs a corrected version of the input text, with OCR errors fixed, words merged/split correctly, and the overall structure improved.

Capabilities

OCRonos is capable of reliably correcting a wide range of digitization artifacts, including common OCR mistakes, word segmentation issues, and other text degradation problems. It performs particularly well on cultural heritage archives and financial/administrative documents, where the training data was focused. The model is able to retain the original meaning and intent while restoring the text to a more readable and usable form.

What can I use it for?

OCRonos can be a valuable tool for making challenging digitized resources more accessible and usable for language model applications and search retrieval. It is especially suited for situations where the original PDF sources are too damaged for correct OCRization or difficult to retrieve. The model can be used to pre-process text before feeding it into other NLP pipelines, improving the overall quality and reliability of the results.

Things to try

One interesting aspect of OCRonos is its ability to act as a synthetic rewriting tool on highly deteriorated content, rather than just a strict correction tool. This can be useful for generating more readable versions of severely damaged texts where the original meaning needs to be preserved. Experimenting with the model's behavior on different types of corrupted text, from historical archives to modern administrative documents, can yield interesting insights into its capabilities and limitations.
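
For comparison with the OCRonos-Vintage sketch earlier, here is a hedged usage sketch for this larger llama-3-8b based model. The repo id PleIAs/OCRonos and the prompt wrapping are assumptions not given in this summary, and the 8B weights will generally need a GPU.

```python
# Hypothetical usage sketch for the llama-3-8b based OCRonos model described
# above. The repo id "PleIAs/OCRonos" and the prompt wrapping are assumptions;
# an 8B model generally needs a GPU, unlike the 124M OCRonos-Vintage.
import torch
from transformers import pipeline

corrector = pipeline(
    "text-generation",
    model="PleIAs/OCRonos",      # assumed repo id
    torch_dtype=torch.bfloat16,  # half precision to fit the 8B weights
    device_map="auto",
)

broken = "Th e comittee ap proved the bud get on the 14 th of Mai."
prompt = f"### Text ###\n{broken}\n\n### Correction ###\n"  # assumed markers

result = corrector(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```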



orca_mini_13b

pankajmathur

Total Score: 98

orca_mini_13b is an OpenLLaMa-13B model fine-tuned on explain-tuned datasets. The dataset was created using instructions and input from the WizardLM, Alpaca, and Dolly-V2 datasets, applying approaches from the Orca Research Paper. This helps the model learn the thought process from the teacher model, which is the GPT-3.5-turbo-0301 version of ChatGPT.

Model inputs and outputs

The orca_mini_13b model takes a combination of system prompts and user instructions as input, and generates relevant text responses as output.

Inputs

  • System prompt: A prompt that sets the context for the model, describing the role and goals of the AI assistant.
  • User instruction: The task or query that the user wants the model to address.
  • Input (optional): Additional context or information that the user provides to help the model complete the task.

Outputs

  • Response: The model's generated text response to the user's instruction, which aims to provide a detailed, thoughtful, and step-by-step explanation.

Capabilities

The orca_mini_13b model is capable of generating high-quality, explain-tuned responses to a variety of tasks and queries. It demonstrates strong performance on reasoning-based benchmarks like BigBench-Hard and AGIEval, indicating its ability to engage in complex, logical thinking.

What can I use it for?

The orca_mini_13b model can be used for a range of applications that require detailed, step-by-step explanations, such as:

  • Educational or tutoring applications
  • Technical support and customer service
  • Research and analysis tasks
  • General question-answering and information retrieval

By leveraging the model's explain-tuned capabilities, users can gain a deeper understanding of the topics and concepts being discussed.

Things to try

One interesting thing to try with the orca_mini_13b model is to provide it with prompts or instructions that require it to take on different expert roles, such as a logician, mathematician, or physicist. This can help uncover the model's breadth of knowledge and its ability to tailor its responses to the specific needs of the task at hand. Another interesting approach is to explore the model's performance on open-ended, creative tasks, such as generating poetry or short stories. The model's strong grounding in language and reasoning may translate into an ability to produce engaging and insightful creative output.
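
To make the system-prompt/instruction layout above concrete, here is a hedged generation sketch. The repo id pankajmathur/orca_mini_13b and the "### System: / ### User: / ### Response:" markers are assumptions about the prompt template; confirm them against the model card before relying on them.

```python
# Sketch of the system-prompt + instruction layout described above.
# The "### System / ### User / ### Response" markers are an assumption;
# check the model card for the canonical template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pankajmathur/orca_mini_13b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system = ("You are an AI assistant that follows instructions extremely well. "
          "Help as much as you can.")
instruction = "Explain, step by step, why the sky appears blue."

prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```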



orca_mini_3b

pankajmathur

Total Score: 157

The orca_mini_3b model is an OpenLLaMa-3B model trained on a mix of datasets including WizardLM, Alpaca, and Dolly-V2. It applies the dataset construction approaches from the Orca Research Paper to create an "explain tuned" model designed to learn the thought process from the ChatGPT teacher model.

Model inputs and outputs

Inputs

  • System prompt: A short prompt provided at the start of the interaction that sets the context and instructions for the model.
  • User instruction: The specific task or query that the user wants the model to address.
  • User input (optional): Additional context or information provided by the user to help the model respond.

Outputs

  • Model response: The generated text from the model addressing the user's instruction. The model aims to provide a well-reasoned and helpful response.

Capabilities

The orca_mini_3b model is capable of engaging in a wide variety of text-to-text tasks, such as question answering, task completion, and open-ended conversation. It demonstrates strong reasoning and explanatory capabilities, drawing insights from its training data to provide thoughtful and substantive responses.

What can I use it for?

The orca_mini_3b model could be useful for applications that require natural language understanding and generation, such as chatbots, virtual assistants, and content creation tools. Its ability to learn the thought process from ChatGPT makes it well-suited for tasks that benefit from clear, step-by-step explanations.

Things to try

One interesting aspect of the orca_mini_3b model is its use of a "system prompt" to set the context and instructions for the interaction. Experimenting with different system prompts could yield insights into how the model's responses change based on the framing and guidance provided upfront. Additionally, prompting the model with open-ended questions or tasks that require reasoning and analysis could reveal its strengths in those areas.
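
The note above suggests experimenting with different system prompts; here is a small hedged sketch of that experiment. The repo id pankajmathur/orca_mini_3b and the "### System / ### User / ### Response" template are assumptions carried over from the 13B example and should be checked against the model card.

```python
# Hypothetical experiment: run the same instruction under different system
# prompts and compare the responses. Repo id and template are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pankajmathur/orca_mini_3b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

instruction = "Summarize the causes of the French Revolution."
system_prompts = [
    "You are a concise assistant. Answer in three sentences.",
    "You are a history teacher. Explain your reasoning step by step.",
]

for system in system_prompts:
    prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=200)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    print("-" * 40)
```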


cosmo-1b

HuggingFaceTB

Total Score: 117

The cosmo-1b model is a 1.8B parameter language model trained by HuggingFaceTB on a synthetic dataset called Cosmopedia. The training corpus consisted of 30B tokens, 25B of which were synthetic from Cosmopedia, augmented with 5B tokens from sources like AutoMathText and The Stack. The model uses the tokenizer from the Mistral-7B-v0.1 model.

Model Inputs and Outputs

The cosmo-1b model is a text-to-text AI model, meaning it can take textual input and generate textual output.

Inputs

  • Text prompts that the model uses to generate new text.

Outputs

  • Generated text based on the input prompt.

Capabilities

The cosmo-1b model is capable of generating coherent and relevant text in response to given prompts. While it was not explicitly instruction-tuned, the inclusion of the UltraChat dataset in pretraining allows it to be used in a chat-like format. The model can generate stories, explain concepts, and provide informative responses to a variety of prompts.

What Can I Use It For?

The cosmo-1b model could be useful for various text generation tasks, such as:

  • Creative writing: The model can be used to generate stories, dialogues, or creative pieces of text.
  • Educational content creation: The model can be used to generate explanations, tutorials, or summaries of concepts.
  • Chatbot development: The model's chat-like capabilities could be leveraged to build conversational AI assistants.

Things to Try

Some interesting things to try with the cosmo-1b model include:

  • Experimenting with different prompts to see the range of text the model can generate.
  • Evaluating the model's performance on specific tasks, such as generating coherent stories or explaining complex topics.
  • Exploring the model's ability to handle long-form text generation and maintain consistency over extended passages.
  • Investigating the model's potential biases or limitations by testing it on a diverse set of inputs.
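
As a hedged illustration of the chat-like usage mentioned above, the sketch below tries the tokenizer's chat template and falls back to a plain prompt if none is defined. The repo id HuggingFaceTB/cosmo-1b and the presence of a chat template are assumptions, not details given on this page.

```python
# Hedged sketch of using cosmo-1b in a chat-like format. Assumes the tokenizer
# ships a chat template; if it does not, a plain text prompt is used instead.
# The repo id "HuggingFaceTB/cosmo-1b" is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/cosmo-1b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [{"role": "user",
             "content": "Write a short story about a telescope that learns to dream."}]
try:
    prompt = tokenizer.apply_chat_template(messages, tokenize=False,
                                           add_generation_prompt=True)
except Exception:
    prompt = messages[0]["content"]  # fallback if no chat template is defined

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```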
