MahmoodLab

Models by this creator

🚀

UNI

MahmoodLab

Total Score

95

The UNI model is a large pretrained vision encoder for histopathology, developed by the MahmoodLab at Harvard/BWH. It was trained on more than 100 million image tiles drawn from over 100,000 whole slide images spanning neoplastic, infectious, inflammatory, and normal tissue. UNI demonstrates state-of-the-art performance across 34 clinical tasks, with particularly strong results on rare and underrepresented cancer types. Unlike many other histopathology models that rely on open datasets such as TCGA, CPTAC, and PAIP, UNI was trained on internal, private data sources, which mitigates the risk of data contamination when evaluating or deploying it on public or private histopathology datasets. The model can serve as a strong vision backbone for a variety of downstream medical imaging tasks. For comparison, the vit-base-patch16-224-in21k model is a similar Vision Transformer (ViT) architecture pretrained on the broader ImageNet-21k dataset, the BiomedCLIP-PubMedBERT_256-vit_base_patch16_224 model pairs a ViT encoder with a PubMedBERT text encoder for biomedical vision-language tasks, and the nsfw_image_detection model is a ViT fine-tuned for the specialized task of NSFW image classification.

Model Inputs and Outputs

Inputs

- Histopathology images, either individual tiles or whole slide images

Outputs

- Learned visual representations that can be used as input features for downstream medical imaging tasks such as classification, segmentation, or detection

Capabilities

The UNI model excels at extracting robust visual features from histopathology imagery, particularly in challenging domains such as rare cancer types. Its strong performance across 34 clinical tasks demonstrates its versatility as a general-purpose vision backbone for medical applications.

What Can I Use It For?

Researchers and practitioners in computational pathology can use UNI to build and evaluate a wide range of medical imaging models without the risk of data contamination on public benchmarks or private slide collections. The model serves as a powerful feature extractor, providing high-quality visual representations as input to downstream classifiers, segmentation models, or other specialized medical imaging tasks (see the sketch below).

Things to Try

One avenue worth exploring is fine-tuning UNI on specific disease domains or rare cancer types to further strengthen its performance in these critical areas. Researchers could also combine the UNI vision encoder with additional modalities, such as clinical metadata or genomic data, to build more comprehensive medical AI systems.
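As an illustration of the feature-extraction workflow, the sketch below loads the encoder through timm and embeds a single tile. The hub id, access gating, and exact preprocessing are assumptions here; consult the official model card for the authoritative loading instructions.

```python
# Minimal sketch: using UNI as a frozen feature extractor for tile-level
# downstream tasks. Assumes the gated checkpoint on the Hugging Face Hub
# ("MahmoodLab/UNI") has been granted to your account and that timm's
# hf-hub loader can resolve it; adjust the model id and transforms to
# match the official instructions if they differ.
import timm
import torch
from PIL import Image
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform

# Load the ViT backbone with the classification head removed (num_classes=0),
# so forward() returns the pooled embedding rather than logits.
model = timm.create_model(
    "hf-hub:MahmoodLab/UNI",  # assumed hub id; verify against the model card
    pretrained=True,
    num_classes=0,
)
model.eval()

# Build the preprocessing pipeline (resize, crop, normalization) from the
# model's own pretrained configuration.
transform = create_transform(**resolve_data_config({}, model=model))

# Encode a single histopathology tile into a feature vector.
tile = Image.open("tile.png").convert("RGB")  # placeholder path
with torch.inference_mode():
    features = model(transform(tile).unsqueeze(0))  # shape: (1, embed_dim)

print(features.shape)
```

Embeddings extracted this way can then feed a lightweight linear probe for tile-level classification or a slide-level aggregator for the downstream tasks described above.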

Read more

Updated 4/29/2024

🛠️

CONCH

MahmoodLab

Total Score

50

CONCH (CONtrastive learning from Captions for Histopathology) is a vision-language foundation model for histopathology developed by MahmoodLab. Compared to other vision-language models, CONCH demonstrates state-of-the-art performance across 14 computational pathology tasks, ranging from image classification to text-to-image retrieval and tissue segmentation. Because it was not trained on large public histology slide collections, CONCH avoids potential data contamination, making it suitable for building and evaluating pathology AI models with minimal risk.

Model inputs and outputs

CONCH is a versatile model that can handle both histopathology images and text.

Inputs

- **Histopathology images**: The model can process images from different staining techniques, such as H&E, IHC, and special stains.
- **Text**: The model can handle textual inputs, such as captions or clinical notes, that are relevant to the histopathology images.

Outputs

- **Image classification**: CONCH can classify histopathology images into categories such as disease types or tissue types.
- **Text-to-image retrieval**: The model can retrieve relevant histopathology images for a textual query.
- **Image-to-text retrieval**: Conversely, the model can retrieve relevant text descriptions for a given histopathology image.
- **Tissue segmentation**: CONCH can segment different tissue regions within a histopathology image.

Capabilities

CONCH can be leveraged for a wide range of computational pathology tasks. Its pretraining on a large histopathology-specific dataset, combined with its state-of-the-art performance, makes it a valuable tool for researchers and clinicians working in digital pathology.

What can I use it for?

Researchers and clinicians in computational pathology can use CONCH for applications such as:

- **Developing and evaluating pathology AI models**: Because CONCH was not trained on large public histology slide collections, it can be used to build and evaluate pathology AI models without the risk of data contamination.
- **Automating image analysis and reporting**: Its image classification, tissue segmentation, and retrieval capabilities can automate parts of histopathology analysis and reporting.
- **Facilitating research and collaboration**: By providing a strong foundation for computational pathology tasks, CONCH can help accelerate research and enable closer collaboration between researchers and clinicians.

Things to try

One interesting aspect of CONCH is its ability to process non-H&E stained images, such as IHCs and special stains. Researchers can compare the model's performance across staining techniques and investigate its versatility across histopathology imaging modalities. Its text-to-image and image-to-text retrieval capabilities can also be used to explore the relationship between histopathology images and their textual descriptions (a minimal retrieval sketch follows below), potentially surfacing new insights in digital pathology.
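To make the retrieval workflow concrete, the sketch below shows the generic CLIP-style ranking step that underlies text-to-image retrieval: images and text are embedded into a shared space, L2-normalized, and ranked by cosine similarity. The embeddings here are random placeholders rather than real CONCH outputs, and the helper function is illustrative, not part of the CONCH API.

```python
# Minimal sketch of CLIP-style text-to-image retrieval, the mechanism behind
# CONCH's retrieval tasks. The embeddings below are random placeholders; in
# practice they would come from CONCH's image and text encoders (see the
# official model card for the actual loading and encoding API, which is not
# reproduced here).
import torch
import torch.nn.functional as F

def rank_images_by_text(image_embs: torch.Tensor, text_emb: torch.Tensor, top_k: int = 5):
    """Return indices and scores of the top_k images most similar to a text query.

    image_embs: (num_images, dim) image embeddings
    text_emb:   (dim,) embedding of a single text query
    """
    # L2-normalize so the dot product equals cosine similarity.
    image_embs = F.normalize(image_embs, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    scores = image_embs @ text_emb          # (num_images,)
    top = torch.topk(scores, k=top_k)
    return top.indices, top.values

# Placeholder embeddings standing in for a tile database and a caption query,
# e.g. "invasive ductal carcinoma, H&E".
image_embs = torch.randn(1000, 512)
text_emb = torch.randn(512)

top_idx, top_scores = rank_images_by_text(image_embs, text_emb, top_k=5)
print("Top-5 tile indices:", top_idx.tolist())
```

Image-to-text retrieval is the symmetric operation, ranking candidate captions against a single image embedding, and zero-shot classification is the special case where the "captions" are class-name prompts.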

Read more

Updated 5/15/2024