czczup committed
Commit 267963d
1 Parent(s): a34a1a3

Update README.md

Files changed (1)
  1. README.md +10 -7
README.md CHANGED
@@ -12,22 +12,25 @@ pipeline_tag: image-feature-extraction
 
 # Model Card for InternViT-6B-448px-V1-5
 
-## What is InternVL?
+<img src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/4IG0h_KJ2cvpp9Kdm0Jf7.webp" alt="Image Description" width="300" height="300">
 
-\[[Paper](https://arxiv.org/abs/2312.14238)\] \[[GitHub](https://github.com/OpenGVLab/InternVL)\] \[[Chat Demo](https://internvl.opengvlab.com/)\]
+\[[Paper](https://arxiv.org/abs/2312.14238)\] \[[GitHub](https://github.com/OpenGVLab/InternVL)\] \[[Chat Demo](https://internvl.opengvlab.com/)\] \[[中文解读](https://zhuanlan.zhihu.com/p/675877376)]
 
-InternVL scales up the ViT to _**6B parameters**_ and aligns it with LLM.
+| Model                   | Date       | Download                                                                | Note                                                 |
+| ----------------------- | ---------- | ----------------------------------------------------------------------- | ---------------------------------------------------- |
+| InternViT-6B-448px-V1.5 | 2024.04.20 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5)  | support dynamic resolution, super strong OCR (🔥new) |
+| InternViT-6B-448px-V1.2 | 2024.02.11 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)  | 448 resolution                                       |
+| InternViT-6B-448px      | 2024.01.30 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px)       | 448 resolution                                       |
+| InternViT-6B-224px      | 2023.12.22 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-224px)       | vision foundation model                              |
+| InternVL-14B-224px      | 2023.12.22 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-14B-224px)       | vision-language foundation model                     |
 
-It is _**the largest open-source vision/vision-language foundation model (14B)**_ to date, achieving _**32 state-of-the-art**_ performances on a wide range of tasks such as visual perception, cross-modal retrieval, multimodal dialogue, etc.
-
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/k5UATwX5W2b5KJBN5C58x.png)
 
 ## Model Details
 - **Model Type:** vision foundation model, feature backbone
 - **Model Stats:**
   - Params (M): 5540 (the last 3 blocks are discarded)
   - Image size: 448 x 448, training with 1 - 12 tiles
-- **Pretrain Dataset:** LAION-en, LAION-COCO, COYO, CC12M, CC3M, SBU, Wukong, LAION-multi, OCR data.
+- **Pretrain Dataset:** LAION-en, LAION-zh, COYO, GRIT, COCO, TextCaps, Objects365, OpenImages, All-Seeing, Wukong, LaionCOCO, CC3M, and OCR-related datasets.
   To enhance the OCR capability of the model, we have incorporated additional OCR data alongside the general caption datasets. Specifically, we utilized PaddleOCR to perform Chinese OCR on images from Wukong and English OCR on images from LAION-COCO.
 - **Note:** InternViT-6B originally had 48 blocks, and we found that using the output after the fourth-to-last block worked best for VLLM. For ease of use and to save GPU memory, we simply discarded the last 3 blocks. Now, the model has only 45 blocks and the number of parameters has been reduced from 5.9B to 5.5B. Therefore, if you want to build a VLLM based on this model, **please make use of the features from the last layer.**
 ## Model Usage (Image Embeddings)
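The hunk's trailing context stops at the "Model Usage (Image Embeddings)" heading, whose body is not part of this diff. For orientation, below is a minimal sketch of pulling image embeddings from this checkpoint, assuming the standard `transformers` `AutoModel`/`CLIPImageProcessor` loading path with `trust_remote_code=True` that InternViT releases typically expose; the image path is a placeholder, and the authoritative snippet is the one in the full README.

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

# Load the vision backbone; remote code supplies the InternViT architecture.
model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-448px-V1-5',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()

image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-448px-V1-5')

# Placeholder path: replace with your own RGB image.
image = Image.open('./example.jpg').convert('RGB')
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

# Per the Note above, the last 3 blocks are already discarded, so the final
# layer's output is the feature to use for a VLLM: last_hidden_state holds
# the per-patch tokens.
with torch.no_grad():
    outputs = model(pixel_values)
embeddings = outputs.last_hidden_state  # shape: [1, num_tokens, hidden_dim]
```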