Update README.md
README.md
@@ -54,9 +54,6 @@ LLaVA Visual Instruct CC3M Pretrain 595K was created in April 2023.
 `Bilingual` This dataset contains both hindi and english captions
 
 
-**Paper or resources for more information:**
-https://llava-vl.github.io/
-
 **License:**
 Must comply with license of [CC-3M](https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE), [BLIP](https://github.com/salesforce/BLIP/blob/main/LICENSE.txt) (if you use their synthetic caption).
 
@@ -68,9 +65,6 @@ liability for any damages, direct or indirect, resulting from the use of the
 dataset.
 
 
-**Where to send questions or comments about the model:**
-https://github.com/haotian-liu/LLaVA/issues
-
 ## Intended use
 **Primary intended uses:**
 The primary use of LLaVA is research on large multimodal models and chatbots.
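The `Bilingual` note retained above states that the dataset carries both Hindi and English captions. A minimal sketch of how one might pull caption text out of a record, assuming the upstream LLaVA pretrain JSON layout (a list of records with an `image` path and a `conversations` list); the field names and the sample record below are illustrative assumptions, not taken from this card:

```python
import json

# Hypothetical record in the LLaVA pretrain layout. The "id"/"image"/
# "conversations" field names follow the upstream LLaVA 595K release;
# the caption text here is a made-up placeholder.
sample = [
    {
        "id": "GCC_train_000000000",
        "image": "GCC_train_000000000.jpg",
        "conversations": [
            {"from": "human", "value": "<image>\nDescribe the image briefly."},
            {"from": "gpt", "value": "a very typical bus station"},
        ],
    },
]

def captions(records):
    """Yield the model-side (caption) turns from each record."""
    for rec in records:
        for turn in rec["conversations"]:
            if turn["from"] == "gpt":
                yield turn["value"]

# Round-trip through JSON as one would with the released annotation file.
records = json.loads(json.dumps(sample))
print(list(captions(records)))  # -> ['a very typical bus station']
```

For the bilingual variant, the same walk would surface whichever language a record's caption turn carries; nothing in the layout itself distinguishes Hindi from English text.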