Update README.md
README.md
For more details about the model, please click this [link](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md) to read our technical memo.

We release two models of the Qwen-VL series:
- Qwen-VL: The pre-trained LVLM uses Qwen-7B to initialize the LLM and [Openclip ViT-bigG](https://github.com/mlfoundations/open_clip) to initialize the visual encoder, connecting the two with a randomly initialized cross-attention layer. Qwen-VL was trained on about 1.5B image-text pairs.
- Qwen-VL-Chat: A multimodal LLM-based AI assistant, which is trained with alignment techniques.
For more details about Qwen-VL, please refer to our [technical memo](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md).
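The cross-attention connector described above can be sketched as follows. This is an illustrative single-head example, not the actual Qwen-VL code: the function names, dimensions, and single-head design are assumptions. It shows the core idea — text-token hidden states query the visual encoder's features through randomly initialized projections, yielding visually grounded states of the same shape as the text input.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(text_hidden, visual_feats, rng):
    """Single-head cross-attention: text tokens attend to visual features.

    text_hidden:  (T, d) hidden states from the LLM (queries)
    visual_feats: (V, d) features from the visual encoder (keys/values)
    The projection weights are randomly initialized, mirroring the
    randomly initialized connector described above.
    """
    d = text_hidden.shape[1]
    Wq = rng.standard_normal((d, d)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)
    Q = text_hidden @ Wq        # (T, d)
    K = visual_feats @ Wk       # (V, d)
    V = visual_feats @ Wv       # (V, d)
    attn = softmax(Q @ K.T / np.sqrt(d))  # (T, V) attention weights
    return attn @ V             # (T, d) visually grounded text states

rng = np.random.default_rng(0)
text = rng.standard_normal((8, 64))      # 8 text tokens, hidden dim 64
visual = rng.standard_normal((256, 64))  # 256 visual patch features
out = cross_attention(text, visual, rng)
print(out.shape)  # (8, 64)
```

Because the output keeps the text sequence's shape, the connector can be dropped between the visual encoder and the LLM without changing the LLM's interface.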