---
language:
- zh
- en
---
# ChatTruth-7B
**ChatTruth-7B** is built on top of Qwen-VL and further trained on carefully curated data. Compared with Qwen-VL, its Chinese dialogue ability is substantially improved. It also introduces a novel Restore Module that greatly reduces the computation cost for high-resolution inputs.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/657bef8a5c6f0b1f36fcf28e/oZMs1DJWluJhVXX80x3D0.png)
## Requirements
* transformers 4.32.0
* Python 3.8 and above
* PyTorch 1.13 and above
* CUDA 11.4 and above
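
As a quick sanity check, the installed versions and GPU availability can be verified from Python before loading the model (a minimal sketch using only the standard `transformers` and `torch` APIs):

```python
import torch
import transformers

# confirm the environment matches the requirements above
print("transformers:", transformers.__version__)     # expect 4.32.0
print("pytorch:", torch.__version__)                  # expect 1.13 or above
print("CUDA available:", torch.cuda.is_available())   # expect True on a CUDA 11.4+ setup
```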
<br>
## Quickstart
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch.manual_seed(1234)
model_path = 'ChatTruth-7B'  # path to your downloaded model checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# load the model onto a CUDA device
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="cuda", trust_remote_code=True).eval()

# build a multi-modal query from an image and a text prompt
query = tokenizer.from_list_format([
    {'image': 'demo.jpeg'},
    {'text': '图片中的文字是什么'},  # "What is the text in the image?"
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
# 昆明太厉害了 ("Kunming is amazing!")
```
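
Since `model.chat` also returns the updated conversation history, a follow-up turn can pass it back in to continue the dialogue. The snippet below is a minimal sketch that assumes ChatTruth-7B keeps Qwen-VL's multi-turn `chat` interface; the follow-up prompt is only an illustrative example.

```python
# continue the same conversation by reusing the returned history
# (assumes the Qwen-VL-style multi-turn chat interface; prompt is illustrative)
follow_up = '请把图片中的文字翻译成英文'  # "Please translate the text in the image into English"
response, history = model.chat(tokenizer, query=follow_up, history=history)
print(response)
```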