---
inference: false
pipeline_tag: image-text-to-text
datasets:
- yifanzhang114/SMR
---

<br>
<br>

<img src="https://cdn-uploads.huggingface.co/production/uploads/623d8ca4c29adf5ef6175615/F2d0zMtwUqPKtOrbMu0Gr.jpeg" alt="image/jpeg" style="width:10%;">

# SliME Model Card

## Model details

**Model type:**

SliME is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
It is an auto-regressive language model based on the transformer architecture.

Base LLM: [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623d8ca4c29adf5ef6175615/_dsyhwdanIgUPtejamXmX.png)
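
If you just want to pull the released weights, a minimal sketch using `huggingface_hub` is shown below. The `repo_id` is a placeholder (substitute this model's actual repository name), and multimodal inference itself is run through the scripts in the SliME codebase linked under the resources section, not through this snippet.

```python
# Minimal sketch (not the official usage): download the SliME checkpoint
# from the Hugging Face Hub. The repo_id is a placeholder -- replace it with
# this model's actual repository name. Inference is driven by the scripts in
# the SliME codebase (https://github.com/yfzhang114/SliME).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="<this-model-repo-id>")  # placeholder repo_id
print(f"Checkpoint files downloaded to: {local_dir}")
```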

**Paper or resources for more information:**

Paper: https://huggingface.co/papers/2406.08487

arXiv: https://arxiv.org/abs/2406.08487

Code: https://github.com/yfzhang114/SliME

## License

Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:**

https://github.com/yfzhang114/SliME/issues

## Intended use

**Primary intended uses:**

The primary use of SliME is research on large multimodal models and chatbots.

**Primary intended users:**

The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset

- ShareGPT4V SFT data
- SMR data (a loading sketch follows this list)
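
The SMR portion is hosted on the Hugging Face Hub (see the `datasets` entry in the metadata above). A loading sketch with the `datasets` library is below; the split and field names are assumptions, and depending on the repository's file layout you may need to pass an explicit `data_files` argument.

```python
# Minimal sketch (assumptions noted): load the SMR data used for SliME SFT.
# Check https://huggingface.co/datasets/yifanzhang114/SMR for the actual
# splits and fields; pass data_files=... if the layout is not auto-detected.
from datasets import load_dataset

smr = load_dataset("yifanzhang114/SMR")
print(smr)                  # available splits and their sizes
first_split = next(iter(smr))
print(smr[first_split][0])  # peek at one example
```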

## Evaluation dataset

A collection of 15 benchmarks, including 5 academic VQA benchmarks and 10 recent benchmarks specifically proposed for instruction-following LMMs.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623d8ca4c29adf5ef6175615/dLXygEd23t-xImhSBLlta.png) |