LLaMA2-Accessory: An Open-source Toolkit for LLM Development
LLaMA2-Accessory is an open-source toolkit for pre-training, fine-tuning and deployment of Large Language Models (LLMs) and multimodal LLMs. This repo builds on LLaMA-Adapter and adds more advanced features.
GitHub link: GitHub • join our WeChat
Features
Support More Datasets and Tasks
- Pre-training with RefinedWeb and StarCoder.
- Single-modal fine-tuning with Alpaca, ShareGPT, LIMA, UltraChat and MOSS (a prompt-formatting sketch follows this list).
- Multi-modal fine-tuning with image-text pairs (LAION, COYO and more), interleaved image-text data (MMC4 and OBELISC) and visual instruction data (LLaVA, Shikra, Bard).
- LLM for API Control (GPT4Tools and Gorilla).
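Single-modal instruction-tuning data of this kind typically follows the Alpaca `{instruction, input, output}` schema. The following is a minimal, assumed sketch of how such a record can be rendered into a training prompt; the template strings and the `build_prompt` helper are illustrative, not this repo's actual data pipeline:

```python
# Hypothetical helper -- not this repo's data pipeline -- illustrating how an
# Alpaca-style {"instruction", "input", "output"} record can be rendered into
# a training prompt before tokenization.

ALPACA_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)
ALPACA_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)


def build_prompt(example: dict) -> str:
    """Pick the template depending on whether an `input` field is present."""
    if example.get("input"):
        return ALPACA_WITH_INPUT.format(**example)
    return ALPACA_NO_INPUT.format(instruction=example["instruction"])


if __name__ == "__main__":
    sample = {
        "instruction": "Summarize the following text in one sentence.",
        "input": "Large language models can follow natural-language instructions.",
        "output": "LLMs can be steered with plain-language instructions.",
    }
    # The model is trained to continue the prompt with sample["output"].
    print(build_prompt(sample) + sample["output"])
```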
Efficient Optimization and Deployment
- Parameter-efficient fine-tuning with Zero-init Attention and Bias-norm Tuning (see the sketch after this list).
- Fully Sharded Data Parallel (FSDP), Flash Attention 2 and QLoRA.
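The two parameter-efficient techniques named above are simple to illustrate: zero-init attention scales an adapter's attention contribution by a gate that starts at zero, so training begins from the frozen model's original behavior, while bias-norm tuning unfreezes only bias and normalization parameters. The PyTorch sketch below is a simplified, assumed illustration, not the implementation used in this repo:

```python
# Simplified, assumed illustration of zero-init attention gating and
# bias-norm tuning; it is not the code used by LLaMA2-Accessory.
import torch
import torch.nn as nn


class ZeroInitGatedAdapter(nn.Module):
    """Learnable adaption prompt whose attention output is scaled by a gate
    initialized to zero, so the module is a no-op at the start of training
    and the frozen backbone's behavior is preserved."""

    def __init__(self, dim: int, prompt_len: int = 10, n_heads: int = 8):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        self.gate = nn.Parameter(torch.zeros(1))  # the "zero-init" gate
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Hidden states attend to the adaption prompt; the gated result is
        # added back residually.
        prompt = self.prompt.unsqueeze(0).expand(x.size(0), -1, -1)
        adapter_out, _ = self.attn(x, prompt, prompt)
        return x + self.gate * adapter_out


def apply_bias_norm_tuning(model: nn.Module) -> None:
    """Freeze everything except biases, normalization layers, and gates."""
    for name, param in model.named_parameters():
        param.requires_grad = (
            name.endswith("bias") or "norm" in name.lower() or "gate" in name
        )
```

In LLaMA-Adapter, such gated adaption prompts are inserted only into the top transformer layers, which keeps the number of trainable parameters small relative to the frozen backbone.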
Support More Visual Encoders and LLMs
Installation
See docs/install.md.
Training & Inference
See docs/pretrain.md and docs/finetune.md.
Demos
- Instruction-tuned LLaMA2: alpaca & gorilla.
- Chatbot LLaMA2: dialog_sharegpt & dialog_lima & llama2-chat.
- Multimodal LLaMA2: in-context.
Core Contributors
Chris Liu, Ziyi Lin, Guian Fang, Jiaming Han, Renrui Zhang, Wenqi Shao, Peng Gao
Hiring Announcement
We are hiring interns, postdocs, and full-time researchers at the General Vision Group, Shanghai AI Lab, with a focus on multi-modality and vision foundation models. If you are interested, please contact [email protected].
Citation
If you find our code and paper useful, please kindly cite:
@article{zhang2023llamaadapter,
  title={LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention},
  author={Zhang, Renrui and Han, Jiaming and Liu, Chris and Gao, Peng and Zhou, Aojun and Hu, Xiangfei and Yan, Shilin and Lu, Pan and Li, Hongsheng and Qiao, Yu},
  journal={arXiv preprint arXiv:2303.16199},
  year={2023}
}
@article{gao2023llamaadapterv2,
  title={LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model},
  author={Gao, Peng and Han, Jiaming and Zhang, Renrui and Lin, Ziyi and Geng, Shijie and Zhou, Aojun and Zhang, Wei and Lu, Pan and He, Conghui and Yue, Xiangyu and Li, Hongsheng and Qiao, Yu},
  journal={arXiv preprint arXiv:2304.15010},
  year={2023}
}
Acknowledgement
- @facebookresearch for llama
- @OpenGVLab for LLaMA-Adapter
- @facebookresearch for ImageBind & LIMA
- @Instruction-Tuning-with-GPT-4 for GPT-4-LLM
- @tatsu-lab for stanford_alpaca
- @tloen for alpaca-lora
- @lm-sys for FastChat
- @domeccleston for sharegpt
- @karpathy for nanoGPT
- @Dao-AILab for flash-attention
- @NVIDIA for apex & Megatron-LM
- @Vision-CAIR for MiniGPT-4
- @haotian-liu for LLaVA
- @huggingface for peft & OBELISC
- @Lightning-AI for lit-gpt & lit-llama
- @allenai for mmc4
- @StevenGrove for GPT4Tools
- @ShishirPatil for gorilla
- @OpenLMLab for MOSS
- @thunlp for UltraChat
- @LAION-AI for LAION-5B
- @shikras for shikra
- @kakaobrain for coyo-dataset
- @salesforce for LAVIS
- @openai for CLIP
- @bigcode-project for starcoder
- @tiiuae for falcon-refinedweb
- @microsoft for DeepSpeed
- @declare-lab for flacuna
- @Google for Bard
License
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.