Model
llava-clip-internlm2-1_8b-pretrain-v1 is a LLaVA checkpoint built from internlm2-chat-1_8b and CLIP-ViT-Large-patch14-336 and pretrained on the LLaVA-Pretrain dataset with XTuner. The pretraining phase took 16 hours on a single NVIDIA A6000 Ada GPU.
The total model size is around 2.2B parameters, which makes it suitable for embedded applications such as robotics.
Only the pretraining phase is finished so far; the fully finetuned model will be released soon. You can also finetune your own version from the checkpoint provided here.
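For reference, the ~2.2B figure is just the two base components added together (the MLP projector only contributes a few million parameters). A minimal sketch to verify this, assuming you have transformers and torch installed and can reach the Hugging Face Hub:

from transformers import AutoModelForCausalLM, CLIPVisionModel

# Language model (~1.9B parameters); InternLM2 needs trust_remote_code
llm = AutoModelForCausalLM.from_pretrained(
    'internlm/internlm2-chat-1_8b', trust_remote_code=True)
# Vision encoder (~0.3B parameters)
vit = CLIPVisionModel.from_pretrained('openai/clip-vit-large-patch14-336')

total = sum(p.numel() for p in llm.parameters()) + sum(p.numel() for p in vit.parameters())
print(f'~{total / 1e9:.1f}B parameters in the base components')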
Installation
git clone https://github.com/InternLM/xtuner
pip install -e './xtuner[deepspeed]'
apt install git-lfs
git clone https://huggingface.co/StarCycle/llava-clip-internlm2-1_8b-pretrain-v1
cd ./llava-clip-internlm2-1_8b-pretrain-v1
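To confirm the installation worked, a quick check from Python (a minimal sketch that just prints the installed versions):

from importlib.metadata import version

import xtuner  # verifies the editable install is importable

print('xtuner', version('xtuner'))
print('deepspeed', version('deepspeed'))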
Common Errors
1.
command error: 'libGL.so.1: cannot open shared object file: No such file or directory'!
You can solve it by
# For Ubuntu
sudo apt-get update
sudo apt-get install libgl1-mesa-glx
# For CentOS and Fedora
sudo yum install mesa-libGL
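This error is usually triggered by opencv-python, so after installing the system library you can verify the fix with a quick import (a minimal sketch):

import cv2  # importing cv2 is what normally raises the libGL.so.1 error

print(cv2.__version__)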
2.
Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library.
Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.
You can solve it by reinstalling numpy.
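If reinstalling numpy does not help, the error message itself suggests two workarounds that you can put at the very top of your entry script (a sketch; either one should be enough):

import os

# Option 1: force the Intel threading layer, as the error message suggests
os.environ.setdefault('MKL_SERVICE_FORCE_INTEL', '1')

# Option 2: import numpy before any MKL-linked library gets loaded
import numpy  # noqa: F401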
3.
ImportError:
InternLM2Converter requires the protobuf library but it was not found in your environment. Checkout the instructions on the ...
You just need to run
pip install protobuf
- To use TensorBoard to visualize the training loss curve, install:
pip install future tensorboard
- If your training process is killed during data preprocessing, you can reduce map_num_proc in xtuner/xtuner/dataset/huggingface.py:
def process(dataset,
            do_dataset_tokenization=True,
            tokenizer=None,
            max_length=None,
            dataset_map_fn=None,
            template_map_fn=None,
            max_dataset_length=None,
            split='train',
            remove_unused_columns=False,
            rename_maps=[],
            shuffle_before_pack=True,
            pack_to_max_length=True,
            use_varlen_attn=False,
            input_ids_with_output=True,
            with_image_token=False,
            map_num_proc=32):  # modify it to a smaller number, e.g., 4
- If you fail to load the model, check whether you installed git-lfs and actually downloaded the model files (a quick check is sketched below).
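Without git-lfs the clone only contains tiny text pointer files instead of the real weights, so checking the on-disk sizes is a quick diagnosis. A minimal sketch, run inside the cloned folder and assuming the weights are stored as .pth files (adjust the glob to the actual file names in the repo):

from pathlib import Path

# Real checkpoints are hundreds of MB; git-lfs pointer stubs are ~130-byte text files
for f in Path('.').glob('*.pth'):
    size_mb = f.stat().st_size / 1e6
    note = 'OK' if size_mb > 1 else 'pointer only - run `git lfs pull`'
    print(f'{f.name}: {size_mb:.1f} MB ({note})')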
Data preparation
- File structure
# . means the llava-clip-internlm2-1_8b-pretrain-v1 folder you clone
./data/llava_data
└── LLaVA-Pretrain
    ├── blip_laion_cc_sbu_558k.json
    ├── blip_laion_cc_sbu_558k_meta.json
    └── images
- Pretrain Data
LLaVA-Pretrain
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain --depth=1
- Finetune Data
Please check the final release version
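Before launching training, you can quickly check that the downloaded pretrain data matches the file structure shown above (a small sketch; adjust the path if you cloned the dataset somewhere else):

from pathlib import Path

root = Path('./data/llava_data/LLaVA-Pretrain')
for name in ('blip_laion_cc_sbu_558k.json',
             'blip_laion_cc_sbu_558k_meta.json',
             'images'):
    print(name, 'found' if (root / name).exists() else 'MISSING')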
Cheers! Now train your own model!
- Alignment module pretraining
# single GPU
xtuner train ./llava_internlm2_chat_1_8b_clip_vit_large_p14_336_e1_gpu1_pretrain.py --deepspeed deepspeed_zero2
# multiple GPUs
NPROC_PER_NODE=8 xtuner train ./llava_internlm2_chat_1_8b_clip_vit_large_p14_336_e1_gpu1_pretrain.py --deepspeed deepspeed_zero2
Remember to change the batch size and gradient accumulation parameters to fit your hardware, keeping GPU_num * batch_size * gradient_accumulation roughly equal to mine so that you can reproduce the result; see the config sketch below.
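The relevant fields sit near the top of the XTuner config; the names below follow the public XTuner LLaVA configs, but the values are only illustrative, so check the config you actually use:

# In llava_internlm2_chat_1_8b_clip_vit_large_p14_336_e1_gpu1_pretrain.py
# (illustrative values - keep GPU_num * batch_size * accumulative_counts constant)
batch_size = 16          # per-device batch size
accumulative_counts = 2  # gradient accumulation steps
dataloader_num_workers = 4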
The checkpoints and TensorBoard logs are saved in ./work_dirs/ by default. I only train for 1 epoch, the same as the original LLaVA paper. Some studies also report that training for multiple epochs makes the model overfit the training dataset and perform worse in other domains.
This is my loss curve for llava-clip-internlm2-1_8b-pretrain-v1:
- Instruction following fine-tuning
Please check the final release version