initial tuned 4k context length model

#84
Microsoft org
haipingwu changed pull request title from 4km to initial tuned 4k context length model

This is a continued-pretraining version of the Florence-2-large model with a 4k context length, trained on the original data. Only 0.1B samples were used for continued pretraining, so the model may be undertrained. In addition, the OCR task has been updated to use a line separator ('\n'). The model reaches 39.8 COCO OD AP. It is a starting point for experimenting with a longer (4k) context length and is not intended to replace the original model.

Microsoft org

Load the model with the following code:

import torch
from transformers import AutoModelForCausalLM, AutoProcessor

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model = AutoModelForCausalLM.from_pretrained('microsoft/Florence-2-large', torch_dtype=torch_dtype, trust_remote_code=True, revision='refs/pr/84').to(device)
processor = AutoProcessor.from_pretrained('microsoft/Florence-2-large', trust_remote_code=True, revision='refs/pr/84')
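Once the model and processor are loaded, an OCR call might look like the sketch below. The inference branch assumes `model`, `processor`, `device`, and `torch_dtype` from the loading snippet above plus a local image file, and is guarded so the rest stays runnable without downloading the checkpoint; the sample string is illustrative, not real model output. The only documented change it relies on is that OCR results now separate lines with '\n'.

```python
# Hedged sketch of running the updated OCR task. The guarded section
# requires the checkpoint; the separator handling below does not.
RUN_INFERENCE = False

if RUN_INFERENCE:
    from PIL import Image

    image = Image.open("document.png")  # any image containing text (assumption)
    inputs = processor(text="<OCR>", images=image, return_tensors="pt").to(device, torch_dtype)
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=4096,
    )
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
    result = processor.post_process_generation(generated_text, task="<OCR>", image_size=image.size)
    ocr_text = result["<OCR>"]
else:
    # Illustrative stand-in shaped like the new output (not a real result).
    ocr_text = "INVOICE #1234\nTotal: $56.78\nThank you!"

# With the '\n' separator, individual text lines can be recovered directly.
lines = ocr_text.split("\n")
print(lines)
```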
haipingwu pinned discussion

@haipingwu
Could you tell us the steps you took to extend the context length from 1024 to 4096 tokens? I would like to understand the process so I can apply it to extend the context length of Florence-2-base-ft for my task.

hi @Ank12 , you can check the change in config.json, where max_position_embeddings was increased from 1024 to 4096. Then you can fine-tune the model with the new configuration. For initializing the extended position-embedding weights, you can either initialize them from scratch or interpolate the pre-trained weights.
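The interpolation option mentioned above can be sketched as follows. This is a minimal illustration on a stand-alone tensor, not the actual Florence-2 initialization code; the function name and shapes are assumptions.

```python
import torch
import torch.nn.functional as F


def extend_position_embeddings(weight: torch.Tensor, new_len: int) -> torch.Tensor:
    """Linearly interpolate a learned position-embedding table to a longer length.

    weight: (old_len, dim) pre-trained position-embedding matrix.
    Returns a (new_len, dim) matrix to use as the init for fine-tuning.
    """
    # (old_len, dim) -> (1, dim, old_len): F.interpolate treats the last
    # axis as the spatial axis, so positions go last.
    w = weight.t().unsqueeze(0)
    w = F.interpolate(w, size=new_len, mode="linear", align_corners=True)
    return w.squeeze(0).t()


old = torch.randn(1024, 64)                   # stand-in for the 1024-position table
new = extend_position_embeddings(old, 4096)   # init for a 4096-position table
print(new.shape)
```

With align_corners=True, the first and last pre-trained positions are preserved exactly and the in-between rows are linear blends of their neighbors, which tends to be a gentler starting point than random re-initialization.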

