Training Approach? #18
by aris-T
How does one go about training these models? I am looking to incorporate an internal codebase into the LLM. Here is my training code. I can't seem to get it to work: it always maxes out my GPU memory, even on an H100 80 GB instance, and I have already tried the basics like reducing the batch size (my rough memory math is after the script below). How does one go about training one of these from a checkpoint? Thanks.
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments, Trainer, DataCollatorForLanguageModeling
from datasets import load_dataset
print("Initializing a tokenizer")
tokenizer = AutoTokenizer.from_pretrained("ehartford/WizardLM-7B-Uncensored")
print("Loading and preprocessing the dataset")
datasets = load_dataset('text', data_files='./codebase.txt')
def tokenize_function(examples):
    return tokenizer(examples["text"])
tokenized_datasets = datasets.map(tokenize_function, batched=True, remove_columns=["text"])
block_size = 128
def group_texts(examples):
    # Concatenate all tokenized texts, then split them into fixed-size blocks.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # Drop the small remainder so every block has exactly block_size tokens.
    total_length = (total_length // block_size) * block_size
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    return result
lm_datasets = tokenized_datasets.map(
    group_texts,
    batched=True,
    batch_size=1000,
)
print("Initializing a model")
model = AutoModelForCausalLM.from_pretrained("ehartford/WizardLM-7B-Uncensored")
# Defining the training arguments
training_args = TrainingArguments(
    output_dir="./results",          # the output directory
    overwrite_output_dir=True,       # overwrite the content of the output directory
    num_train_epochs=3,              # number of training epochs
    per_device_train_batch_size=1,   # batch size for training
    per_device_eval_batch_size=1,    # batch size for evaluation
    eval_steps=400,                  # number of update steps between two evaluations
    save_steps=800,                  # save a checkpoint every 800 steps
    warmup_steps=500,                # number of warmup steps for the learning rate scheduler
)
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=False,  # causal language modeling, not masked LM
)
# Initializing a Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=lm_datasets["train"],
    eval_dataset=lm_datasets["train"],  # no separate eval split, so the train split is reused
    data_collator=data_collator,
)
# Training
trainer.train()
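For context, here is my rough back-of-the-envelope estimate (assuming full fine-tuning with fp32 weights, fp32 gradients, and two fp32 AdamW moment buffers, and ignoring activations entirely). If it is right, a 7B model needs on the order of 100 GiB of training state, which might explain the OOM even on 80 GB:

# Rough lower bound on training-state memory for full fine-tuning a 7B-parameter model.
# Assumptions: fp32 weights, fp32 gradients, two fp32 AdamW moment buffers;
# activations, CUDA context, and fragmentation are not counted.
params = 7e9
bytes_per_param = 4 + 4 + 4 + 4  # weights + gradients + AdamW m + AdamW v
print(f"~{params * bytes_per_param / 1024**3:.0f} GiB before activations")  # ~104 GiB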
I used exactly the same method as WizardLM, using LlamaX: https://github.com/nlpxucan/WizardLM#fine-tuning