Attention mask not working during training

#34
by codegood - opened

Hello everyone,

The model was working perfectly until now.

Trainer.train() now raises a "self-attention is not allowed in forward pass" error due to recent changes in modeling_mixformer_sequential.py.

Training arguments:
training_arguments = TrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=per_device_train_batch_size,
    per_device_eval_batch_size=per_device_eval_batch_size,
    gradient_accumulation_steps=gradient_accumulation_steps,
    optim=optim,
    save_steps=save_steps,
    logging_steps=logging_steps,
    learning_rate=learning_rate,
    bf16=False,
    max_grad_norm=max_grad_norm,
    max_steps=max_steps,
    warmup_ratio=warmup_ratio,
    group_by_length=True,
    lr_scheduler_type=lr_scheduler_type,
    push_to_hub=True,
    tf32=False,
)

SFTTrainer:
trainer = SFTTrainer(
    model=peft_model,
    train_dataset=qa_data['train'],
    eval_dataset=qa_data['test'],
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=1024,
    tokenizer=tokenizer,
    args=training_arguments,
    compute_metrics=compute_perplexity,
)

Can someone suggest changes?

codegood changed discussion title from Self attention not working in train to Self attention not working during training
codegood changed discussion title from Self attention not working during training to Attention mask not working during training

Same here, so I pinned an older revision to keep working.
You can do this by adding revision = "4a426d8015bef5a0cb3acff8d4474ee9ab4071d5" to the AutoModelForCausalLM.from_pretrained params.

But I also want to know the actual reason and solution for this problem.
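For reference, the revision pin above can be sketched like this (a minimal sketch: only the revision hash comes from the workaround itself; the helper name and the trust_remote_code flag are assumptions on my part, the latter being commonly required for models with custom modeling code):

```python
from transformers import AutoModelForCausalLM

def load_pinned_phi(revision="4a426d8015bef5a0cb3acff8d4474ee9ab4071d5"):
    # Pin to the older commit of microsoft/phi-1_5 that predates the
    # attention_mask check, so Trainer.train() keeps working.
    return AutoModelForCausalLM.from_pretrained(
        "microsoft/phi-1_5",
        revision=revision,
        trust_remote_code=True,  # the model ships custom modeling code
    )
```

Pinning a revision trades the latest fixes for reproducibility, so it is only a stopgap until the upstream check is relaxed.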

Microsoft org

Hello @codegood and @tanujgodara !

attention_mask never actually worked during training in previous revisions; it was simply ignored when passed by the trainer class. What changed is that we introduced attention_mask for inference only and added a check that prevents it from being used during training, since that might lead to unexpected behavior: https://huggingface.co/microsoft/phi-1_5/blob/main/modeling_mixformer_sequential.py#L755.

@gugarosa, could you simply warn that it is being ignored rather than raising an exception? (Although that might also be too noisy during training.) Anyone using the HF Trainer is affected, because it automatically adds the attention_mask feature (I've already tried removing that column from the tokenized dataset), so this model is now nearly impossible to use.
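One possible stopgap, assuming the exception is still in place, is to wrap the trainer's data collator so attention_mask is dropped from each batch just before it reaches the model. strip_attention_mask is a hypothetical helper sketched here, not part of transformers:

```python
def strip_attention_mask(collate_fn):
    # Wrap an existing data collator so the batch it produces no longer
    # carries 'attention_mask'; the model then falls back to its own
    # default causal masking during training.
    def wrapped(features):
        batch = collate_fn(features)
        batch.pop("attention_mask", None)
        return batch
    return wrapped

# Usage (after building the trainer):
# trainer.data_collator = strip_attention_mask(trainer.data_collator)
```

Unlike removing the column from the tokenized dataset, this intercepts the batch after collation, which is where the Trainer re-adds the mask.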

Microsoft org

@winglian Of course! I've just updated the file to print a message instead of raising an error.

@gugarosa Sorry for the misunderstanding.
Thanks for changing the file.

gugarosa changed discussion status to closed

Any plans to fix this issue? I'm looking forward to fine-tuning phi-1 and phi-1.5.
