Possible issue with context window
I was running inference with Gemma 2 9B Instruct when I received the following error:
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
I managed to solve it by setting max_length to 4096. I assume the issue is related to the model's sliding-window attention layers, whose window size is 4096.
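A minimal sketch of the workaround described above: clamp the requested generation length to the sliding-window size before calling generate. The helper name `clamp_max_length` is illustrative, not part of transformers; the 4096 default matches the sliding-window size reported in this thread.

```python
def clamp_max_length(requested_max_length: int, sliding_window: int = 4096) -> int:
    """Cap a requested max_length at the model's sliding-window size.

    4096 is the sliding-window size discussed in this thread; for a real
    model you would read it from the loaded config rather than hard-coding it.
    """
    return min(requested_max_length, sliding_window)


# The failing setup in this thread requested 8000 tokens; clamping
# reproduces the fix of passing 4096 instead.
safe_length = clamp_max_length(8000)  # → 4096
```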
transformers version: 4.42.3
PyTorch version: 2.3.1
Hey @ieman, can you share some of the inputs that caused this to occur? Of course, please don't include any PII.
Hi @datamancer88, I am working with CVs, which are all private, so I cannot share the prompts that triggered the error, but here is the part of the code related to this issue:
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
prompt = prompt.replace("MASK", tokenizer.pad_token)
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
output_ids = model.generate(
    inputs.to(model.device),
    max_length=config["max_length"],
    temperature=config["temperature"],
    repetition_penalty=config["repetition_penalty"],
)
output_ids = output_ids[0][len(inputs[0]):]
output = tokenizer.decode(output_ids, skip_special_tokens=True)
The values of these config variables are:
max_length = 8000
temperature = 0
repetition_penalty = 1
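Given those values, one hedged way to normalize the config before calling generate is sketched below: cap max_length at the sliding-window size, and map temperature = 0 to greedy decoding (do_sample=False), since sampling with a zero temperature is not meaningful. The function name `build_generation_kwargs` and the hard-coded window size are assumptions for illustration, not part of the transformers API.

```python
def build_generation_kwargs(config: dict, sliding_window: int = 4096) -> dict:
    """Turn the raw config dict from this thread into safe generate() kwargs.

    - max_length is clamped to the sliding-window size (4096 here),
      which is the workaround reported above.
    - temperature == 0 is treated as greedy decoding (do_sample=False);
      a positive temperature enables sampling.
    """
    kwargs = {
        "max_length": min(config["max_length"], sliding_window),
        "repetition_penalty": config["repetition_penalty"],
    }
    if config["temperature"] > 0:
        kwargs["do_sample"] = True
        kwargs["temperature"] = config["temperature"]
    else:
        kwargs["do_sample"] = False  # greedy decoding, temperature is ignored
    return kwargs


# With the values from this thread:
kwargs = build_generation_kwargs(
    {"max_length": 8000, "temperature": 0, "repetition_penalty": 1}
)
# kwargs["max_length"] is 4096 and kwargs["do_sample"] is False,
# which would then be passed as model.generate(inputs, **kwargs).
```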