pair-preference-model-LLaMA3-8B-GGUF
This is a quantized version of RLHFlow/pair-preference-model-LLaMA3-8B, created using llama.cpp.
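The GGUF files in this repository can be run with llama.cpp or loaded through its Python bindings. The snippet below is only a sketch: it assumes the llama-cpp-python package is installed and uses a wildcard file name, since the exact quantization file names (e.g. Q4_K_M) depend on which variant you download. Note that the ranking example further down uses the original full-precision model via transformers.

# Minimal sketch, assuming llama-cpp-python; the filename pattern is hypothetical --
# replace it with the GGUF file you actually want from this repository.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/pair-preference-model-LLaMA3-8B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
    logits_all=True,  # keep per-token logits so the "A"/"B" comparison below is possible
)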
Model Description
This preference model is trained from LLaMA3-8B-it with the training script at Reward Modeling.
The training dataset is RLHFlow/pair_preference_model_dataset. The model achieves Chat 98.6, Chat-Hard 65.8, Safety 89.6, and Reasoning 94.9 on RewardBench.
See our paper RLHF Workflow: From Reward Modeling to Online RLHF for more details about this model.
Serving the RM
Here is an example of using the preference model to rank a pair of responses. For n > 2 responses, it is recommended to use a tournament-style ranking strategy to pick the best response, so that the number of pairwise comparisons is linear in n (a sketch is given after the pair-ranking example below).
import numpy as np
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = 0
model_path = "RLHFlow/pair-preference-model-LLaMA3-8B"

# Load the preference model (flash_attention_2 requires the flash-attn package)
# and two tokenizers: one to build the final prompt, and one whose plain chat
# template is only used to flatten the conversation context into text.
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2"
).cuda(device)
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
tokenizer_plain = AutoTokenizer.from_pretrained(model_path, use_fast=True)
tokenizer_plain.chat_template = "\n{% for message in messages %}{% if loop.index0 % 2 == 0 %}\n\n<turn> user\n {{ message['content'] }}{% else %}\n\n<turn> assistant\n {{ message['content'] }}{% endif %}{% endfor %}\n\n\n"

# The model answers with a single token, "A" or "B", indicating which response it prefers.
prompt_template = "[CONTEXT] {context} [RESPONSE A] {response_A} [RESPONSE B] {response_B} \n"
token_id_A = tokenizer.encode("A", add_special_tokens=False)
token_id_B = tokenizer.encode("B", add_special_tokens=False)
assert len(token_id_A) == 1 and len(token_id_B) == 1
token_id_A = token_id_A[0]
token_id_B = token_id_B[0]
temperature = 1.0

model.eval()
response_chosen = "BBBB"
response_rejected = "CCCC"

## We can also handle multi-turn conversations; fill in the actual turns below.
instruction = [
    {"role": "user", "content": ...},
    {"role": "assistant", "content": ...},
    {"role": "user", "content": ...},
]
context = tokenizer_plain.apply_chat_template(instruction, tokenize=False)
responses = [response_chosen, response_rejected]

probs_chosen = []
for chosen_position in [0, 1]:
    # We swap the order of the two responses to mitigate position bias.
    response_A = responses[chosen_position]
    response_B = responses[1 - chosen_position]
    prompt = prompt_template.format(context=context, response_A=response_A, response_B=response_B)
    message = [
        {"role": "user", "content": prompt},
    ]
    input_ids = tokenizer.encode(
        tokenizer.apply_chat_template(message, tokenize=False).replace(tokenizer.bos_token, ""),
        return_tensors="pt",
        add_special_tokens=False,
    ).cuda(device)

    with torch.no_grad():
        output = model(input_ids)
    # Logits of the single tokens "A" and "B" at the last position.
    logit_A = output.logits[0, -1, token_id_A].item()
    logit_B = output.logits[0, -1, token_id_B].item()
    # Take a softmax over the two logits to get the probability of the chosen response.
    Z = np.exp(logit_A / temperature) + np.exp(logit_B / temperature)
    logit_chosen = [logit_A, logit_B][chosen_position]
    prob_chosen = np.exp(logit_chosen / temperature) / Z
    probs_chosen.append(prob_chosen)

# Average over both orderings; the pair counts as correctly ranked if the chosen
# response wins on average.
avg_prob_chosen = np.mean(probs_chosen)
correct = 0.5 if avg_prob_chosen == 0.5 else float(avg_prob_chosen > 0.5)
print(correct)
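For n > 2 candidate responses, one way to realize the tournament strategy mentioned above is to keep a running champion and challenge it with each remaining response, which needs only n - 1 pairwise calls. The helpers below are a sketch, not part of the released code: rank_pair wraps the position-debiased comparison from the example above and reuses its model, tokenizer, prompt_template, token_id_A, token_id_B, temperature, and device globals.

def rank_pair(context, response_A, response_B):
    # Hypothetical helper: returns P(response_A preferred over response_B),
    # averaged over both orderings exactly as in the loop above.
    pair = [response_A, response_B]
    probs_first = []
    for position in [0, 1]:
        prompt = prompt_template.format(
            context=context, response_A=pair[position], response_B=pair[1 - position]
        )
        message = [{"role": "user", "content": prompt}]
        input_ids = tokenizer.encode(
            tokenizer.apply_chat_template(message, tokenize=False).replace(tokenizer.bos_token, ""),
            return_tensors="pt",
            add_special_tokens=False,
        ).cuda(device)
        with torch.no_grad():
            output = model(input_ids)
        logit_A = output.logits[0, -1, token_id_A].item()
        logit_B = output.logits[0, -1, token_id_B].item()
        Z = np.exp(logit_A / temperature) + np.exp(logit_B / temperature)
        # In this ordering, response_A occupies slot A (position 0) or slot B (position 1).
        probs_first.append(np.exp([logit_A, logit_B][position] / temperature) / Z)
    return float(np.mean(probs_first))

def tournament_best(context, responses):
    # Pick the best of n responses with n - 1 pairwise comparisons (linear in n).
    best = responses[0]
    for challenger in responses[1:]:
        if rank_pair(context, best, challenger) < 0.5:
            best = challenger
    return best

A call such as tournament_best(context, [response_1, response_2, response_3]) then returns the response the preference model favors overall.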
Model tree for QuantFactory/pair-preference-model-LLaMA3-8B-GGUF
Base model: RLHFlow/pair-preference-model-LLaMA3-8B