---
license: apache-2.0
datasets:
- kaist-ai/Multifaceted-Collection-RM
- Anthropic/hh-rlhf
- tasksource/oasst1_pairwise_rlhf_reward
- openai/webgpt_comparisons
language:
- en
library_name: transformers
---
## Links for Reference
- **Homepage:** https://lklab.kaist.ac.kr/Janus/
- **Repository:** https://github.com/kaistAI/Janus
- **Paper:** https://arxiv.org/abs/2405.17977
- **Point of Contact:** [email protected]
# TL;DR
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6550c4f27bbfce1878f5f280/vrQl8D8FV3vqUJYbPgsiG.png)
Janus is a model built on [Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2) and trained on [Multifaceted Collection](https://huggingface.co/datasets/kaist-ai/Multifaceted-Collection-SFT), a preference dataset containing 196k unique system messages for aligning LLMs to diverse human preferences. Janus not only excels at generating personalized responses that cater to various human preferences but is also adept at producing responses that are generally preferred for being helpful and harmless.
# Model Details
Janus-RM-7B is a reward model created by training Janus-7B (trained for one epoch on the full 196k training instances) on [Multifaceted-Collection-RM](https://huggingface.co/datasets/kaist-ai/Multifaceted-Collection-RM) together with a similar-sized mix of representative general helpfulness data: 72% [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), 14% [OASST1 preprocessed for reward modeling](https://huggingface.co/datasets/tasksource/oasst1_pairwise_rlhf_reward), and 14% [WebGPT Comparisons](https://huggingface.co/datasets/openai/webgpt_comparisons). Janus-RM-7B predicts a scalar reward when given a concatenation of the system message, instruction, and a response. It can be used as a scoring function for Best-of-N sampling (see the sketch after the usage example below) or for preference tuning with proximal policy optimization (PPO).
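For illustration, the general-helpfulness mix described above could be assembled roughly as follows with the `datasets` library. The target size, split choices, and sampling below are assumptions for the sketch, not the authors' exact preprocessing.
```python
from datasets import load_dataset

# Rough sketch of the 72% / 14% / 14% general-helpfulness mix described above.
# target_size is an assumption ("similar-sized" to Multifaceted-Collection-RM).
target_size = 65_000

hh = load_dataset("Anthropic/hh-rlhf", split="train").shuffle(seed=42)
oasst = load_dataset("tasksource/oasst1_pairwise_rlhf_reward", split="train").shuffle(seed=42)
webgpt = load_dataset("openai/webgpt_comparisons", split="train").shuffle(seed=42)

hh_part = hh.select(range(int(0.72 * target_size)))
oasst_part = oasst.select(range(int(0.14 * target_size)))
webgpt_part = webgpt.select(range(int(0.14 * target_size)))
# Each subset still needs to be converted to a common (prompt, chosen, rejected)
# schema before it can be combined with Multifaceted-Collection-RM.
```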
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [Janus-DPO-7B](https://huggingface.co/kaist-ai/janus-dpo-7b), [Janus-ORPO-7B](https://huggingface.co/kaist-ai/janus-orpo-7b), [Janus-7B](https://huggingface.co/kaist-ai/janus-7b)
- **Training Datasets**: [Multifaceted-Collection-RM](https://huggingface.co/datasets/kaist-ai/Multifaceted-Collection-RM), [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf), [tasksource/oasst1_pairwise_rlhf_reward](https://huggingface.co/datasets/tasksource/oasst1_pairwise_rlhf_reward), [openai/webgpt_comparisons](https://huggingface.co/datasets/openai/webgpt_comparisons)
- **Resources for more information:**
- [Research paper](https://arxiv.org/abs/2405.17977)
- [GitHub Repo](https://github.com/kaistAI/Janus)
# Usage
Here is example code to load the reward model and calculate a scalar reward on a model output.
```python
from transformers import AutoConfig, AutoModel, AutoModelForCausalLM, AutoTokenizer
import torch
import torch.nn as nn
from typing import Optional
model_name = "kaist-ai/janus-7b"
reward_model_name = "kaist-ai/janus-rm-7b"
model_device = "cuda:0"
reward_model_device = "cuda:1"
dtype = "float16"
if torch.cuda.is_bf16_supported():
dtype = "bfloat16"
# Get model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=getattr(torch, dtype))
model.eval()
model.to(model_device)
# Get reward model
def get_reward_model(base_pretrained_model, base_llm_model):
    class LLMForSequenceRegression(base_pretrained_model):
        def __init__(self, config: AutoConfig):
            super().__init__(config)
            setattr(self, self.base_model_prefix, base_llm_model(config))
            # Scalar value head on top of the base model's hidden states
            self.value_head = nn.Linear(config.hidden_size, 1, bias=False)

        def forward(
            self,
            input_ids: torch.LongTensor = None,
            attention_mask: Optional[torch.Tensor] = None,
            return_output=False,
        ) -> torch.Tensor:
            position_ids = attention_mask.long().cumsum(-1) - 1
            position_ids.masked_fill_(attention_mask == 0, 1)
            outputs = getattr(self, self.base_model_prefix)(
                input_ids, attention_mask=attention_mask, position_ids=position_ids
            )
            last_hidden_states = outputs["last_hidden_state"]
            values = self.value_head(last_hidden_states).squeeze(-1)

            # Use the value at the last non-padding token (EOS) as the reward
            eos_indices = attention_mask.size(1) - 1 - attention_mask.long().fliplr().argmax(dim=1, keepdim=True)
            reward = values.gather(dim=1, index=eos_indices).squeeze(1)

            if return_output:
                return reward, outputs
            else:
                return reward

    return LLMForSequenceRegression
config = AutoConfig.from_pretrained(reward_model_name)
config.normalize_reward = True
base_class = AutoModel._model_mapping[type(config)] # <class 'transformers.models.mistral.modeling_mistral.MistralModel'>
base_pretrained_class = base_class.__base__ # <class 'transformers.models.mistral.modeling_mistral.MistralPreTrainedModel'>
print(base_class, base_pretrained_class)
cls_class = get_reward_model(base_pretrained_class, base_class)
reward_model = cls_class.from_pretrained(
    reward_model_name,
    config=config,
    torch_dtype=getattr(torch, dtype),
)
print(reward_model)
reward_model.eval()
reward_model.to(reward_model_device)
# Prepare inputs
system = "You are a savvy beverage consultant, adept at offering quick, concise drink recommendations that cater to the common palette, yet surprise with a touch of creativity. When approached with a request, your expertise shines by suggesting one or two easily recognizable and widely accessible options, ensuring no one feels overwhelmed by complexity or rarity. Your skill lies not just in meeting the immediate need for refreshment but in gently nudging the curious towards unique hydration choices, beautifully balancing familiarity with the thrill of discovery. Importantly, your recommendations are crafted with a keen awareness of dietary preferences, presenting choices that respect and include considerations for sugar-free, dairy-free, and other common dietary restrictions. Your guidance empowers users to explore a range of beverages, confident they are making informed decisions that respect their health and lifestyle needs."
prompt = "If you are thirsty, what can you drink to quench your thirst?"
def apply_template_mistral_instruct(system_message, content):
    prompt = f"{system_message}\n{content}".strip()
    return f"[INST] {prompt} [/INST] "
input_str = apply_template_mistral_instruct(system, prompt)
inputs = tokenizer.encode(input_str, return_tensors="pt")
print(input_str)
model_inputs = inputs.to(model_device)
# Generate text
with torch.inference_mode():
    output_ids = model.generate(model_inputs, max_new_tokens=1024)
decoded = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
output_str = decoded[0][len(input_str):]
print(output_str)
'''
1. **Water**: The ultimate go-to, especially if you're watching what you consume. Opting for sparkling or infused water (think cucumber and mint, berries, or a splash of lemon) can add a bit of excitement and hydration without the added sugar.
2. **Herbal Tea**: Perfect for a warmer climate but equally delightful at any temperature. Choose from various flavors, ranging from the traditional peppermint to chamomile or hibiscus, which adds a unique twist with their own health benefits and refreshing flavors. Many options are caffeine-free, making them suitable for all times of the day.
For those needing a touch more sweetness or a slight twist:
3. **Unsweetened Coconut Water**: With its natural sweetness and electrolyte content, it's a great hydration pick after a workout or on a hot day. It's also low in calories and naturally sweet, making it an excellent alternative without added sugars.
4. **Sparkling Water with a Splash of Fruit Juice**: To satisfy a craving for something bubbly and fruit-infused with fewer calories and sugars than commercial sodas or juices. Feel free to experiment with different juices to find your favorite combination.
'''
# Get reward
print(input_str + output_str + " " + tokenizer.eos_token)
reward_inputs = tokenizer(
    input_str + output_str + " " + tokenizer.eos_token,  # same as decoded[0] + " " + tokenizer.eos_token
    max_length=2048,
    truncation=True,
    return_tensors="pt",
)
reward_input_ids = reward_inputs.input_ids.to(reward_model_device)
reward_attention_masks = reward_inputs.attention_mask.to(reward_model_device)
rewards = reward_model(input_ids=reward_input_ids, attention_mask=reward_attention_masks)
print(rewards.item())
# 3.28125
```
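As noted in the Model Details section, the reward model can serve as a scoring function for Best-of-N sampling. The sketch below reuses `model`, `tokenizer`, `reward_model`, `model_inputs`, and `input_str` from the example above; the number of candidates and the sampling settings are illustrative assumptions.
```python
# Minimal Best-of-N sampling sketch (N and sampling settings are assumptions).
N = 4
with torch.inference_mode():
    candidate_ids = model.generate(
        model_inputs,
        max_new_tokens=1024,
        do_sample=True,
        temperature=0.7,
        num_return_sequences=N,
    )
candidates = tokenizer.batch_decode(candidate_ids, skip_special_tokens=True)

scores = []
for cand in candidates:
    # Score prompt + response + EOS, as in the single-response example above
    enc = tokenizer(cand + " " + tokenizer.eos_token, max_length=2048, truncation=True, return_tensors="pt")
    reward = reward_model(
        input_ids=enc.input_ids.to(reward_model_device),
        attention_mask=enc.attention_mask.to(reward_model_device),
    )
    scores.append(reward.item())

# Keep the candidate with the highest reward, stripping the prompt prefix
best_response = candidates[scores.index(max(scores))][len(input_str):]
print(best_response)
```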
To train Janus and evaluate the responses it generates, please refer to the [GitHub Repo](https://github.com/kaistAI/Janus).
Additionally, refer to [Multifaceted Bench](https://huggingface.co/datasets/kaist-ai/Multifaceted-Bench), which evaluates how well LLMs generate personalized responses.
# Training Details
## Training hyperparameters
The following hyperparameters were used for training (an approximate `TrainingArguments` mapping is sketched after the list):
- learning_rate: 9e-6
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: AdamW with betas=(0.9,0.95)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 3% of the maximum number of steps
- num_epochs: 1
- use_flash_attention_2: true
- maximum_sequence_length: 2048
- bf16: true
- gradient_checkpointing: true
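These settings map approximately onto Hugging Face `TrainingArguments` as sketched below. This is only an illustrative mapping, not the authors' training script; argument names such as `output_dir` and `warmup_ratio` are assumptions, and the actual run also used DeepSpeed ZeRO-3 across 4 GPUs.
```python
from transformers import TrainingArguments

# Approximate, illustrative mapping of the hyperparameters above.
training_args = TrainingArguments(
    output_dir="janus-rm-7b",           # assumed
    learning_rate=9e-6,
    per_device_train_batch_size=8,      # 8 x 4 GPUs x 4 accumulation steps = 128 total
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.95,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,                  # "3% of the maximum number of steps"
    bf16=True,
    gradient_checkpointing=True,
)
```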
## Framework versions
- Transformers 4.40.0.dev0
- PyTorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.0
- DeepSpeed Zero-3
# Citation
If you find this model helpful, please consider citing our paper!
**BibTeX:**
```bibtex
@article{lee2024aligning,
title={Aligning to Thousands of Preferences via System Message Generalization},
author={Lee, Seongyun and Park, Sue Hyun and Kim, Seungone and Seo, Minjoon},
journal={arXiv preprint arXiv:2405.17977},
year={2024}
}
``` |