---
license: mit
datasets:
  - facebook/empathetic_dialogues
language:
  - en
base_model: alignment-handbook/zephyr-7b-sft-full
widget:
  - example_title: Pirate!
    messages:
      - role: system
        content: You are a friendly assistant, who provides empathetic responses to the user. The input contains previous turns of the dialog, where each utterance is prefaced with tags <|user|>, or <|assistant|>. Be empathetic and precise. Make sure to give responses that make the dialogue flow. Avoid repeating the prompt. Please respond creatively and expressively to make the responses longer. You can offer advice.
      - role: user
        content: Yeah about 10 years ago I had a horrifying experience. It was 100% their fault but they hit the water barrels and survived. They had no injuries but they almost ran me off the road.
      - role: assistant
        content: Did you suffer any injuries?
      - role: user
        content: No I wasn't hit. It turned out they were drunk. I felt guilty but realized it was his fault.
    output:
      text: >-
        That's good that you didn't get hurt. I hope they got in trouble for driving drunk.
        
pipeline_tag: text-generation
model-index:
  - name: justtherightsize/zephyr-7b-sft-full124
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Open LLM Leaderboard
          type: various
          config: various
          split: various
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            name: accuracy
            value: 0.2701
        source:
          name: Open LLM Leaderboard
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            name: accuracy
            value: 58.50
        source:
          name: MMLU
          url: >-
            https://github.com/huggingface/lm-evaluation-harness.git



---
# Model Card for zephyr-7b-sft-full124
This model participates in multi-turn dialogues and responds empathetically.


## Model Description
We propose a data-driven solution for Empathetic Response Generation with LLMs: aligning LLMs via preference optimization algorithms. First, we build a preference dataset using the benchmark dataset EmpatheticDialogues (Rashkin et al., 2019), which contains short multi-turn human-to-human dialogues grounded by emotion labels. We leverage this emotion grounding to sample dialog completions labeled with polar opposite emotions using Plutchik's wheel (Plutchik, 2001), such that each prompt is paired with a preferred and a non-preferred completion. We then fine-tune a foundational LLM using Direct Preference Optimization (DPO) (Rafailov et al., 2024) to generate responses aligned with the preferred candidate response.
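
For concreteness, here is a minimal sketch of the pairing-plus-DPO recipe described above, assuming a recent version of `trl`. The sample records, field names, and hyperparameters are illustrative assumptions, not the authors' exact configuration (see the repository below for the real training setup).

```python
# Illustrative sketch only: preference-pair construction via Plutchik opposites
# followed by a DPO step. Data and hyperparameters are assumptions.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Plutchik's wheel pairs each basic emotion with a polar opposite; completions
# sampled under the opposite emotion serve as the non-preferred candidates.
PLUTCHIK_OPPOSITE = {
    "joy": "sadness", "sadness": "joy",
    "trust": "disgust", "disgust": "trust",
    "fear": "anger", "anger": "fear",
    "surprise": "anticipation", "anticipation": "surprise",
}

# Toy preference records: each prompt is paired with a preferred ("chosen")
# completion and a non-preferred ("rejected") opposite-emotion completion.
pairs = Dataset.from_dict({
    "prompt": ["<|user|>\nI finally passed my driving test!</s>\n<|assistant|>\n"],
    "chosen": ["Congratulations, that's wonderful news! How will you celebrate?"],
    "rejected": ["That's unfortunate. These things rarely work out."],
})

base_model_id = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(base_model_id)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

trainer = DPOTrainer(
    model=model,  # trl creates the frozen reference model internally
    args=DPOConfig(output_dir="zephyr-7b-dpo-empathy", beta=0.1),
    train_dataset=pairs,
    processing_class=tokenizer,
)
trainer.train()
```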

- **Developed by:** TBA
- **Model type:** Autoregressive decoder-only transformer
- **Language(s):** en
- **Finetuned from:** alignment-handbook/zephyr-7b-sft-full

## Sources
- **Repository:** <https://github.com/justtherightsize/empo>
- **(*non-anonymized*) Paper preprint:** <https://arxiv.org/abs/2406.19071>

## Usage
Generate a response in a dialogue. You must be logged in to Hugging Face and have accepted the license of the base model.
```python
from peft import PeftModel
from transformers import BitsAndBytesConfig, AutoModelForCausalLM, AutoTokenizer, pipeline
import torch
from huggingface_hub import login

# HF login: you have to be logged in and agree to the license of the base
# model: https://huggingface.co/alignment-handbook/zephyr-7b-sft-full
hf_key = "Your key here"
login(hf_key)

# Load the fine-tuned adapter's tokenizer from the Hub
adapter_id = "justtherightsize/zephyr-7b-sft-full124"
base_model_id = "alignment-handbook/zephyr-7b-sft-full"
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Prepare dialog and convert to chat template
sys_msg = "You are a friendly assistant, who provides empathetic responses to the user. " \
            "The input contains previous turn of the dialog, where each utterance is prefaced " \
            "with tags <|user|>, or <|assistant|>. Be empathetic and precise. " \
            "Make sure to give responses that make dialogue flow. Avoid repeating the prompt. " \
            "Please respond creatively and expressively to make the responses longer. You can offer advice."

dialog = ["Yeah about 10 years ago I had a horrifying experience. It was 100% their fault but they hit the water barrels and survived. They had no injuries but they almost ran me off the road.", 
        "Did you suffer any injuries?", 
        "No I wasn't hit. It turned out they were drunk. I felt guilty but realized it was his fault."]

dwroles = [{"role": "system", "content": sys_msg}]
for j, utt in enumerate(dialog):
    # Even indices are user turns, odd indices are assistant turns
    dwroles.append({"role": "user" if j % 2 == 0 else "assistant", "content": utt})
template = tokenizer.apply_chat_template(dwroles, tokenize=False, add_generation_prompt=True)

# Load the quantized base model, resize embeddings to the adapter's tokenizer, then attach the PEFT adapter
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=quantization_config,
    device_map="auto",  # place the quantized weights on the available GPU(s)
    trust_remote_code=True
)
model.resize_token_embeddings(len(tokenizer))  # match the embedding matrix to the adapter's tokenizer
model.config.use_cache = True                  # enable the KV cache for faster generation
model = PeftModel.from_pretrained(model, adapter_id)

# Instantiate generation pipeline
pipe_gen = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Generate the response
out = pipe_gen(template, return_full_text=False, max_new_tokens=500)[0]['generated_text']
print(out)
```
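
If the base model's generation config does not enable sampling, the pipeline decodes greedily; generation keyword arguments are forwarded, so sampling can be switched on for more varied responses. The values below are illustrative assumptions, not tuned settings from the paper:

```python
# Illustrative sampling settings (assumed values, not the authors' tuned ones)
out = pipe_gen(template, return_full_text=False, max_new_tokens=500,
               do_sample=True, temperature=0.7, top_p=0.9)[0]['generated_text']
print(out)
```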


## Out-of-Scope Usage
Fine-tuning on EmpatheticDialogues specialized the model toward empathetic multi-turn dialogue; performance on general-purpose tasks may degrade (see the Open LLM Leaderboard score above).

## Training
Please refer to: <https://github.com/justtherightsize/empo?tab=readme-ov-file#training>

## Cite
TBA; for now, please cite the **non-anonymized** [preprint](https://arxiv.org/abs/2406.19071).