GRMenon committed on
Commit ddb0998
1 Parent(s): a01507f

Update README.md

Files changed (1):
  1. README.md +12 -10
README.md CHANGED
@@ -6,6 +6,8 @@ tags:
 - mistral
 - text-generation
 - transformers
+- inference endpoints
+- pytorch
 base_model: mistralai/Mistral-7B-Instruct-v0.2
 model-index:
 - name: mental-health-mistral-7b-instructv0.2-finetuned-V2
@@ -24,11 +26,7 @@ It achieves the following results on the evaluation set:
 ## Model description
 
 A Mistral-7B-Instruct-v0.2 model finetuned on a corpus of mental health conversations between a psychologist and a user.
 The intention was to create a mental health assistant, "Connor", to address user questions based on responses from a psychologist.
-
-## Intended uses & limitations
-
-Intended to be used as a mental health chatbot to respond to user queries.
 
 ## Training and evaluation data
 
@@ -37,8 +35,6 @@ Dataset found here :-
 * [Kaggle](https://www.kaggle.com/datasets/thedevastator/nlp-mental-health-conversations)
 * [Huggingface](https://huggingface.co/datasets/Amod/mental_health_counseling_conversations)
 
-## Training procedure
-
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -80,7 +76,10 @@ tokenizer = AutoTokenizer.from_pretrained(
 
 # Create peft model using base_model and finetuned adapter
 config = PeftConfig.from_pretrained(adapter)
-model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, load_in_4bit=True, device_map='auto', torch_dtype='auto')
+model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path,
+                                             load_in_4bit=True,
+                                             device_map='auto',
+                                             torch_dtype='auto')
 model = PeftModel.from_pretrained(model, adapter)
 
 device = "cuda" if torch.cuda.is_available() else "cpu"
@@ -89,10 +88,13 @@ model.eval()
 
 # Prompt content:
 messages = [
     {"role": "user", "content": "Hey Connor! I have been feeling a bit down lately. I could really use some advice on how to feel better?"}
 ]
 
-input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt').to(device)
+input_ids = tokenizer.apply_chat_template(conversation=messages,
+                                          tokenize=True,
+                                          add_generation_prompt=True,
+                                          return_tensors='pt').to(device)
 output_ids = model.generate(input_ids=input_ids, max_new_tokens=512, do_sample=True, pad_token_id=2)
 response = tokenizer.batch_decode(output_ids.detach().cpu().numpy(), skip_special_tokens=True)
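For reference, the fragments touched by this commit assemble into a complete inference script. The sketch below is not part of the commit: the `adapter` repo id is an assumption inferred from the `model-index` name in the card, and `load_in_4bit=True` requires `bitsandbytes` and a CUDA device (newer `transformers` releases prefer passing a `BitsAndBytesConfig` via `quantization_config` instead).

```python
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed adapter repo id, inferred from the model-index name in this card
adapter = "GRMenon/mental-health-mistral-7b-instructv0.2-finetuned-V2"

# Resolve the base checkpoint recorded in the adapter's PEFT config
config = PeftConfig.from_pretrained(adapter)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path,
                                             load_in_4bit=True,  # needs bitsandbytes + CUDA
                                             device_map='auto',
                                             torch_dtype='auto')
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the finetuned LoRA adapter on top of the quantized base model
model = PeftModel.from_pretrained(model, adapter)
model.eval()

device = "cuda" if torch.cuda.is_available() else "cpu"

messages = [
    {"role": "user",
     "content": "Hey Connor! I have been feeling a bit down lately. "
                "I could really use some advice on how to feel better?"}
]

# Mistral-Instruct's chat template wraps the user turn in [INST] ... [/INST]
input_ids = tokenizer.apply_chat_template(conversation=messages,
                                          tokenize=True,
                                          add_generation_prompt=True,
                                          return_tensors='pt').to(device)

with torch.no_grad():
    output_ids = model.generate(input_ids=input_ids,
                                max_new_tokens=512,
                                do_sample=True,
                                pad_token_id=tokenizer.eos_token_id)  # id 2 for Mistral

# Decode only the newly generated tokens, not the echoed prompt
response = tokenizer.batch_decode(output_ids[:, input_ids.shape[1]:],
                                  skip_special_tokens=True)
print(response[0])
```

Because `do_sample=True` draws tokens stochastically, replies vary between runs; call `torch.manual_seed(...)` beforehand if reproducible output is needed.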