Update
This model follows the latest update to GLM-4-9B-Chat and now requires transformers>=4.44.0. Please update your dependencies accordingly before using it.
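If you want to fail fast on an outdated install, a minimal runtime check like the one below works; this check is a convenience sketch, not part of the original card:

```python
import transformers
from packaging import version  # packaging ships as a dependency of transformers

# Abort early if the installed transformers is older than the required 4.44.0
if version.parse(transformers.__version__) < version.parse("4.44.0"):
    raise RuntimeError(
        f"transformers>=4.44.0 is required, found {transformers.__version__}"
    )
```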
Introduction
This model is GLM-4-9B-Chat, fine-tuned on several datasets with a focus on mental health care.
Because the fine-tuning data is Chinese, please interact with it in Chinese, even though the base model also supports English.
Dataset
- Smile dataset
- SoulChat
- single_turn_dataset_1 from EmoLLM
- Self-defined role-playing dataset
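The merged training file built from these corpora is not published with this card. Purely as an illustration, LLaMA-Factory (used for fine-tuning below) accepts Alpaca-style JSON records such as the following; the field names match LLaMA-Factory's alpaca format, while the sample content and file name are hypothetical:

```python
import json

# Hypothetical record in LLaMA-Factory's Alpaca-style SFT format; the real
# merged corpus derived from the datasets above is not published.
samples = [
    {
        "instruction": "我感到很悲伤",  # user utterance: "I feel very sad"
        "input": "",
        # counsellor-style reply: "That sounds hard; would you like to talk
        # about what happened?"
        "output": "听起来你最近很难过，愿意和我聊聊发生了什么吗？",
    },
]

with open("mental_health_sft.json", "w", encoding="utf-8") as f:
    json.dump(samples, f, ensure_ascii=False, indent=2)
```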
Training
Fine-tuning was done with LLaMA-Factory. The training parameters are: (TODO)
Use the following method to quickly call the model with the transformers backend:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

tokenizer = AutoTokenizer.from_pretrained(
    "derek33125/project-angel-chatglm4", trust_remote_code=True
)

query = "我感到很悲伤"  # "I feel very sad"

# Build the chat-formatted prompt tensors
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": query}],
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
)
inputs = inputs.to(device)

model = AutoModelForCausalLM.from_pretrained(
    "derek33125/project-angel-chatglm4",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to(device).eval()

gen_kwargs = {"max_length": 2500, "do_sample": True, "top_k": 1}
with torch.no_grad():
    outputs = model.generate(**inputs, **gen_kwargs)
    # Drop the prompt tokens so only the newly generated reply is decoded
    outputs = outputs[:, inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
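The snippet above covers a single turn. For a multi-turn conversation, re-apply the chat template to the accumulated message history; this continuation is a minimal sketch reusing model, tokenizer, and gen_kwargs from above, and the follow-up user message is invented for illustration:

```python
# Hypothetical second turn: append the model's reply and a new user message,
# then rebuild the prompt from the full history.
history = [
    {"role": "user", "content": query},
    {"role": "assistant", "content": tokenizer.decode(outputs[0], skip_special_tokens=True)},
    {"role": "user", "content": "最近我的压力也很大"},  # "I have also been under a lot of stress lately"
]
inputs = tokenizer.apply_chat_template(
    history,
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
).to(device)

with torch.no_grad():
    outputs = model.generate(**inputs, **gen_kwargs)
    # Decode only the tokens generated after the prompt
    print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```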
Base model: THUDM/glm-4-9b-chat