Model Card for WildLlama-7b-user-assistant
Model Description
WildLlama-7b-user-assistant is a chatbot fine-tuned from Meta's Llama 2 model (released under the Llama 2 License) on the user-ChatGPT conversations in the WildChat Dataset. It is trained to predict both user prompts and assistant responses. Note that it produces weaker assistant responses than WildLlama-7b-assistant-only, which is trained to predict assistant responses only; if assistant response quality matters most, use WildLlama-7b-assistant-only instead.
- Model type: Language model
- Language(s) (NLP): multilingual
- License: AI2 ImpACT License - Medium Risk Artifacts ("MR Agreement")
- Parent Model: https://huggingface.co/meta-llama/Llama-2-7b-hf
- Paper: https://arxiv.org/abs/2405.01470
- Visualization Tool: https://wildvisualizer.com
- Visualization Paper: https://arxiv.org/abs/2409.03753
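For reference, the conversation template the model expects is documented in the getting-started code below. The following is a minimal, illustrative sketch (assuming the exact spacing shown there; the serialize helper is hypothetical, not part of the released code) of how a conversation is flattened into the single string the model is trained on:

SYSTEM = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")

def serialize(turns):
    # turns: list of (role, message) pairs alternating between "USER" and "ASSISTANT"
    text = SYSTEM
    for i, (role, message) in enumerate(turns):
        if role == "ASSISTANT":
            prefix = " ASSISTANT: "  # space after the preceding </s>
        elif i == 0:
            prefix = " USER: "       # space after the system prompt
        else:
            prefix = "USER: "        # no space after the preceding </s>
        text += prefix + message + "</s>"
    return text

# serialize([("USER", "abc"), ("ASSISTANT", "def")])
# ends with "... questions. USER: abc</s> ASSISTANT: def</s>"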
Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
Recommendations
We recommend that this model not be used for any high-impact or human-facing purposes as its biases and limitations need to be further explored. We intend this to be a research artifact to advance AI's ability to better serve human needs.
Citation
BibTeX:
@inproceedings{zhao2024wildchat,
  title={WildChat: 1M Chat{GPT} Interaction Logs in the Wild},
  author={Wenting Zhao and Xiang Ren and Jack Hessel and Claire Cardie and Yejin Choi and Yuntian Deng},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=Bl8u7ZRlbM}
}

@misc{deng2024wildvisopensourcevisualizer,
  title={WildVis: Open Source Visualizer for Million-Scale Chat Logs in the Wild},
  author={Yuntian Deng and Wenting Zhao and Jack Hessel and Xiang Ren and Claire Cardie and Yejin Choi},
  year={2024},
  eprint={2409.03753},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2409.03753},
}
How to Get Started with the Model
Use the code below to get started with the model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = 'allenai/WildLlama-7b-user-assistant'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
# Note the exact spacing: a single space before " ASSISTANT:", but none before follow-up "USER:"
# Format: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: abc</s> ASSISTANT: def</s>USER:
# To generate a user prompt in the first turn
prompt = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER:"
model_inputs = tokenizer(prompt, return_tensors='pt', add_special_tokens=False).to(device)
output = model.generate(**model_inputs)  # default generation settings; pass e.g. max_new_tokens to control output length
print("Output:\n" + 100 * '-')
print(tokenizer.decode(output[0], skip_special_tokens=True))
# To generate an assistant response, append " ASSISTANT:" (with a leading space) to the decoded history
prompt = tokenizer.decode(output[0], skip_special_tokens=False) + ' ASSISTANT:'  # keep the </s> turn separators
model_inputs = tokenizer(prompt, return_tensors='pt', add_special_tokens=False).to(device)
output = model.generate(**model_inputs)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(output[0], skip_special_tokens=True))
# To generate a user prompt in follow-up turns, append "USER:" directly after </s> (no space)
prompt = tokenizer.decode(output[0], skip_special_tokens=False) + 'USER:'
model_inputs = tokenizer(prompt, return_tensors='pt', add_special_tokens=False).to(device)
output = model.generate(**model_inputs)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(output[0], skip_special_tokens=True))
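Building on the three steps above, here is a minimal sketch of a loop that alternates user-prompt and assistant-response generation to sample a complete synthetic conversation. The sample_conversation helper and the num_turns and max_new_tokens values are illustrative assumptions, not part of the released code:

def sample_conversation(num_turns=2, max_new_tokens=256):
    # seed with the system prompt and the first "USER:" tag
    text = ("A chat between a curious user and an artificial intelligence assistant. "
            "The assistant gives helpful, detailed, and polite answers to the user's questions. USER:")
    for turn in range(num_turns):
        # sample the user prompt (text already ends with "USER:")
        inputs = tokenizer(text, return_tensors='pt', add_special_tokens=False).to(device)
        text = tokenizer.decode(model.generate(**inputs, max_new_tokens=max_new_tokens)[0],
                                skip_special_tokens=False)
        # sample the assistant response; note the leading space before "ASSISTANT:"
        inputs = tokenizer(text + ' ASSISTANT:', return_tensors='pt', add_special_tokens=False).to(device)
        text = tokenizer.decode(model.generate(**inputs, max_new_tokens=max_new_tokens)[0],
                                skip_special_tokens=False)
        if turn + 1 < num_turns:
            text += 'USER:'  # follow-up user turns attach directly after </s>
    return text

print(sample_conversation())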