
Model Card for LibrAI/maliciousinstruct-evaluator

This repository is a backup of the evaluator from Jailbreak_LLM, used for safety evaluation of LLM responses, in particular on the MaliciousInstruct benchmark.
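A minimal usage sketch is shown below. It assumes the checkpoint is a sequence-classification model loadable with the Hugging Face transformers AutoModelForSequenceClassification API; the meaning of the output classes is defined by the original Jailbreak_LLM evaluator, so consult that repository for label semantics.

```python
# Minimal sketch: score an LLM response with the evaluator.
# Assumption: the checkpoint is compatible with AutoModelForSequenceClassification;
# class/label semantics follow the Jailbreak_LLM repository.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "LibrAI/maliciousinstruct-evaluator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# An LLM response to be evaluated for safety.
response = "I cannot help with that request."

inputs = tokenizer(response, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
print(probs)  # per-class scores; interpretation depends on the evaluator's label mapping
```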

Citation

@article{huang2023catastrophic,
  title={Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation},
  author={Huang, Yangsibo and Gupta, Samyak and Xia, Mengzhou and Li, Kai and Chen, Danqi},
  journal={arXiv preprint arXiv:2310.06987},
  year={2023}
}