
# LLM-Detector: Improving AI-generated Chinese Text Detection with Large Language Models
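This repo contains custom model code, so pass `trust_remote_code=True` when loading. The quickstart below loads the tokenizer and model with 🤗 Transformers (`device_map="auto"` additionally requires the `accelerate` package) and runs a simple chat turn: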

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig

# Note: the default behavior now has injection attack prevention off.
tokenizer = AutoTokenizer.from_pretrained("QiYuan-tech/LLM-Detector-Small-en", trust_remote_code=True)

# use bf16
# model = AutoModelForCausalLM.from_pretrained("QiYuan-tech/LLM-Detector-Small-en", device_map="auto", trust_remote_code=True, bf16=True).eval()

# use fp16
# model = AutoModelForCausalLM.from_pretrained("QiYuan-tech/LLM-Detector-Small-en", device_map="auto", trust_remote_code=True, fp16=True).eval()

# use cpu only
# model = AutoModelForCausalLM.from_pretrained("QiYuan-tech/LLM-Detector-Small-en", device_map="cpu", trust_remote_code=True).eval()

# use auto mode: automatically select precision based on the device.
model = AutoModelForCausalLM.from_pretrained("QiYuan-tech/LLM-Detector-Small-en", device_map="auto", trust_remote_code=True).eval()
# model = AutoModelForCausalLM.from_pretrained("QiYuan-tech/LLM-Detector-Small-en", device_map="auto", trust_remote_code=True).cuda()

# Specify hyperparameters for generation (not needed with transformers>=4.32.0).
# model.generation_config = GenerationConfig.from_pretrained("QiYuan-tech/LLM-Detector-Small-en", trust_remote_code=True)  # you can set a different generation length, top_p, and other related hyperparameters

response, history = model.chat(tokenizer, "Hello", history=None)
print(response)
```
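The same chat interface is used for detection. The exact instruction template used during fine-tuning is not documented in this card, so the prompt below is only an illustrative sketch; adjust it to match the training format.

```python
# Hypothetical detection prompt -- the real instruction format used in
# training may differ from this sketch.
text = "The quick brown fox jumps over the lazy dog."
prompt = f"Is the following text written by a human or generated by an AI? Text: {text}"

response, history = model.chat(tokenizer, prompt, history=None)
print(response)  # expected to answer with a label such as "Human" or "AI-generated"
```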
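If you are on transformers < 4.32.0, or want to override decoding settings, you can load and edit the generation config explicitly. The values below are illustrative, not the model's tuned defaults:

```python
from transformers.generation import GenerationConfig

# Load the repo's generation config (only needed on transformers < 4.32.0).
model.generation_config = GenerationConfig.from_pretrained(
    "QiYuan-tech/LLM-Detector-Small-en", trust_remote_code=True
)

# Illustrative overrides -- a detection label only needs a few tokens.
model.generation_config.top_p = 0.8
model.generation_config.max_new_tokens = 32
```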