This is a LLaMAfied version of Alibaba Cloud's Qwen2-0.5B-Instruct model. The original conversion script can be found at https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py; I modified it to make it compatible with Qwen2. This model was converted with https://github.com/Minami-su/character_AI_open/tree/main/Qwen2_llamafy_Mistralfy.

Usage:


```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("Minami-su/Qwen2-0.5B-Instruct-llamafy")
model = AutoModelForCausalLM.from_pretrained(
    "Minami-su/Qwen2-0.5B-Instruct-llamafy",
    torch_dtype="auto",
    device_map="auto",
)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "user", "content": "Who are you?"}
]

# Build the chat prompt and move it to the same device as the model.
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
inputs = inputs.to(model.device)

# Generate a reply, streaming the output token by token.
generate_ids = model.generate(inputs, max_length=2048, streamer=streamer)
```
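
Because the weights follow the Llama architecture, the checkpoint should also load directly through the Llama model class. The snippet below is a minimal sketch under that assumption, not an official example:

```python
# Sketch: load the llamafied checkpoint explicitly with the Llama class.
from transformers import AutoTokenizer, LlamaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Minami-su/Qwen2-0.5B-Instruct-llamafy")
model = LlamaForCausalLM.from_pretrained("Minami-su/Qwen2-0.5B-Instruct-llamafy", torch_dtype="auto")
print(model.config.model_type)  # expected to report "llama" rather than "qwen2"
```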
