---
library_name: transformers
tags:
- LoRA
license: apache-2.0
datasets:
- TIGER-Lab/MathInstruct
language:
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
---

![Komodo-Logo](Komodo-Logo.jpg)

Komodo is a Qwen2.5-7B-Instruct model fine-tuned on the TIGER-Lab/MathInstruct dataset to improve the math performance of the base model.

Suggested usage:

```py
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("suayptalha/Komodo-7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("suayptalha/Komodo-7B-Instruct").to("cuda")

example_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

inputs = tokenizer(
    [
        example_prompt.format(
            "",  # Your question here
            "",  # Given input here (may be empty)
            "",  # Leave empty at inference; filled only during training
        )
    ],
    return_tensors="pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens=512, use_cache=True)
tokenizer.batch_decode(outputs)
```
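
Note that `model.generate` returns the prompt tokens followed by the newly generated tokens, so `batch_decode` above echoes the full prompt. A minimal sketch of stripping the prompt before decoding (the helper name `strip_prompt` is ours, and the toy tensors below stand in for real tokenizer/model outputs):

```python
import torch

# Hypothetical helper: keep only the tokens generated after the prompt.
# `output_ids` has shape (batch, prompt_len + new_tokens).
def strip_prompt(output_ids: torch.Tensor, prompt_len: int) -> torch.Tensor:
    return output_ids[:, prompt_len:]

# Toy demonstration with dummy token ids (no model required):
prompt_ids = torch.tensor([[1, 2, 3]])              # pretend prompt
full_ids = torch.tensor([[1, 2, 3, 7, 8, 9]])       # prompt + 3 generated ids
gen_ids = strip_prompt(full_ids, prompt_ids.shape[1])
print(gen_ids.tolist())  # [[7, 8, 9]]
```

With real outputs you would pass `inputs["input_ids"].shape[1]` as `prompt_len` and decode the result with `tokenizer.batch_decode(gen_ids, skip_special_tokens=True)`.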