# gemma-alpaca

unsloth/gemma-7b-bnb-4bit fine-tuned on the yahma/alpaca-cleaned dataset.
## Usage
```bash
pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
```
```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
model, tokenizer = FastLanguageModel.from_pretrained("gnumanth/gemma-unsloth-alpaca")

# The Alpaca prompt template used for fine-tuning.
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Give me a python code for quicksort",  # instruction
            "1,-1,0,8,9,-2,2",  # input
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors="pt",
).to("cuda")

# Stream tokens to stdout as they are generated.
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=128)
```
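If you want the completion as a plain string rather than a live stream, the same inputs can be decoded after generation. This is a minimal sketch using the standard `transformers` API; the `### Response:`-splitting step is an assumed post-processing heuristic for stripping the echoed prompt, not part of the original card:

```python
# Generate without a streamer and decode the result to a string.
outputs = model.generate(**inputs, max_new_tokens=128)
completion = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]

# generate() echoes the prompt, so keep only the text after "### Response:".
# (Assumed post-processing step, not from the original card.)
response = completion.split("### Response:")[-1].strip()
print(response)
```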
The streamed call above produces output like:

```
<bos>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Give me a python code for quicksort

### Input:
1,-1,0,8,9,-2,2

### Response:
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    left = [i for i in arr[1:] if i < pivot]
    right = [i for i in arr[1:] if i >= pivot]
    return quicksort(left) + [pivot] + quicksort(right)<eos>
```
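As a sanity check, the generated function is a standard recursive quicksort; running it on the input list from the prompt sorts it correctly:

```python
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    left = [i for i in arr[1:] if i < pivot]
    right = [i for i in arr[1:] if i >= pivot]
    return quicksort(left) + [pivot] + quicksort(right)

# The "Input" field from the prompt, parsed into a list of ints.
values = [int(x) for x in "1,-1,0,8,9,-2,2".split(",")]
print(quicksort(values))  # [-2, -1, 0, 1, 2, 8, 9]
```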
Hemanth HM | Built with Unsloth
## Model tree

Base model: [unsloth/gemma-7b-bnb-4bit](https://huggingface.co/unsloth/gemma-7b-bnb-4bit)
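If you want to reproduce a fine-tune like this one starting from the base model, Unsloth's usual LoRA setup looks roughly like the sketch below. All hyperparameters here are illustrative assumptions; the actual training configuration for gnumanth/gemma-unsloth-alpaca is not documented in this card.

```python
from unsloth import FastLanguageModel

# Illustrative settings only -- not the documented configuration
# used to train gnumanth/gemma-unsloth-alpaca.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-7b-bnb-4bit",
    max_seq_length=2048,  # assumption
    dtype=None,           # auto-detect
    load_in_4bit=True,
)

# Attach LoRA adapters for parameter-efficient fine-tuning.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # LoRA rank, assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,   # assumption
)
```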