# Competitive Programming LLM for Python Language

This model is a fine-tuned version of codegen-350M-mono on a Python code dataset, using Alpaca-style prompts during training.

## Prompt function

```python
def generate_prompt(instruction, inputs=""):
    """
    Generates a prompt from the problem description and input.

    :param instruction: str - text problem description
    :param inputs: str - input to the program
    """
    text = ("Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
            "### Instruction:\n"
            f"{instruction}\n\n"
            "### Input:\n"
            f"{inputs}\n\n"
            "### Output:\n")
    return text
```
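
For example, calling `generate_prompt` with a short problem description and input produces the exact prompt format the model expects at inference time (the output below follows directly from the function body):

```python
prompt = generate_prompt("Write a function to calculate square of a number in python",
                         "number = 5")
print(prompt)
# Below is an instruction that describes a task. Write a response that appropriately completes the request.
#
# ### Instruction:
# Write a function to calculate square of a number in python
#
# ### Input:
# number = 5
#
# ### Output:
```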

## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("iamtarun/codegen-350M-mono-4bit-qlora", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("iamtarun/codegen-350M-mono-4bit-qlora")

# put the model in inference mode
model.eval()

def pipe(prompt):
    """
    Returns the generated response for a text prompt built by the
    generate_prompt function.

    :param prompt: str - text prompt generated using the generate_prompt function
    """
    # move inputs to the device the model was placed on by device_map="auto"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs,
                                max_length=512,
                                do_sample=True,
                                temperature=0.5,
                                top_p=0.95,
                                repetition_penalty=1.15)
    return tokenizer.decode(output[0],
                            skip_special_tokens=True,
                            clean_up_tokenization_spaces=False)

# generating code for a problem description
instruction = "Write a function to calculate square of a number in python"
inputs = "number = 5"
prompt = generate_prompt(instruction, inputs)
print(pipe(prompt))
print("\n", "=" * 100)
```
