Qwen-1.5-1.8B-SQL Model
Description
This model, deltawi/Qwen-1.5-1.8B-SQL, is a LoRA adapter fine-tuned to generate SQL queries from a natural-language question together with the relevant table context (CREATE TABLE statements). It builds on the Qwen 1.5 1.8B Chat base model.
Installation
To use this model, install the transformers library from Hugging Face, together with huggingface_hub, accelerate, and peft (required to load the LoRA adapter). You can do this with pip:
pip install transformers huggingface_hub accelerate peft
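Optionally, you can confirm the installed versions before loading the model. The snippet below is just a convenience check; the package names match the pip command above:
import transformers, peft, accelerate

print("transformers:", transformers.__version__)
print("peft:", peft.__version__)          # this adapter was published with PEFT 0.8.2
print("accelerate:", accelerate.__version__)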
Usage
from transformers import AutoModelForCausalLM, AutoTokenizer

# Set the device
device = "cuda"  # replace with your device: "cpu", "cuda", "mps"

peft_model_id = "deltawi/Qwen-1.5-1.8B-SQL"
base_model_id = "Qwen/Qwen1.5-1.8B-Chat"

# Load the base model and attach the fine-tuned LoRA adapter
model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model.load_adapter(peft_model_id)

# Load the tokenizer from the adapter repository
tokenizer = AutoTokenizer.from_pretrained(
    peft_model_id,
    # model_max_length=2048,
    padding_side="right",
    trust_remote_code=True,
    pad_token='<|endoftext|>'
)
# Define your question and context
Question = "Your question here"
Context = """
Your SQL context here
"""
# Create the prompt
prompt = f"Question: {Question}\nContext: {Context}"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]

# Prepare the input
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
# Generate the response
generated_ids = model.generate(
    model_inputs.input_ids,
    attention_mask=model_inputs.attention_mask,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated tokens remain
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
# Decode the response
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
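As a concrete illustration of the expected inputs, the placeholders above can be filled with a question and a CREATE TABLE context in the style of the training data. The schema and question below are invented for demonstration, and the query in the comment is the kind of output the model is intended to produce, not a recorded generation:
Question = "How many employees work in the Sales department?"
Context = """
CREATE TABLE employees (id INTEGER, name VARCHAR, department VARCHAR)
"""

prompt = f"Question: {Question}\nContext: {Context}"
# A well-formed answer would look like:
# SELECT COUNT(*) FROM employees WHERE department = 'Sales'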
More details
- Base Model: Qwen/Qwen1.5-1.8B-Chat
- Fine-tuned for: SQL Query Generation
- Fine-tuning method: LoRA with rank r=64 (an illustrative PEFT configuration is sketched below)
- Training Data: b-mc2/sql-create-context
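For reference, here is a minimal sketch of how a rank-64 LoRA adapter could be configured with PEFT on this base model. Only r=64 is documented for this checkpoint; lora_alpha, lora_dropout, and target_modules are assumptions for illustration, not the actual training configuration:
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model_id = "Qwen/Qwen1.5-1.8B-Chat"
model = AutoModelForCausalLM.from_pretrained(base_model_id)

# Illustrative values: only r=64 is documented for this adapter;
# the remaining hyperparameters are assumptions.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()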
Framework versions
- PEFT 0.8.2