# Fine-tuned Llama 3.1 8B PEFT int4 for Food Delivery
This model was trained for the experiments carried out in the research paper "Conversing with business process-aware Large Language Models: the BPLLM framework". It is a version of the Llama 3.1 8B model fine-tuned with PEFT and int4 quantization to operate within the context of the Food Delivery process model introduced in the paper, where further details can be found.
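For context, a PEFT fine-tune over an int4-quantized base is typically set up with the `peft` and `bitsandbytes` libraries. The sketch below is illustrative only: it assumes a LoRA-style adapter, and every hyperparameter value shown is a placeholder rather than the configuration actually used for this model.

```python
# Illustrative sketch only: a typical PEFT (LoRA) setup over an int4-quantized
# base model. All hyperparameter values are placeholders, not the settings
# used to train this model.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                    # int4 quantization of the base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",   # base model, per the model tree below
    quantization_config=bnb_config,
    device_map="auto",
)
lora_config = LoraConfig(
    r=16,                                 # placeholder adapter rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # placeholder target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```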
## Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://huggingface.co/autotrain).
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"  # e.g. "angeloc1/llama3dot1FoodDel4v05"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"},
]
input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)

# Use the device the model was actually placed on instead of hard-coding "cuda"
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
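If the repository contains only the PEFT adapter weights rather than merged weights (check the repository files), the `peft` library can resolve the base model and apply the adapter automatically; a minimal sketch:

```python
# Minimal sketch, assuming this repo holds a PEFT adapter (an assumption;
# check the repository files): AutoPeftModelForCausalLM loads the base model
# and applies the adapter on top of it.
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()
```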
## Model tree for angeloc1/llama3dot1FoodDel4v05

- Base model: meta-llama/Llama-3.1-8B
- Fine-tuned from: meta-llama/Llama-3.1-8B-Instruct