
ReDiX 1.5B JSON MODE and FUNCTION CALLING

This model is a fine-tuned version of Qwen2-1.5B, trained on ReDiX/xlam-function-calling-60k-ita.

Its purpose is function calling and the generation of structured JSON for integration into both simple and complex pipelines.

How to use

The model will always generate a response, even when the requested tool does not exist or is not listed among the available tools in the system prompt. It is up to the surrounding software pipeline to catch erroneous output by parsing and validating the response JSON, as in the sketch below.
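A minimal validation sketch: it assumes the model emits a single call shaped like {"name": ..., "arguments": {...}} (the exact output schema is not documented here, so adapt the field names to what your deployment actually produces).

import json

def validate_call(raw_output: str, tools: list):
    """Reject output that is not valid JSON or that calls an undeclared tool.

    Assumes a {"name": ..., "arguments": {...}} shaped call; this is an
    assumption, not a documented schema.
    """
    try:
        call = json.loads(raw_output)
    except json.JSONDecodeError:
        return None  # not valid JSON at all
    known = {tool["name"] for tool in tools}
    if not isinstance(call, dict) or call.get("name") not in known:
        return None  # hallucinated or undeclared tool
    return call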

System prompt (keep verbatim, do not change; {TOOL DEFINITIONS} is replaced with your tool definitions and {INFORMAZIONI REALTIME (es data di oggi)} with realtime information such as today's date):

Sei un Assistente AI che ha accesso ai seguenti tools:

{TOOL DEFINITIONS} 

Genera in formato JSON la chiamata necessaria per soddisfare la richiesta dell'utente.
{INFORMAZIONI REALTIME (es data di oggi)}
Conversation example:
<|im_start|>system
Sei un Assistente AI che ha accesso ai seguenti tools:

Use the function 'stock_search' to: Get stock analysis and values
{
  "name": "stock_search",
  "description": "Get stock values",
  "parameters": {
    "ticker": {
      "param_type": "string",
      "description": "Identifier of the ticker, es: AAPL or list for multiple tickers [‘ticker1’, ‘ticker2’]“,
      "required": true
    },
    "start": {
      "param_type": "string",
      "description": "Range start date, es: 2022-01-01",
      "required": true
    },
    "end": {
      "param_type": "string",
      "description": "Range end date, es: 2022-01-01",
      "required": true
    }
  }
}
Genera in formato JSON la chiamata necessaria per soddisfare la richiesta dell'utente.
Oggi è il 2024-08-01
<|im_end|>
<|im_start|>user
Vorrei sapere com’è andata microsoft (MSFT) e tesla (TSLA) nel corso di luglio<|im_end|>
<|im_start|>assistant
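
One way to render the {TOOL DEFINITIONS} and realtime placeholders into a system prompt in the format shown above. This is a sketch: the helper name and exact formatting are illustrative, only the surrounding Italian instructions come from the card.

import json
from datetime import date

def build_system_prompt(tools: list) -> str:
    # Render each tool as "Use the function '<name>' to: <description>"
    # followed by its JSON definition, mirroring the conversation example above.
    blocks = [
        f"Use the function '{t['name']}' to: {t['description']}\n"
        + json.dumps(t, indent=2, ensure_ascii=False)
        for t in tools
    ]
    return (
        "Sei un Assistente AI che ha accesso ai seguenti tools:\n\n"
        + "\n\n".join(blocks)
        + "\n\nGenera in formato JSON la chiamata necessaria per soddisfare la richiesta dell'utente.\n"
        + f"Oggi è il {date.today().isoformat()}"  # realtime information placeholder
    )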
Code example
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


model_id = "ReDiX/ReDiX-1.5B-JSON"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
  model_id, 
  device_map="cuda",
  trust_remote_code=True, 
  torch_dtype=torch.bfloat16
).eval()

def redix_generate(tools, prompt) -> str:
    # Build the fixed system prompt with the available tool definitions.
    messages = [
        {"role": "system", "content": f"Sei un Assistente AI che ha accesso ai seguenti tools:\n\n{tools}\n\nGenera in formato JSON la chiamata necessaria per soddisfare la richiesta dell'utente."},
        {"role": "user", "content": prompt},
    ]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=256,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )
    # Strip the prompt tokens so only the newly generated JSON call is decoded.
    generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)]

    return tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]


tools = [
    {
        "name": "stock_search",
        "description": "Get stock values",
        "parameters": {
            "ticker": {
                "param_type": "string",
                "description": "Identifier of the ticker, es: AAPL or list for multiple tickers ['ticker1', 'ticker2']",
                "required": True,
            },
            "start": {
                "param_type": "string",
                "description": "Range start date, es: 2022-01-01",
                "required": True,
            },
            "end": {
                "param_type": "string",
                "description": "Range end date, es: 2022-01-01",
                "required": True,
            },
        },
    },
    {
        "name": "lights_control",
        "description": "control house lights",
        "parameters": {
            "light_id": {
                "param_type": "string",
                "description": "Identifier of the chosen light, available are ['cucina', 'salotto', 'camera_da_letto']",
                "required": True,
            },
            "status": {
                "param_type": "string",
                "description": "Can be 'On' or 'Off'",
                "required": True,
            },
        },
    },
]



response = redix_generate(tools, "Accendi tutte le luci in casa")
print(response)
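
From here the generated response has to be parsed and routed to real code, as noted above. A minimal dispatch sketch, again assuming a single {"name": ..., "arguments": {...}} shaped call; the model may instead return a list of calls, so adapt the parsing to the output you actually observe. lights_control here is a hypothetical handler, not part of the model.

import json

def lights_control(light_id: str, status: str) -> str:
    # Hypothetical handler: wire this to your real home-automation API.
    return f"Light '{light_id}' set to {status}"

HANDLERS = {"lights_control": lights_control}

try:
    call = json.loads(response)
except json.JSONDecodeError:
    call = None

if isinstance(call, dict) and call.get("name") in HANDLERS:
    print(HANDLERS[call["name"]](**call.get("arguments", {})))
else:
    print("Unrecognised or malformed tool call:", response)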

Training

We trained this model on a single NVIDIA RTX A6000 (48 GB) for about 5 hours.
