
Function Calling and Tool Use LLaMA Models

This repository contains two versions of LLaMA models fine-tuned for function calling and tool use:

  1. A fine-tuned version of the Llama3-8b-instruct model
  2. A fine-tuned version of TinyLlama, a smaller model

For each version, the following variants are available:

  • 16-bit quantized model
  • 4-bit quantized model
  • GGUF format for use with llama.cpp

Dataset

The models were fine-tuned using a modified version of the ilacai/glaive-function-calling-v2-sharegpt dataset, which can be found at unclecode/glaive-function-calling-llama3.
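To illustrate what a function-calling training record in a ShareGPT-style dataset typically looks like, here is a minimal sketch. The field names, role labels, and helper function below are illustrative assumptions, not the exact schema of the dataset above:

```python
import json

# Illustrative sketch of a ShareGPT-style function-calling record, similar in
# spirit to glaive-function-calling-v2-sharegpt. The exact field names and
# role labels in the real dataset may differ; make_example() is hypothetical.
def make_example(system_prompt, user_msg, call_name, call_args,
                 tool_result, final_answer):
    """Build one training conversation as a list of role-tagged turns."""
    return {
        "conversations": [
            {"from": "system", "value": system_prompt},
            {"from": "human", "value": user_msg},
            # The assistant emits the function call as a JSON payload.
            {"from": "gpt", "value": json.dumps(
                {"name": call_name, "arguments": call_args})},
            # The tool's result is fed back as an observation turn.
            {"from": "tool", "value": json.dumps(tool_result)},
            # The assistant then answers in natural language.
            {"from": "gpt", "value": final_answer},
        ]
    }

example = make_example(
    system_prompt="You have access to a get_weather(city) function.",
    user_msg="What's the weather in Paris?",
    call_name="get_weather",
    call_args={"city": "Paris"},
    tool_result={"temp_c": 18, "condition": "cloudy"},
    final_answer="It's 18 degrees and cloudy in Paris.",
)
print(example["conversations"][2]["value"])
```

Each conversation interleaves the user request, the model's structured call, and the tool's response, which is what teaches the model to emit calls rather than free-form text.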

Usage

To learn how to use these models, refer to the accompanying Colab notebook.

This is the first version of the models, and work is in progress to further train them with multi-tool detection and native tool binding support.
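Until native tool binding lands, a caller typically has to extract the function call from the model's raw text output. The sketch below assumes the call is wrapped in `<functioncall>` tags with a JSON payload, as in glaive-style training data; the exact tag format these models emit may differ, and `parse_function_call` is a hypothetical helper:

```python
import json
import re

# Hedged sketch: extract a function call from raw model output, assuming the
# model wraps calls in <functioncall>...</functioncall> tags around a JSON
# payload. The real tag format used by these checkpoints may differ.
CALL_RE = re.compile(r"<functioncall>\s*(\{.*?\})\s*</functioncall>", re.DOTALL)

def parse_function_call(text):
    """Return (name, arguments) if the text contains a function call, else None."""
    m = CALL_RE.search(text)
    if not m:
        return None
    payload = json.loads(m.group(1))
    return payload["name"], payload.get("arguments", {})

output = ('Sure. <functioncall>{"name": "get_weather", '
          '"arguments": {"city": "Paris"}}</functioncall>')
print(parse_function_call(output))  # → ('get_weather', {'city': 'Paris'})
```

The parsed `(name, arguments)` pair can then be dispatched to the matching local function and the result fed back to the model as a tool turn.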

Library and Tools Support

A library is being developed to manage tools and add tool support for major LLMs, regardless of their built-in capabilities. You can find examples and contribute to the library at the following repository:

https://github.com/unclecode/fllm

Please open an issue in the repository for any bugs or collaboration requests.

Other Models

Here are links to other related models:

License

These models are released under the Apache 2.0 license.

Uploaded model

  • Developed by: unclecode
  • License: apache-2.0
  • Fine-tuned from model: unsloth/llama-3-8b-Instruct-bnb-4bit

This LLaMA model was trained 2x faster with Unsloth and Hugging Face's TRL library.

