jeffreymeetkai committed
Commit
ae0a4b1
1 Parent(s): 29db9eb

Update README.md

Files changed (1)
  1. README.md +4 -5
README.md CHANGED
@@ -23,12 +23,12 @@ The model determines when to execute functions, whether in parallel or serially,
 
 ## How to Get Started
 
- We provide custom code for both converting tool definitions into the system prompts and parsing raw model response into a JSON object containing `role`, `content` and `tool_calls` fields. This enables the model to be able to generate tool calls.
+ We provide custom code for parsing raw model responses into a JSON object containing `role`, `content` and `tool_calls` fields. This enables users to read the function-calling output of the model easily.
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
- tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-small-v3.1", trust_remote_code=True)
+ tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-small-v3.1")
 model = AutoModelForCausalLM.from_pretrained("meetkai/functionary-small-v3.1", device_map="auto", trust_remote_code=True)
 
 tools = [
@@ -53,7 +53,6 @@ tools = [
 messages = [{"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}]
 
 final_prompt = tokenizer.apply_chat_template(messages, tools, add_generation_prompt=True, tokenize=False)
- tokenizer.padding_side = "left"
 inputs = tokenizer(final_prompt, return_tensors="pt").to("cuda")
 pred = model.generate_tool_use(**inputs, max_new_tokens=128, tokenizer=tokenizer)
 print(tokenizer.decode(pred.cpu()[0]))
@@ -61,9 +60,9 @@ print(tokenizer.decode(pred.cpu()[0]))
 
 ## Prompt Template
 
- We convert function definitions to a similar text to Meta's Llama 3.1 definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt. Then we start the conversation messages.
+ We convert function definitions to text similar to TypeScript definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt. Then we start the conversation messages.
 
- This formatting is also available via our vLLM server which we process the functions into definitions encapsulated in a system message and use a pre-defined Transformers chat template. This means that lists of messages can be formatted for you with the apply_chat_template() method within our server:
+ This formatting is also available via our vLLM server, where we process the functions into TypeScript definitions encapsulated in a system message using a pre-defined Transformers Jinja chat template. This means that lists of messages can be formatted for you with the apply_chat_template() method within our server:
 
 ```python
 from openai import OpenAI
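
The rendered prompt described in the updated Prompt Template section can be inspected directly with the same `apply_chat_template` call used in the quickstart: passing `tokenize=False` returns the raw prompt string, including the function definitions injected into the system message. A minimal sketch follows; the weather tool schema here is illustrative rather than the README's exact `tools` list, and the exact prompt text is determined by the chat template shipped in the model repository.

```python
from transformers import AutoTokenizer

# Tokenizer from the same repository as in the README; it ships the chat template.
tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-small-v3.1")

# Illustrative OpenAI-style tool definition (placeholder schema, not the README's exact list).
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    }
                },
                "required": ["location"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}]

# tokenize=False keeps the result as a string, so the injected system prompt
# (with the TypeScript-like definitions) can simply be printed and read.
prompt = tokenizer.apply_chat_template(messages, tools, add_generation_prompt=True, tokenize=False)
print(prompt)
```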
 
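Because the server applies this chat template server-side, a standard OpenAI client can be pointed at a running Functionary vLLM server. Below is a minimal sketch of such a call, assuming the server is already running locally at http://localhost:8000/v1 and using an illustrative weather tool; the base URL, API key, and tool schema are assumptions, not values from the README.

```python
from openai import OpenAI

# Assumed local endpoint of an already-running, OpenAI-compatible Functionary vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary")

response = client.chat.completions.create(
    model="meetkai/functionary-small-v3.1",
    messages=[
        {"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",  # illustrative tool, not the README's exact schema
                "description": "Get the current weather for a location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        }
                    },
                    "required": ["location"],
                },
            },
        }
    ],
    tool_choice="auto",
)

# Tool calls come back in the standard OpenAI response shape.
print(response.choices[0].message.tool_calls)
```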