jeffreymeetkai committed
Commit ae0a4b1 • 1 Parent(s): 29db9eb
Update README.md
README.md CHANGED
@@ -23,12 +23,12 @@ The model determines when to execute functions, whether in parallel or serially,
 
 ## How to Get Started
 
-We provide custom code for
+We provide custom code for parsing raw model responses into a JSON object containing role, content and tool_calls fields. This enables the users to read the function-calling output of the model easily.
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-small-v3.1"
+tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-small-v3.1")
 model = AutoModelForCausalLM.from_pretrained("meetkai/functionary-small-v3.1", device_map="auto", trust_remote_code=True)
 
 tools = [
@@ -53,7 +53,6 @@
 messages = [{"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}]
 
 final_prompt = tokenizer.apply_chat_template(messages, tools, add_generation_prompt=True, tokenize=False)
-tokenizer.padding_side = "left"
 inputs = tokenizer(final_prompt, return_tensors="pt").to("cuda")
 pred = model.generate_tool_use(**inputs, max_new_tokens=128, tokenizer=tokenizer)
 print(tokenizer.decode(pred.cpu()[0]))
@@ -61,9 +60,9 @@ print(tokenizer.decode(pred.cpu()[0]))
 
 ## Prompt Template
 
-We convert function definitions to a similar text to
+We convert function definitions to a similar text to TypeScript definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt. Then we start the conversation messages.
 
-This formatting is also available via our vLLM server which we process the functions into definitions encapsulated in a system message
+This formatting is also available via our vLLM server which we process the functions into Typescript definitions encapsulated in a system message using a pre-defined Transformers Jinja chat template. This means that the lists of messages can be formatted for you with the apply_chat_template() method within our server:
 
 ```python
 from openai import OpenAI
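The updated README relies on `apply_chat_template` to inject the TypeScript-style definitions and the default system prompt. Below is a minimal sketch of rendering that prompt for inspection; the `get_current_weather` schema is an illustrative stand-in, since the full `tools` list is elided from this diff.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-small-v3.1")

# Illustrative tool schema; the actual `tools` list in the README is not shown in this diff.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "The city to look up, e.g. Istanbul"}
                },
                "required": ["location"],
            },
        },
    }
]
messages = [{"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}]

# Render the prompt as text (tokenize=False) to see the injected TypeScript-style
# function definitions and the default system prompt before any generation happens.
final_prompt = tokenizer.apply_chat_template(messages, tools, add_generation_prompt=True, tokenize=False)
print(final_prompt)
```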
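The OpenAI-client snippet at the end of the diff is truncated. A hedged sketch of what such a call could look like against a locally running functionary vLLM server follows; the `base_url`, `api_key`, and tool schema are placeholders rather than values taken from the README.

```python
from openai import OpenAI

# Placeholder endpoint and key for a locally running functionary vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary")

response = client.chat.completions.create(
    model="meetkai/functionary-small-v3.1",
    messages=[{"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather for a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string", "description": "The city to look up, e.g. Istanbul"}
                    },
                    "required": ["location"],
                },
            },
        }
    ],
    tool_choice="auto",
)

# The server formats messages and tools into the prompt template before generation;
# parsed tool calls come back on the standard OpenAI response object.
print(response.choices[0].message.tool_calls)
```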