
Information

I've been busy building applications lately, so I haven't visited here in a while. Now that Llama 3.1 is out, it's time to check it out! As a hobby, I've recently been spending a lot of time researching referral-code systems similar to the ones TikTok uses, including the logic behind awarding points for inviting other users.

This GGUF is based on the following model:

The GGUF was created with the latest llama.cpp version (some things have changed; check the git repository if you want to build your own GGUF):
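As a rough sketch of how such a GGUF can be produced with a recent llama.cpp checkout (the script and binary names changed at some point to convert_hf_to_gguf.py and llama-quantize; the input directory path here is an assumption):

```shell
# convert the original HF checkpoint to an f16 GGUF
python convert_hf_to_gguf.py ./Meta-Llama-3.1-8B-Instruct \
  --outfile meta-llama-3.1-8b-instruct-f16.gguf --outtype f16

# quantize the f16 GGUF down to Q5_K_M
./llama-quantize meta-llama-3.1-8b-instruct-f16.gguf \
  meta-llama-3.1-8b-instruct-Q5_K_M.gguf Q5_K_M
```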

Ollama

Soon I will show more about function calling based on this model.

How to create

ollama create llama3.1:latest -f ./Modelfile

Note: the -f flag of ollama create expects a Modelfile, not the GGUF itself; the Modelfile below points at the GGUF via its FROM line.
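Assuming the GGUF and the Modelfile shown below sit in the current directory, the full workflow might look like this (the model name and tag are just the ones used above):

```shell
# build the Ollama model from the Modelfile (whose FROM line references the GGUF)
ollama create llama3.1:latest -f ./Modelfile

# quick smoke test from the CLI
ollama run llama3.1:latest "Hello! Who are you?"
```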

Modelfile information

FROM ./meta-llama-3.1-8b-instruct-Q5_K_M.gguf
TEMPLATE """{{ if .Messages }}
{{- if or .System .Tools }}<|start_header_id|>system<|end_header_id|>
{{- if .System }}

{{ .System }}
{{- end }}
{{- if .Tools }}

You are a helpful assistant with tool calling capabilities. When you receive a tool call response, use the output to format an answer to the original user question.
{{- end }}
{{- end }}<|eot_id|>
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|start_header_id|>user<|end_header_id|>
{{- if and $.Tools $last }}

Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.

Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.

{{ $.Tools }}
{{- end }}

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- else if eq .Role "assistant" }}<|start_header_id|>assistant<|end_header_id|>
{{- if .ToolCalls }}

{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}{{ end }}
{{- else }}

{{ .Content }}{{ if not $last }}<|eot_id|>{{ end }}
{{- end }}
{{- else if eq .Role "tool" }}<|start_header_id|>ipython<|end_header_id|>

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- end }}
{{- end }}
{{- else }}
{{- if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}{{ .Response }}{{ if .Response }}<|eot_id|>{{ end }}"""
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
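With this template, a tool call comes back from the model as a single JSON object in the format the prompt requests: {"name": function name, "parameters": dictionary of argument name and its value}. A minimal sketch of handling such a response on the client side (the function name and arguments here are invented purely for illustration):

```python
import json


def parse_tool_call(response_text: str):
    """Parse the {"name": ..., "parameters": ...} JSON the template asks the model to emit."""
    call = json.loads(response_text)
    return call["name"], call["parameters"]


# Example of what the model might emit for a weather question
raw = '{"name": "get_current_weather", "parameters": {"city": "Seoul", "unit": "celsius"}}'
name, params = parse_tool_call(raw)
print(name)            # get_current_weather
print(params["city"])  # Seoul
```

The parsed name and parameters would then be used to invoke the real function, and its output sent back in a "tool" role message, which the template renders under the ipython header.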

meta-llama-3.1-8b-instruct-Q5_K_M.gguf Test


Expectation

The Llama 3.1 model supports the following languages:

- English (en)
- German (de)
- French (fr)
- Italian (it)
- Portuguese (pt)
- Hindi (hi)
- Spanish (es)
- Thai (th)

While Korean is not explicitly listed, the model performs noticeably better in Korean than Llama 3 did, indicating fairly robust Korean language support. I hope to see more Korean models based on Llama 3.1, along with specialized domain-specific fine-tunes. Although I could do the tuning myself, I am more interested in merging the fine-tuned models others have developed. I'm sorry, but I don't want to do the tedious work of creating datasets ^^. So to the Korean LLM specialists: please hurry up with Korean fine-tunes of Llama 3.1! I need your help!

Model details

- Format: GGUF
- Model size: 8.03B params
- Architecture: llama
- Quantizations: 5-bit, 6-bit, 8-bit