
KAI-7B


KAI-7B is a fine-tuned generative large language model (LLM) based on Mistral 7B. With over 7 billion parameters, KAI-7B outperforms its closest competitor, Meta-Llama 2 70B, in all benchmarks we tested.
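For reference, a minimal sketch of loading KAI-7B with the Hugging Face `transformers` library. This assumes `transformers` and `torch` are installed and uses the model id from this card; it is an illustrative example, not an official recipe.

```python
# Sketch: loading KAI-7B and completing a prompt with transformers.
# Assumes the `transformers` and `torch` packages are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Keynote-Technology/KAI-7B-v0.1"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load KAI-7B (BF16 weights, per this card) and complete a prompt."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="bfloat16", device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example call (downloads the ~14 GB of weights on first run):
# print(generate("The three laws of thermodynamics are"))
```

Because KAI-7B is a base model, prompts are treated as plain text to continue rather than chat instructions.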

(Benchmark comparison charts: KAI-7B vs. Meta-Llama 2 70B)

As the benchmarks above show, KAI-7B excels in STEM but needs work in math and coding.

Notice

KAI-7B is a pretrained base model and therefore does not have any moderation mechanisms.

Banned Use

KAI-7B is governed by the Apache 2.0 license, so any use the license deems unacceptable is not allowed. In addition, we specifically ban the use of any and all KAI models for hate speech directed at any person, group, or thing, for legal and ethical reasons.

