Quantization made by Richard Erkhov.
h2o-danube2-1.8b-base - GGUF
- Model creator: https://huggingface.co/h2oai/
- Original model: https://huggingface.co/h2oai/h2o-danube2-1.8b-base/
Name | Quant method | Size |
---|---|---|
h2o-danube2-1.8b-base.Q2_K.gguf | Q2_K | 0.66GB |
h2o-danube2-1.8b-base.IQ3_XS.gguf | IQ3_XS | 0.73GB |
h2o-danube2-1.8b-base.IQ3_S.gguf | IQ3_S | 0.77GB |
h2o-danube2-1.8b-base.Q3_K_S.gguf | Q3_K_S | 0.76GB |
h2o-danube2-1.8b-base.IQ3_M.gguf | IQ3_M | 0.79GB |
h2o-danube2-1.8b-base.Q3_K.gguf | Q3_K | 0.84GB |
h2o-danube2-1.8b-base.Q3_K_M.gguf | Q3_K_M | 0.84GB |
h2o-danube2-1.8b-base.Q3_K_L.gguf | Q3_K_L | 0.91GB |
h2o-danube2-1.8b-base.IQ4_XS.gguf | IQ4_XS | 0.94GB |
h2o-danube2-1.8b-base.Q4_0.gguf | Q4_0 | 0.98GB |
h2o-danube2-1.8b-base.IQ4_NL.gguf | IQ4_NL | 0.99GB |
h2o-danube2-1.8b-base.Q4_K_S.gguf | Q4_K_S | 0.99GB |
h2o-danube2-1.8b-base.Q4_K.gguf | Q4_K | 1.04GB |
h2o-danube2-1.8b-base.Q4_K_M.gguf | Q4_K_M | 1.04GB |
h2o-danube2-1.8b-base.Q4_1.gguf | Q4_1 | 1.08GB |
h2o-danube2-1.8b-base.Q5_0.gguf | Q5_0 | 1.18GB |
h2o-danube2-1.8b-base.Q5_K_S.gguf | Q5_K_S | 1.18GB |
h2o-danube2-1.8b-base.Q5_K.gguf | Q5_K | 1.21GB |
h2o-danube2-1.8b-base.Q5_K_M.gguf | Q5_K_M | 1.21GB |
h2o-danube2-1.8b-base.Q5_1.gguf | Q5_1 | 1.29GB |
h2o-danube2-1.8b-base.Q6_K.gguf | Q6_K | 1.4GB |
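Any llama.cpp-compatible runtime can load the GGUF files above. The snippet below is a minimal sketch using the llama-cpp-python bindings; the local file path and the choice of the Q4_K_M quant are placeholders for whichever file from the table you actually download.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder path to a locally downloaded quant from the table above.
model_path = "./h2o-danube2-1.8b-base.Q4_K_M.gguf"

# n_ctx matches the model's 8,192-token context length; lower it to reduce memory use.
llm = Llama(model_path=model_path, n_ctx=8192)

out = llm("The Danube is the second longest river in Europe", max_tokens=38)
print(out["choices"][0]["text"])
```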
Original model description:
language: en
license: apache-2.0
library_name: transformers
tags: gpt, llm, large language model
Summary
h2o-danube2-1.8b-base is a foundation model trained by H2O.ai with 1.8 billion parameters. For details, please refer to our Technical Report. We release three versions of this model:
Model Name | Description |
---|---|
h2oai/h2o-danube2-1.8b-base | Base model |
h2oai/h2o-danube2-1.8b-sft | SFT tuned |
h2oai/h2o-danube2-1.8b-chat | SFT + DPO tuned |
Model Architecture
We adjust the Llama 2 architecture for a total of around 1.8B parameters. We use the Mistral tokenizer with a vocabulary size of 32,000 and train our model up to a context length of 8,192 tokens.
The details of the model architecture are:
Hyperparameter | Value |
---|---|
n_layers | 24 |
n_heads | 32 |
n_query_groups | 8 |
n_embd | 2560 |
vocab size | 32000 |
sequence length | 8192 |
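For illustration, the hyperparameters above map onto a Mistral-style configuration in transformers roughly as follows. This is only a sketch of how the table could be expressed as a config; the released model ships its own config.json, which should be treated as authoritative.

```python
from transformers import MistralConfig

# Illustrative mapping of the architecture table onto a MistralConfig;
# the actual model repository provides the definitive configuration.
config = MistralConfig(
    vocab_size=32000,              # vocab size
    hidden_size=2560,              # n_embd
    num_hidden_layers=24,          # n_layers
    num_attention_heads=32,        # n_heads
    num_key_value_heads=8,         # n_query_groups (grouped-query attention)
    max_position_embeddings=8192,  # sequence length
)
print(config)
```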
Usage
This is a pre-trained foundation model. For your task, you will likely want to perform application-specific fine-tuning. We also offer a chat fine-tuned version: h2oai/h2o-danube2-1.8b-chat (a usage sketch for it follows the example below).
To use the model with the transformers library on a machine with GPUs, first make sure the library is installed.
```python
# pip install transformers>=4.39.3
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the model in bfloat16, then move the model to the GPU.
tokenizer = AutoTokenizer.from_pretrained("h2oai/h2o-danube2-1.8b-base")
model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2o-danube2-1.8b-base",
    torch_dtype=torch.bfloat16,
)
model.cuda()

# Greedy decoding (do_sample=False) gives a deterministic continuation of the prompt.
inputs = tokenizer("The Danube is the second longest river in Europe", return_tensors="pt").to(model.device)
res = model.generate(
    **inputs,
    max_new_tokens=38,
    do_sample=False,
)
print(tokenizer.decode(res[0], skip_special_tokens=True))
```
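For the chat-tuned variant mentioned above, a minimal sketch using the tokenizer's chat template could look like the following. It assumes the chat model bundles a chat template with its tokenizer, as is standard for instruction-tuned models on the Hub; the prompt is just an example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Chat-tuned sibling of the base model (assumes its tokenizer provides a chat template).
model_id = "h2oai/h2o-danube2-1.8b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).cuda()

# Format a single user turn with the model's chat template, then generate greedily.
messages = [{"role": "user", "content": "Why is drinking water so healthy?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
res = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(res[0], skip_special_tokens=True))
```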
Benchmarks
Among models of similar size, h2o-danube2-1.8b-base achieves the best results on average across the benchmarks of the Open LLM Leaderboard 🤗.
Model | Size | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8k | Average |
---|---|---|---|---|---|---|---|---|
StableLM2-1.6B | 1.6B | 43.34 | 70.45 | 38.95 | 36.78 | 64.56 | 17.44 | 45.25 |
Gemma-2B | 2.5B | 48.46 | 71.65 | 41.68 | 33.13 | 66.77 | 17.36 | 46.51 |
Qwen1.5-1.8B | 1.8B | 37.88 | 61.42 | 46.71 | 39.43 | 60.30 | 33.59 | 46.55 |
Phi-1.5 | 1.3B | 52.90 | 63.79 | 43.89 | 40.89 | 72.22 | 12.43 | 47.69 |
H2O-Danube2 | 1.8B | 43.52 | 73.06 | 40.05 | 38.09 | 68.43 | 29.34 | 48.75 |
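The scores above come from the Open LLM Leaderboard. As a rough sketch of how a single benchmark might be reproduced locally with the lm-evaluation-harness library, one could run something like the snippet below; the task name, few-shot count, and dtype are assumptions and may not match the leaderboard's exact configuration.

```python
# pip install lm_eval  (lm-evaluation-harness)
import lm_eval

# Assumed task and few-shot setting; the leaderboard's exact settings may differ.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=h2oai/h2o-danube2-1.8b-base,dtype=bfloat16",
    tasks=["hellaswag"],
    num_fewshot=10,
)
print(results["results"]["hellaswag"])
```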
Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.