Quantization made by Richard Erkhov.

Github

Discord

Request more models

Aira-2-1B5 - GGUF

Name Quant method Size
Aira-2-1B5.Q2_K.gguf Q2_K 0.84GB
Aira-2-1B5.IQ3_XS.gguf IQ3_XS 0.84GB
Aira-2-1B5.IQ3_S.gguf IQ3_S 0.84GB
Aira-2-1B5.Q3_K_S.gguf Q3_K_S 0.84GB
Aira-2-1B5.IQ3_M.gguf IQ3_M 0.91GB
Aira-2-1B5.Q3_K.gguf Q3_K 0.97GB
Aira-2-1B5.Q3_K_M.gguf Q3_K_M 0.97GB
Aira-2-1B5.Q3_K_L.gguf Q3_K_L 1.03GB
Aira-2-1B5.IQ4_XS.gguf IQ4_XS 0.9GB
Aira-2-1B5.Q4_0.gguf Q4_0 0.91GB
Aira-2-1B5.IQ4_NL.gguf IQ4_NL 0.91GB
Aira-2-1B5.Q4_K_S.gguf Q4_K_S 1.04GB
Aira-2-1B5.Q4_K.gguf Q4_K 1.11GB
Aira-2-1B5.Q4_K_M.gguf Q4_K_M 1.11GB
Aira-2-1B5.Q4_1.gguf Q4_1 1.0GB
Aira-2-1B5.Q5_0.gguf Q5_0 1.09GB
Aira-2-1B5.Q5_K_S.gguf Q5_K_S 1.15GB
Aira-2-1B5.Q5_K.gguf Q5_K 1.29GB
Aira-2-1B5.Q5_K_M.gguf Q5_K_M 1.29GB
Aira-2-1B5.Q5_1.gguf Q5_1 1.18GB
Aira-2-1B5.Q6_K.gguf Q6_K 1.52GB
Aira-2-1B5.Q8_0.gguf Q8_0 1.63GB
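
Any of these files can be downloaded individually and run with a llama.cpp-compatible runtime. Below is a minimal sketch using huggingface_hub and llama-cpp-python; the repository id and the chosen quant are assumptions based on this page's naming, and the sampling values simply echo the inference settings in the original model description further down.

# Sketch: download one quantized file and run it with llama-cpp-python.
# The repository id below is an assumption based on this page's naming
# scheme; adjust it and the filename to the quant you want.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/nicholasKluge_-_Aira-2-1B5-gguf",  # assumed repo id
    filename="Aira-2-1B5.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=1024)

# Aira expects its own special tokens around the instruction (see Usage below).
prompt = "<|startofinstruction|>What is a language model?<|endofinstruction|>"
output = llm(
    prompt,
    max_tokens=200,
    temperature=0.2,       # values taken from the card's inference settings
    top_p=0.3,
    repeat_penalty=1.2,
    stop=["<|endofcompletion|>"],
)
print(output["choices"][0]["text"])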

Original model description:

license: apache-2.0
datasets:
  - nicholasKluge/instruct-aira-dataset
language:
  - en
metrics:
  - accuracy
library_name: transformers
tags:
  - alignment
  - instruction tuned
  - text generation
  - conversation
  - assistant
pipeline_tag: text-generation
widget:
  - text: "<|startofinstruction|>Can you explain what is Machine Learning?<|endofinstruction|>"
    example_title: Machine Learning
  - text: "<|startofinstruction|>Do you know anything about virtue ethics?<|endofinstruction|>"
    example_title: Ethics
  - text: "<|startofinstruction|>How can I make my girlfriend happy?<|endofinstruction|>"
    example_title: Advise
inference:
  parameters:
    repetition_penalty: 1.2
    temperature: 0.2
    top_k: 30
    top_p: 0.3
    max_new_tokens: 200
    length_penalty: 0.3
    early_stopping: true
co2_eq_emissions:
  emissions: 1690
  source: CodeCarbon
  training_type: fine-tuning
  geographical_location: United States of America
  hardware_used: NVIDIA A100-SXM4-40GB

Aira-2-1B5

Aira-2 is the second version of the Aira instruction-tuned series. Aira-2-1B5 is an instruction-tuned model based on GPT-2. The model was trained on a dataset of prompts and completions generated synthetically by prompting already-tuned models (ChatGPT, Llama, Open-Assistant, etc.).

Check out our Gradio demo in Spaces.

Details

  • Size: 1,557,614,400 parameters
  • Dataset: Instruct-Aira Dataset
  • Language: English
  • Number of Epochs: 3
  • Batch size: 4
  • Optimizer: torch.optim.AdamW (warmup_steps = 1e2, learning_rate = 5e-4, epsilon = 1e-8)
  • GPU: 1 NVIDIA A100-SXM4-40GB
  • Emissions: 1.69 KgCO2 (Singapore)
  • Total Energy Consumption: 3.47 kWh

This repository has the source code used to train this model.
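
For reference, the hyperparameters listed above roughly translate into the setup sketched below. Only the AdamW values, batch size, and epoch count come from the card; the base checkpoint name, the linear-warmup scheduler, and the dataset-size placeholder are assumptions for illustration.

# Sketch of the optimizer/scheduler configuration listed under Details.
# "gpt2-xl" as the base checkpoint and the linear warmup schedule are
# assumptions; only the AdamW hyperparameters, batch size, and epochs
# are taken from the card.
import torch
from transformers import AutoModelForCausalLM, get_linear_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained("gpt2-xl")

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, eps=1e-8)

num_examples = 40_000                  # placeholder; use the real dataset size
steps_per_epoch = num_examples // 4    # batch size 4
total_steps = 3 * steps_per_epoch      # 3 epochs

scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,              # warmup_steps = 1e2
    num_training_steps=total_steps,
)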

Usage

Three special tokens are used to mark the user side of the interaction and the model's response:

<|startofinstruction|>What is a language model?<|endofinstruction|>A language model is a probability distribution over a vocabulary.<|endofcompletion|>

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-2-1B5')
aira = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-2-1B5')

aira.eval()
aira.to(device)

question = input("Enter your question: ")

# bos_token and sep_token correspond to <|startofinstruction|> and
# <|endofinstruction|>, so the prompt follows the template shown above.
inputs = tokenizer(tokenizer.bos_token + question + tokenizer.sep_token,
    add_special_tokens=False,
    return_tensors="pt").to(device)

# Note: returning more than one sequence generally requires sampling
# (do_sample=True); see the generation settings example below.
responses = aira.generate(**inputs, num_return_sequences=2)

print(f"Question: 👤 {question}\n")

for i, response in enumerate(responses):
    print(f'Response {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')

The model will output something like:

>>>Question: 👤 What is the capital of Brazil?

>>>Response 1: 🤖 The capital of Brazil is Brasília.
>>>Response 2: 🤖 The capital of Brazil is Brasília.
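
The YAML header in the original model description lists the generation settings used by the hosted widget (temperature 0.2, top_k 30, top_p 0.3, repetition_penalty 1.2, max_new_tokens 200). A sketch of passing them explicitly, continuing the example above; do_sample=True is an addition here, since sampling is needed when requesting more than one sequence.

# Continuation of the example above, using the generation settings from the
# original card's YAML header. do_sample=True is added because multiple
# return sequences require sampling; the card's length_penalty and
# early_stopping values only take effect with beam search, so they are
# omitted here.
responses = aira.generate(
    **inputs,
    do_sample=True,
    temperature=0.2,
    top_k=30,
    top_p=0.3,
    repetition_penalty=1.2,
    max_new_tokens=200,
    num_return_sequences=2,
)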

Limitations

  • Hallucinations: This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.

  • Biases and Toxicity: This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.

  • Repetition and Verbosity: The model may get stuck in repetition loops (especially if the repetition penalty during generation is set too low) or produce verbose responses unrelated to the prompt it was given.

Evaluation

Model Average ARC TruthfulQA ToxiGen
Aira-2-124M-DPO 40.68 24.66 42.61 54.79
Aira-2-124M 38.07 24.57 41.02 48.62
GPT-2 35.37 21.84 40.67 43.62
Aira-2-355M 39.68 27.56 38.53 53.19
GPT-2-medium 36.43 27.05 40.76 41.49
Aira-2-774M 42.26 28.75 41.33 56.70
GPT-2-large 35.16 25.94 38.71 40.85
Aira-2-1B5 42.22 28.92 41.16 56.60
GPT-2-xl 36.84 30.29 38.54 41.70

Cite as 🤗

@misc{nicholas22aira,
  doi = {10.5281/zenodo.6989727},
  url = {https://github.com/Nkluge-correa/Aira},
  author = {Nicholas Kluge Corrêa},
  title = {Aira},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
}

@phdthesis{kluge2024dynamic,
  title={Dynamic Normativity},
  author={Kluge Corr{\^e}a, Nicholas},
  year={2024},
  school={Universit{\"a}ts-und Landesbibliothek Bonn}
}

License

Aira-2-1B5 is licensed under the Apache License, Version 2.0. See the LICENSE file for more details.