Quantization made by Richard Erkhov.

Github | Discord | Request more models

Poro-34B - GGUF

| Name | Quant method | Size |
| --- | --- | --- |
| Poro-34B.Q2_K.gguf | Q2_K | 12.49GB |
| Poro-34B.IQ3_XS.gguf | IQ3_XS | 14.05GB |
| Poro-34B.IQ3_S.gguf | IQ3_S | 14.42GB |
| Poro-34B.Q3_K_S.gguf | Q3_K_S | 14.42GB |
| Poro-34B.IQ3_M.gguf | IQ3_M | 8.77GB |
| Poro-34B.Q3_K.gguf | Q3_K | 17.23GB |
| Poro-34B.Q3_K_M.gguf | Q3_K_M | 17.23GB |
| Poro-34B.Q3_K_L.gguf | Q3_K_L | 18.78GB |
| Poro-34B.IQ4_XS.gguf | IQ4_XS | 17.83GB |
| Poro-34B.Q4_0.gguf | Q4_0 | 18.65GB |
| Poro-34B.IQ4_NL.gguf | IQ4_NL | 18.79GB |
| Poro-34B.Q4_K_S.gguf | Q4_K_S | 13.54GB |
| Poro-34B.Q4_K.gguf | Q4_K | 20.9GB |
| Poro-34B.Q4_K_M.gguf | Q4_K_M | 20.9GB |
| Poro-34B.Q4_1.gguf | Q4_1 | 20.64GB |
| Poro-34B.Q5_0.gguf | Q5_0 | 22.63GB |
| Poro-34B.Q5_K_S.gguf | Q5_K_S | 22.63GB |
| Poro-34B.Q5_K.gguf | Q5_K | 24.32GB |
| Poro-34B.Q5_K_M.gguf | Q5_K_M | 24.32GB |
| Poro-34B.Q5_1.gguf | Q5_1 | 24.62GB |
| Poro-34B.Q6_K.gguf | Q6_K | 26.86GB |
| Poro-34B.Q8_0.gguf | Q8_0 | 34.78GB |
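
As a rough guide to using these files, the sketch below downloads one of the quants with huggingface_hub and runs it with llama-cpp-python. The repo id and the choice of the Q4_K_M file are placeholder assumptions, not something specified by this card; substitute this repository's actual id.

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical repo id -- replace with this repository's actual id on the Hub.
repo_id = "RichardErkhov/LumiOpen_-_Poro-34B-gguf"
gguf_path = hf_hub_download(repo_id=repo_id, filename="Poro-34B.Q4_K_M.gguf")

# Poro's native sequence length is 2048 (see the model overview below).
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Suomen pääkaupunki on", max_tokens=32)  # "The capital of Finland is"
print(out["choices"][0]["text"])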

Original model description:

license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- mc4
- allenai/dolma
language:
- fi
- en

Poro 34B Model Card

Poro is a 34B parameter decoder-only transformer pretrained on Finnish, English and code. It was trained on 1 trillion tokens. Poro is a fully open source model and is made available under the Apache 2.0 License.

Poro was created in a collaboration between SiloGen from Silo AI, the TurkuNLP group of the University of Turku, and High Performance Language Technologies (HPLT). Training was conducted on the LUMI supercomputer, using compute resources generously provided by CSC - IT Center for Science, Finland.

This project is part of an ongoing effort to create open source large language models for non-English and especially low-resource languages like Finnish. By combining English and Finnish training data, we get a model that outperforms previous Finnish-only models, while also being fluent in English and code, and capable of basic translation between English and Finnish.

Poro 34B is only the first model of our model family. Work is already underway on our next models, which will support additional languages and include features like flash attention, rotary embeddings, and grouped query attention.

What does Poro mean? Poro is the Finnish word for Reindeer! 🦌 These animals are native to Finland and hold a significant and historical role in Finnish culture.

Model Overview

NOTE: In addition to being an early research release, Poro is a base model which needs further fine-tuning for most use cases.

Poro is a generative pretrained transformer using a BLOOM architecture, and makes use of ALiBi embeddings to support context length extrapolation at inference time.
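
To make the ALiBi mechanism concrete, here is a small illustrative sketch (not Poro's training code) of how per-head linear distance penalties are added to the attention logits. The slope scheme follows the ALiBi paper's geometric sequence, which is exact for power-of-two head counts and only approximate for Poro's 56 heads.

import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # Per-head slopes: 2^(-8/n), 2^(-16/n), ... (geometric sequence from the ALiBi paper).
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / n_heads) for h in range(n_heads)])
    # distance[i, j] = j - i: zero on the diagonal, increasingly negative for older keys.
    positions = torch.arange(seq_len)
    distance = positions[None, :] - positions[:, None]
    # Bias added to the attention logits before softmax; causal masking is applied separately.
    return slopes[:, None, None] * distance[None, :, :]

bias = alibi_bias(n_heads=56, seq_len=8)  # shape: (n_heads, seq_len, seq_len)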

| Hyperparameter | Value |
| --- | --- |
| n_parameters | 34.2B |
| n_layers | 54 |
| n_heads | 56 |
| d_model | 7168 |
| vocab_size | 128000 |
| sequence_length | 2048 |
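
As a back-of-the-envelope sanity check (my own arithmetic, not part of the original card), these values roughly reproduce the 34.2B parameter count for a BLOOM-style dense transformer:

d_model, n_layers, vocab_size = 7168, 54, 128000

embedding = vocab_size * d_model          # ~0.92B parameters (embedding table, tied with the output head)
per_layer = 12 * d_model ** 2             # 4*d^2 for attention + 8*d^2 for the MLP, ignoring biases and LayerNorms
total = embedding + n_layers * per_layer
print(f"~{total / 1e9:.1f}B parameters")  # ~34.2B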

Poro Research Checkpoints

Checkpoints are available as branches in the repository and are released roughly every 100B tokens; the main branch always points to the latest checkpoint. Checkpoint branches are named after the number of training tokens seen (for example, 200B).

The transformers library allows you to load a checkpoint from a branch as follows:

import torch
import transformers

# Load the model weights from a specific research checkpoint (branch).
branch = "200B"
model = transformers.AutoModelForCausalLM.from_pretrained(
    "LumiOpen/Poro-34B",
    torch_dtype=torch.bfloat16,
    revision=branch,
)
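
Continuing from the snippet above (model already loaded), a minimal generation example might look like this; the prompt and generation settings are illustrative only:

tokenizer = transformers.AutoTokenizer.from_pretrained("LumiOpen/Poro-34B")
inputs = tokenizer("Suomen pääkaupunki on", return_tensors="pt")  # "The capital of Finland is"
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))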

Training

Poro was trained on the LUMI supercomputer, using 512 AMD MI250X GPUs. Each MI250X GPU has two Graphics Complex Dies (GCDs), for a world size of 1024 during training. Training used activation checkpointing, a micro-batch size of 1, gradient accumulation of 16, and a 3D parallelism strategy of TP=2, PP=4, DP=128.
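
These figures are internally consistent; a quick check (my own arithmetic, with the global batch size taken from the table below):

tp, pp, dp = 2, 4, 128
micro_batch, grad_accum = 1, 16

world_size = tp * pp * dp                      # 1024 GCDs = 512 MI250X GPUs x 2 GCDs each
global_batch = micro_batch * grad_accum * dp   # 2048 samples per optimizer step
tokens_per_step = global_batch * 2048          # 4,194,304 tokens at sequence length 2048
print(world_size, global_batch, tokens_per_step)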

Training began in September 2023 using a custom fork of the Megatron-Deepspeed framework. Our code is available here.

Training Hyperparameters

| Hyperparameter | Value | Comment |
| --- | --- | --- |
| Precision | bfloat16 | |
| Optimizer | AdamW | |
| Learning rate | 1.5e-4 | 10B tokens warm-up, cosine decay to 2e-5 |
| Weight decay | 1e-1 | |
| Batch size | 2048 | 2048 samples x 2048 tokens = 4194304 tokens |
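
The learning-rate schedule can be sketched as follows. This is my own illustrative implementation of a linear warm-up followed by cosine decay, assuming the decay runs over the full 1T-token budget; the exact Megatron schedule may differ in such details.

import math

def poro_lr(tokens_seen: float,
            peak_lr: float = 1.5e-4,
            min_lr: float = 2e-5,
            warmup_tokens: float = 10e9,
            total_tokens: float = 1e12) -> float:
    # Linear warm-up over the first 10B tokens.
    if tokens_seen < warmup_tokens:
        return peak_lr * tokens_seen / warmup_tokens
    # Cosine decay from peak_lr down to min_lr over the remaining token budget.
    progress = (tokens_seen - warmup_tokens) / (total_tokens - warmup_tokens)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * min(progress, 1.0)))

print(poro_lr(5e9), poro_lr(500e9), poro_lr(1e12))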

Tokenizer

Poro uses a custom 128K BLOOM tokenizer trained on the same English, Finnish, and code data used to train the model.
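
A quick way to inspect the tokenizer (a sketch; the example sentence is arbitrary):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LumiOpen/Poro-34B")
print(tokenizer.vocab_size)                            # expected: 128000
print(tokenizer.tokenize("Hyvää huomenta, maailma!"))  # Finnish text should tokenize compactly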

Dataset

Poro was trained on a 1-trillion-token mixed dataset of English, Finnish, and code.

| Dataset | Notes | Percentage | Epochs | Tokens |
| --- | --- | --- | --- | --- |
| SlimPajama | Excluding books3 data | 54.16% | 1x | 541.7B |
| Finnish | TurkuNLP Finnish dataset | 13.05% | 4x | 131.5B |
| Tatoeba | English/Finnish sentence pairs | 0.81% | 1x | 8.0B |
| Starcoder | | 31.53% | 1.52x | 315.4B |
| Project Gutenberg | from Dolma dataset | 0.46% | 1x | 4.5B |
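
The percentages and token counts above add up to the stated 1-trillion-token budget; a quick consistency check (my own arithmetic):

mixture = {  # dataset: (share of budget in %, tokens in billions)
    "SlimPajama": (54.16, 541.7),
    "Finnish": (13.05, 131.5),
    "Tatoeba": (0.81, 8.0),
    "Starcoder": (31.53, 315.4),
    "Project Gutenberg": (0.46, 4.5),
}
total_pct = sum(p for p, _ in mixture.values())     # ~100%
total_tokens = sum(t for _, t in mixture.values())  # ~1001B, i.e. roughly 1 trillion
print(f"{total_pct:.2f}% -> {total_tokens:.1f}B tokens")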

The Finnish dataset is itself a combination of many Finnish resources.

Evaluation Results

Full evaluations for each checkpoint are available in our GitHub repo.

Ethical Considerations and Limitations

Poro is an advanced language model, primarily optimized for English, Finnish and code, with no meaningful proficiency in any other languages. As with most AI-driven systems, Poro is a product of the vast data it has been trained on, which may reflect the imperfections, biases, and idiosyncrasies of the wider web. Poro may, at times, produce outputs that can be considered inaccurate, prejudiced, or controversial. Users and developers engaging with Poro should exercise discretion and consider additional evaluation and customization to ensure the model's responses align with their specific needs and ethical standards.

License

Poro is released under the Apache 2.0 license.

Citation

@misc{luukkonen2024poro,
      title={Poro 34B and the Blessing of Multilinguality},
      author={Risto Luukkonen and Jonathan Burdge and Elaine Zosa and Aarne Talman and Ville Komulainen and Väinö Hatanpää and Peter Sarlin and Sampo Pyysalo},
      year={2024},
      eprint={2404.01856},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}