---
license: apache-2.0
language:
- nl
library_name: transformers
---
# Model Card for tweety-7b-dutch

François Remy, Pieter Delobelle, Hayastan Avetisyan, Alfiya Khabibullina, Miryam de Lhoneux, Thomas Demeester
tweety-7b-dutch is a foundation model focused on the Dutch language, incorporating a Dutch tokenizer for better understanding and generation of Dutch text. It is built on the Mistral architecture and employs flash attention for efficient processing within a context window of 8,192 tokens. tweety-7b-dutch is trained on the cleaned Dutch mC4 dataset, without instruction finetuning.
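For orientation, here is a minimal loading sketch with transformers. The repository id `Tweeties/tweety-7b-dutch` is an assumption, and the flash attention option requires the optional `flash-attn` package and a supported GPU:

```python
# Minimal loading sketch; the repository id below is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tweeties/tweety-7b-dutch"  # assumed Hugging Face repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # optional; needs flash-attn and a supported GPU
    device_map="auto",
)
```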
## Model Details

### Model Description
Our tweety-7b-dutch model is released under the Apache 2.0 license, encouraging applications in research, content creation, and language analysis.
- Developed by: KU Leuven, UGent, the German Centre for Higher Education, and BeCode
- Funded by: VSC (Flemish Supercomputer Center), Vlaams AI-onderzoeksprogramma
- Model type: Foundation model using the Mistral architecture
- Language(s) (NLP): Dutch
- License: Apache 2.0
## Uses
As a base model, tweety-7b-dutch is suited to direct use for text generation and understanding in Dutch, owing to its training on the cleaned Dutch mC4 dataset.
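Because the model has had no instruction finetuning, prompts work best as text to be continued rather than as questions or instructions. A short generation sketch, with an assumed repository id and illustrative sampling settings rather than authors' recommendations:

```python
# Generation sketch for a base model: the prompt is Dutch text to continue.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tweeties/tweety-7b-dutch"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Het voordeel van een Nederlandstalige tokenizer is"  # "The advantage of a Dutch tokenizer is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```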
## Technical Specifications

### Compute Infrastructure

#### Hardware
Training utilized Nvidia H100 and A100 GPUs. Inference is accessible on lower-end hardware: any GPU capable of running Mistral-architecture models can run this one.
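As a rough sketch of fitting the 7B model on a smaller GPU, 4-bit quantization via bitsandbytes is one option; the repository id is again an assumption, and actual memory needs depend on the GPU and settings:

```python
# 4-bit quantized loading to reduce GPU memory; illustrative, not an official recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Tweeties/tweety-7b-dutch"  # assumed repository id
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 while weights stay 4-bit
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quant_config, device_map="auto")
```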