Adds missing tokenizer configuration file
This repository is missing the tokenizer configuration file and instead relies on attributes set within the transformers library itself in order to correctly tokenize inputs.
In order to ensure repositories don't depend on internal configuration changes, we're removing these attribute maps in transformers#29112.
In doing so, we see that the following attributes are currently missing from the configuration and would be ill-configured without this PR:
`{'model_max_length': 2048}`
This PR aims to add these attributes and their values to the tokenizer config file.
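As a minimal sketch of what this PR does, the snippet below merges the missing attribute into a `tokenizer_config.json` using only the standard library. The file path is assumed to be the repo root, and `setdefault` is used so any value already present on the Hub is left untouched:

```python
import json
from pathlib import Path

# Assumed location: tokenizer config at the root of the model repository.
config_path = Path("tokenizer_config.json")

# Attributes previously set only inside the transformers library.
missing_attributes = {"model_max_length": 2048}

# Load the existing config if present, otherwise start from an empty one.
config = json.loads(config_path.read_text()) if config_path.exists() else {}

# Merge in the missing attributes without overwriting values already set.
for key, value in missing_attributes.items():
    config.setdefault(key, value)

config_path.write_text(json.dumps(config, indent=2) + "\n")
print(config["model_max_length"])  # 2048
```

With the value stored in the config file, any library that reads `tokenizer_config.json` sees the correct `model_max_length` without consulting transformers internals.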
This makes the repository more robust by ensuring that:
- the repository does not depend on intra-library code
- clones of this repository continue to work as expected even under a different repository name
- other libraries that would like to leverage this repository do not depend on code within the transformers library
Thanks 🤗
@Ekgren do you think it would be possible to merge the other PRs as well?
I'm listing them here for you:
- AI-Sweden-Models/gpt-sw3-126m#5
- AI-Sweden-Models/gpt-sw3-356m#4
- AI-Sweden-Models/gpt-sw3-1.3b#4
- AI-Sweden-Models/gpt-sw3-6.7b#5
- AI-Sweden-Models/gpt-sw3-6.7b-v2#4
- AI-Sweden-Models/gpt-sw3-20b#6
Thanks a lot for your help on this!