SnakyMcSnekFace committed on
Commit
eeb7db4
1 Parent(s): 753456e

New version of the model trained with 4096 token context
README.md CHANGED
@@ -5,13 +5,10 @@ language:
  pipeline_tag: text-generation
  inference: false
  tags:
- - roleplay
  - storywriting
- - vore
  - finetuned
  - not-for-all-audiences
- - nsfw
- - uncensored
  base_model: KoboldAI/LLaMA2-13B-Psyfighter2
  model_type: llama
  prompt_template: >
@@ -29,11 +26,11 @@ prompt_template: >
 
  # Model Card for Psyfighter2-13B-vore
 
- This model is a version of [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2) finetuned to better understand vore context. The primary purpose of this model is to be a storywriting assistant, as well as a conversational model in a chat.
 
  The Adventure Mode is still a work in progress, and will be added later.
 
- Download the quantized version of this model here: [SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF](https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF)
 
  ## Model Details
 
@@ -59,17 +56,24 @@ The easiest way to try out the model is [Koboldcpp Colab Notebook](https://colab
  - Paste the model URL into the field: `https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF/resolve/main/Psyfighter2-13B-vore.Q4_K_M.gguf`
  - Start the notebook, wait for the CloudFlare tunnel URL to appear at the bottom and click it
  - Use the model as a writing assistant
- - You can try an adventure from [https://aetherroom.club/](https://aetherroom.club/), but keep in mind that the model will not let you take turn unless you stop it. Adventure mode is work-in-progress.
 
- ### Faraday
 
- Another convenient way to use the model is [Faraday.dev](https://faraday.dev/) application, which allows you to run the model locally on your computer. You'll need a graphics card with at least 8GB VRAM to use `Q4_K_M` version comfortably, and 16GB VRAM for `Q8_0`. (`Q4_K_M` version is smaller and faster, `Q8_0` is slower but more coherent.)
 
- Download the [Psyfighter2-13B-vore.Q4_K_M.gguf](https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF/resolve/main/Psyfighter2-13B-vore.Q4_K_M.gguf) or [Psyfighter2-13B-vore.Q8_0.gguf](https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF/resolve/main/Psyfighter2-13B-vore.Q8_0.gguf) file into `%appdata%\faraday\models` folder on your computer. The model should appear in `Manage Models` menu under `Downloaded Models`. You can then select it in your character card or set it as a default model.
 
- ### Others
 
- TBD
 
  ## Bias, Risks, and Limitations
 
@@ -77,27 +81,31 @@ By design, this model has a strong vorny bias. It's not intended for use by anyo
 
  ## Training Details
 
- This model was fine-tuned on free-form text comprised of stories focused around the vore theme using the [QLoRA method](https://arxiv.org/abs/2305.14314). The resulting adapter was merged into the base model. The quantized version of the model was prepared using [llama.cpp](https://github.com/ggerganov/llama.cpp).
 
  ### Training Procedure
 
- The model was fine-tuned using the [QLoRA method](https://arxiv.org/abs/2305.14314) on NVIDIA GeForce RTX 4060 Ti over the span of ~7 days. Training was performed using [text-generation-webui by oobabooga](https://github.com/oobabooga/text-generation-webui) with [Training PRO plug-in by FartyPants](https://github.com/FartyPants/Training_PRO).
 
- LoRa adapter configuration:
 
- - Rank: 512
- - Alpha: 1024
- - Dropout rate: 0.05
- - Target weights: v_prog, q_proj
 
- Training parameters:
 
- - Sample size: 768 tokens
- - Samples per epoch: 47420
  - Number of epochs: 2
- - First epoch: Learning rate = 3e-4, 1000 steps warmup, cosine schedule
- - Second epoch: Learning rate = 1e-4, 256 steps warmup, inverse sqrt schedule
 
  #### Preprocessing
 
@@ -105,7 +113,7 @@ The stories in dataset were pre-processed as follows:
 
  - titles, foreword, tags, and anything not comprising the text of the story was removed
  - non-ascii characters and character sequences serving as chapter separators were removed
- - any story mentioning underage personas was taken out of the dataset
  - names of private characters were replaced with randomized names across the dataset
 
  ## Environmental Impact
@@ -113,7 +121,7 @@ The stories in dataset were pre-processed as follows:
 
  Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
 
  - **Hardware Type:** NVIDIA GeForce RTX 4060 Ti
- - **Hours used:** 168
  - **Cloud Provider:** N/A
  - **Compute Region:** US-East
- - **Carbon Emitted:** 5.8 kg CO2 eq.
  pipeline_tag: text-generation
  inference: false
  tags:
+ - pytorch
  - storywriting
  - finetuned
  - not-for-all-audiences
  base_model: KoboldAI/LLaMA2-13B-Psyfighter2
  model_type: llama
  prompt_template: >
 
  # Model Card for Psyfighter2-13B-vore
 
+ This model is a version of [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2) finetuned to better understand vore context. The primary purpose of this model is to be a storywriting assistant, a conversational model in a chat, and an interactive choose-your-own-adventure text game.
 
  The Adventure Mode is still a work in progress, and will be added later.
 
+ This is the FP16-precision version of the model for merging and fine-tuning. To use the model, download the quantized version here instead: [SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF](https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF)
 
  ## Model Details
 
 
  - Paste the model URL into the field: `https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF/resolve/main/Psyfighter2-13B-vore.Q4_K_M.gguf`
  - Start the notebook, wait for the CloudFlare tunnel URL to appear at the bottom and click it
  - Use the model as a writing assistant
+ - You can try an adventure from [https://aetherroom.club/](https://aetherroom.club/), but keep in mind that the model will not let you take your turn unless you stop it. Adventure mode is still work-in-progress, but it's getting there.
 
+ ### Backyard AI
 
+ Another convenient way to use the model is the [Backyard AI](https://backyard.ai/) application, which allows you to run the model locally on your computer. You'll need a graphics card with at least 8GB VRAM to use the model comfortably.
 
+ #### Download directly from HuggingFace (beta)
 
+ In the left panel, click `Manage Models`, then select `Hugging face models`. Paste `https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF` into the text field and press `Fetch Models`. Click the `Download` button next to the model format. Once the model is downloaded, you can select it in your character card or set it as a default model.
 
+ #### Download manually
+
+ Download the [Psyfighter2-13B-vore.Q4_K_M.gguf](https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF/resolve/main/Psyfighter2-13B-vore.Q4_K_M.gguf) file into the `%appdata%\faraday\models` folder on your computer. The model should appear in the `Manage Models` menu under `Downloaded Models`. You can then select it in your character card or set it as a default model.
+
+ ### Model updates
+
+ - 04/13/2024 - uploaded the first version of the model
+ - 05/25/2024 - updated training process, making the model more coherent and improving the writing quality
 
  ## Bias, Risks, and Limitations
 
  ## Training Details
 
+ This model was fine-tuned on free-form text comprised of stories focused around the vore theme using a [rank-stabilized](https://arxiv.org/abs/2312.03732) [QLoRA adapter](https://arxiv.org/abs/2305.14314). The resulting adapter was merged into the FP16-precision base model. The quantized version of the model was prepared using [llama.cpp](https://github.com/ggerganov/llama.cpp).
 
  ### Training Procedure
 
+ The model was fine-tuned with a [rank-stabilized](https://arxiv.org/abs/2312.03732) [QLoRA adapter](https://arxiv.org/abs/2305.14314) on an NVIDIA GeForce RTX 4060 Ti over the span of ~24 hours. Training was performed using the [Unsloth AI](https://github.com/unslothai/unsloth) library on `Ubuntu 22.04.4 LTS` with `CUDA 12.1` and `PyTorch 2.3.0`.
 
+ #### LoRA adapter configuration
 
+ - Rank: 128
+ - Alpha: 16
+ - Dropout rate: 0.1
+ - Target weights: `["q_proj", "k_proj", "o_proj", "gate_proj", "up_proj"]`
+ - `use_rslora=True`
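
The `use_rslora` flag changes how the adapter update is scaled relative to classic LoRA. A minimal sketch of the difference under the configuration above (the `lora_scaling` helper is hypothetical, not part of the training code):

```python
import math

def lora_scaling(alpha: float, rank: int, use_rslora: bool) -> float:
    # Classic LoRA scales the low-rank update by alpha / rank; rank-stabilized
    # LoRA (rsLoRA) scales by alpha / sqrt(rank), which keeps the update
    # magnitude from collapsing as the rank grows.
    return alpha / math.sqrt(rank) if use_rslora else alpha / rank

# With alpha=16 and rank=128 as listed above:
print(lora_scaling(16, 128, use_rslora=False))  # 0.125
print(lora_scaling(16, 128, use_rslora=True))   # ~1.414
```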
 
+ #### Training parameters
 
+ - Max. sequence length: 4096 tokens
+ - Samples per epoch: 3783
  - Number of epochs: 2
+ - Learning rate: 1e-4
+ - Warmup: 64 steps
+ - LR Schedule: linear
+ - Batch size: 1
+ - Gradient accumulation steps: 1
 
  #### Preprocessing
 
  - titles, foreword, tags, and anything not comprising the text of the story was removed
  - non-ascii characters and character sequences serving as chapter separators were removed
+ - any story mentioning underage personas in any context was removed from the dataset
  - names of private characters were replaced with randomized names across the dataset
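
The character and separator cleanup can be sketched as follows (the exact separator patterns used in the real pipeline are not published, so the regex below is illustrative only):

```python
import re

def clean_story(text: str) -> str:
    # Drop non-ASCII characters.
    text = text.encode("ascii", errors="ignore").decode("ascii")
    # Drop lines consisting only of separator punctuation (e.g. "***"),
    # an assumed stand-in for the chapter-separator sequences.
    text = re.sub(r"(?m)^[\*\-=~_]{3,}[ \t]*$", "", text)
    return text

print(clean_story("Chapter one\u2026\n***\nChapter two"))
```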
 
  ## Environmental Impact
 
  Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
 
  - **Hardware Type:** NVIDIA GeForce RTX 4060 Ti
+ - **Hours used:** 24
  - **Cloud Provider:** N/A
  - **Compute Region:** US-East
+ - **Carbon Emitted:** 0.83 kg CO2 eq.
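
The emissions figure is consistent with the calculator's energy-times-intensity formula. A back-of-the-envelope check, where the ~160 W board power and ~0.216 kg CO2eq/kWh grid intensity are assumptions, not numbers from this card:

```python
hours = 24                 # training time listed above
gpu_power_kw = 0.160       # assumed RTX 4060 Ti board power (160 W)
intensity = 0.216          # assumed kg CO2eq per kWh for the US-East grid

energy_kwh = hours * gpu_power_kw       # 3.84 kWh
emissions = energy_kwh * intensity
print(round(emissions, 2))              # 0.83 kg CO2 eq., matching the card
```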
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "./models/KoboldAI_LLaMA2-13B-Psyfighter2",
  "architectures": [
  "LlamaForCausalLM"
  ],
@@ -12,6 +12,7 @@
  "initializer_range": 0.02,
  "intermediate_size": 13824,
  "max_position_embeddings": 4096,
  "model_type": "llama",
  "num_attention_heads": 40,
  "num_hidden_layers": 40,
@@ -23,7 +24,7 @@
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
- "transformers_version": "4.37.2",
  "use_cache": true,
  "vocab_size": 32000,
  "welcome": "# Welcome to Psyfighter2 by Jeb Carter and Twistedshadows \nPsyfighter2 is a creative writing focused model built on Henk717's Tiefighter. The addition of medical and psychological data to the model directs its attention toward psychological and spatial details, which improves the writing output by encouraging the model to focus on more relevant details.\n\nThe key to working with PsyfighterV2 is to the understand that Less Is More.\nThis model is meant to be creative, If you let it improvise you will get better results than if you drown it in details, which can scatter and shatter the model's focus. If your back end supports it, we recommend setting a min-p of 0.05. \n\n## Story Writing\nStory co-writing is supported in the traditional way - simply start your story and invoke the model's completions as needed. To guide the model at a higher level we recommend using this format to generate stories on demand or help shape the outputs the model will use in its story continuations.\n\n\n``` \nURL: https://www.gutenberg.org/$AuthorName/Stories \n\nTitle:\nTags:\nSynopsis:\nNotes:\nFirst Publication: $MagazineName, $YEAR\n\n$Title\n\nA $Genre [Tale|Story|Novel]\n\nby $AuthorName\n```\nnThe author name has the heaviest influence on the writing style, but you can shape the output through tags, setting a year of imaginary first publication, and proving commentary in Notes can tell the model how the story is expected to go.## Chatbots and personas\nThis model has been tested with various forms of chatting, testers have found that typically less is more and the model is good at improvising. Don't drown the model in paragraphs of detailed information, instead keep it simple first and see how far you can lean on the models own ability to figure out your character. 
Copy pasting paragraphs of background information is not suitable for a 13B model such as this one, code formatted characters or an instruction prompt describing who you wish to talk to goes much further.\n\nFor example, you can put this in memory in regular chat mode:\n``` \n### Instruction: \nGenerate a conversation between Alice and Jeb where they discuss language models.\nIn this conversation Jeb is excited to teach Alice about Psyfighter. \n### Response: \n```\n\nBecause the model is a merge of a variety of models, it should support a broad range of instruct formats, or plain chat mode. If you have a particular favourite try it, otherwise we recommend to either use the regular chat mode or Alpaca's format.\n\n## Instruct Prompting\nThis model features various instruct models on a variety of instruction styles, when testing the model we have used Alpaca for our own tests. If you prefer a different format chances are it can work.\n\nDuring instructions we have observed that in some cases the adventure data can leak, it may also be worth experimenting using > as the prefix for a user command to remedy this. But this may result in a stronger fiction bias. If using Instruct style directions during chat or storywriting, you can enclose your direction in formatting like this to keep it from contaminating the rest of the context: \n```\n***\n> [Instructions/Direction here]\n***\n```\n\nKeep in mind that while this model can be used as a factual instruct model, the focus was on fiction. Information provided by the model can be made up.\n\n## Adventuring and Adventure Games\nThis model contains a lora that was trained on the same adventure dataset as the KoboldAI Skein model. Adventuring is best done using an small introduction to the world and your objective while using the > prefix for a user command (KoboldAI's adventure mode). 
\n\nIt is possible that the model does not immediately pick up on what you wish to do and does not engage in its Adventure mode behaviour right away. Simply manually correct the output to trim excess dialogue or other undesirable behaviour and continue to submit your actions using the appropriate mode. The model should pick up on this style quickly and will correctly follow this format within 3 turns.\n\n## Discovered something cool and want to engage with us? \nJoin our community at https://koboldai.org/discord !\n\n### This model would not be possible without the KoboldAI MergeBox program and the awesome work from: \nDoctor Shotgun, Undi95, PocketDoc, Blackroot, Brouz, The Face of Goonery, zattio770, PygmalionAI, TokenBender, nRuaif, lemonilia, Xwin-LM, elinas, jondurbin, NousResearch, CalderaAI, MrSeeker, OpenAssistant, ehartford, Henk717, AI Dungeon, StabilityAI and zattio770."
 
  {
+ "_name_or_path": "../../../models/KoboldAI_LLaMA2-13B-Psyfighter2/",
  "architectures": [
  "LlamaForCausalLM"
  ],
 
  "initializer_range": 0.02,
  "intermediate_size": 13824,
  "max_position_embeddings": 4096,
+ "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 40,
  "num_hidden_layers": 40,
 
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
+ "transformers_version": "4.41.0",
  "use_cache": true,
  "vocab_size": 32000,
  "welcome": "# Welcome to Psyfighter2 by Jeb Carter and Twistedshadows \nPsyfighter2 is a creative writing focused model built on Henk717's Tiefighter. The addition of medical and psychological data to the model directs its attention toward psychological and spatial details, which improves the writing output by encouraging the model to focus on more relevant details.\n\nThe key to working with PsyfighterV2 is to the understand that Less Is More.\nThis model is meant to be creative, If you let it improvise you will get better results than if you drown it in details, which can scatter and shatter the model's focus. If your back end supports it, we recommend setting a min-p of 0.05. \n\n## Story Writing\nStory co-writing is supported in the traditional way - simply start your story and invoke the model's completions as needed. To guide the model at a higher level we recommend using this format to generate stories on demand or help shape the outputs the model will use in its story continuations.\n\n\n``` \nURL: https://www.gutenberg.org/$AuthorName/Stories \n\nTitle:\nTags:\nSynopsis:\nNotes:\nFirst Publication: $MagazineName, $YEAR\n\n$Title\n\nA $Genre [Tale|Story|Novel]\n\nby $AuthorName\n```\nnThe author name has the heaviest influence on the writing style, but you can shape the output through tags, setting a year of imaginary first publication, and proving commentary in Notes can tell the model how the story is expected to go.## Chatbots and personas\nThis model has been tested with various forms of chatting, testers have found that typically less is more and the model is good at improvising. Don't drown the model in paragraphs of detailed information, instead keep it simple first and see how far you can lean on the models own ability to figure out your character. 
Copy pasting paragraphs of background information is not suitable for a 13B model such as this one, code formatted characters or an instruction prompt describing who you wish to talk to goes much further.\n\nFor example, you can put this in memory in regular chat mode:\n``` \n### Instruction: \nGenerate a conversation between Alice and Jeb where they discuss language models.\nIn this conversation Jeb is excited to teach Alice about Psyfighter. \n### Response: \n```\n\nBecause the model is a merge of a variety of models, it should support a broad range of instruct formats, or plain chat mode. If you have a particular favourite try it, otherwise we recommend to either use the regular chat mode or Alpaca's format.\n\n## Instruct Prompting\nThis model features various instruct models on a variety of instruction styles, when testing the model we have used Alpaca for our own tests. If you prefer a different format chances are it can work.\n\nDuring instructions we have observed that in some cases the adventure data can leak, it may also be worth experimenting using > as the prefix for a user command to remedy this. But this may result in a stronger fiction bias. If using Instruct style directions during chat or storywriting, you can enclose your direction in formatting like this to keep it from contaminating the rest of the context: \n```\n***\n> [Instructions/Direction here]\n***\n```\n\nKeep in mind that while this model can be used as a factual instruct model, the focus was on fiction. Information provided by the model can be made up.\n\n## Adventuring and Adventure Games\nThis model contains a lora that was trained on the same adventure dataset as the KoboldAI Skein model. Adventuring is best done using an small introduction to the world and your objective while using the > prefix for a user command (KoboldAI's adventure mode). 
\n\nIt is possible that the model does not immediately pick up on what you wish to do and does not engage in its Adventure mode behaviour right away. Simply manually correct the output to trim excess dialogue or other undesirable behaviour and continue to submit your actions using the appropriate mode. The model should pick up on this style quickly and will correctly follow this format within 3 turns.\n\n## Discovered something cool and want to engage with us? \nJoin our community at https://koboldai.org/discord !\n\n### This model would not be possible without the KoboldAI MergeBox program and the awesome work from: \nDoctor Shotgun, Undi95, PocketDoc, Blackroot, Brouz, The Face of Goonery, zattio770, PygmalionAI, TokenBender, nRuaif, lemonilia, Xwin-LM, elinas, jondurbin, NousResearch, CalderaAI, MrSeeker, OpenAssistant, ehartford, Henk717, AI Dungeon, StabilityAI and zattio770."
generation_config.json CHANGED
@@ -3,5 +3,5 @@
  "bos_token_id": 1,
  "eos_token_id": 2,
  "pad_token_id": 0,
- "transformers_version": "4.37.2"
  }
 
  "bos_token_id": 1,
  "eos_token_id": 2,
  "pad_token_id": 0,
+ "transformers_version": "4.41.0"
  }
model-00001-of-00006.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:892bbf2e8fb27538ffe84391eae17ef42bd95dca0202482bc72e9bb9f4dab113
  size 4978265728
 
  version https://git-lfs.github.com/spec/v1
+ oid sha256:e405b05644ace92bb9be26ecd41824a2a773a70fa19ee534c024c9e53dc3e8e6
  size 4978265728
model-00002-of-00006.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0f129558941b97d3c5bb4d919f91657ed22a58d8cc93c43e0f207acf41ccf39b
  size 4970422160
 
  version https://git-lfs.github.com/spec/v1
+ oid sha256:5b9bec5b5e9b28a6f68ed964381ab07017a6dcd06c1507bf802d98b23af55a23
  size 4970422160
model-00003-of-00006.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3048c1081f2f2d01d1e18f6690f27cb4180918066a681fb627a5bb3a5245a929
  size 4970422184
 
  version https://git-lfs.github.com/spec/v1
+ oid sha256:3a77527d52ba11b1dbd01c627ca81c28e2dbab2914cf6d6bf4de265a855f1a3e
  size 4970422184
model-00004-of-00006.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:39c008c6f7db35f7088037987718fcff1dda73effab46f0013065f19b7715270
  size 4933701432
 
  version https://git-lfs.github.com/spec/v1
+ oid sha256:971e8f443e04c074ae416fbccceb49749608247e3893c2805bdc072b16bed550
  size 4933701432
model-00005-of-00006.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a02efc9dada78101c495462db5b6f30c9a9b537a784f2263363e5b44a35ed4b5
  size 4933722144
 
  version https://git-lfs.github.com/spec/v1
+ oid sha256:d05c74b09dc9e6f54fd13bec21f19704dfbb126484cab9115a3ca95ce3cd38e3
  size 4933722144
model-00006-of-00006.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:52145cabfd6f7885c31b77ea9b13dcac86a989a01d3fabf029d0708f652e0e0a
  size 1245236904
 
  version https://git-lfs.github.com/spec/v1
+ oid sha256:85653cec6b1526afd2de089f710528bbd3e141ed91a9f3526816b05700a4d121
  size 1245236904
special_tokens_map.json CHANGED
@@ -1,5 +1,23 @@
  {
- "bos_token": "<s>",
- "eos_token": "</s>",
- "unk_token": "<unk>"
  }
 
  {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ }
  }
tokenizer.json CHANGED
@@ -134,6 +134,7 @@
  "end_of_word_suffix": null,
  "fuse_unk": true,
  "byte_fallback": true,
  "vocab": {
  "<unk>": 0,
  "<s>": 1,
 
  "end_of_word_suffix": null,
  "fuse_unk": true,
  "byte_fallback": true,
+ "ignore_merges": false,
  "vocab": {
  "<unk>": 0,
  "<s>": 1,
tokenizer_config.json CHANGED
@@ -1,4 +1,6 @@
  {
  "added_tokens_decoder": {
  "0": {
  "content": "<unk>",
 
  {
+ "add_bos_token": true,
+ "add_eos_token": false,
  "added_tokens_decoder": {
  "0": {
  "content": "<unk>",