SpirinEgor Sergey committed
Commit 0cc6cfe
1 Parent(s): 85b3e3a

Corrected a typo in the 'TatonkaHF/bge-m3_en_ru' model name in the Initialization section. (#2)


- Corrected a typo in the 'TatonkaHF/bge-m3_en_ru' model name in the Initialization section. (77d003a43dea3e92beab2cac4bffa4c74adc8d57)


Co-authored-by: Sergey <[email protected]>

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -108,7 +108,7 @@ Also, you can use native [FlagEmbedding](https://github.com/FlagOpen/FlagEmbeddi
 We follow the [`USER-base`](https://huggingface.co/deepvk/USER-base) model training algorithm, with several changes as we use different backbone.
 
 
-**Initialization:** [`TatonkaHF/bge-m3_en_eu`](https://huggingface.co/TatonkaHF/bge-m3_en_ru) – shrinked version of [`baai/bge-m3`](https://huggingface.co/BAAI/bge-m3) to support only Russian and English tokens.
+**Initialization:** [`TatonkaHF/bge-m3_en_ru`](https://huggingface.co/TatonkaHF/bge-m3_en_ru) – shrinked version of [`baai/bge-m3`](https://huggingface.co/BAAI/bge-m3) to support only Russian and English tokens.
 
 
 **Fine-tuning:** Supervised fine-tuning two different models based on data symmetry and then merging via [`LM-Cocktail`](https://arxiv.org/abs/2311.13534):