Update README.md
README.md CHANGED
@@ -2608,7 +2608,7 @@ model-index:
 
 # gte-base-en-v1.5
 
-We introduce `gte-v1.5` series, upgraded `gte` embeddings that support the context length of up to **8192
+We introduce the `gte-v1.5` series, upgraded `gte` embeddings that support a context length of up to **8192**, while further enhancing model performance.
 The models are built upon the `transformer++` encoder [backbone](https://huggingface.co/Alibaba-NLP/new-impl) (BERT + RoPE + GLU).
 
 The `gte-v1.5` series achieves state-of-the-art scores on the MTEB benchmark within the same model size category and provides competitive results on the LoCo long-context retrieval tests (refer to [Evaluation](#evaluation)).
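For orientation, below is a minimal sketch of how an embedding model like this is typically called through `transformers`. The `trust_remote_code=True` flag (assumed to be needed because the backbone comes from the custom `Alibaba-NLP/new-impl` implementation), the CLS-position pooling, and the L2 normalization are assumptions based on common `gte` usage, not details taken from this diff:

```python
# Minimal sketch: sentence embeddings with gte-base-en-v1.5.
# Assumes a recent transformers release and trust_remote_code=True for the
# custom Alibaba-NLP/new-impl backbone; CLS pooling and L2 normalization
# are assumptions, not confirmed by the diff above.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_path = "Alibaba-NLP/gte-base-en-v1.5"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True)

texts = [
    "what is the capital of China?",
    "Beijing is the capital of China.",
]

# Tokenize with the long-context limit the model card advertises (8192 tokens).
batch = tokenizer(
    texts, max_length=8192, padding=True, truncation=True, return_tensors="pt"
)

with torch.no_grad():
    outputs = model(**batch)

# Take the CLS-position hidden state as the sentence embedding and normalize it.
embeddings = F.normalize(outputs.last_hidden_state[:, 0], p=2, dim=1)

# Cosine similarity between the query and the passage.
score = embeddings[0] @ embeddings[1]
print(f"similarity: {score.item():.4f}")
```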