pipeline_tag stringclasses 48 values | library_name stringclasses 198 values | text stringlengths 1 900k | metadata stringlengths 2 438k | id stringlengths 5 122 | last_modified null | tags sequencelengths 1 1.84k | sha null | created_at stringlengths 25 25 | arxiv sequencelengths 0 201 | languages sequencelengths 0 1.83k | tags_str stringlengths 17 9.34k | text_str stringlengths 0 389k | text_lists sequencelengths 0 722 | processed_texts sequencelengths 1 723 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3450
- F1 Score: 0.8487
- Accuracy: 0.849
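The card does not include a usage snippet; the sketch below is a hypothetical way to load the adapter with the `peft` library. The model and adapter ids come from this card, while the sequence-classification head and `num_labels=2` are assumptions inferred from the reported F1/accuracy, not stated by the author.

```python
# Hypothetical loading sketch (not the author's code). The base model may
# additionally require trust_remote_code=True depending on its implementation.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_65536_512_47M", num_labels=2  # num_labels assumed
)
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_tf_1-seqsight_65536_512_47M-L8_f"
)
tokenizer = AutoTokenizer.from_pretrained("mahdibaghbanzadeh/seqsight_65536_512_47M")

# Placeholder DNA sequence; the GUE_tf_1 task provides the real inputs.
inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")
logits = model(**inputs).logits
print(logits)
```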
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5426 | 0.83 | 200 | 0.5290 | 0.7440 | 0.744 |
| 0.4977 | 1.67 | 400 | 0.5235 | 0.7433 | 0.744 |
| 0.4907 | 2.5 | 600 | 0.5192 | 0.7419 | 0.742 |
| 0.4864 | 3.33 | 800 | 0.5160 | 0.7408 | 0.741 |
| 0.4874 | 4.17 | 1000 | 0.5147 | 0.7417 | 0.742 |
| 0.4788 | 5.0 | 1200 | 0.5142 | 0.7450 | 0.745 |
| 0.4768 | 5.83 | 1400 | 0.5102 | 0.7440 | 0.744 |
| 0.477 | 6.67 | 1600 | 0.5068 | 0.746 | 0.746 |
| 0.4756 | 7.5 | 1800 | 0.5057 | 0.7496 | 0.75 |
| 0.4692 | 8.33 | 2000 | 0.5048 | 0.7470 | 0.747 |
| 0.4702 | 9.17 | 2200 | 0.4995 | 0.7520 | 0.752 |
| 0.4689 | 10.0 | 2400 | 0.5099 | 0.7520 | 0.753 |
| 0.469 | 10.83 | 2600 | 0.5097 | 0.7524 | 0.754 |
| 0.4645 | 11.67 | 2800 | 0.5029 | 0.7531 | 0.754 |
| 0.4572 | 12.5 | 3000 | 0.4997 | 0.7506 | 0.751 |
| 0.4689 | 13.33 | 3200 | 0.4994 | 0.7513 | 0.752 |
| 0.4581 | 14.17 | 3400 | 0.4953 | 0.7438 | 0.744 |
| 0.4552 | 15.0 | 3600 | 0.5015 | 0.7580 | 0.759 |
| 0.4557 | 15.83 | 3800 | 0.4990 | 0.7545 | 0.755 |
| 0.4571 | 16.67 | 4000 | 0.5008 | 0.7545 | 0.755 |
| 0.4532 | 17.5 | 4200 | 0.5042 | 0.7569 | 0.758 |
| 0.4481 | 18.33 | 4400 | 0.5031 | 0.7568 | 0.757 |
| 0.4569 | 19.17 | 4600 | 0.4986 | 0.7576 | 0.758 |
| 0.4535 | 20.0 | 4800 | 0.4959 | 0.7549 | 0.755 |
| 0.4517 | 20.83 | 5000 | 0.5015 | 0.7589 | 0.759 |
| 0.448 | 21.67 | 5200 | 0.4988 | 0.7579 | 0.758 |
| 0.4457 | 22.5 | 5400 | 0.4977 | 0.7550 | 0.755 |
| 0.4477 | 23.33 | 5600 | 0.5039 | 0.7514 | 0.752 |
| 0.4487 | 24.17 | 5800 | 0.5021 | 0.7595 | 0.76 |
| 0.4487 | 25.0 | 6000 | 0.4963 | 0.7520 | 0.752 |
| 0.4456 | 25.83 | 6200 | 0.4956 | 0.7499 | 0.75 |
| 0.4443 | 26.67 | 6400 | 0.4957 | 0.7489 | 0.749 |
| 0.4454 | 27.5 | 6600 | 0.4992 | 0.7599 | 0.76 |
| 0.4431 | 28.33 | 6800 | 0.4964 | 0.7480 | 0.748 |
| 0.4416 | 29.17 | 7000 | 0.4987 | 0.7510 | 0.751 |
| 0.4424 | 30.0 | 7200 | 0.5007 | 0.7536 | 0.754 |
| 0.4434 | 30.83 | 7400 | 0.4988 | 0.7569 | 0.757 |
| 0.4373 | 31.67 | 7600 | 0.4978 | 0.7580 | 0.758 |
| 0.4432 | 32.5 | 7800 | 0.4988 | 0.7540 | 0.754 |
| 0.4391 | 33.33 | 8000 | 0.4969 | 0.7550 | 0.755 |
| 0.4447 | 34.17 | 8200 | 0.4996 | 0.7589 | 0.759 |
| 0.4396 | 35.0 | 8400 | 0.4987 | 0.7609 | 0.761 |
| 0.4424 | 35.83 | 8600 | 0.4968 | 0.7550 | 0.755 |
| 0.4443 | 36.67 | 8800 | 0.4973 | 0.7568 | 0.757 |
| 0.4376 | 37.5 | 9000 | 0.5016 | 0.7495 | 0.75 |
| 0.4362 | 38.33 | 9200 | 0.4981 | 0.7570 | 0.757 |
| 0.4408 | 39.17 | 9400 | 0.4968 | 0.7570 | 0.757 |
| 0.4375 | 40.0 | 9600 | 0.4979 | 0.7579 | 0.758 |
| 0.4402 | 40.83 | 9800 | 0.4969 | 0.7540 | 0.754 |
| 0.4382 | 41.67 | 10000 | 0.4972 | 0.7590 | 0.759 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_1-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:59:12+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_tf\_1-seqsight\_65536\_512\_47M-L8\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3450
* F1 Score: 0.8487
* Accuracy: 0.849
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_v3
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8916
- Qwk: 0.7949
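No usage example is provided in the card; below is a minimal, hypothetical sketch using the 🤗 `pipeline` API. The repo id comes from this card, and since the label semantics are undocumented, only the raw predictions are printed.

```python
# Hypothetical usage sketch (not the author's code); label meanings are not
# documented in this card, so the raw pipeline output is shown as-is.
from transformers import pipeline

scorer = pipeline("text-classification", model="lemmein/output_v3")
print(scorer("An example passage of text to be scored."))
```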
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9249 | 1.0 | 1731 | 1.0209 | 0.7428 |
| 0.8301 | 2.0 | 3462 | 0.8321 | 0.7973 |
| 0.7726 | 3.0 | 5193 | 0.9609 | 0.7834 |
| 0.7125 | 4.0 | 6924 | 0.8916 | 0.7949 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/deberta-v3-small", "model-index": [{"name": "output_v3", "results": []}]} | lemmein/output_v3 | null | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-small",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:59:28+00:00 | [] | [] | TAGS
#transformers #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v3-small #license-mit #autotrain_compatible #endpoints_compatible #region-us
| output\_v3
==========
This model is a fine-tuned version of microsoft/deberta-v3-small on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8916
* Qwk: 0.7949
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 8
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v3-small #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tunisien
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the comondov dataset.
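The card does not include an inference example; a minimal, hypothetical sketch with the 🤗 `pipeline` API is shown below (the repo id is taken from this card and `sample.wav` is a placeholder file name).

```python
# Hypothetical usage sketch (not the author's code); "sample.wav" is a placeholder.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Arbi-Houssem/output")
print(asr("sample.wav")["text"])
```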
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 6.6667 | 20 | 10.2887 | 145.3174 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"language": ["ar"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["Arbi-Houssem/comondov"], "base_model": "openai/whisper-medium", "model-index": [{"name": "Whisper Tunisien", "results": []}]} | Arbi-Houssem/output | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ar",
"dataset:Arbi-Houssem/comondov",
"base_model:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:59:33+00:00 | [] | [
"ar"
] | TAGS
#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #ar #dataset-Arbi-Houssem/comondov #base_model-openai/whisper-medium #license-apache-2.0 #endpoints_compatible #region-us
| Whisper Tunisien
================
This model is a fine-tuned version of openai/whisper-medium on the comondov dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 20
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.41.0.dev0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #ar #dataset-Arbi-Houssem/comondov #base_model-openai/whisper-medium #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
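Since this section is left blank, the snippet below is only a hypothetical sketch: the repo id comes from the card metadata, while the expected input format (e.g. how context is concatenated) and the label meanings are assumptions that are not documented here.

```python
# Hypothetical usage sketch (not the authors' code). Input formatting and label
# names are undocumented, so only the raw pipeline output is shown.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="dabagyan/bert-sarcasm-model-with-context",
)
print(classifier("Oh great, another Monday. Exactly what I needed."))
```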
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | dabagyan/bert-sarcasm-model-with-context | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:59:38+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-audio | transformers |
# MusicGen - Large - 3.3B
MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.
Four checkpoints are released:
- [small](https://huggingface.co/facebook/musicgen-small)
- [medium](https://huggingface.co/facebook/musicgen-medium)
- [**large** (this checkpoint)](https://huggingface.co/facebook/musicgen-large)
- [melody](https://huggingface.co/facebook/musicgen-melody)
## Example
Try out MusicGen yourself!
* Audiocraft Colab:
<a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Colab:
<a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Demo:
<a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
## 🤗 Transformers Usage
You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards.
1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and scipy:
```
pip install --upgrade pip
pip install --upgrade transformers scipy
```
2. Run inference via the `Text-to-Audio` (TTA) pipeline. You can infer the MusicGen model via the TTA pipeline in just a few lines of code!
```python
from transformers import pipeline
import scipy
synthesiser = pipeline("text-to-audio", "facebook/musicgen-large")
music = synthesiser("lo-fi music with a soothing melody", forward_params={"do_sample": True})
scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], data=music["audio"])
```
3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 32 kHz audio waveform for more fine-grained control.
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-large")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-large")
inputs = processor(
text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, max_new_tokens=256)
```
4. Listen to the audio samples either in an ipynb notebook:
```python
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```
Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
```python
import scipy
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```
For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
## Audiocraft Usage
You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained("large")
model.set_generation_params(duration=8) # generate 8 seconds.
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MusicGen was trained between April 2023 and May 2023.
**Model version:** This is the version 1 of the model.
**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and in two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284).
**Citation details:**
```
@misc{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
year={2023},
eprint={2306.05284},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Model performance measures:** We used the following objective measures to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
- Adherence to the melody for melody-guided music generation.
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Evaluation results
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper.
| Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity |
|---|---|---|---|---|
| facebook/musicgen-small | 4.88 | 1.42 | 0.27 | - |
| facebook/musicgen-medium | 5.14 | 1.38 | 0.28 | - |
| **facebook/musicgen-large** | 5.48 | 1.37 | 0.28 | - |
| facebook/musicgen-melody | 4.93 | 1.41 | 0.27 | 0.44 |
More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Results section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling to larger datasets can further improve its performance.
**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The data sources may lack diversity, and not all music cultures are equally represented in the dataset. The model may not perform equally well across the wide variety of music genres that exist. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will help broaden the application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks. | {"license": "cc-by-nc-4.0", "tags": ["musicgen"], "inference": true} | karlwennerstrom/text-to-music | null | [
"transformers",
"pytorch",
"musicgen",
"text-to-audio",
"arxiv:2306.05284",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:00:24+00:00 | [
"2306.05284"
] | [] | TAGS
#transformers #pytorch #musicgen #text-to-audio #arxiv-2306.05284 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
| MusicGen - Large - 3.3B
=======================
MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
MusicGen was published in Simple and Controllable Music Generation by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.
Four checkpoints are released:
* small
* medium
* large (this checkpoint)
* melody
Example
-------
Try out MusicGen yourself!
* Audiocraft Colab
* Hugging Face Colab
* Hugging Face Demo
Transformers Usage
------------------
You can run MusicGen locally with the Transformers library from version 4.31.0 onwards.
1. First install the Transformers library and scipy:
2. Run inference via the 'Text-to-Audio' (TTA) pipeline. You can infer the MusicGen model via the TTA pipeline in just a few lines of code!
3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 32 kHz audio waveform for more fine-grained control.
4. Listen to the audio samples either in an ipynb notebook:
Or save them as a '.wav' file using a third-party library, e.g. 'scipy':
For more details on using the MusicGen model for inference using the Transformers library, refer to the MusicGen docs.
Audiocraft Usage
----------------
You can also run MusicGen locally through the original Audiocraft library:
1. First install the 'audiocraft' library
2. Make sure to have 'ffmpeg' installed:
3. Run the following Python code:
Model details
-------------
Organization developing the model: The FAIR team of Meta AI.
Model date: MusicGen was trained between April 2023 and May 2023.
Model version: This is the version 1 of the model.
Model type: MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and in two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
Paper or resources for more information: More information can be found in the paper Simple and Controllable Music Generation.
Citation details:
License: Code is released under MIT, model weights are released under CC-BY-NC 4.0.
Where to send questions or comments about the model: Questions and comments about MusicGen can be sent via the Github repository of the project, or by opening an issue.
Intended use
------------
Primary intended use: The primary use of MusicGen is research on AI-based music generation, including:
* Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
* Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
Primary intended users: The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
Out-of-scope use cases: The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
Metrics
-------
Model performance measures: We used the following objective measures to evaluate the model on a standard music benchmark:
* Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
* Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
* CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
* Overall quality of the music samples;
* Text relevance to the provided text input;
* Adherence to the melody for melody-guided music generation.
More details on performance measures and human studies can be found in the paper.
Decision thresholds: Not applicable.
Evaluation datasets
-------------------
The model was evaluated on the MusicCaps benchmark and on an in-domain held-out evaluation set, with no artist overlap with the training set.
Training datasets
-----------------
The model was trained on licensed data using the following sources: the Meta Music Initiative Sound Collection, Shutterstock music collection and the Pond5 music collection. See the paper for more details about the training set and corresponding preprocessing.
Evaluation results
------------------
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source Hybrid Transformer for Music Source Separation (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper.
More information can be found in the paper Simple and Controllable Music Generation, in the Results section.
Limitations and biases
----------------------
Data: The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling to larger datasets can further improve its performance.
Mitigations: Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source Hybrid Transformer for Music Source Separation (HT-Demucs).
Limitations:
* The model is not able to generate realistic vocals.
* The model has been trained with English descriptions and will not perform as well in other languages.
* The model does not perform equally well for all music styles and cultures.
* The model sometimes generates end of songs, collapsing to silence.
* It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
Biases: The data sources may lack diversity, and not all music cultures are equally represented in the dataset. The model may not perform equally well across the wide variety of music genres that exist. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
Risks and harms: Biases and limitations of the model may lead to generation of samples that may be considered biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will help broaden the application to new and more representative data.
Use cases: Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
| [] | [
"TAGS\n#transformers #pytorch #musicgen #text-to-audio #arxiv-2306.05284 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_cb_bert
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2169
- Accuracy: 0.3636
- F1: 0.2430
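No usage snippet is provided; the following hypothetical sketch treats inputs as CB-style premise/hypothesis pairs, which is an assumption based on the model name rather than something documented in this card (the repo id is taken from the card).

```python
# Hypothetical usage sketch (not the author's code). Premise/hypothesis pairing
# and label meanings are assumptions; only raw probabilities are printed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("lenatr99/fine_tuned_cb_bert")
model = AutoModelForSequenceClassification.from_pretrained("lenatr99/fine_tuned_cb_bert")

inputs = tokenizer(
    "It was raining all morning.",      # premise (placeholder)
    "The streets were probably wet.",   # hypothesis (placeholder)
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```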
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.7239 | 3.5714 | 50 | 1.2945 | 0.3182 | 0.1536 |
| 0.3879 | 7.1429 | 100 | 1.6236 | 0.4545 | 0.4158 |
| 0.1546 | 10.7143 | 150 | 3.1975 | 0.3636 | 0.2430 |
| 0.0741 | 14.2857 | 200 | 2.9703 | 0.4545 | 0.3895 |
| 0.0323 | 17.8571 | 250 | 3.8104 | 0.3636 | 0.2430 |
| 0.0073 | 21.4286 | 300 | 4.0583 | 0.3636 | 0.2430 |
| 0.0037 | 25.0 | 350 | 4.3166 | 0.3636 | 0.2430 |
| 0.0032 | 28.5714 | 400 | 4.2169 | 0.3636 | 0.2430 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "fine_tuned_cb_bert", "results": []}]} | lenatr99/fine_tuned_cb_bert | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:00:27+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| fine\_tuned\_cb\_bert
=====================
This model is a fine-tuned version of google-bert/bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 4.2169
* Accuracy: 0.3636
* F1: 0.2430
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3560
- F1 Score: 0.8466
- Accuracy: 0.847
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
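For reference, the sketch below shows one way the hyperparameters listed above could be expressed with 🤗 `TrainingArguments`; it is an illustrative mapping, not the author's original training script, and the `output_dir` name is a placeholder.

```python
# Illustrative mapping of the listed hyperparameters to TrainingArguments
# (assumed reconstruction, not the original script).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_tf_1-seqsight_65536_512_47M-L32_f",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```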
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5345 | 0.83 | 200 | 0.5325 | 0.7312 | 0.733 |
| 0.4938 | 1.67 | 400 | 0.5185 | 0.7422 | 0.743 |
| 0.4863 | 2.5 | 600 | 0.5107 | 0.7550 | 0.755 |
| 0.4809 | 3.33 | 800 | 0.5049 | 0.7437 | 0.744 |
| 0.4788 | 4.17 | 1000 | 0.5083 | 0.7518 | 0.754 |
| 0.4687 | 5.0 | 1200 | 0.5023 | 0.7544 | 0.755 |
| 0.4655 | 5.83 | 1400 | 0.4938 | 0.7450 | 0.745 |
| 0.463 | 6.67 | 1600 | 0.4967 | 0.7490 | 0.749 |
| 0.4618 | 7.5 | 1800 | 0.4922 | 0.7523 | 0.753 |
| 0.4539 | 8.33 | 2000 | 0.4933 | 0.7569 | 0.757 |
| 0.454 | 9.17 | 2200 | 0.4876 | 0.7560 | 0.756 |
| 0.4526 | 10.0 | 2400 | 0.4948 | 0.7604 | 0.761 |
| 0.4519 | 10.83 | 2600 | 0.4926 | 0.7617 | 0.763 |
| 0.4475 | 11.67 | 2800 | 0.4907 | 0.7559 | 0.756 |
| 0.4382 | 12.5 | 3000 | 0.4924 | 0.7630 | 0.763 |
| 0.4491 | 13.33 | 3200 | 0.4939 | 0.7457 | 0.746 |
| 0.4398 | 14.17 | 3400 | 0.4853 | 0.7516 | 0.752 |
| 0.4363 | 15.0 | 3600 | 0.4910 | 0.7672 | 0.768 |
| 0.4353 | 15.83 | 3800 | 0.4913 | 0.7627 | 0.763 |
| 0.4364 | 16.67 | 4000 | 0.4920 | 0.7656 | 0.766 |
| 0.4324 | 17.5 | 4200 | 0.4928 | 0.7567 | 0.757 |
| 0.4252 | 18.33 | 4400 | 0.5010 | 0.7638 | 0.764 |
| 0.4366 | 19.17 | 4600 | 0.4923 | 0.7638 | 0.764 |
| 0.4309 | 20.0 | 4800 | 0.4919 | 0.7610 | 0.761 |
| 0.428 | 20.83 | 5000 | 0.4988 | 0.7630 | 0.763 |
| 0.4249 | 21.67 | 5200 | 0.4914 | 0.7670 | 0.767 |
| 0.421 | 22.5 | 5400 | 0.4998 | 0.7599 | 0.76 |
| 0.4217 | 23.33 | 5600 | 0.4969 | 0.7646 | 0.765 |
| 0.4248 | 24.17 | 5800 | 0.4990 | 0.7588 | 0.759 |
| 0.4222 | 25.0 | 6000 | 0.4928 | 0.7630 | 0.763 |
| 0.4194 | 25.83 | 6200 | 0.4907 | 0.7620 | 0.762 |
| 0.4159 | 26.67 | 6400 | 0.4950 | 0.7659 | 0.766 |
| 0.4183 | 27.5 | 6600 | 0.4966 | 0.7680 | 0.768 |
| 0.4134 | 28.33 | 6800 | 0.4951 | 0.7659 | 0.766 |
| 0.4152 | 29.17 | 7000 | 0.4956 | 0.7620 | 0.762 |
| 0.4143 | 30.0 | 7200 | 0.4943 | 0.7518 | 0.752 |
| 0.4141 | 30.83 | 7400 | 0.4967 | 0.7599 | 0.76 |
| 0.4063 | 31.67 | 7600 | 0.5028 | 0.7579 | 0.758 |
| 0.4144 | 32.5 | 7800 | 0.4986 | 0.7610 | 0.761 |
| 0.4087 | 33.33 | 8000 | 0.4979 | 0.7629 | 0.763 |
| 0.4125 | 34.17 | 8200 | 0.4999 | 0.7650 | 0.765 |
| 0.4084 | 35.0 | 8400 | 0.4981 | 0.7640 | 0.764 |
| 0.411 | 35.83 | 8600 | 0.4975 | 0.7580 | 0.758 |
| 0.4117 | 36.67 | 8800 | 0.4977 | 0.7570 | 0.757 |
| 0.4042 | 37.5 | 9000 | 0.5037 | 0.7567 | 0.757 |
| 0.4046 | 38.33 | 9200 | 0.5019 | 0.7620 | 0.762 |
| 0.407 | 39.17 | 9400 | 0.5006 | 0.7650 | 0.765 |
| 0.404 | 40.0 | 9600 | 0.5043 | 0.7599 | 0.76 |
| 0.4041 | 40.83 | 9800 | 0.5028 | 0.7620 | 0.762 |
| 0.4037 | 41.67 | 10000 | 0.5027 | 0.7580 | 0.758 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_1-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T17:00:42+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_tf\_1-seqsight\_65536\_512\_47M-L32\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3560
* F1 Score: 0.8466
* Accuracy: 0.847
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** animaRegem
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-7b-bnb-4bit"} | animaRegem/gemma-7b-lora-0_1-malayalam | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:01:22+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: animaRegem
- License: apache-2.0
- Finetuned from model : unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: animaRegem\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: animaRegem\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | animaRegem/gemma-2b-lora-0_1-malayalam-tokenizer | null | [
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:01:36+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3592
- F1 Score: 0.8408
- Accuracy: 0.841
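The card does not include usage code, so the following is only a rough sketch of how a PEFT adapter such as this one is typically attached to its base checkpoint; the sequence-classification head, the number of labels, and the `trust_remote_code` flag are assumptions rather than details taken from this repository.

```python
# Hypothetical loading sketch; the base-model class and num_labels are assumptions.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"                   # base model named above
adapter_id = "mahdibaghbanzadeh/GUE_tf_4-seqsight_65536_512_47M-L1_f"  # this adapter repository

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the fine-tuned adapter weights
model.eval()
```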
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5524 | 1.34 | 200 | 0.5012 | 0.7490 | 0.749 |
| 0.4884 | 2.68 | 400 | 0.4884 | 0.7503 | 0.751 |
| 0.4809 | 4.03 | 600 | 0.4846 | 0.7535 | 0.754 |
| 0.4723 | 5.37 | 800 | 0.4818 | 0.7550 | 0.755 |
| 0.4605 | 6.71 | 1000 | 0.4785 | 0.7562 | 0.757 |
| 0.4623 | 8.05 | 1200 | 0.4734 | 0.7670 | 0.767 |
| 0.458 | 9.4 | 1400 | 0.4741 | 0.7566 | 0.757 |
| 0.457 | 10.74 | 1600 | 0.4798 | 0.7559 | 0.757 |
| 0.4518 | 12.08 | 1800 | 0.4766 | 0.7597 | 0.761 |
| 0.4501 | 13.42 | 2000 | 0.4673 | 0.7566 | 0.757 |
| 0.4479 | 14.77 | 2200 | 0.4684 | 0.7640 | 0.764 |
| 0.4487 | 16.11 | 2400 | 0.4664 | 0.7636 | 0.764 |
| 0.4443 | 17.45 | 2600 | 0.4687 | 0.7640 | 0.764 |
| 0.4431 | 18.79 | 2800 | 0.4678 | 0.7610 | 0.761 |
| 0.4454 | 20.13 | 3000 | 0.4639 | 0.7580 | 0.758 |
| 0.4384 | 21.48 | 3200 | 0.4688 | 0.7618 | 0.762 |
| 0.4413 | 22.82 | 3400 | 0.4657 | 0.7669 | 0.767 |
| 0.4389 | 24.16 | 3600 | 0.4631 | 0.7620 | 0.762 |
| 0.4391 | 25.5 | 3800 | 0.4676 | 0.7645 | 0.765 |
| 0.4374 | 26.85 | 4000 | 0.4624 | 0.7710 | 0.771 |
| 0.436 | 28.19 | 4200 | 0.4631 | 0.7660 | 0.766 |
| 0.434 | 29.53 | 4400 | 0.4614 | 0.7630 | 0.763 |
| 0.4349 | 30.87 | 4600 | 0.4602 | 0.7679 | 0.768 |
| 0.4348 | 32.21 | 4800 | 0.4602 | 0.7670 | 0.767 |
| 0.43 | 33.56 | 5000 | 0.4626 | 0.7647 | 0.765 |
| 0.4317 | 34.9 | 5200 | 0.4601 | 0.7700 | 0.77 |
| 0.4345 | 36.24 | 5400 | 0.4570 | 0.7680 | 0.768 |
| 0.4285 | 37.58 | 5600 | 0.4581 | 0.7670 | 0.767 |
| 0.4292 | 38.93 | 5800 | 0.4563 | 0.7650 | 0.765 |
| 0.4294 | 40.27 | 6000 | 0.4574 | 0.7650 | 0.765 |
| 0.4272 | 41.61 | 6200 | 0.4580 | 0.7678 | 0.768 |
| 0.4283 | 42.95 | 6400 | 0.4558 | 0.7670 | 0.767 |
| 0.4296 | 44.3 | 6600 | 0.4553 | 0.7690 | 0.769 |
| 0.4236 | 45.64 | 6800 | 0.4552 | 0.7700 | 0.77 |
| 0.4276 | 46.98 | 7000 | 0.4557 | 0.7670 | 0.767 |
| 0.4287 | 48.32 | 7200 | 0.4534 | 0.7670 | 0.767 |
| 0.4249 | 49.66 | 7400 | 0.4563 | 0.7678 | 0.768 |
| 0.4235 | 51.01 | 7600 | 0.4532 | 0.7640 | 0.764 |
| 0.4265 | 52.35 | 7800 | 0.4539 | 0.7630 | 0.763 |
| 0.4211 | 53.69 | 8000 | 0.4534 | 0.7720 | 0.772 |
| 0.4253 | 55.03 | 8200 | 0.4546 | 0.7770 | 0.777 |
| 0.4232 | 56.38 | 8400 | 0.4547 | 0.7710 | 0.771 |
| 0.4248 | 57.72 | 8600 | 0.4541 | 0.7697 | 0.77 |
| 0.4218 | 59.06 | 8800 | 0.4536 | 0.7710 | 0.771 |
| 0.4235 | 60.4 | 9000 | 0.4524 | 0.7710 | 0.771 |
| 0.4232 | 61.74 | 9200 | 0.4526 | 0.7699 | 0.77 |
| 0.4238 | 63.09 | 9400 | 0.4524 | 0.7710 | 0.771 |
| 0.4265 | 64.43 | 9600 | 0.4520 | 0.7730 | 0.773 |
| 0.4192 | 65.77 | 9800 | 0.4526 | 0.7710 | 0.771 |
| 0.4209 | 67.11 | 10000 | 0.4525 | 0.7710 | 0.771 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_4-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T17:03:11+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_tf\_4-seqsight\_65536\_512\_47M-L1\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3592
* F1 Score: 0.8408
* Accuracy: 0.841
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** xsa-dev
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | xsa-dev/hugs_llama3_technique_ft_16bit_GGUF_1 | null | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"gguf",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:03:18+00:00 | [] | [
"en"
] | TAGS
#transformers #text-generation-inference #unsloth #llama #gguf #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: xsa-dev
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: xsa-dev\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #text-generation-inference #unsloth #llama #gguf #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: xsa-dev\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3946
- F1 Score: 0.8378
- Accuracy: 0.838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5197 | 1.34 | 200 | 0.4804 | 0.7559 | 0.756 |
| 0.4632 | 2.68 | 400 | 0.4707 | 0.7573 | 0.758 |
| 0.451 | 4.03 | 600 | 0.4674 | 0.7706 | 0.771 |
| 0.4386 | 5.37 | 800 | 0.4713 | 0.7608 | 0.761 |
| 0.4245 | 6.71 | 1000 | 0.4641 | 0.7704 | 0.771 |
| 0.4206 | 8.05 | 1200 | 0.4561 | 0.7650 | 0.765 |
| 0.4135 | 9.4 | 1400 | 0.4505 | 0.7729 | 0.773 |
| 0.4101 | 10.74 | 1600 | 0.4429 | 0.7760 | 0.776 |
| 0.398 | 12.08 | 1800 | 0.4503 | 0.7834 | 0.785 |
| 0.3924 | 13.42 | 2000 | 0.4314 | 0.7789 | 0.779 |
| 0.3862 | 14.77 | 2200 | 0.4378 | 0.7790 | 0.779 |
| 0.3818 | 16.11 | 2400 | 0.4344 | 0.7856 | 0.786 |
| 0.37 | 17.45 | 2600 | 0.4382 | 0.7819 | 0.782 |
| 0.3673 | 18.79 | 2800 | 0.4382 | 0.7930 | 0.793 |
| 0.3668 | 20.13 | 3000 | 0.4375 | 0.7919 | 0.792 |
| 0.355 | 21.48 | 3200 | 0.4364 | 0.8042 | 0.805 |
| 0.3526 | 22.82 | 3400 | 0.4336 | 0.8015 | 0.802 |
| 0.3472 | 24.16 | 3600 | 0.4297 | 0.8036 | 0.804 |
| 0.3397 | 25.5 | 3800 | 0.4356 | 0.8021 | 0.803 |
| 0.3336 | 26.85 | 4000 | 0.4270 | 0.8070 | 0.807 |
| 0.3311 | 28.19 | 4200 | 0.4383 | 0.8111 | 0.812 |
| 0.3216 | 29.53 | 4400 | 0.4312 | 0.8140 | 0.814 |
| 0.3223 | 30.87 | 4600 | 0.4287 | 0.8110 | 0.811 |
| 0.3171 | 32.21 | 4800 | 0.4274 | 0.8198 | 0.82 |
| 0.3087 | 33.56 | 5000 | 0.4340 | 0.8119 | 0.812 |
| 0.3112 | 34.9 | 5200 | 0.4324 | 0.8200 | 0.82 |
| 0.3074 | 36.24 | 5400 | 0.4328 | 0.8227 | 0.823 |
| 0.3009 | 37.58 | 5600 | 0.4299 | 0.8179 | 0.818 |
| 0.295 | 38.93 | 5800 | 0.4297 | 0.8229 | 0.823 |
| 0.2955 | 40.27 | 6000 | 0.4356 | 0.8257 | 0.826 |
| 0.291 | 41.61 | 6200 | 0.4261 | 0.8248 | 0.825 |
| 0.2879 | 42.95 | 6400 | 0.4289 | 0.8180 | 0.818 |
| 0.2859 | 44.3 | 6600 | 0.4275 | 0.8246 | 0.825 |
| 0.2799 | 45.64 | 6800 | 0.4301 | 0.8209 | 0.821 |
| 0.2806 | 46.98 | 7000 | 0.4298 | 0.8258 | 0.826 |
| 0.28 | 48.32 | 7200 | 0.4359 | 0.8283 | 0.829 |
| 0.2787 | 49.66 | 7400 | 0.4247 | 0.8276 | 0.828 |
| 0.2715 | 51.01 | 7600 | 0.4292 | 0.8298 | 0.83 |
| 0.2738 | 52.35 | 7800 | 0.4339 | 0.8294 | 0.83 |
| 0.2676 | 53.69 | 8000 | 0.4320 | 0.8257 | 0.826 |
| 0.2698 | 55.03 | 8200 | 0.4308 | 0.8289 | 0.829 |
| 0.2661 | 56.38 | 8400 | 0.4333 | 0.8297 | 0.83 |
| 0.2659 | 57.72 | 8600 | 0.4364 | 0.8286 | 0.829 |
| 0.265 | 59.06 | 8800 | 0.4285 | 0.8267 | 0.827 |
| 0.2613 | 60.4 | 9000 | 0.4340 | 0.8297 | 0.83 |
| 0.2622 | 61.74 | 9200 | 0.4372 | 0.8294 | 0.83 |
| 0.259 | 63.09 | 9400 | 0.4359 | 0.8346 | 0.835 |
| 0.2587 | 64.43 | 9600 | 0.4384 | 0.8324 | 0.833 |
| 0.2568 | 65.77 | 9800 | 0.4364 | 0.8326 | 0.833 |
| 0.2581 | 67.11 | 10000 | 0.4376 | 0.8325 | 0.833 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_4-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T17:03:56+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_tf\_4-seqsight\_65536\_512\_47M-L32\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3946
* F1 Score: 0.8378
* Accuracy: 0.838
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3586
- F1 Score: 0.8414
- Accuracy: 0.842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` mapping is sketched after the list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
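As an illustration only (the card does not say which training script produced these numbers), the hyperparameters above map onto `transformers.TrainingArguments` roughly as follows; the output directory is a placeholder.

```python
# Illustrative mapping of the listed hyperparameters onto TrainingArguments.
# This is a sketch, not the script that was actually used for this checkpoint.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="seqsight-gue-tf-4",   # placeholder path
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,                 # training_steps: 10000
)
```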
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5313 | 1.34 | 200 | 0.4867 | 0.7530 | 0.753 |
| 0.4729 | 2.68 | 400 | 0.4783 | 0.7539 | 0.755 |
| 0.4632 | 4.03 | 600 | 0.4764 | 0.7586 | 0.76 |
| 0.4539 | 5.37 | 800 | 0.4722 | 0.7579 | 0.758 |
| 0.4421 | 6.71 | 1000 | 0.4692 | 0.7671 | 0.768 |
| 0.4418 | 8.05 | 1200 | 0.4633 | 0.7627 | 0.763 |
| 0.4376 | 9.4 | 1400 | 0.4623 | 0.7610 | 0.761 |
| 0.437 | 10.74 | 1600 | 0.4580 | 0.7719 | 0.772 |
| 0.4274 | 12.08 | 1800 | 0.4650 | 0.7677 | 0.769 |
| 0.4248 | 13.42 | 2000 | 0.4510 | 0.7720 | 0.772 |
| 0.4224 | 14.77 | 2200 | 0.4550 | 0.7700 | 0.77 |
| 0.4205 | 16.11 | 2400 | 0.4479 | 0.7729 | 0.773 |
| 0.4143 | 17.45 | 2600 | 0.4532 | 0.7680 | 0.768 |
| 0.413 | 18.79 | 2800 | 0.4500 | 0.7770 | 0.777 |
| 0.4137 | 20.13 | 3000 | 0.4524 | 0.7658 | 0.766 |
| 0.4041 | 21.48 | 3200 | 0.4516 | 0.7626 | 0.763 |
| 0.4082 | 22.82 | 3400 | 0.4464 | 0.7708 | 0.771 |
| 0.4037 | 24.16 | 3600 | 0.4444 | 0.7718 | 0.772 |
| 0.4025 | 25.5 | 3800 | 0.4515 | 0.7690 | 0.77 |
| 0.3983 | 26.85 | 4000 | 0.4446 | 0.7769 | 0.777 |
| 0.3976 | 28.19 | 4200 | 0.4387 | 0.7738 | 0.774 |
| 0.3931 | 29.53 | 4400 | 0.4395 | 0.7800 | 0.78 |
| 0.3931 | 30.87 | 4600 | 0.4362 | 0.7789 | 0.779 |
| 0.393 | 32.21 | 4800 | 0.4352 | 0.7820 | 0.782 |
| 0.3884 | 33.56 | 5000 | 0.4389 | 0.7770 | 0.777 |
| 0.3885 | 34.9 | 5200 | 0.4355 | 0.7770 | 0.777 |
| 0.3895 | 36.24 | 5400 | 0.4320 | 0.7809 | 0.781 |
| 0.382 | 37.58 | 5600 | 0.4337 | 0.7840 | 0.784 |
| 0.3804 | 38.93 | 5800 | 0.4337 | 0.7840 | 0.784 |
| 0.3816 | 40.27 | 6000 | 0.4326 | 0.7879 | 0.788 |
| 0.3756 | 41.61 | 6200 | 0.4336 | 0.7950 | 0.795 |
| 0.3769 | 42.95 | 6400 | 0.4329 | 0.7850 | 0.785 |
| 0.3767 | 44.3 | 6600 | 0.4299 | 0.7939 | 0.794 |
| 0.3706 | 45.64 | 6800 | 0.4318 | 0.7890 | 0.789 |
| 0.3749 | 46.98 | 7000 | 0.4320 | 0.7910 | 0.791 |
| 0.3776 | 48.32 | 7200 | 0.4268 | 0.7909 | 0.791 |
| 0.3712 | 49.66 | 7400 | 0.4277 | 0.7920 | 0.792 |
| 0.3688 | 51.01 | 7600 | 0.4292 | 0.7930 | 0.793 |
| 0.3726 | 52.35 | 7800 | 0.4302 | 0.7919 | 0.792 |
| 0.367 | 53.69 | 8000 | 0.4283 | 0.7950 | 0.795 |
| 0.3693 | 55.03 | 8200 | 0.4328 | 0.7920 | 0.792 |
| 0.3686 | 56.38 | 8400 | 0.4288 | 0.7940 | 0.794 |
| 0.3668 | 57.72 | 8600 | 0.4300 | 0.7958 | 0.796 |
| 0.3645 | 59.06 | 8800 | 0.4292 | 0.7930 | 0.793 |
| 0.3665 | 60.4 | 9000 | 0.4279 | 0.7900 | 0.79 |
| 0.3669 | 61.74 | 9200 | 0.4286 | 0.7909 | 0.791 |
| 0.3658 | 63.09 | 9400 | 0.4284 | 0.7920 | 0.792 |
| 0.3654 | 64.43 | 9600 | 0.4283 | 0.7929 | 0.793 |
| 0.3628 | 65.77 | 9800 | 0.4286 | 0.7920 | 0.792 |
| 0.3624 | 67.11 | 10000 | 0.4286 | 0.7910 | 0.791 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_4-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T17:03:56+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_tf\_4-seqsight\_65536\_512\_47M-L8\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3586
* F1 Score: 0.8414
* Accuracy: 0.842
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5577
- F1 Score: 0.7107
- Accuracy: 0.712
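The F1 and accuracy figures above are reported without the evaluation code; as a small worked example, scores of this kind can be computed from predictions as below, where the macro averaging is an assumption since the card does not state it.

```python
# Toy sketch of the two reported metrics for a binary task; macro F1 is an assumption.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 0, 1, 0]   # toy labels
y_pred = [0, 1, 0, 0, 1, 1]   # toy predictions
print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1 Score:", f1_score(y_true, y_pred, average="macro"))
```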
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6292 | 0.93 | 200 | 0.5841 | 0.6911 | 0.691 |
| 0.6031 | 1.87 | 400 | 0.5730 | 0.6984 | 0.699 |
| 0.598 | 2.8 | 600 | 0.5653 | 0.7042 | 0.708 |
| 0.5917 | 3.74 | 800 | 0.5626 | 0.7055 | 0.706 |
| 0.5892 | 4.67 | 1000 | 0.5581 | 0.7149 | 0.717 |
| 0.5869 | 5.61 | 1200 | 0.5559 | 0.7156 | 0.717 |
| 0.583 | 6.54 | 1400 | 0.5525 | 0.7224 | 0.725 |
| 0.5847 | 7.48 | 1600 | 0.5568 | 0.7131 | 0.713 |
| 0.5835 | 8.41 | 1800 | 0.5518 | 0.7112 | 0.712 |
| 0.5863 | 9.35 | 2000 | 0.5531 | 0.7186 | 0.719 |
| 0.5804 | 10.28 | 2200 | 0.5613 | 0.6986 | 0.699 |
| 0.5786 | 11.21 | 2400 | 0.5500 | 0.7256 | 0.727 |
| 0.5795 | 12.15 | 2600 | 0.5485 | 0.7174 | 0.719 |
| 0.5781 | 13.08 | 2800 | 0.5472 | 0.7237 | 0.726 |
| 0.577 | 14.02 | 3000 | 0.5497 | 0.7154 | 0.716 |
| 0.5776 | 14.95 | 3200 | 0.5473 | 0.7127 | 0.714 |
| 0.5774 | 15.89 | 3400 | 0.5464 | 0.7134 | 0.715 |
| 0.5741 | 16.82 | 3600 | 0.5471 | 0.7119 | 0.713 |
| 0.5733 | 17.76 | 3800 | 0.5490 | 0.7141 | 0.715 |
| 0.5749 | 18.69 | 4000 | 0.5510 | 0.7167 | 0.717 |
| 0.5727 | 19.63 | 4200 | 0.5438 | 0.7212 | 0.724 |
| 0.5754 | 20.56 | 4400 | 0.5446 | 0.7156 | 0.717 |
| 0.5712 | 21.5 | 4600 | 0.5517 | 0.7121 | 0.712 |
| 0.5709 | 22.43 | 4800 | 0.5448 | 0.7250 | 0.726 |
| 0.5744 | 23.36 | 5000 | 0.5475 | 0.7176 | 0.718 |
| 0.5717 | 24.3 | 5200 | 0.5508 | 0.7131 | 0.713 |
| 0.5699 | 25.23 | 5400 | 0.5450 | 0.7226 | 0.724 |
| 0.5734 | 26.17 | 5600 | 0.5457 | 0.7183 | 0.719 |
| 0.5695 | 27.1 | 5800 | 0.5439 | 0.7183 | 0.72 |
| 0.569 | 28.04 | 6000 | 0.5439 | 0.7221 | 0.723 |
| 0.568 | 28.97 | 6200 | 0.5522 | 0.7059 | 0.706 |
| 0.572 | 29.91 | 6400 | 0.5458 | 0.7225 | 0.723 |
| 0.5703 | 30.84 | 6600 | 0.5456 | 0.7164 | 0.717 |
| 0.5681 | 31.78 | 6800 | 0.5452 | 0.7238 | 0.724 |
| 0.5679 | 32.71 | 7000 | 0.5425 | 0.7241 | 0.725 |
| 0.572 | 33.64 | 7200 | 0.5433 | 0.7218 | 0.723 |
| 0.5652 | 34.58 | 7400 | 0.5510 | 0.7109 | 0.711 |
| 0.5702 | 35.51 | 7600 | 0.5463 | 0.7180 | 0.718 |
| 0.5678 | 36.45 | 7800 | 0.5453 | 0.7268 | 0.727 |
| 0.5686 | 37.38 | 8000 | 0.5444 | 0.7207 | 0.721 |
| 0.5625 | 38.32 | 8200 | 0.5423 | 0.7175 | 0.719 |
| 0.5671 | 39.25 | 8400 | 0.5440 | 0.7212 | 0.722 |
| 0.5668 | 40.19 | 8600 | 0.5440 | 0.7233 | 0.724 |
| 0.5653 | 41.12 | 8800 | 0.5445 | 0.7244 | 0.725 |
| 0.567 | 42.06 | 9000 | 0.5445 | 0.7285 | 0.729 |
| 0.566 | 42.99 | 9200 | 0.5456 | 0.7228 | 0.723 |
| 0.5676 | 43.93 | 9400 | 0.5465 | 0.7229 | 0.723 |
| 0.5667 | 44.86 | 9600 | 0.5445 | 0.7276 | 0.728 |
| 0.5662 | 45.79 | 9800 | 0.5445 | 0.7286 | 0.729 |
| 0.5634 | 46.73 | 10000 | 0.5447 | 0.7266 | 0.727 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_3-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T17:04:22+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_tf\_3-seqsight\_65536\_512\_47M-L1\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5577
* F1 Score: 0.7107
* Accuracy: 0.712
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5490
- F1 Score: 0.6962
- Accuracy: 0.698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6208 | 0.93 | 200 | 0.5737 | 0.6929 | 0.693 |
| 0.5963 | 1.87 | 400 | 0.5696 | 0.6930 | 0.693 |
| 0.5907 | 2.8 | 600 | 0.5576 | 0.7162 | 0.718 |
| 0.5842 | 3.74 | 800 | 0.5619 | 0.7001 | 0.7 |
| 0.5822 | 4.67 | 1000 | 0.5538 | 0.7155 | 0.716 |
| 0.5791 | 5.61 | 1200 | 0.5470 | 0.7250 | 0.727 |
| 0.5749 | 6.54 | 1400 | 0.5498 | 0.7267 | 0.728 |
| 0.5747 | 7.48 | 1600 | 0.5501 | 0.7191 | 0.719 |
| 0.573 | 8.41 | 1800 | 0.5464 | 0.7147 | 0.715 |
| 0.5762 | 9.35 | 2000 | 0.5457 | 0.7265 | 0.728 |
| 0.5689 | 10.28 | 2200 | 0.5498 | 0.7169 | 0.717 |
| 0.5662 | 11.21 | 2400 | 0.5440 | 0.7234 | 0.725 |
| 0.5668 | 12.15 | 2600 | 0.5410 | 0.7185 | 0.721 |
| 0.5634 | 13.08 | 2800 | 0.5422 | 0.7176 | 0.722 |
| 0.5631 | 14.02 | 3000 | 0.5416 | 0.7290 | 0.73 |
| 0.5618 | 14.95 | 3200 | 0.5383 | 0.7208 | 0.724 |
| 0.5617 | 15.89 | 3400 | 0.5381 | 0.7291 | 0.731 |
| 0.5597 | 16.82 | 3600 | 0.5400 | 0.7295 | 0.731 |
| 0.5567 | 17.76 | 3800 | 0.5420 | 0.7249 | 0.727 |
| 0.558 | 18.69 | 4000 | 0.5463 | 0.7289 | 0.729 |
| 0.5563 | 19.63 | 4200 | 0.5375 | 0.7251 | 0.728 |
| 0.5584 | 20.56 | 4400 | 0.5381 | 0.7264 | 0.728 |
| 0.5523 | 21.5 | 4600 | 0.5479 | 0.7140 | 0.714 |
| 0.5526 | 22.43 | 4800 | 0.5387 | 0.7275 | 0.729 |
| 0.5567 | 23.36 | 5000 | 0.5453 | 0.7251 | 0.725 |
| 0.551 | 24.3 | 5200 | 0.5539 | 0.7054 | 0.706 |
| 0.5498 | 25.23 | 5400 | 0.5404 | 0.7268 | 0.729 |
| 0.5545 | 26.17 | 5600 | 0.5407 | 0.7299 | 0.731 |
| 0.5489 | 27.1 | 5800 | 0.5393 | 0.7272 | 0.728 |
| 0.5478 | 28.04 | 6000 | 0.5395 | 0.7292 | 0.73 |
| 0.5469 | 28.97 | 6200 | 0.5465 | 0.7191 | 0.719 |
| 0.5509 | 29.91 | 6400 | 0.5414 | 0.7290 | 0.73 |
| 0.5488 | 30.84 | 6600 | 0.5385 | 0.7241 | 0.725 |
| 0.5459 | 31.78 | 6800 | 0.5413 | 0.7247 | 0.725 |
| 0.5463 | 32.71 | 7000 | 0.5390 | 0.7283 | 0.729 |
| 0.5501 | 33.64 | 7200 | 0.5389 | 0.7248 | 0.726 |
| 0.5427 | 34.58 | 7400 | 0.5485 | 0.7079 | 0.708 |
| 0.5464 | 35.51 | 7600 | 0.5422 | 0.7220 | 0.722 |
| 0.5448 | 36.45 | 7800 | 0.5403 | 0.7304 | 0.731 |
| 0.5453 | 37.38 | 8000 | 0.5399 | 0.7252 | 0.726 |
| 0.5374 | 38.32 | 8200 | 0.5403 | 0.7270 | 0.728 |
| 0.5424 | 39.25 | 8400 | 0.5400 | 0.7282 | 0.729 |
| 0.5439 | 40.19 | 8600 | 0.5402 | 0.7264 | 0.727 |
| 0.541 | 41.12 | 8800 | 0.5409 | 0.7264 | 0.727 |
| 0.5428 | 42.06 | 9000 | 0.5407 | 0.7272 | 0.728 |
| 0.5414 | 42.99 | 9200 | 0.5419 | 0.7248 | 0.725 |
| 0.5432 | 43.93 | 9400 | 0.5418 | 0.7238 | 0.724 |
| 0.5424 | 44.86 | 9600 | 0.5401 | 0.7255 | 0.726 |
| 0.5406 | 45.79 | 9800 | 0.5407 | 0.7264 | 0.727 |
| 0.5384 | 46.73 | 10000 | 0.5410 | 0.7245 | 0.725 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_3-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T17:05:07+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_tf\_3-seqsight\_65536\_512\_47M-L8\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5490
* F1 Score: 0.6962
* Accuracy: 0.698
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
GritLM-7B - bnb 4bits
- Model creator: https://huggingface.co/GritLM/
- Original model: https://huggingface.co/GritLM/GritLM-7B/
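Neither this header nor the original card below shows loading code for the 4-bit weights, so here is a minimal sketch of loading GritLM-7B with a bitsandbytes 4-bit configuration comparable to this upload; the compute dtype and device map are assumptions.

```python
# Sketch only: 4-bit loading of the original checkpoint via bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # assumed compute dtype
)
tokenizer = AutoTokenizer.from_pretrained("GritLM/GritLM-7B")
model = AutoModelForCausalLM.from_pretrained(
    "GritLM/GritLM-7B",
    quantization_config=bnb_config,
    device_map="auto",
)
```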
Original model description:
---
pipeline_tag: text-generation
inference: true
license: apache-2.0
datasets:
- GritLM/tulu2
tags:
- mteb
model-index:
- name: GritLM-7B
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 81.17910447761194
- type: ap
value: 46.26260671758199
- type: f1
value: 75.44565719934167
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.5161
- type: ap
value: 94.79131981460425
- type: f1
value: 96.51506148413065
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 57.806000000000004
- type: f1
value: 56.78350156257903
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.478
- type: map_at_10
value: 54.955
- type: map_at_100
value: 54.955
- type: map_at_1000
value: 54.955
- type: map_at_3
value: 50.888999999999996
- type: map_at_5
value: 53.349999999999994
- type: mrr_at_1
value: 39.757999999999996
- type: mrr_at_10
value: 55.449000000000005
- type: mrr_at_100
value: 55.449000000000005
- type: mrr_at_1000
value: 55.449000000000005
- type: mrr_at_3
value: 51.37500000000001
- type: mrr_at_5
value: 53.822
- type: ndcg_at_1
value: 38.478
- type: ndcg_at_10
value: 63.239999999999995
- type: ndcg_at_100
value: 63.239999999999995
- type: ndcg_at_1000
value: 63.239999999999995
- type: ndcg_at_3
value: 54.935
- type: ndcg_at_5
value: 59.379000000000005
- type: precision_at_1
value: 38.478
- type: precision_at_10
value: 8.933
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.089
- type: precision_at_3
value: 22.214
- type: precision_at_5
value: 15.491
- type: recall_at_1
value: 38.478
- type: recall_at_10
value: 89.331
- type: recall_at_100
value: 89.331
- type: recall_at_1000
value: 89.331
- type: recall_at_3
value: 66.643
- type: recall_at_5
value: 77.45400000000001
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 51.67144081472449
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 48.11256154264126
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.33801955487878
- type: mrr
value: 80.71549487754474
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.1935203751726
- type: cos_sim_spearman
value: 86.35497970498659
- type: euclidean_pearson
value: 85.46910708503744
- type: euclidean_spearman
value: 85.13928935405485
- type: manhattan_pearson
value: 85.68373836333303
- type: manhattan_spearman
value: 85.40013867117746
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 88.46753246753248
- type: f1
value: 88.43006344981134
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 40.86793640310432
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 39.80291334130727
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.421
- type: map_at_10
value: 52.349000000000004
- type: map_at_100
value: 52.349000000000004
- type: map_at_1000
value: 52.349000000000004
- type: map_at_3
value: 48.17
- type: map_at_5
value: 50.432
- type: mrr_at_1
value: 47.353
- type: mrr_at_10
value: 58.387
- type: mrr_at_100
value: 58.387
- type: mrr_at_1000
value: 58.387
- type: mrr_at_3
value: 56.199
- type: mrr_at_5
value: 57.487
- type: ndcg_at_1
value: 47.353
- type: ndcg_at_10
value: 59.202
- type: ndcg_at_100
value: 58.848
- type: ndcg_at_1000
value: 58.831999999999994
- type: ndcg_at_3
value: 54.112
- type: ndcg_at_5
value: 56.312
- type: precision_at_1
value: 47.353
- type: precision_at_10
value: 11.459
- type: precision_at_100
value: 1.146
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 26.133
- type: precision_at_5
value: 18.627
- type: recall_at_1
value: 38.421
- type: recall_at_10
value: 71.89
- type: recall_at_100
value: 71.89
- type: recall_at_1000
value: 71.89
- type: recall_at_3
value: 56.58
- type: recall_at_5
value: 63.125
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.025999999999996
- type: map_at_10
value: 50.590999999999994
- type: map_at_100
value: 51.99700000000001
- type: map_at_1000
value: 52.11599999999999
- type: map_at_3
value: 47.435
- type: map_at_5
value: 49.236000000000004
- type: mrr_at_1
value: 48.28
- type: mrr_at_10
value: 56.814
- type: mrr_at_100
value: 57.446
- type: mrr_at_1000
value: 57.476000000000006
- type: mrr_at_3
value: 54.958
- type: mrr_at_5
value: 56.084999999999994
- type: ndcg_at_1
value: 48.28
- type: ndcg_at_10
value: 56.442
- type: ndcg_at_100
value: 60.651999999999994
- type: ndcg_at_1000
value: 62.187000000000005
- type: ndcg_at_3
value: 52.866
- type: ndcg_at_5
value: 54.515
- type: precision_at_1
value: 48.28
- type: precision_at_10
value: 10.586
- type: precision_at_100
value: 1.6310000000000002
- type: precision_at_1000
value: 0.20600000000000002
- type: precision_at_3
value: 25.945
- type: precision_at_5
value: 18.076
- type: recall_at_1
value: 38.025999999999996
- type: recall_at_10
value: 66.11399999999999
- type: recall_at_100
value: 83.339
- type: recall_at_1000
value: 92.413
- type: recall_at_3
value: 54.493
- type: recall_at_5
value: 59.64699999999999
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 47.905
- type: map_at_10
value: 61.58
- type: map_at_100
value: 62.605
- type: map_at_1000
value: 62.637
- type: map_at_3
value: 58.074000000000005
- type: map_at_5
value: 60.260000000000005
- type: mrr_at_1
value: 54.42
- type: mrr_at_10
value: 64.847
- type: mrr_at_100
value: 65.403
- type: mrr_at_1000
value: 65.41900000000001
- type: mrr_at_3
value: 62.675000000000004
- type: mrr_at_5
value: 64.101
- type: ndcg_at_1
value: 54.42
- type: ndcg_at_10
value: 67.394
- type: ndcg_at_100
value: 70.846
- type: ndcg_at_1000
value: 71.403
- type: ndcg_at_3
value: 62.025
- type: ndcg_at_5
value: 65.032
- type: precision_at_1
value: 54.42
- type: precision_at_10
value: 10.646
- type: precision_at_100
value: 1.325
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 27.398
- type: precision_at_5
value: 18.796
- type: recall_at_1
value: 47.905
- type: recall_at_10
value: 80.84599999999999
- type: recall_at_100
value: 95.078
- type: recall_at_1000
value: 98.878
- type: recall_at_3
value: 67.05600000000001
- type: recall_at_5
value: 74.261
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.745
- type: map_at_10
value: 41.021
- type: map_at_100
value: 41.021
- type: map_at_1000
value: 41.021
- type: map_at_3
value: 37.714999999999996
- type: map_at_5
value: 39.766
- type: mrr_at_1
value: 33.559
- type: mrr_at_10
value: 43.537
- type: mrr_at_100
value: 43.537
- type: mrr_at_1000
value: 43.537
- type: mrr_at_3
value: 40.546
- type: mrr_at_5
value: 42.439
- type: ndcg_at_1
value: 33.559
- type: ndcg_at_10
value: 46.781
- type: ndcg_at_100
value: 46.781
- type: ndcg_at_1000
value: 46.781
- type: ndcg_at_3
value: 40.516000000000005
- type: ndcg_at_5
value: 43.957
- type: precision_at_1
value: 33.559
- type: precision_at_10
value: 7.198
- type: precision_at_100
value: 0.72
- type: precision_at_1000
value: 0.07200000000000001
- type: precision_at_3
value: 17.1
- type: precision_at_5
value: 12.316
- type: recall_at_1
value: 30.745
- type: recall_at_10
value: 62.038000000000004
- type: recall_at_100
value: 62.038000000000004
- type: recall_at_1000
value: 62.038000000000004
- type: recall_at_3
value: 45.378
- type: recall_at_5
value: 53.580000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.637999999999998
- type: map_at_10
value: 31.05
- type: map_at_100
value: 31.05
- type: map_at_1000
value: 31.05
- type: map_at_3
value: 27.628000000000004
- type: map_at_5
value: 29.767
- type: mrr_at_1
value: 25.0
- type: mrr_at_10
value: 36.131
- type: mrr_at_100
value: 36.131
- type: mrr_at_1000
value: 36.131
- type: mrr_at_3
value: 33.333
- type: mrr_at_5
value: 35.143
- type: ndcg_at_1
value: 25.0
- type: ndcg_at_10
value: 37.478
- type: ndcg_at_100
value: 37.469
- type: ndcg_at_1000
value: 37.469
- type: ndcg_at_3
value: 31.757999999999996
- type: ndcg_at_5
value: 34.821999999999996
- type: precision_at_1
value: 25.0
- type: precision_at_10
value: 7.188999999999999
- type: precision_at_100
value: 0.719
- type: precision_at_1000
value: 0.07200000000000001
- type: precision_at_3
value: 15.837000000000002
- type: precision_at_5
value: 11.841
- type: recall_at_1
value: 19.637999999999998
- type: recall_at_10
value: 51.836000000000006
- type: recall_at_100
value: 51.836000000000006
- type: recall_at_1000
value: 51.836000000000006
- type: recall_at_3
value: 36.384
- type: recall_at_5
value: 43.964
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 34.884
- type: map_at_10
value: 47.88
- type: map_at_100
value: 47.88
- type: map_at_1000
value: 47.88
- type: map_at_3
value: 43.85
- type: map_at_5
value: 46.414
- type: mrr_at_1
value: 43.022
- type: mrr_at_10
value: 53.569
- type: mrr_at_100
value: 53.569
- type: mrr_at_1000
value: 53.569
- type: mrr_at_3
value: 51.075
- type: mrr_at_5
value: 52.725
- type: ndcg_at_1
value: 43.022
- type: ndcg_at_10
value: 54.461000000000006
- type: ndcg_at_100
value: 54.388000000000005
- type: ndcg_at_1000
value: 54.388000000000005
- type: ndcg_at_3
value: 48.864999999999995
- type: ndcg_at_5
value: 52.032000000000004
- type: precision_at_1
value: 43.022
- type: precision_at_10
value: 9.885
- type: precision_at_100
value: 0.988
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 23.612
- type: precision_at_5
value: 16.997
- type: recall_at_1
value: 34.884
- type: recall_at_10
value: 68.12899999999999
- type: recall_at_100
value: 68.12899999999999
- type: recall_at_1000
value: 68.12899999999999
- type: recall_at_3
value: 52.428
- type: recall_at_5
value: 60.662000000000006
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.588
- type: map_at_10
value: 43.85
- type: map_at_100
value: 45.317
- type: map_at_1000
value: 45.408
- type: map_at_3
value: 39.73
- type: map_at_5
value: 42.122
- type: mrr_at_1
value: 38.927
- type: mrr_at_10
value: 49.582
- type: mrr_at_100
value: 50.39
- type: mrr_at_1000
value: 50.426
- type: mrr_at_3
value: 46.518
- type: mrr_at_5
value: 48.271
- type: ndcg_at_1
value: 38.927
- type: ndcg_at_10
value: 50.605999999999995
- type: ndcg_at_100
value: 56.22200000000001
- type: ndcg_at_1000
value: 57.724
- type: ndcg_at_3
value: 44.232
- type: ndcg_at_5
value: 47.233999999999995
- type: precision_at_1
value: 38.927
- type: precision_at_10
value: 9.429
- type: precision_at_100
value: 1.435
- type: precision_at_1000
value: 0.172
- type: precision_at_3
value: 21.271
- type: precision_at_5
value: 15.434000000000001
- type: recall_at_1
value: 31.588
- type: recall_at_10
value: 64.836
- type: recall_at_100
value: 88.066
- type: recall_at_1000
value: 97.748
- type: recall_at_3
value: 47.128
- type: recall_at_5
value: 54.954
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.956083333333336
- type: map_at_10
value: 43.33483333333333
- type: map_at_100
value: 44.64883333333333
- type: map_at_1000
value: 44.75
- type: map_at_3
value: 39.87741666666666
- type: map_at_5
value: 41.86766666666667
- type: mrr_at_1
value: 38.06341666666667
- type: mrr_at_10
value: 47.839666666666666
- type: mrr_at_100
value: 48.644000000000005
- type: mrr_at_1000
value: 48.68566666666667
- type: mrr_at_3
value: 45.26358333333334
- type: mrr_at_5
value: 46.790000000000006
- type: ndcg_at_1
value: 38.06341666666667
- type: ndcg_at_10
value: 49.419333333333334
- type: ndcg_at_100
value: 54.50166666666667
- type: ndcg_at_1000
value: 56.161166666666674
- type: ndcg_at_3
value: 43.982416666666666
- type: ndcg_at_5
value: 46.638083333333334
- type: precision_at_1
value: 38.06341666666667
- type: precision_at_10
value: 8.70858333333333
- type: precision_at_100
value: 1.327
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 20.37816666666667
- type: precision_at_5
value: 14.516333333333334
- type: recall_at_1
value: 31.956083333333336
- type: recall_at_10
value: 62.69458333333334
- type: recall_at_100
value: 84.46433333333334
- type: recall_at_1000
value: 95.58449999999999
- type: recall_at_3
value: 47.52016666666666
- type: recall_at_5
value: 54.36066666666666
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.912
- type: map_at_10
value: 38.291
- type: map_at_100
value: 39.44
- type: map_at_1000
value: 39.528
- type: map_at_3
value: 35.638
- type: map_at_5
value: 37.218
- type: mrr_at_1
value: 32.822
- type: mrr_at_10
value: 41.661
- type: mrr_at_100
value: 42.546
- type: mrr_at_1000
value: 42.603
- type: mrr_at_3
value: 39.238
- type: mrr_at_5
value: 40.726
- type: ndcg_at_1
value: 32.822
- type: ndcg_at_10
value: 43.373
- type: ndcg_at_100
value: 48.638
- type: ndcg_at_1000
value: 50.654999999999994
- type: ndcg_at_3
value: 38.643
- type: ndcg_at_5
value: 41.126000000000005
- type: precision_at_1
value: 32.822
- type: precision_at_10
value: 6.8709999999999996
- type: precision_at_100
value: 1.032
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 16.82
- type: precision_at_5
value: 11.718
- type: recall_at_1
value: 28.912
- type: recall_at_10
value: 55.376999999999995
- type: recall_at_100
value: 79.066
- type: recall_at_1000
value: 93.664
- type: recall_at_3
value: 42.569
- type: recall_at_5
value: 48.719
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.181
- type: map_at_10
value: 31.462
- type: map_at_100
value: 32.73
- type: map_at_1000
value: 32.848
- type: map_at_3
value: 28.57
- type: map_at_5
value: 30.182
- type: mrr_at_1
value: 27.185
- type: mrr_at_10
value: 35.846000000000004
- type: mrr_at_100
value: 36.811
- type: mrr_at_1000
value: 36.873
- type: mrr_at_3
value: 33.437
- type: mrr_at_5
value: 34.813
- type: ndcg_at_1
value: 27.185
- type: ndcg_at_10
value: 36.858000000000004
- type: ndcg_at_100
value: 42.501
- type: ndcg_at_1000
value: 44.945
- type: ndcg_at_3
value: 32.066
- type: ndcg_at_5
value: 34.29
- type: precision_at_1
value: 27.185
- type: precision_at_10
value: 6.752
- type: precision_at_100
value: 1.111
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 15.290000000000001
- type: precision_at_5
value: 11.004999999999999
- type: recall_at_1
value: 22.181
- type: recall_at_10
value: 48.513
- type: recall_at_100
value: 73.418
- type: recall_at_1000
value: 90.306
- type: recall_at_3
value: 35.003
- type: recall_at_5
value: 40.876000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.934999999999995
- type: map_at_10
value: 44.727
- type: map_at_100
value: 44.727
- type: map_at_1000
value: 44.727
- type: map_at_3
value: 40.918
- type: map_at_5
value: 42.961
- type: mrr_at_1
value: 39.646
- type: mrr_at_10
value: 48.898
- type: mrr_at_100
value: 48.898
- type: mrr_at_1000
value: 48.898
- type: mrr_at_3
value: 45.896
- type: mrr_at_5
value: 47.514
- type: ndcg_at_1
value: 39.646
- type: ndcg_at_10
value: 50.817
- type: ndcg_at_100
value: 50.803
- type: ndcg_at_1000
value: 50.803
- type: ndcg_at_3
value: 44.507999999999996
- type: ndcg_at_5
value: 47.259
- type: precision_at_1
value: 39.646
- type: precision_at_10
value: 8.759
- type: precision_at_100
value: 0.876
- type: precision_at_1000
value: 0.08800000000000001
- type: precision_at_3
value: 20.274
- type: precision_at_5
value: 14.366000000000001
- type: recall_at_1
value: 33.934999999999995
- type: recall_at_10
value: 65.037
- type: recall_at_100
value: 65.037
- type: recall_at_1000
value: 65.037
- type: recall_at_3
value: 47.439
- type: recall_at_5
value: 54.567
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.058
- type: map_at_10
value: 43.137
- type: map_at_100
value: 43.137
- type: map_at_1000
value: 43.137
- type: map_at_3
value: 39.882
- type: map_at_5
value: 41.379
- type: mrr_at_1
value: 38.933
- type: mrr_at_10
value: 48.344
- type: mrr_at_100
value: 48.344
- type: mrr_at_1000
value: 48.344
- type: mrr_at_3
value: 45.652
- type: mrr_at_5
value: 46.877
- type: ndcg_at_1
value: 38.933
- type: ndcg_at_10
value: 49.964
- type: ndcg_at_100
value: 49.242000000000004
- type: ndcg_at_1000
value: 49.222
- type: ndcg_at_3
value: 44.605
- type: ndcg_at_5
value: 46.501999999999995
- type: precision_at_1
value: 38.933
- type: precision_at_10
value: 9.427000000000001
- type: precision_at_100
value: 0.943
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 20.685000000000002
- type: precision_at_5
value: 14.585
- type: recall_at_1
value: 32.058
- type: recall_at_10
value: 63.074
- type: recall_at_100
value: 63.074
- type: recall_at_1000
value: 63.074
- type: recall_at_3
value: 47.509
- type: recall_at_5
value: 52.455
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.029000000000003
- type: map_at_10
value: 34.646
- type: map_at_100
value: 34.646
- type: map_at_1000
value: 34.646
- type: map_at_3
value: 31.456
- type: map_at_5
value: 33.138
- type: mrr_at_1
value: 28.281
- type: mrr_at_10
value: 36.905
- type: mrr_at_100
value: 36.905
- type: mrr_at_1000
value: 36.905
- type: mrr_at_3
value: 34.011
- type: mrr_at_5
value: 35.638
- type: ndcg_at_1
value: 28.281
- type: ndcg_at_10
value: 40.159
- type: ndcg_at_100
value: 40.159
- type: ndcg_at_1000
value: 40.159
- type: ndcg_at_3
value: 33.995
- type: ndcg_at_5
value: 36.836999999999996
- type: precision_at_1
value: 28.281
- type: precision_at_10
value: 6.358999999999999
- type: precision_at_100
value: 0.636
- type: precision_at_1000
value: 0.064
- type: precision_at_3
value: 14.233
- type: precision_at_5
value: 10.314
- type: recall_at_1
value: 26.029000000000003
- type: recall_at_10
value: 55.08
- type: recall_at_100
value: 55.08
- type: recall_at_1000
value: 55.08
- type: recall_at_3
value: 38.487
- type: recall_at_5
value: 45.308
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.842999999999998
- type: map_at_10
value: 22.101000000000003
- type: map_at_100
value: 24.319
- type: map_at_1000
value: 24.51
- type: map_at_3
value: 18.372
- type: map_at_5
value: 20.323
- type: mrr_at_1
value: 27.948
- type: mrr_at_10
value: 40.321
- type: mrr_at_100
value: 41.262
- type: mrr_at_1000
value: 41.297
- type: mrr_at_3
value: 36.558
- type: mrr_at_5
value: 38.824999999999996
- type: ndcg_at_1
value: 27.948
- type: ndcg_at_10
value: 30.906
- type: ndcg_at_100
value: 38.986
- type: ndcg_at_1000
value: 42.136
- type: ndcg_at_3
value: 24.911
- type: ndcg_at_5
value: 27.168999999999997
- type: precision_at_1
value: 27.948
- type: precision_at_10
value: 9.798
- type: precision_at_100
value: 1.8399999999999999
- type: precision_at_1000
value: 0.243
- type: precision_at_3
value: 18.328
- type: precision_at_5
value: 14.502
- type: recall_at_1
value: 12.842999999999998
- type: recall_at_10
value: 37.245
- type: recall_at_100
value: 64.769
- type: recall_at_1000
value: 82.055
- type: recall_at_3
value: 23.159
- type: recall_at_5
value: 29.113
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.934000000000001
- type: map_at_10
value: 21.915000000000003
- type: map_at_100
value: 21.915000000000003
- type: map_at_1000
value: 21.915000000000003
- type: map_at_3
value: 14.623
- type: map_at_5
value: 17.841
- type: mrr_at_1
value: 71.25
- type: mrr_at_10
value: 78.994
- type: mrr_at_100
value: 78.994
- type: mrr_at_1000
value: 78.994
- type: mrr_at_3
value: 77.208
- type: mrr_at_5
value: 78.55799999999999
- type: ndcg_at_1
value: 60.62499999999999
- type: ndcg_at_10
value: 46.604
- type: ndcg_at_100
value: 35.653
- type: ndcg_at_1000
value: 35.531
- type: ndcg_at_3
value: 50.605
- type: ndcg_at_5
value: 48.730000000000004
- type: precision_at_1
value: 71.25
- type: precision_at_10
value: 37.75
- type: precision_at_100
value: 3.775
- type: precision_at_1000
value: 0.377
- type: precision_at_3
value: 54.417
- type: precision_at_5
value: 48.15
- type: recall_at_1
value: 8.934000000000001
- type: recall_at_10
value: 28.471000000000004
- type: recall_at_100
value: 28.471000000000004
- type: recall_at_1000
value: 28.471000000000004
- type: recall_at_3
value: 16.019
- type: recall_at_5
value: 21.410999999999998
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.81
- type: f1
value: 47.987573380720114
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 66.81899999999999
- type: map_at_10
value: 78.034
- type: map_at_100
value: 78.034
- type: map_at_1000
value: 78.034
- type: map_at_3
value: 76.43100000000001
- type: map_at_5
value: 77.515
- type: mrr_at_1
value: 71.542
- type: mrr_at_10
value: 81.638
- type: mrr_at_100
value: 81.638
- type: mrr_at_1000
value: 81.638
- type: mrr_at_3
value: 80.403
- type: mrr_at_5
value: 81.256
- type: ndcg_at_1
value: 71.542
- type: ndcg_at_10
value: 82.742
- type: ndcg_at_100
value: 82.741
- type: ndcg_at_1000
value: 82.741
- type: ndcg_at_3
value: 80.039
- type: ndcg_at_5
value: 81.695
- type: precision_at_1
value: 71.542
- type: precision_at_10
value: 10.387
- type: precision_at_100
value: 1.039
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 31.447999999999997
- type: precision_at_5
value: 19.91
- type: recall_at_1
value: 66.81899999999999
- type: recall_at_10
value: 93.372
- type: recall_at_100
value: 93.372
- type: recall_at_1000
value: 93.372
- type: recall_at_3
value: 86.33
- type: recall_at_5
value: 90.347
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.158
- type: map_at_10
value: 52.017
- type: map_at_100
value: 54.259
- type: map_at_1000
value: 54.367
- type: map_at_3
value: 45.738
- type: map_at_5
value: 49.283
- type: mrr_at_1
value: 57.87
- type: mrr_at_10
value: 66.215
- type: mrr_at_100
value: 66.735
- type: mrr_at_1000
value: 66.75
- type: mrr_at_3
value: 64.043
- type: mrr_at_5
value: 65.116
- type: ndcg_at_1
value: 57.87
- type: ndcg_at_10
value: 59.946999999999996
- type: ndcg_at_100
value: 66.31099999999999
- type: ndcg_at_1000
value: 67.75999999999999
- type: ndcg_at_3
value: 55.483000000000004
- type: ndcg_at_5
value: 56.891000000000005
- type: precision_at_1
value: 57.87
- type: precision_at_10
value: 16.497
- type: precision_at_100
value: 2.321
- type: precision_at_1000
value: 0.258
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 27.067999999999998
- type: recall_at_1
value: 31.158
- type: recall_at_10
value: 67.381
- type: recall_at_100
value: 89.464
- type: recall_at_1000
value: 97.989
- type: recall_at_3
value: 50.553000000000004
- type: recall_at_5
value: 57.824
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 42.073
- type: map_at_10
value: 72.418
- type: map_at_100
value: 73.175
- type: map_at_1000
value: 73.215
- type: map_at_3
value: 68.791
- type: map_at_5
value: 71.19
- type: mrr_at_1
value: 84.146
- type: mrr_at_10
value: 88.994
- type: mrr_at_100
value: 89.116
- type: mrr_at_1000
value: 89.12
- type: mrr_at_3
value: 88.373
- type: mrr_at_5
value: 88.82
- type: ndcg_at_1
value: 84.146
- type: ndcg_at_10
value: 79.404
- type: ndcg_at_100
value: 81.83200000000001
- type: ndcg_at_1000
value: 82.524
- type: ndcg_at_3
value: 74.595
- type: ndcg_at_5
value: 77.474
- type: precision_at_1
value: 84.146
- type: precision_at_10
value: 16.753999999999998
- type: precision_at_100
value: 1.8599999999999999
- type: precision_at_1000
value: 0.19499999999999998
- type: precision_at_3
value: 48.854
- type: precision_at_5
value: 31.579
- type: recall_at_1
value: 42.073
- type: recall_at_10
value: 83.768
- type: recall_at_100
value: 93.018
- type: recall_at_1000
value: 97.481
- type: recall_at_3
value: 73.282
- type: recall_at_5
value: 78.947
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 94.9968
- type: ap
value: 92.93892195862824
- type: f1
value: 94.99327998213761
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.698
- type: map_at_10
value: 34.585
- type: map_at_100
value: 35.782000000000004
- type: map_at_1000
value: 35.825
- type: map_at_3
value: 30.397999999999996
- type: map_at_5
value: 32.72
- type: mrr_at_1
value: 22.192
- type: mrr_at_10
value: 35.085
- type: mrr_at_100
value: 36.218
- type: mrr_at_1000
value: 36.256
- type: mrr_at_3
value: 30.986000000000004
- type: mrr_at_5
value: 33.268
- type: ndcg_at_1
value: 22.192
- type: ndcg_at_10
value: 41.957
- type: ndcg_at_100
value: 47.658
- type: ndcg_at_1000
value: 48.697
- type: ndcg_at_3
value: 33.433
- type: ndcg_at_5
value: 37.551
- type: precision_at_1
value: 22.192
- type: precision_at_10
value: 6.781
- type: precision_at_100
value: 0.963
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 14.365
- type: precision_at_5
value: 10.713000000000001
- type: recall_at_1
value: 21.698
- type: recall_at_10
value: 64.79
- type: recall_at_100
value: 91.071
- type: recall_at_1000
value: 98.883
- type: recall_at_3
value: 41.611
- type: recall_at_5
value: 51.459999999999994
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.15823073415413
- type: f1
value: 96.00362034963248
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.12722298221614
- type: f1
value: 70.46888967516227
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.77673167451245
- type: f1
value: 77.60202561132175
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 82.09145931405514
- type: f1
value: 81.7701921473406
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 36.52153488185864
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 36.80090398444147
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.807141746058605
- type: mrr
value: 32.85025611455029
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.920999999999999
- type: map_at_10
value: 16.049
- type: map_at_100
value: 16.049
- type: map_at_1000
value: 16.049
- type: map_at_3
value: 11.865
- type: map_at_5
value: 13.657
- type: mrr_at_1
value: 53.87
- type: mrr_at_10
value: 62.291
- type: mrr_at_100
value: 62.291
- type: mrr_at_1000
value: 62.291
- type: mrr_at_3
value: 60.681
- type: mrr_at_5
value: 61.61
- type: ndcg_at_1
value: 51.23799999999999
- type: ndcg_at_10
value: 40.892
- type: ndcg_at_100
value: 26.951999999999998
- type: ndcg_at_1000
value: 26.474999999999998
- type: ndcg_at_3
value: 46.821
- type: ndcg_at_5
value: 44.333
- type: precision_at_1
value: 53.251000000000005
- type: precision_at_10
value: 30.124000000000002
- type: precision_at_100
value: 3.012
- type: precision_at_1000
value: 0.301
- type: precision_at_3
value: 43.55
- type: precision_at_5
value: 38.266
- type: recall_at_1
value: 6.920999999999999
- type: recall_at_10
value: 20.852
- type: recall_at_100
value: 20.852
- type: recall_at_1000
value: 20.852
- type: recall_at_3
value: 13.628000000000002
- type: recall_at_5
value: 16.273
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 46.827999999999996
- type: map_at_10
value: 63.434000000000005
- type: map_at_100
value: 63.434000000000005
- type: map_at_1000
value: 63.434000000000005
- type: map_at_3
value: 59.794000000000004
- type: map_at_5
value: 62.08
- type: mrr_at_1
value: 52.288999999999994
- type: mrr_at_10
value: 65.95
- type: mrr_at_100
value: 65.95
- type: mrr_at_1000
value: 65.95
- type: mrr_at_3
value: 63.413
- type: mrr_at_5
value: 65.08
- type: ndcg_at_1
value: 52.288999999999994
- type: ndcg_at_10
value: 70.301
- type: ndcg_at_100
value: 70.301
- type: ndcg_at_1000
value: 70.301
- type: ndcg_at_3
value: 63.979
- type: ndcg_at_5
value: 67.582
- type: precision_at_1
value: 52.288999999999994
- type: precision_at_10
value: 10.576
- type: precision_at_100
value: 1.058
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 28.177000000000003
- type: precision_at_5
value: 19.073
- type: recall_at_1
value: 46.827999999999996
- type: recall_at_10
value: 88.236
- type: recall_at_100
value: 88.236
- type: recall_at_1000
value: 88.236
- type: recall_at_3
value: 72.371
- type: recall_at_5
value: 80.56
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.652
- type: map_at_10
value: 85.953
- type: map_at_100
value: 85.953
- type: map_at_1000
value: 85.953
- type: map_at_3
value: 83.05399999999999
- type: map_at_5
value: 84.89
- type: mrr_at_1
value: 82.42
- type: mrr_at_10
value: 88.473
- type: mrr_at_100
value: 88.473
- type: mrr_at_1000
value: 88.473
- type: mrr_at_3
value: 87.592
- type: mrr_at_5
value: 88.211
- type: ndcg_at_1
value: 82.44
- type: ndcg_at_10
value: 89.467
- type: ndcg_at_100
value: 89.33
- type: ndcg_at_1000
value: 89.33
- type: ndcg_at_3
value: 86.822
- type: ndcg_at_5
value: 88.307
- type: precision_at_1
value: 82.44
- type: precision_at_10
value: 13.616
- type: precision_at_100
value: 1.362
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 38.117000000000004
- type: precision_at_5
value: 25.05
- type: recall_at_1
value: 71.652
- type: recall_at_10
value: 96.224
- type: recall_at_100
value: 96.224
- type: recall_at_1000
value: 96.224
- type: recall_at_3
value: 88.571
- type: recall_at_5
value: 92.812
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 61.295010338050474
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 67.26380819328142
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.683
- type: map_at_10
value: 14.924999999999999
- type: map_at_100
value: 17.532
- type: map_at_1000
value: 17.875
- type: map_at_3
value: 10.392
- type: map_at_5
value: 12.592
- type: mrr_at_1
value: 28.000000000000004
- type: mrr_at_10
value: 39.951
- type: mrr_at_100
value: 41.025
- type: mrr_at_1000
value: 41.056
- type: mrr_at_3
value: 36.317
- type: mrr_at_5
value: 38.412
- type: ndcg_at_1
value: 28.000000000000004
- type: ndcg_at_10
value: 24.410999999999998
- type: ndcg_at_100
value: 33.79
- type: ndcg_at_1000
value: 39.035
- type: ndcg_at_3
value: 22.845
- type: ndcg_at_5
value: 20.080000000000002
- type: precision_at_1
value: 28.000000000000004
- type: precision_at_10
value: 12.790000000000001
- type: precision_at_100
value: 2.633
- type: precision_at_1000
value: 0.388
- type: precision_at_3
value: 21.367
- type: precision_at_5
value: 17.7
- type: recall_at_1
value: 5.683
- type: recall_at_10
value: 25.91
- type: recall_at_100
value: 53.443
- type: recall_at_1000
value: 78.73
- type: recall_at_3
value: 13.003
- type: recall_at_5
value: 17.932000000000002
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.677978681023
- type: cos_sim_spearman
value: 83.13093441058189
- type: euclidean_pearson
value: 83.35535759341572
- type: euclidean_spearman
value: 83.42583744219611
- type: manhattan_pearson
value: 83.2243124045889
- type: manhattan_spearman
value: 83.39801618652632
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 81.68960206569666
- type: cos_sim_spearman
value: 77.3368966488535
- type: euclidean_pearson
value: 77.62828980560303
- type: euclidean_spearman
value: 76.77951481444651
- type: manhattan_pearson
value: 77.88637240839041
- type: manhattan_spearman
value: 77.22157841466188
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.18745821650724
- type: cos_sim_spearman
value: 85.04423285574542
- type: euclidean_pearson
value: 85.46604816931023
- type: euclidean_spearman
value: 85.5230593932974
- type: manhattan_pearson
value: 85.57912805986261
- type: manhattan_spearman
value: 85.65955905111873
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.6715333300355
- type: cos_sim_spearman
value: 82.9058522514908
- type: euclidean_pearson
value: 83.9640357424214
- type: euclidean_spearman
value: 83.60415457472637
- type: manhattan_pearson
value: 84.05621005853469
- type: manhattan_spearman
value: 83.87077724707746
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.82422928098886
- type: cos_sim_spearman
value: 88.12660311894628
- type: euclidean_pearson
value: 87.50974805056555
- type: euclidean_spearman
value: 87.91957275596677
- type: manhattan_pearson
value: 87.74119404878883
- type: manhattan_spearman
value: 88.2808922165719
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.80605838552093
- type: cos_sim_spearman
value: 86.24123388765678
- type: euclidean_pearson
value: 85.32648347339814
- type: euclidean_spearman
value: 85.60046671950158
- type: manhattan_pearson
value: 85.53800168487811
- type: manhattan_spearman
value: 85.89542420480763
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.87540978988132
- type: cos_sim_spearman
value: 90.12715295099461
- type: euclidean_pearson
value: 91.61085993525275
- type: euclidean_spearman
value: 91.31835942311758
- type: manhattan_pearson
value: 91.57500202032934
- type: manhattan_spearman
value: 91.1790925526635
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 69.87136205329556
- type: cos_sim_spearman
value: 68.6253154635078
- type: euclidean_pearson
value: 68.91536015034222
- type: euclidean_spearman
value: 67.63744649352542
- type: manhattan_pearson
value: 69.2000713045275
- type: manhattan_spearman
value: 68.16002901587316
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.21849551039082
- type: cos_sim_spearman
value: 85.6392959372461
- type: euclidean_pearson
value: 85.92050852609488
- type: euclidean_spearman
value: 85.97205649009734
- type: manhattan_pearson
value: 86.1031154802254
- type: manhattan_spearman
value: 86.26791155517466
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.83953958636627
- type: mrr
value: 96.71167612344082
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 64.994
- type: map_at_10
value: 74.763
- type: map_at_100
value: 75.127
- type: map_at_1000
value: 75.143
- type: map_at_3
value: 71.824
- type: map_at_5
value: 73.71
- type: mrr_at_1
value: 68.333
- type: mrr_at_10
value: 75.749
- type: mrr_at_100
value: 75.922
- type: mrr_at_1000
value: 75.938
- type: mrr_at_3
value: 73.556
- type: mrr_at_5
value: 74.739
- type: ndcg_at_1
value: 68.333
- type: ndcg_at_10
value: 79.174
- type: ndcg_at_100
value: 80.41
- type: ndcg_at_1000
value: 80.804
- type: ndcg_at_3
value: 74.361
- type: ndcg_at_5
value: 76.861
- type: precision_at_1
value: 68.333
- type: precision_at_10
value: 10.333
- type: precision_at_100
value: 1.0999999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.778
- type: precision_at_5
value: 19.067
- type: recall_at_1
value: 64.994
- type: recall_at_10
value: 91.822
- type: recall_at_100
value: 97.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 78.878
- type: recall_at_5
value: 85.172
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.72079207920792
- type: cos_sim_ap
value: 93.00265215525152
- type: cos_sim_f1
value: 85.06596306068602
- type: cos_sim_precision
value: 90.05586592178771
- type: cos_sim_recall
value: 80.60000000000001
- type: dot_accuracy
value: 99.66039603960397
- type: dot_ap
value: 91.22371407479089
- type: dot_f1
value: 82.34693877551021
- type: dot_precision
value: 84.0625
- type: dot_recall
value: 80.7
- type: euclidean_accuracy
value: 99.71881188118812
- type: euclidean_ap
value: 92.88449963304728
- type: euclidean_f1
value: 85.19480519480518
- type: euclidean_precision
value: 88.64864864864866
- type: euclidean_recall
value: 82.0
- type: manhattan_accuracy
value: 99.73267326732673
- type: manhattan_ap
value: 93.23055393056883
- type: manhattan_f1
value: 85.88957055214725
- type: manhattan_precision
value: 87.86610878661088
- type: manhattan_recall
value: 84.0
- type: max_accuracy
value: 99.73267326732673
- type: max_ap
value: 93.23055393056883
- type: max_f1
value: 85.88957055214725
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 77.3305735900358
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 41.32967136540674
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.95514866379359
- type: mrr
value: 56.95423245055598
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.783007208997144
- type: cos_sim_spearman
value: 30.373444721540533
- type: dot_pearson
value: 29.210604111143905
- type: dot_spearman
value: 29.98809758085659
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.234
- type: map_at_10
value: 1.894
- type: map_at_100
value: 1.894
- type: map_at_1000
value: 1.894
- type: map_at_3
value: 0.636
- type: map_at_5
value: 1.0
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.667
- type: mrr_at_100
value: 93.667
- type: mrr_at_1000
value: 93.667
- type: mrr_at_3
value: 93.667
- type: mrr_at_5
value: 93.667
- type: ndcg_at_1
value: 85.0
- type: ndcg_at_10
value: 74.798
- type: ndcg_at_100
value: 16.462
- type: ndcg_at_1000
value: 7.0889999999999995
- type: ndcg_at_3
value: 80.754
- type: ndcg_at_5
value: 77.319
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 78.0
- type: precision_at_100
value: 7.8
- type: precision_at_1000
value: 0.7799999999999999
- type: precision_at_3
value: 83.333
- type: precision_at_5
value: 80.80000000000001
- type: recall_at_1
value: 0.234
- type: recall_at_10
value: 2.093
- type: recall_at_100
value: 2.093
- type: recall_at_1000
value: 2.093
- type: recall_at_3
value: 0.662
- type: recall_at_5
value: 1.0739999999999998
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.703
- type: map_at_10
value: 10.866000000000001
- type: map_at_100
value: 10.866000000000001
- type: map_at_1000
value: 10.866000000000001
- type: map_at_3
value: 5.909
- type: map_at_5
value: 7.35
- type: mrr_at_1
value: 36.735
- type: mrr_at_10
value: 53.583000000000006
- type: mrr_at_100
value: 53.583000000000006
- type: mrr_at_1000
value: 53.583000000000006
- type: mrr_at_3
value: 49.32
- type: mrr_at_5
value: 51.769
- type: ndcg_at_1
value: 34.694
- type: ndcg_at_10
value: 27.926000000000002
- type: ndcg_at_100
value: 22.701
- type: ndcg_at_1000
value: 22.701
- type: ndcg_at_3
value: 32.073
- type: ndcg_at_5
value: 28.327999999999996
- type: precision_at_1
value: 36.735
- type: precision_at_10
value: 24.694
- type: precision_at_100
value: 2.469
- type: precision_at_1000
value: 0.247
- type: precision_at_3
value: 31.973000000000003
- type: precision_at_5
value: 26.939
- type: recall_at_1
value: 2.703
- type: recall_at_10
value: 17.702
- type: recall_at_100
value: 17.702
- type: recall_at_1000
value: 17.702
- type: recall_at_3
value: 7.208
- type: recall_at_5
value: 9.748999999999999
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.79960000000001
- type: ap
value: 15.467565415565815
- type: f1
value: 55.28639823443618
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.7792869269949
- type: f1
value: 65.08597154774318
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 55.70352297774293
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 88.27561542588067
- type: cos_sim_ap
value: 81.08262141256193
- type: cos_sim_f1
value: 73.82341501361338
- type: cos_sim_precision
value: 72.5720112159062
- type: cos_sim_recall
value: 75.11873350923483
- type: dot_accuracy
value: 86.66030875603504
- type: dot_ap
value: 76.6052349228621
- type: dot_f1
value: 70.13897280966768
- type: dot_precision
value: 64.70457079152732
- type: dot_recall
value: 76.56992084432717
- type: euclidean_accuracy
value: 88.37098408535495
- type: euclidean_ap
value: 81.12515230092113
- type: euclidean_f1
value: 74.10338225909379
- type: euclidean_precision
value: 71.76761433868974
- type: euclidean_recall
value: 76.59630606860158
- type: manhattan_accuracy
value: 88.34118137926924
- type: manhattan_ap
value: 80.95751834536561
- type: manhattan_f1
value: 73.9119496855346
- type: manhattan_precision
value: 70.625
- type: manhattan_recall
value: 77.5197889182058
- type: max_accuracy
value: 88.37098408535495
- type: max_ap
value: 81.12515230092113
- type: max_f1
value: 74.10338225909379
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.79896767182831
- type: cos_sim_ap
value: 87.40071784061065
- type: cos_sim_f1
value: 79.87753144712087
- type: cos_sim_precision
value: 76.67304015296367
- type: cos_sim_recall
value: 83.3615645210964
- type: dot_accuracy
value: 88.95486474948578
- type: dot_ap
value: 86.00227979119943
- type: dot_f1
value: 78.54601474525914
- type: dot_precision
value: 75.00525394045535
- type: dot_recall
value: 82.43763473975977
- type: euclidean_accuracy
value: 89.7892653393876
- type: euclidean_ap
value: 87.42174706480819
- type: euclidean_f1
value: 80.07283321194465
- type: euclidean_precision
value: 75.96738529574351
- type: euclidean_recall
value: 84.6473668001232
- type: manhattan_accuracy
value: 89.8474793340319
- type: manhattan_ap
value: 87.47814292587448
- type: manhattan_f1
value: 80.15461150280949
- type: manhattan_precision
value: 74.88798234468
- type: manhattan_recall
value: 86.21804742839544
- type: max_accuracy
value: 89.8474793340319
- type: max_ap
value: 87.47814292587448
- type: max_f1
value: 80.15461150280949
---
# Model Summary
> GritLM is a generative representational instruction-tuned language model. It unifies text representation (embedding) and text generation in a single model that achieves state-of-the-art performance on both types of tasks.
- **Repository:** [ContextualAI/gritlm](https://github.com/ContextualAI/gritlm)
- **Paper:** https://arxiv.org/abs/2402.09906
- **Logs:** https://wandb.ai/muennighoff/gritlm/runs/0uui712t/overview
- **Script:** https://github.com/ContextualAI/gritlm/blob/main/scripts/training/train_gritlm_7b.sh
| Model | Description |
|-------|-------------|
| [GritLM 7B](https://hf.co/GritLM/GritLM-7B) | Mistral 7B finetuned using GRIT |
| [GritLM 8x7B](https://hf.co/GritLM/GritLM-8x7B) | Mixtral 8x7B finetuned using GRIT |
# Use
Model usage is documented [here](https://github.com/ContextualAI/gritlm?tab=readme-ov-file#inference); a minimal sketch follows below.
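The sketch below illustrates the two modes (text embedding and text generation) following the pattern documented in that repository. It is a minimal sketch, assuming the `gritlm` Python package is installed; the `GritLM` class, its `encode` method, and the `<|embed|>` instruction format are taken from the repository README and should be verified there rather than treated as part of this card.
```python
# Minimal sketch of GritLM's dual usage (embedding + generation), adapted from the
# linked gritlm repository README. Assumes `pip install gritlm`; method names and the
# embedding instruction format are assumptions to check against that README.
from gritlm import GritLM

# Load once; the same weights serve both embedding and generation.
model = GritLM("GritLM/GritLM-7B", torch_dtype="auto")

### Embedding / representation ###
def gritlm_instruction(instruction):
    # Queries are wrapped with an instruction; documents use the bare embed token.
    return "<|user|>\n" + instruction + "\n<|embed|>\n" if instruction else "<|embed|>\n"

queries = ["Generative Representational Instruction Tuning"]
documents = ["All text-based language problems can be reduced to either generation or embedding."]

q_rep = model.encode(queries, instruction=gritlm_instruction("Retrieve the abstract for a paper title"))
d_rep = model.encode(documents, instruction=gritlm_instruction(""))

### Generation ###
messages = [{"role": "user", "content": "Write one sentence about instruction tuning."}]
encoded = model.tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
gen = model.generate(encoded.to(model.device), max_new_tokens=64, do_sample=False)
print(model.tokenizer.batch_decode(gen)[0])
```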
# Citation
```bibtex
@misc{muennighoff2024generative,
title={Generative Representational Instruction Tuning},
author={Niklas Muennighoff and Hongjin Su and Liang Wang and Nan Yang and Furu Wei and Tao Yu and Amanpreet Singh and Douwe Kiela},
year={2024},
eprint={2402.09906},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {} | RichardErkhov/GritLM_-_GritLM-7B-4bits | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"custom_code",
"arxiv:2402.09906",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-03T17:05:23+00:00 | [
"2402.09906"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #conversational #custom_code #arxiv-2402.09906 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
GritLM-7B - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
pipeline\_tag: text-generation
inference: true
license: apache-2.0
datasets:
* GritLM/tulu2
tags:
* mteb
model-index:
* name: GritLM-7B
results:
+ task:
type: Classification
dataset:
type: mteb/amazon\_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 81.17910447761194
- type: ap
value: 46.26260671758199
- type: f1
value: 75.44565719934167
+ task:
type: Classification
dataset:
type: mteb/amazon\_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.5161
- type: ap
value: 94.79131981460425
- type: f1
value: 96.51506148413065
+ task:
type: Classification
dataset:
type: mteb/amazon\_reviews\_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 57.806000000000004
- type: f1
value: 56.78350156257903
+ task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 38.478
- type: map\_at\_10
value: 54.955
- type: map\_at\_100
value: 54.955
- type: map\_at\_1000
value: 54.955
- type: map\_at\_3
value: 50.888999999999996
- type: map\_at\_5
value: 53.349999999999994
- type: mrr\_at\_1
value: 39.757999999999996
- type: mrr\_at\_10
value: 55.449000000000005
- type: mrr\_at\_100
value: 55.449000000000005
- type: mrr\_at\_1000
value: 55.449000000000005
- type: mrr\_at\_3
value: 51.37500000000001
- type: mrr\_at\_5
value: 53.822
- type: ndcg\_at\_1
value: 38.478
- type: ndcg\_at\_10
value: 63.239999999999995
- type: ndcg\_at\_100
value: 63.239999999999995
- type: ndcg\_at\_1000
value: 63.239999999999995
- type: ndcg\_at\_3
value: 54.935
- type: ndcg\_at\_5
value: 59.379000000000005
- type: precision\_at\_1
value: 38.478
- type: precision\_at\_10
value: 8.933
- type: precision\_at\_100
value: 0.893
- type: precision\_at\_1000
value: 0.089
- type: precision\_at\_3
value: 22.214
- type: precision\_at\_5
value: 15.491
- type: recall\_at\_1
value: 38.478
- type: recall\_at\_10
value: 89.331
- type: recall\_at\_100
value: 89.331
- type: recall\_at\_1000
value: 89.331
- type: recall\_at\_3
value: 66.643
- type: recall\_at\_5
value: 77.45400000000001
+ task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v\_measure
value: 51.67144081472449
+ task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v\_measure
value: 48.11256154264126
+ task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.33801955487878
- type: mrr
value: 80.71549487754474
+ task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos\_sim\_pearson
value: 88.1935203751726
- type: cos\_sim\_spearman
value: 86.35497970498659
- type: euclidean\_pearson
value: 85.46910708503744
- type: euclidean\_spearman
value: 85.13928935405485
- type: manhattan\_pearson
value: 85.68373836333303
- type: manhattan\_spearman
value: 85.40013867117746
+ task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 88.46753246753248
- type: f1
value: 88.43006344981134
+ task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v\_measure
value: 40.86793640310432
+ task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v\_measure
value: 39.80291334130727
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 38.421
- type: map\_at\_10
value: 52.349000000000004
- type: map\_at\_100
value: 52.349000000000004
- type: map\_at\_1000
value: 52.349000000000004
- type: map\_at\_3
value: 48.17
- type: map\_at\_5
value: 50.432
- type: mrr\_at\_1
value: 47.353
- type: mrr\_at\_10
value: 58.387
- type: mrr\_at\_100
value: 58.387
- type: mrr\_at\_1000
value: 58.387
- type: mrr\_at\_3
value: 56.199
- type: mrr\_at\_5
value: 57.487
- type: ndcg\_at\_1
value: 47.353
- type: ndcg\_at\_10
value: 59.202
- type: ndcg\_at\_100
value: 58.848
- type: ndcg\_at\_1000
value: 58.831999999999994
- type: ndcg\_at\_3
value: 54.112
- type: ndcg\_at\_5
value: 56.312
- type: precision\_at\_1
value: 47.353
- type: precision\_at\_10
value: 11.459
- type: precision\_at\_100
value: 1.146
- type: precision\_at\_1000
value: 0.11499999999999999
- type: precision\_at\_3
value: 26.133
- type: precision\_at\_5
value: 18.627
- type: recall\_at\_1
value: 38.421
- type: recall\_at\_10
value: 71.89
- type: recall\_at\_100
value: 71.89
- type: recall\_at\_1000
value: 71.89
- type: recall\_at\_3
value: 56.58
- type: recall\_at\_5
value: 63.125
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 38.025999999999996
- type: map\_at\_10
value: 50.590999999999994
- type: map\_at\_100
value: 51.99700000000001
- type: map\_at\_1000
value: 52.11599999999999
- type: map\_at\_3
value: 47.435
- type: map\_at\_5
value: 49.236000000000004
- type: mrr\_at\_1
value: 48.28
- type: mrr\_at\_10
value: 56.814
- type: mrr\_at\_100
value: 57.446
- type: mrr\_at\_1000
value: 57.476000000000006
- type: mrr\_at\_3
value: 54.958
- type: mrr\_at\_5
value: 56.084999999999994
- type: ndcg\_at\_1
value: 48.28
- type: ndcg\_at\_10
value: 56.442
- type: ndcg\_at\_100
value: 60.651999999999994
- type: ndcg\_at\_1000
value: 62.187000000000005
- type: ndcg\_at\_3
value: 52.866
- type: ndcg\_at\_5
value: 54.515
- type: precision\_at\_1
value: 48.28
- type: precision\_at\_10
value: 10.586
- type: precision\_at\_100
value: 1.6310000000000002
- type: precision\_at\_1000
value: 0.20600000000000002
- type: precision\_at\_3
value: 25.945
- type: precision\_at\_5
value: 18.076
- type: recall\_at\_1
value: 38.025999999999996
- type: recall\_at\_10
value: 66.11399999999999
- type: recall\_at\_100
value: 83.339
- type: recall\_at\_1000
value: 92.413
- type: recall\_at\_3
value: 54.493
- type: recall\_at\_5
value: 59.64699999999999
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 47.905
- type: map\_at\_10
value: 61.58
- type: map\_at\_100
value: 62.605
- type: map\_at\_1000
value: 62.637
- type: map\_at\_3
value: 58.074000000000005
- type: map\_at\_5
value: 60.260000000000005
- type: mrr\_at\_1
value: 54.42
- type: mrr\_at\_10
value: 64.847
- type: mrr\_at\_100
value: 65.403
- type: mrr\_at\_1000
value: 65.41900000000001
- type: mrr\_at\_3
value: 62.675000000000004
- type: mrr\_at\_5
value: 64.101
- type: ndcg\_at\_1
value: 54.42
- type: ndcg\_at\_10
value: 67.394
- type: ndcg\_at\_100
value: 70.846
- type: ndcg\_at\_1000
value: 71.403
- type: ndcg\_at\_3
value: 62.025
- type: ndcg\_at\_5
value: 65.032
- type: precision\_at\_1
value: 54.42
- type: precision\_at\_10
value: 10.646
- type: precision\_at\_100
value: 1.325
- type: precision\_at\_1000
value: 0.13999999999999999
- type: precision\_at\_3
value: 27.398
- type: precision\_at\_5
value: 18.796
- type: recall\_at\_1
value: 47.905
- type: recall\_at\_10
value: 80.84599999999999
- type: recall\_at\_100
value: 95.078
- type: recall\_at\_1000
value: 98.878
- type: recall\_at\_3
value: 67.05600000000001
- type: recall\_at\_5
value: 74.261
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 30.745
- type: map\_at\_10
value: 41.021
- type: map\_at\_100
value: 41.021
- type: map\_at\_1000
value: 41.021
- type: map\_at\_3
value: 37.714999999999996
- type: map\_at\_5
value: 39.766
- type: mrr\_at\_1
value: 33.559
- type: mrr\_at\_10
value: 43.537
- type: mrr\_at\_100
value: 43.537
- type: mrr\_at\_1000
value: 43.537
- type: mrr\_at\_3
value: 40.546
- type: mrr\_at\_5
value: 42.439
- type: ndcg\_at\_1
value: 33.559
- type: ndcg\_at\_10
value: 46.781
- type: ndcg\_at\_100
value: 46.781
- type: ndcg\_at\_1000
value: 46.781
- type: ndcg\_at\_3
value: 40.516000000000005
- type: ndcg\_at\_5
value: 43.957
- type: precision\_at\_1
value: 33.559
- type: precision\_at\_10
value: 7.198
- type: precision\_at\_100
value: 0.72
- type: precision\_at\_1000
value: 0.07200000000000001
- type: precision\_at\_3
value: 17.1
- type: precision\_at\_5
value: 12.316
- type: recall\_at\_1
value: 30.745
- type: recall\_at\_10
value: 62.038000000000004
- type: recall\_at\_100
value: 62.038000000000004
- type: recall\_at\_1000
value: 62.038000000000004
- type: recall\_at\_3
value: 45.378
- type: recall\_at\_5
value: 53.580000000000005
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 19.637999999999998
- type: map\_at\_10
value: 31.05
- type: map\_at\_100
value: 31.05
- type: map\_at\_1000
value: 31.05
- type: map\_at\_3
value: 27.628000000000004
- type: map\_at\_5
value: 29.767
- type: mrr\_at\_1
value: 25.0
- type: mrr\_at\_10
value: 36.131
- type: mrr\_at\_100
value: 36.131
- type: mrr\_at\_1000
value: 36.131
- type: mrr\_at\_3
value: 33.333
- type: mrr\_at\_5
value: 35.143
- type: ndcg\_at\_1
value: 25.0
- type: ndcg\_at\_10
value: 37.478
- type: ndcg\_at\_100
value: 37.469
- type: ndcg\_at\_1000
value: 37.469
- type: ndcg\_at\_3
value: 31.757999999999996
- type: ndcg\_at\_5
value: 34.821999999999996
- type: precision\_at\_1
value: 25.0
- type: precision\_at\_10
value: 7.188999999999999
- type: precision\_at\_100
value: 0.719
- type: precision\_at\_1000
value: 0.07200000000000001
- type: precision\_at\_3
value: 15.837000000000002
- type: precision\_at\_5
value: 11.841
- type: recall\_at\_1
value: 19.637999999999998
- type: recall\_at\_10
value: 51.836000000000006
- type: recall\_at\_100
value: 51.836000000000006
- type: recall\_at\_1000
value: 51.836000000000006
- type: recall\_at\_3
value: 36.384
- type: recall\_at\_5
value: 43.964
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 34.884
- type: map\_at\_10
value: 47.88
- type: map\_at\_100
value: 47.88
- type: map\_at\_1000
value: 47.88
- type: map\_at\_3
value: 43.85
- type: map\_at\_5
value: 46.414
- type: mrr\_at\_1
value: 43.022
- type: mrr\_at\_10
value: 53.569
- type: mrr\_at\_100
value: 53.569
- type: mrr\_at\_1000
value: 53.569
- type: mrr\_at\_3
value: 51.075
- type: mrr\_at\_5
value: 52.725
- type: ndcg\_at\_1
value: 43.022
- type: ndcg\_at\_10
value: 54.461000000000006
- type: ndcg\_at\_100
value: 54.388000000000005
- type: ndcg\_at\_1000
value: 54.388000000000005
- type: ndcg\_at\_3
value: 48.864999999999995
- type: ndcg\_at\_5
value: 52.032000000000004
- type: precision\_at\_1
value: 43.022
- type: precision\_at\_10
value: 9.885
- type: precision\_at\_100
value: 0.988
- type: precision\_at\_1000
value: 0.099
- type: precision\_at\_3
value: 23.612
- type: precision\_at\_5
value: 16.997
- type: recall\_at\_1
value: 34.884
- type: recall\_at\_10
value: 68.12899999999999
- type: recall\_at\_100
value: 68.12899999999999
- type: recall\_at\_1000
value: 68.12899999999999
- type: recall\_at\_3
value: 52.428
- type: recall\_at\_5
value: 60.662000000000006
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 31.588
- type: map\_at\_10
value: 43.85
- type: map\_at\_100
value: 45.317
- type: map\_at\_1000
value: 45.408
- type: map\_at\_3
value: 39.73
- type: map\_at\_5
value: 42.122
- type: mrr\_at\_1
value: 38.927
- type: mrr\_at\_10
value: 49.582
- type: mrr\_at\_100
value: 50.39
- type: mrr\_at\_1000
value: 50.426
- type: mrr\_at\_3
value: 46.518
- type: mrr\_at\_5
value: 48.271
- type: ndcg\_at\_1
value: 38.927
- type: ndcg\_at\_10
value: 50.605999999999995
- type: ndcg\_at\_100
value: 56.22200000000001
- type: ndcg\_at\_1000
value: 57.724
- type: ndcg\_at\_3
value: 44.232
- type: ndcg\_at\_5
value: 47.233999999999995
- type: precision\_at\_1
value: 38.927
- type: precision\_at\_10
value: 9.429
- type: precision\_at\_100
value: 1.435
- type: precision\_at\_1000
value: 0.172
- type: precision\_at\_3
value: 21.271
- type: precision\_at\_5
value: 15.434000000000001
- type: recall\_at\_1
value: 31.588
- type: recall\_at\_10
value: 64.836
- type: recall\_at\_100
value: 88.066
- type: recall\_at\_1000
value: 97.748
- type: recall\_at\_3
value: 47.128
- type: recall\_at\_5
value: 54.954
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 31.956083333333336
- type: map\_at\_10
value: 43.33483333333333
- type: map\_at\_100
value: 44.64883333333333
- type: map\_at\_1000
value: 44.75
- type: map\_at\_3
value: 39.87741666666666
- type: map\_at\_5
value: 41.86766666666667
- type: mrr\_at\_1
value: 38.06341666666667
- type: mrr\_at\_10
value: 47.839666666666666
- type: mrr\_at\_100
value: 48.644000000000005
- type: mrr\_at\_1000
value: 48.68566666666667
- type: mrr\_at\_3
value: 45.26358333333334
- type: mrr\_at\_5
value: 46.790000000000006
- type: ndcg\_at\_1
value: 38.06341666666667
- type: ndcg\_at\_10
value: 49.419333333333334
- type: ndcg\_at\_100
value: 54.50166666666667
- type: ndcg\_at\_1000
value: 56.161166666666674
- type: ndcg\_at\_3
value: 43.982416666666666
- type: ndcg\_at\_5
value: 46.638083333333334
- type: precision\_at\_1
value: 38.06341666666667
- type: precision\_at\_10
value: 8.70858333333333
- type: precision\_at\_100
value: 1.327
- type: precision\_at\_1000
value: 0.165
- type: precision\_at\_3
value: 20.37816666666667
- type: precision\_at\_5
value: 14.516333333333334
- type: recall\_at\_1
value: 31.956083333333336
- type: recall\_at\_10
value: 62.69458333333334
- type: recall\_at\_100
value: 84.46433333333334
- type: recall\_at\_1000
value: 95.58449999999999
- type: recall\_at\_3
value: 47.52016666666666
- type: recall\_at\_5
value: 54.36066666666666
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 28.912
- type: map\_at\_10
value: 38.291
- type: map\_at\_100
value: 39.44
- type: map\_at\_1000
value: 39.528
- type: map\_at\_3
value: 35.638
- type: map\_at\_5
value: 37.218
- type: mrr\_at\_1
value: 32.822
- type: mrr\_at\_10
value: 41.661
- type: mrr\_at\_100
value: 42.546
- type: mrr\_at\_1000
value: 42.603
- type: mrr\_at\_3
value: 39.238
- type: mrr\_at\_5
value: 40.726
- type: ndcg\_at\_1
value: 32.822
- type: ndcg\_at\_10
value: 43.373
- type: ndcg\_at\_100
value: 48.638
- type: ndcg\_at\_1000
value: 50.654999999999994
- type: ndcg\_at\_3
value: 38.643
- type: ndcg\_at\_5
value: 41.126000000000005
- type: precision\_at\_1
value: 32.822
- type: precision\_at\_10
value: 6.8709999999999996
- type: precision\_at\_100
value: 1.032
- type: precision\_at\_1000
value: 0.128
- type: precision\_at\_3
value: 16.82
- type: precision\_at\_5
value: 11.718
- type: recall\_at\_1
value: 28.912
- type: recall\_at\_10
value: 55.376999999999995
- type: recall\_at\_100
value: 79.066
- type: recall\_at\_1000
value: 93.664
- type: recall\_at\_3
value: 42.569
- type: recall\_at\_5
value: 48.719
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 22.181
- type: map\_at\_10
value: 31.462
- type: map\_at\_100
value: 32.73
- type: map\_at\_1000
value: 32.848
- type: map\_at\_3
value: 28.57
- type: map\_at\_5
value: 30.182
- type: mrr\_at\_1
value: 27.185
- type: mrr\_at\_10
value: 35.846000000000004
- type: mrr\_at\_100
value: 36.811
- type: mrr\_at\_1000
value: 36.873
- type: mrr\_at\_3
value: 33.437
- type: mrr\_at\_5
value: 34.813
- type: ndcg\_at\_1
value: 27.185
- type: ndcg\_at\_10
value: 36.858000000000004
- type: ndcg\_at\_100
value: 42.501
- type: ndcg\_at\_1000
value: 44.945
- type: ndcg\_at\_3
value: 32.066
- type: ndcg\_at\_5
value: 34.29
- type: precision\_at\_1
value: 27.185
- type: precision\_at\_10
value: 6.752
- type: precision\_at\_100
value: 1.111
- type: precision\_at\_1000
value: 0.151
- type: precision\_at\_3
value: 15.290000000000001
- type: precision\_at\_5
value: 11.004999999999999
- type: recall\_at\_1
value: 22.181
- type: recall\_at\_10
value: 48.513
- type: recall\_at\_100
value: 73.418
- type: recall\_at\_1000
value: 90.306
- type: recall\_at\_3
value: 35.003
- type: recall\_at\_5
value: 40.876000000000005
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 33.934999999999995
- type: map\_at\_10
value: 44.727
- type: map\_at\_100
value: 44.727
- type: map\_at\_1000
value: 44.727
- type: map\_at\_3
value: 40.918
- type: map\_at\_5
value: 42.961
- type: mrr\_at\_1
value: 39.646
- type: mrr\_at\_10
value: 48.898
- type: mrr\_at\_100
value: 48.898
- type: mrr\_at\_1000
value: 48.898
- type: mrr\_at\_3
value: 45.896
- type: mrr\_at\_5
value: 47.514
- type: ndcg\_at\_1
value: 39.646
- type: ndcg\_at\_10
value: 50.817
- type: ndcg\_at\_100
value: 50.803
- type: ndcg\_at\_1000
value: 50.803
- type: ndcg\_at\_3
value: 44.507999999999996
- type: ndcg\_at\_5
value: 47.259
- type: precision\_at\_1
value: 39.646
- type: precision\_at\_10
value: 8.759
- type: precision\_at\_100
value: 0.876
- type: precision\_at\_1000
value: 0.08800000000000001
- type: precision\_at\_3
value: 20.274
- type: precision\_at\_5
value: 14.366000000000001
- type: recall\_at\_1
value: 33.934999999999995
- type: recall\_at\_10
value: 65.037
- type: recall\_at\_100
value: 65.037
- type: recall\_at\_1000
value: 65.037
- type: recall\_at\_3
value: 47.439
- type: recall\_at\_5
value: 54.567
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 32.058
- type: map\_at\_10
value: 43.137
- type: map\_at\_100
value: 43.137
- type: map\_at\_1000
value: 43.137
- type: map\_at\_3
value: 39.882
- type: map\_at\_5
value: 41.379
- type: mrr\_at\_1
value: 38.933
- type: mrr\_at\_10
value: 48.344
- type: mrr\_at\_100
value: 48.344
- type: mrr\_at\_1000
value: 48.344
- type: mrr\_at\_3
value: 45.652
- type: mrr\_at\_5
value: 46.877
- type: ndcg\_at\_1
value: 38.933
- type: ndcg\_at\_10
value: 49.964
- type: ndcg\_at\_100
value: 49.242000000000004
- type: ndcg\_at\_1000
value: 49.222
- type: ndcg\_at\_3
value: 44.605
- type: ndcg\_at\_5
value: 46.501999999999995
- type: precision\_at\_1
value: 38.933
- type: precision\_at\_10
value: 9.427000000000001
- type: precision\_at\_100
value: 0.943
- type: precision\_at\_1000
value: 0.094
- type: precision\_at\_3
value: 20.685000000000002
- type: precision\_at\_5
value: 14.585
- type: recall\_at\_1
value: 32.058
- type: recall\_at\_10
value: 63.074
- type: recall\_at\_100
value: 63.074
- type: recall\_at\_1000
value: 63.074
- type: recall\_at\_3
value: 47.509
- type: recall\_at\_5
value: 52.455
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 26.029000000000003
- type: map\_at\_10
value: 34.646
- type: map\_at\_100
value: 34.646
- type: map\_at\_1000
value: 34.646
- type: map\_at\_3
value: 31.456
- type: map\_at\_5
value: 33.138
- type: mrr\_at\_1
value: 28.281
- type: mrr\_at\_10
value: 36.905
- type: mrr\_at\_100
value: 36.905
- type: mrr\_at\_1000
value: 36.905
- type: mrr\_at\_3
value: 34.011
- type: mrr\_at\_5
value: 35.638
- type: ndcg\_at\_1
value: 28.281
- type: ndcg\_at\_10
value: 40.159
- type: ndcg\_at\_100
value: 40.159
- type: ndcg\_at\_1000
value: 40.159
- type: ndcg\_at\_3
value: 33.995
- type: ndcg\_at\_5
value: 36.836999999999996
- type: precision\_at\_1
value: 28.281
- type: precision\_at\_10
value: 6.358999999999999
- type: precision\_at\_100
value: 0.636
- type: precision\_at\_1000
value: 0.064
- type: precision\_at\_3
value: 14.233
- type: precision\_at\_5
value: 10.314
- type: recall\_at\_1
value: 26.029000000000003
- type: recall\_at\_10
value: 55.08
- type: recall\_at\_100
value: 55.08
- type: recall\_at\_1000
value: 55.08
- type: recall\_at\_3
value: 38.487
- type: recall\_at\_5
value: 45.308
+ task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 12.842999999999998
- type: map\_at\_10
value: 22.101000000000003
- type: map\_at\_100
value: 24.319
- type: map\_at\_1000
value: 24.51
- type: map\_at\_3
value: 18.372
- type: map\_at\_5
value: 20.323
- type: mrr\_at\_1
value: 27.948
- type: mrr\_at\_10
value: 40.321
- type: mrr\_at\_100
value: 41.262
- type: mrr\_at\_1000
value: 41.297
- type: mrr\_at\_3
value: 36.558
- type: mrr\_at\_5
value: 38.824999999999996
- type: ndcg\_at\_1
value: 27.948
- type: ndcg\_at\_10
value: 30.906
- type: ndcg\_at\_100
value: 38.986
- type: ndcg\_at\_1000
value: 42.136
- type: ndcg\_at\_3
value: 24.911
- type: ndcg\_at\_5
value: 27.168999999999997
- type: precision\_at\_1
value: 27.948
- type: precision\_at\_10
value: 9.798
- type: precision\_at\_100
value: 1.8399999999999999
- type: precision\_at\_1000
value: 0.243
- type: precision\_at\_3
value: 18.328
- type: precision\_at\_5
value: 14.502
- type: recall\_at\_1
value: 12.842999999999998
- type: recall\_at\_10
value: 37.245
- type: recall\_at\_100
value: 64.769
- type: recall\_at\_1000
value: 82.055
- type: recall\_at\_3
value: 23.159
- type: recall\_at\_5
value: 29.113
+ task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 8.934000000000001
- type: map\_at\_10
value: 21.915000000000003
- type: map\_at\_100
value: 21.915000000000003
- type: map\_at\_1000
value: 21.915000000000003
- type: map\_at\_3
value: 14.623
- type: map\_at\_5
value: 17.841
- type: mrr\_at\_1
value: 71.25
- type: mrr\_at\_10
value: 78.994
- type: mrr\_at\_100
value: 78.994
- type: mrr\_at\_1000
value: 78.994
- type: mrr\_at\_3
value: 77.208
- type: mrr\_at\_5
value: 78.55799999999999
- type: ndcg\_at\_1
value: 60.62499999999999
- type: ndcg\_at\_10
value: 46.604
- type: ndcg\_at\_100
value: 35.653
- type: ndcg\_at\_1000
value: 35.531
- type: ndcg\_at\_3
value: 50.605
- type: ndcg\_at\_5
value: 48.730000000000004
- type: precision\_at\_1
value: 71.25
- type: precision\_at\_10
value: 37.75
- type: precision\_at\_100
value: 3.775
- type: precision\_at\_1000
value: 0.377
- type: precision\_at\_3
value: 54.417
- type: precision\_at\_5
value: 48.15
- type: recall\_at\_1
value: 8.934000000000001
- type: recall\_at\_10
value: 28.471000000000004
- type: recall\_at\_100
value: 28.471000000000004
- type: recall\_at\_1000
value: 28.471000000000004
- type: recall\_at\_3
value: 16.019
- type: recall\_at\_5
value: 21.410999999999998
+ task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.81
- type: f1
value: 47.987573380720114
+ task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 66.81899999999999
- type: map\_at\_10
value: 78.034
- type: map\_at\_100
value: 78.034
- type: map\_at\_1000
value: 78.034
- type: map\_at\_3
value: 76.43100000000001
- type: map\_at\_5
value: 77.515
- type: mrr\_at\_1
value: 71.542
- type: mrr\_at\_10
value: 81.638
- type: mrr\_at\_100
value: 81.638
- type: mrr\_at\_1000
value: 81.638
- type: mrr\_at\_3
value: 80.403
- type: mrr\_at\_5
value: 81.256
- type: ndcg\_at\_1
value: 71.542
- type: ndcg\_at\_10
value: 82.742
- type: ndcg\_at\_100
value: 82.741
- type: ndcg\_at\_1000
value: 82.741
- type: ndcg\_at\_3
value: 80.039
- type: ndcg\_at\_5
value: 81.695
- type: precision\_at\_1
value: 71.542
- type: precision\_at\_10
value: 10.387
- type: precision\_at\_100
value: 1.039
- type: precision\_at\_1000
value: 0.104
- type: precision\_at\_3
value: 31.447999999999997
- type: precision\_at\_5
value: 19.91
- type: recall\_at\_1
value: 66.81899999999999
- type: recall\_at\_10
value: 93.372
- type: recall\_at\_100
value: 93.372
- type: recall\_at\_1000
value: 93.372
- type: recall\_at\_3
value: 86.33
- type: recall\_at\_5
value: 90.347
+ task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 31.158
- type: map\_at\_10
value: 52.017
- type: map\_at\_100
value: 54.259
- type: map\_at\_1000
value: 54.367
- type: map\_at\_3
value: 45.738
- type: map\_at\_5
value: 49.283
- type: mrr\_at\_1
value: 57.87
- type: mrr\_at\_10
value: 66.215
- type: mrr\_at\_100
value: 66.735
- type: mrr\_at\_1000
value: 66.75
- type: mrr\_at\_3
value: 64.043
- type: mrr\_at\_5
value: 65.116
- type: ndcg\_at\_1
value: 57.87
- type: ndcg\_at\_10
value: 59.946999999999996
- type: ndcg\_at\_100
value: 66.31099999999999
- type: ndcg\_at\_1000
value: 67.75999999999999
- type: ndcg\_at\_3
value: 55.483000000000004
- type: ndcg\_at\_5
value: 56.891000000000005
- type: precision\_at\_1
value: 57.87
- type: precision\_at\_10
value: 16.497
- type: precision\_at\_100
value: 2.321
- type: precision\_at\_1000
value: 0.258
- type: precision\_at\_3
value: 37.14
- type: precision\_at\_5
value: 27.067999999999998
- type: recall\_at\_1
value: 31.158
- type: recall\_at\_10
value: 67.381
- type: recall\_at\_100
value: 89.464
- type: recall\_at\_1000
value: 97.989
- type: recall\_at\_3
value: 50.553000000000004
- type: recall\_at\_5
value: 57.824
+ task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 42.073
- type: map\_at\_10
value: 72.418
- type: map\_at\_100
value: 73.175
- type: map\_at\_1000
value: 73.215
- type: map\_at\_3
value: 68.791
- type: map\_at\_5
value: 71.19
- type: mrr\_at\_1
value: 84.146
- type: mrr\_at\_10
value: 88.994
- type: mrr\_at\_100
value: 89.116
- type: mrr\_at\_1000
value: 89.12
- type: mrr\_at\_3
value: 88.373
- type: mrr\_at\_5
value: 88.82
- type: ndcg\_at\_1
value: 84.146
- type: ndcg\_at\_10
value: 79.404
- type: ndcg\_at\_100
value: 81.83200000000001
- type: ndcg\_at\_1000
value: 82.524
- type: ndcg\_at\_3
value: 74.595
- type: ndcg\_at\_5
value: 77.474
- type: precision\_at\_1
value: 84.146
- type: precision\_at\_10
value: 16.753999999999998
- type: precision\_at\_100
value: 1.8599999999999999
- type: precision\_at\_1000
value: 0.19499999999999998
- type: precision\_at\_3
value: 48.854
- type: precision\_at\_5
value: 31.579
- type: recall\_at\_1
value: 42.073
- type: recall\_at\_10
value: 83.768
- type: recall\_at\_100
value: 93.018
- type: recall\_at\_1000
value: 97.481
- type: recall\_at\_3
value: 73.282
- type: recall\_at\_5
value: 78.947
+ task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 94.9968
- type: ap
value: 92.93892195862824
- type: f1
value: 94.99327998213761
+ task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map\_at\_1
value: 21.698
- type: map\_at\_10
value: 34.585
- type: map\_at\_100
value: 35.782000000000004
- type: map\_at\_1000
value: 35.825
- type: map\_at\_3
value: 30.397999999999996
- type: map\_at\_5
value: 32.72
- type: mrr\_at\_1
value: 22.192
- type: mrr\_at\_10
value: 35.085
- type: mrr\_at\_100
value: 36.218
- type: mrr\_at\_1000
value: 36.256
- type: mrr\_at\_3
value: 30.986000000000004
- type: mrr\_at\_5
value: 33.268
- type: ndcg\_at\_1
value: 22.192
- type: ndcg\_at\_10
value: 41.957
- type: ndcg\_at\_100
value: 47.658
- type: ndcg\_at\_1000
value: 48.697
- type: ndcg\_at\_3
value: 33.433
- type: ndcg\_at\_5
value: 37.551
- type: precision\_at\_1
value: 22.192
- type: precision\_at\_10
value: 6.781
- type: precision\_at\_100
value: 0.963
- type: precision\_at\_1000
value: 0.105
- type: precision\_at\_3
value: 14.365
- type: precision\_at\_5
value: 10.713000000000001
- type: recall\_at\_1
value: 21.698
- type: recall\_at\_10
value: 64.79
- type: recall\_at\_100
value: 91.071
- type: recall\_at\_1000
value: 98.883
- type: recall\_at\_3
value: 41.611
- type: recall\_at\_5
value: 51.459999999999994
+ task:
type: Classification
dataset:
type: mteb/mtop\_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.15823073415413
- type: f1
value: 96.00362034963248
+ task:
type: Classification
dataset:
type: mteb/mtop\_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.12722298221614
- type: f1
value: 70.46888967516227
+ task:
type: Classification
dataset:
type: mteb/amazon\_massive\_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.77673167451245
- type: f1
value: 77.60202561132175
+ task:
type: Classification
dataset:
type: mteb/amazon\_massive\_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 82.09145931405514
- type: f1
value: 81.7701921473406
+ task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v\_measure
value: 36.52153488185864
+ task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v\_measure
value: 36.80090398444147
+ task:
type: Reranking
dataset:
type: mteb/mind\_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.807141746058605
- type: mrr
value: 32.85025611455029
+ task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 6.920999999999999
- type: map\_at\_10
value: 16.049
- type: map\_at\_100
value: 16.049
- type: map\_at\_1000
value: 16.049
- type: map\_at\_3
value: 11.865
- type: map\_at\_5
value: 13.657
- type: mrr\_at\_1
value: 53.87
- type: mrr\_at\_10
value: 62.291
- type: mrr\_at\_100
value: 62.291
- type: mrr\_at\_1000
value: 62.291
- type: mrr\_at\_3
value: 60.681
- type: mrr\_at\_5
value: 61.61
- type: ndcg\_at\_1
value: 51.23799999999999
- type: ndcg\_at\_10
value: 40.892
- type: ndcg\_at\_100
value: 26.951999999999998
- type: ndcg\_at\_1000
value: 26.474999999999998
- type: ndcg\_at\_3
value: 46.821
- type: ndcg\_at\_5
value: 44.333
- type: precision\_at\_1
value: 53.251000000000005
- type: precision\_at\_10
value: 30.124000000000002
- type: precision\_at\_100
value: 3.012
- type: precision\_at\_1000
value: 0.301
- type: precision\_at\_3
value: 43.55
- type: precision\_at\_5
value: 38.266
- type: recall\_at\_1
value: 6.920999999999999
- type: recall\_at\_10
value: 20.852
- type: recall\_at\_100
value: 20.852
- type: recall\_at\_1000
value: 20.852
- type: recall\_at\_3
value: 13.628000000000002
- type: recall\_at\_5
value: 16.273
+ task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 46.827999999999996
- type: map\_at\_10
value: 63.434000000000005
- type: map\_at\_100
value: 63.434000000000005
- type: map\_at\_1000
value: 63.434000000000005
- type: map\_at\_3
value: 59.794000000000004
- type: map\_at\_5
value: 62.08
- type: mrr\_at\_1
value: 52.288999999999994
- type: mrr\_at\_10
value: 65.95
- type: mrr\_at\_100
value: 65.95
- type: mrr\_at\_1000
value: 65.95
- type: mrr\_at\_3
value: 63.413
- type: mrr\_at\_5
value: 65.08
- type: ndcg\_at\_1
value: 52.288999999999994
- type: ndcg\_at\_10
value: 70.301
- type: ndcg\_at\_100
value: 70.301
- type: ndcg\_at\_1000
value: 70.301
- type: ndcg\_at\_3
value: 63.979
- type: ndcg\_at\_5
value: 67.582
- type: precision\_at\_1
value: 52.288999999999994
- type: precision\_at\_10
value: 10.576
- type: precision\_at\_100
value: 1.058
- type: precision\_at\_1000
value: 0.106
- type: precision\_at\_3
value: 28.177000000000003
- type: precision\_at\_5
value: 19.073
- type: recall\_at\_1
value: 46.827999999999996
- type: recall\_at\_10
value: 88.236
- type: recall\_at\_100
value: 88.236
- type: recall\_at\_1000
value: 88.236
- type: recall\_at\_3
value: 72.371
- type: recall\_at\_5
value: 80.56
+ task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 71.652
- type: map\_at\_10
value: 85.953
- type: map\_at\_100
value: 85.953
- type: map\_at\_1000
value: 85.953
- type: map\_at\_3
value: 83.05399999999999
- type: map\_at\_5
value: 84.89
- type: mrr\_at\_1
value: 82.42
- type: mrr\_at\_10
value: 88.473
- type: mrr\_at\_100
value: 88.473
- type: mrr\_at\_1000
value: 88.473
- type: mrr\_at\_3
value: 87.592
- type: mrr\_at\_5
value: 88.211
- type: ndcg\_at\_1
value: 82.44
- type: ndcg\_at\_10
value: 89.467
- type: ndcg\_at\_100
value: 89.33
- type: ndcg\_at\_1000
value: 89.33
- type: ndcg\_at\_3
value: 86.822
- type: ndcg\_at\_5
value: 88.307
- type: precision\_at\_1
value: 82.44
- type: precision\_at\_10
value: 13.616
- type: precision\_at\_100
value: 1.362
- type: precision\_at\_1000
value: 0.136
- type: precision\_at\_3
value: 38.117000000000004
- type: precision\_at\_5
value: 25.05
- type: recall\_at\_1
value: 71.652
- type: recall\_at\_10
value: 96.224
- type: recall\_at\_100
value: 96.224
- type: recall\_at\_1000
value: 96.224
- type: recall\_at\_3
value: 88.571
- type: recall\_at\_5
value: 92.812
+ task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v\_measure
value: 61.295010338050474
+ task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v\_measure
value: 67.26380819328142
+ task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 5.683
- type: map\_at\_10
value: 14.924999999999999
- type: map\_at\_100
value: 17.532
- type: map\_at\_1000
value: 17.875
- type: map\_at\_3
value: 10.392
- type: map\_at\_5
value: 12.592
- type: mrr\_at\_1
value: 28.000000000000004
- type: mrr\_at\_10
value: 39.951
- type: mrr\_at\_100
value: 41.025
- type: mrr\_at\_1000
value: 41.056
- type: mrr\_at\_3
value: 36.317
- type: mrr\_at\_5
value: 38.412
- type: ndcg\_at\_1
value: 28.000000000000004
- type: ndcg\_at\_10
value: 24.410999999999998
- type: ndcg\_at\_100
value: 33.79
- type: ndcg\_at\_1000
value: 39.035
- type: ndcg\_at\_3
value: 22.845
- type: ndcg\_at\_5
value: 20.080000000000002
- type: precision\_at\_1
value: 28.000000000000004
- type: precision\_at\_10
value: 12.790000000000001
- type: precision\_at\_100
value: 2.633
- type: precision\_at\_1000
value: 0.388
- type: precision\_at\_3
value: 21.367
- type: precision\_at\_5
value: 17.7
- type: recall\_at\_1
value: 5.683
- type: recall\_at\_10
value: 25.91
- type: recall\_at\_100
value: 53.443
- type: recall\_at\_1000
value: 78.73
- type: recall\_at\_3
value: 13.003
- type: recall\_at\_5
value: 17.932000000000002
+ task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos\_sim\_pearson
value: 84.677978681023
- type: cos\_sim\_spearman
value: 83.13093441058189
- type: euclidean\_pearson
value: 83.35535759341572
- type: euclidean\_spearman
value: 83.42583744219611
- type: manhattan\_pearson
value: 83.2243124045889
- type: manhattan\_spearman
value: 83.39801618652632
+ task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos\_sim\_pearson
value: 81.68960206569666
- type: cos\_sim\_spearman
value: 77.3368966488535
- type: euclidean\_pearson
value: 77.62828980560303
- type: euclidean\_spearman
value: 76.77951481444651
- type: manhattan\_pearson
value: 77.88637240839041
- type: manhattan\_spearman
value: 77.22157841466188
+ task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos\_sim\_pearson
value: 84.18745821650724
- type: cos\_sim\_spearman
value: 85.04423285574542
- type: euclidean\_pearson
value: 85.46604816931023
- type: euclidean\_spearman
value: 85.5230593932974
- type: manhattan\_pearson
value: 85.57912805986261
- type: manhattan\_spearman
value: 85.65955905111873
+ task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos\_sim\_pearson
value: 83.6715333300355
- type: cos\_sim\_spearman
value: 82.9058522514908
- type: euclidean\_pearson
value: 83.9640357424214
- type: euclidean\_spearman
value: 83.60415457472637
- type: manhattan\_pearson
value: 84.05621005853469
- type: manhattan\_spearman
value: 83.87077724707746
+ task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos\_sim\_pearson
value: 87.82422928098886
- type: cos\_sim\_spearman
value: 88.12660311894628
- type: euclidean\_pearson
value: 87.50974805056555
- type: euclidean\_spearman
value: 87.91957275596677
- type: manhattan\_pearson
value: 87.74119404878883
- type: manhattan\_spearman
value: 88.2808922165719
+ task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos\_sim\_pearson
value: 84.80605838552093
- type: cos\_sim\_spearman
value: 86.24123388765678
- type: euclidean\_pearson
value: 85.32648347339814
- type: euclidean\_spearman
value: 85.60046671950158
- type: manhattan\_pearson
value: 85.53800168487811
- type: manhattan\_spearman
value: 85.89542420480763
+ task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos\_sim\_pearson
value: 89.87540978988132
- type: cos\_sim\_spearman
value: 90.12715295099461
- type: euclidean\_pearson
value: 91.61085993525275
- type: euclidean\_spearman
value: 91.31835942311758
- type: manhattan\_pearson
value: 91.57500202032934
- type: manhattan\_spearman
value: 91.1790925526635
+ task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos\_sim\_pearson
value: 69.87136205329556
- type: cos\_sim\_spearman
value: 68.6253154635078
- type: euclidean\_pearson
value: 68.91536015034222
- type: euclidean\_spearman
value: 67.63744649352542
- type: manhattan\_pearson
value: 69.2000713045275
- type: manhattan\_spearman
value: 68.16002901587316
+ task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos\_sim\_pearson
value: 85.21849551039082
- type: cos\_sim\_spearman
value: 85.6392959372461
- type: euclidean\_pearson
value: 85.92050852609488
- type: euclidean\_spearman
value: 85.97205649009734
- type: manhattan\_pearson
value: 86.1031154802254
- type: manhattan\_spearman
value: 86.26791155517466
+ task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.83953958636627
- type: mrr
value: 96.71167612344082
+ task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 64.994
- type: map\_at\_10
value: 74.763
- type: map\_at\_100
value: 75.127
- type: map\_at\_1000
value: 75.143
- type: map\_at\_3
value: 71.824
- type: map\_at\_5
value: 73.71
- type: mrr\_at\_1
value: 68.333
- type: mrr\_at\_10
value: 75.749
- type: mrr\_at\_100
value: 75.922
- type: mrr\_at\_1000
value: 75.938
- type: mrr\_at\_3
value: 73.556
- type: mrr\_at\_5
value: 74.739
- type: ndcg\_at\_1
value: 68.333
- type: ndcg\_at\_10
value: 79.174
- type: ndcg\_at\_100
value: 80.41
- type: ndcg\_at\_1000
value: 80.804
- type: ndcg\_at\_3
value: 74.361
- type: ndcg\_at\_5
value: 76.861
- type: precision\_at\_1
value: 68.333
- type: precision\_at\_10
value: 10.333
- type: precision\_at\_100
value: 1.0999999999999999
- type: precision\_at\_1000
value: 0.11299999999999999
- type: precision\_at\_3
value: 28.778
- type: precision\_at\_5
value: 19.067
- type: recall\_at\_1
value: 64.994
- type: recall\_at\_10
value: 91.822
- type: recall\_at\_100
value: 97.0
- type: recall\_at\_1000
value: 100.0
- type: recall\_at\_3
value: 78.878
- type: recall\_at\_5
value: 85.172
+ task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos\_sim\_accuracy
value: 99.72079207920792
- type: cos\_sim\_ap
value: 93.00265215525152
- type: cos\_sim\_f1
value: 85.06596306068602
- type: cos\_sim\_precision
value: 90.05586592178771
- type: cos\_sim\_recall
value: 80.60000000000001
- type: dot\_accuracy
value: 99.66039603960397
- type: dot\_ap
value: 91.22371407479089
- type: dot\_f1
value: 82.34693877551021
- type: dot\_precision
value: 84.0625
- type: dot\_recall
value: 80.7
- type: euclidean\_accuracy
value: 99.71881188118812
- type: euclidean\_ap
value: 92.88449963304728
- type: euclidean\_f1
value: 85.19480519480518
- type: euclidean\_precision
value: 88.64864864864866
- type: euclidean\_recall
value: 82.0
- type: manhattan\_accuracy
value: 99.73267326732673
- type: manhattan\_ap
value: 93.23055393056883
- type: manhattan\_f1
value: 85.88957055214725
- type: manhattan\_precision
value: 87.86610878661088
- type: manhattan\_recall
value: 84.0
- type: max\_accuracy
value: 99.73267326732673
- type: max\_ap
value: 93.23055393056883
- type: max\_f1
value: 85.88957055214725
+ task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v\_measure
value: 77.3305735900358
+ task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v\_measure
value: 41.32967136540674
+ task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.95514866379359
- type: mrr
value: 56.95423245055598
+ task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos\_sim\_pearson
value: 30.783007208997144
- type: cos\_sim\_spearman
value: 30.373444721540533
- type: dot\_pearson
value: 29.210604111143905
- type: dot\_spearman
value: 29.98809758085659
+ task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 0.234
- type: map\_at\_10
value: 1.894
- type: map\_at\_100
value: 1.894
- type: map\_at\_1000
value: 1.894
- type: map\_at\_3
value: 0.636
- type: map\_at\_5
value: 1.0
- type: mrr\_at\_1
value: 88.0
- type: mrr\_at\_10
value: 93.667
- type: mrr\_at\_100
value: 93.667
- type: mrr\_at\_1000
value: 93.667
- type: mrr\_at\_3
value: 93.667
- type: mrr\_at\_5
value: 93.667
- type: ndcg\_at\_1
value: 85.0
- type: ndcg\_at\_10
value: 74.798
- type: ndcg\_at\_100
value: 16.462
- type: ndcg\_at\_1000
value: 7.0889999999999995
- type: ndcg\_at\_3
value: 80.754
- type: ndcg\_at\_5
value: 77.319
- type: precision\_at\_1
value: 88.0
- type: precision\_at\_10
value: 78.0
- type: precision\_at\_100
value: 7.8
- type: precision\_at\_1000
value: 0.7799999999999999
- type: precision\_at\_3
value: 83.333
- type: precision\_at\_5
value: 80.80000000000001
- type: recall\_at\_1
value: 0.234
- type: recall\_at\_10
value: 2.093
- type: recall\_at\_100
value: 2.093
- type: recall\_at\_1000
value: 2.093
- type: recall\_at\_3
value: 0.662
- type: recall\_at\_5
value: 1.0739999999999998
+ task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 2.703
- type: map\_at\_10
value: 10.866000000000001
- type: map\_at\_100
value: 10.866000000000001
- type: map\_at\_1000
value: 10.866000000000001
- type: map\_at\_3
value: 5.909
- type: map\_at\_5
value: 7.35
- type: mrr\_at\_1
value: 36.735
- type: mrr\_at\_10
value: 53.583000000000006
- type: mrr\_at\_100
value: 53.583000000000006
- type: mrr\_at\_1000
value: 53.583000000000006
- type: mrr\_at\_3
value: 49.32
- type: mrr\_at\_5
value: 51.769
- type: ndcg\_at\_1
value: 34.694
- type: ndcg\_at\_10
value: 27.926000000000002
- type: ndcg\_at\_100
value: 22.701
- type: ndcg\_at\_1000
value: 22.701
- type: ndcg\_at\_3
value: 32.073
- type: ndcg\_at\_5
value: 28.327999999999996
- type: precision\_at\_1
value: 36.735
- type: precision\_at\_10
value: 24.694
- type: precision\_at\_100
value: 2.469
- type: precision\_at\_1000
value: 0.247
- type: precision\_at\_3
value: 31.973000000000003
- type: precision\_at\_5
value: 26.939
- type: recall\_at\_1
value: 2.703
- type: recall\_at\_10
value: 17.702
- type: recall\_at\_100
value: 17.702
- type: recall\_at\_1000
value: 17.702
- type: recall\_at\_3
value: 7.208
- type: recall\_at\_5
value: 9.748999999999999
+ task:
type: Classification
dataset:
type: mteb/toxic\_conversations\_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.79960000000001
- type: ap
value: 15.467565415565815
- type: f1
value: 55.28639823443618
+ task:
type: Classification
dataset:
type: mteb/tweet\_sentiment\_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.7792869269949
- type: f1
value: 65.08597154774318
+ task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v\_measure
value: 55.70352297774293
+ task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos\_sim\_accuracy
value: 88.27561542588067
- type: cos\_sim\_ap
value: 81.08262141256193
- type: cos\_sim\_f1
value: 73.82341501361338
- type: cos\_sim\_precision
value: 72.5720112159062
- type: cos\_sim\_recall
value: 75.11873350923483
- type: dot\_accuracy
value: 86.66030875603504
- type: dot\_ap
value: 76.6052349228621
- type: dot\_f1
value: 70.13897280966768
- type: dot\_precision
value: 64.70457079152732
- type: dot\_recall
value: 76.56992084432717
- type: euclidean\_accuracy
value: 88.37098408535495
- type: euclidean\_ap
value: 81.12515230092113
- type: euclidean\_f1
value: 74.10338225909379
- type: euclidean\_precision
value: 71.76761433868974
- type: euclidean\_recall
value: 76.59630606860158
- type: manhattan\_accuracy
value: 88.34118137926924
- type: manhattan\_ap
value: 80.95751834536561
- type: manhattan\_f1
value: 73.9119496855346
- type: manhattan\_precision
value: 70.625
- type: manhattan\_recall
value: 77.5197889182058
- type: max\_accuracy
value: 88.37098408535495
- type: max\_ap
value: 81.12515230092113
- type: max\_f1
value: 74.10338225909379
+ task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos\_sim\_accuracy
value: 89.79896767182831
- type: cos\_sim\_ap
value: 87.40071784061065
- type: cos\_sim\_f1
value: 79.87753144712087
- type: cos\_sim\_precision
value: 76.67304015296367
- type: cos\_sim\_recall
value: 83.3615645210964
- type: dot\_accuracy
value: 88.95486474948578
- type: dot\_ap
value: 86.00227979119943
- type: dot\_f1
value: 78.54601474525914
- type: dot\_precision
value: 75.00525394045535
- type: dot\_recall
value: 82.43763473975977
- type: euclidean\_accuracy
value: 89.7892653393876
- type: euclidean\_ap
value: 87.42174706480819
- type: euclidean\_f1
value: 80.07283321194465
- type: euclidean\_precision
value: 75.96738529574351
- type: euclidean\_recall
value: 84.6473668001232
- type: manhattan\_accuracy
value: 89.8474793340319
- type: manhattan\_ap
value: 87.47814292587448
- type: manhattan\_f1
value: 80.15461150280949
- type: manhattan\_precision
value: 74.88798234468
- type: manhattan\_recall
value: 86.21804742839544
- type: max\_accuracy
value: 89.8474793340319
- type: max\_ap
value: 87.47814292587448
- type: max\_f1
value: 80.15461150280949
---
Model Summary
=============
> GritLM is a generative representational instruction-tuned language model. It unifies text representation (embedding) and text generation into a single model, achieving state-of-the-art performance on both types of tasks.
* Repository: ContextualAI/gritlm
* Paper: URL
* Logs: URL
* Script: URL
Use
===
The model usage is documented here.
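For orientation, a minimal generation-side sketch is shown below. It is illustrative only: the model id is a placeholder for this checkpoint, `trust_remote_code=True` is assumed because the repository ships custom code, and the documentation linked above remains the authoritative reference for the combined embedding/generation interface.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch only: the model id is a placeholder for this checkpoint,
# and trust_remote_code=True is assumed because the repository ships custom code.
model_id = "path-or-id-of-this-checkpoint"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Summarize generative representational instruction tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```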
| [] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #custom_code #arxiv-2402.09906 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-model
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7582
- Accuracy: 0.7424
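As a usage illustration (not part of the original card): the checkpoint id below is taken from this card's metadata, the image path is a placeholder, and a standard `transformers` image-classification pipeline is assumed to be sufficient for loading it.

```python
from transformers import pipeline

# Sketch: the model id comes from this card's metadata; the image path is a placeholder.
classifier = pipeline("image-classification", model="carvalhaes/fine-tuned-model")
predictions = classifier("path/to/image.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```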
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
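The original training script is not included in this card; a `TrainingArguments` setup along the following lines would mirror the hyperparameters listed above (the output directory is a placeholder, and the Adam betas/epsilon shown in the list are the library defaults):

```python
from transformers import TrainingArguments

# Sketch matching the listed hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="fine-tuned-model",
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,   # 32 x 4 = 128 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```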
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.7206 | 0.9880 | 62 | 0.7582 | 0.7424 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-tiny-patch4-window7-224", "model-index": [{"name": "fine-tuned-model", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.7423864203694458, "name": "Accuracy"}]}]}]} | carvalhaes/fine-tuned-model | null | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:06:09+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| fine-tuned-model
================
This model is a fine-tuned version of microsoft/swin-tiny-patch4-window7-224 on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7582
* Accuracy: 0.7424
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5571
- F1 Score: 0.6983
- Accuracy: 0.701
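Because this is a PEFT adapter rather than a full checkpoint, loading it means attaching the adapter to the stated base model. A hedged sketch follows: the adapter id comes from this card's metadata, and the use of a sequence-classification head (and whether the base model needs `trust_remote_code`) is an assumption, since the card does not state the exact head class.

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Sketch: adapter id from this card's metadata; the sequence-classification head is assumed.
adapter_id = "mahdibaghbanzadeh/GUE_tf_3-seqsight_65536_512_47M-L32_f"
config = PeftConfig.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
base_model = AutoModelForSequenceClassification.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, adapter_id)
```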
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6152 | 0.93 | 200 | 0.5646 | 0.7015 | 0.706 |
| 0.5923 | 1.87 | 400 | 0.5661 | 0.6898 | 0.69 |
| 0.5867 | 2.8 | 600 | 0.5547 | 0.7215 | 0.723 |
| 0.5784 | 3.74 | 800 | 0.5597 | 0.7026 | 0.703 |
| 0.5757 | 4.67 | 1000 | 0.5493 | 0.7194 | 0.72 |
| 0.5707 | 5.61 | 1200 | 0.5421 | 0.7228 | 0.726 |
| 0.5658 | 6.54 | 1400 | 0.5426 | 0.7299 | 0.731 |
| 0.5638 | 7.48 | 1600 | 0.5426 | 0.7274 | 0.728 |
| 0.5608 | 8.41 | 1800 | 0.5390 | 0.7224 | 0.723 |
| 0.5628 | 9.35 | 2000 | 0.5391 | 0.7248 | 0.726 |
| 0.5553 | 10.28 | 2200 | 0.5445 | 0.7101 | 0.71 |
| 0.5525 | 11.21 | 2400 | 0.5418 | 0.7222 | 0.724 |
| 0.5518 | 12.15 | 2600 | 0.5403 | 0.7232 | 0.726 |
| 0.5459 | 13.08 | 2800 | 0.5447 | 0.7220 | 0.729 |
| 0.5457 | 14.02 | 3000 | 0.5390 | 0.7207 | 0.723 |
| 0.5439 | 14.95 | 3200 | 0.5381 | 0.7278 | 0.731 |
| 0.5425 | 15.89 | 3400 | 0.5380 | 0.7297 | 0.732 |
| 0.5397 | 16.82 | 3600 | 0.5406 | 0.7242 | 0.727 |
| 0.5351 | 17.76 | 3800 | 0.5399 | 0.7233 | 0.726 |
| 0.536 | 18.69 | 4000 | 0.5452 | 0.7218 | 0.722 |
| 0.534 | 19.63 | 4200 | 0.5418 | 0.7201 | 0.722 |
| 0.5342 | 20.56 | 4400 | 0.5423 | 0.7244 | 0.726 |
| 0.5274 | 21.5 | 4600 | 0.5477 | 0.7100 | 0.71 |
| 0.5269 | 22.43 | 4800 | 0.5466 | 0.7142 | 0.716 |
| 0.5285 | 23.36 | 5000 | 0.5517 | 0.7051 | 0.705 |
| 0.5224 | 24.3 | 5200 | 0.5521 | 0.6986 | 0.699 |
| 0.5194 | 25.23 | 5400 | 0.5508 | 0.7193 | 0.722 |
| 0.5245 | 26.17 | 5600 | 0.5442 | 0.7108 | 0.712 |
| 0.5155 | 27.1 | 5800 | 0.5491 | 0.7044 | 0.705 |
| 0.5161 | 28.04 | 6000 | 0.5447 | 0.7041 | 0.705 |
| 0.5114 | 28.97 | 6200 | 0.5540 | 0.7019 | 0.702 |
| 0.5161 | 29.91 | 6400 | 0.5514 | 0.7166 | 0.719 |
| 0.5109 | 30.84 | 6600 | 0.5514 | 0.7116 | 0.714 |
| 0.5064 | 31.78 | 6800 | 0.5529 | 0.7160 | 0.717 |
| 0.509 | 32.71 | 7000 | 0.5523 | 0.7072 | 0.709 |
| 0.5095 | 33.64 | 7200 | 0.5537 | 0.7158 | 0.717 |
| 0.5019 | 34.58 | 7400 | 0.5588 | 0.6950 | 0.695 |
| 0.5042 | 35.51 | 7600 | 0.5562 | 0.692 | 0.692 |
| 0.5029 | 36.45 | 7800 | 0.5594 | 0.7062 | 0.707 |
| 0.5029 | 37.38 | 8000 | 0.5603 | 0.6975 | 0.698 |
| 0.4968 | 38.32 | 8200 | 0.5590 | 0.7049 | 0.706 |
| 0.4992 | 39.25 | 8400 | 0.5634 | 0.7008 | 0.702 |
| 0.4965 | 40.19 | 8600 | 0.5624 | 0.7002 | 0.701 |
| 0.4974 | 41.12 | 8800 | 0.5622 | 0.7025 | 0.703 |
| 0.4989 | 42.06 | 9000 | 0.5610 | 0.7072 | 0.708 |
| 0.4962 | 42.99 | 9200 | 0.5612 | 0.6988 | 0.699 |
| 0.4983 | 43.93 | 9400 | 0.5612 | 0.7018 | 0.702 |
| 0.4954 | 44.86 | 9600 | 0.5618 | 0.7024 | 0.703 |
| 0.4947 | 45.79 | 9800 | 0.5622 | 0.7033 | 0.704 |
| 0.4901 | 46.73 | 10000 | 0.5631 | 0.6995 | 0.7 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_3-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T17:06:32+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_tf\_3-seqsight\_65536\_512\_47M-L32\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5571
* F1 Score: 0.6983
* Accuracy: 0.701
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4711
- F1 Score: 0.7738
- Accuracy: 0.774
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5857 | 1.34 | 200 | 0.5446 | 0.7070 | 0.709 |
| 0.5463 | 2.68 | 400 | 0.5318 | 0.7309 | 0.731 |
| 0.5395 | 4.03 | 600 | 0.5247 | 0.736 | 0.736 |
| 0.5355 | 5.37 | 800 | 0.5259 | 0.7386 | 0.739 |
| 0.5311 | 6.71 | 1000 | 0.5198 | 0.7450 | 0.746 |
| 0.5268 | 8.05 | 1200 | 0.5181 | 0.7480 | 0.748 |
| 0.5236 | 9.4 | 1400 | 0.5168 | 0.7421 | 0.743 |
| 0.5227 | 10.74 | 1600 | 0.5135 | 0.7477 | 0.748 |
| 0.5221 | 12.08 | 1800 | 0.5155 | 0.7539 | 0.754 |
| 0.5201 | 13.42 | 2000 | 0.5100 | 0.7530 | 0.753 |
| 0.5189 | 14.77 | 2200 | 0.5123 | 0.7507 | 0.751 |
| 0.5132 | 16.11 | 2400 | 0.5106 | 0.7530 | 0.753 |
| 0.5175 | 17.45 | 2600 | 0.5099 | 0.7506 | 0.751 |
| 0.5124 | 18.79 | 2800 | 0.5082 | 0.7589 | 0.759 |
| 0.5106 | 20.13 | 3000 | 0.5086 | 0.7589 | 0.759 |
| 0.5117 | 21.48 | 3200 | 0.5107 | 0.7508 | 0.751 |
| 0.5132 | 22.82 | 3400 | 0.5076 | 0.7541 | 0.755 |
| 0.5099 | 24.16 | 3600 | 0.5068 | 0.7520 | 0.753 |
| 0.5063 | 25.5 | 3800 | 0.5087 | 0.7474 | 0.749 |
| 0.5105 | 26.85 | 4000 | 0.5084 | 0.7454 | 0.747 |
| 0.5057 | 28.19 | 4200 | 0.5059 | 0.7545 | 0.755 |
| 0.5064 | 29.53 | 4400 | 0.5066 | 0.7580 | 0.758 |
| 0.5029 | 30.87 | 4600 | 0.5057 | 0.7548 | 0.755 |
| 0.5057 | 32.21 | 4800 | 0.5065 | 0.7517 | 0.752 |
| 0.507 | 33.56 | 5000 | 0.5040 | 0.7580 | 0.758 |
| 0.5037 | 34.9 | 5200 | 0.5061 | 0.7559 | 0.756 |
| 0.4995 | 36.24 | 5400 | 0.5060 | 0.7500 | 0.751 |
| 0.5053 | 37.58 | 5600 | 0.5038 | 0.7556 | 0.756 |
| 0.504 | 38.93 | 5800 | 0.5037 | 0.7535 | 0.754 |
| 0.5014 | 40.27 | 6000 | 0.5029 | 0.7578 | 0.758 |
| 0.4999 | 41.61 | 6200 | 0.5034 | 0.7555 | 0.756 |
| 0.5055 | 42.95 | 6400 | 0.5043 | 0.7485 | 0.749 |
| 0.5003 | 44.3 | 6600 | 0.5036 | 0.7550 | 0.755 |
| 0.4994 | 45.64 | 6800 | 0.5039 | 0.7539 | 0.754 |
| 0.4994 | 46.98 | 7000 | 0.5054 | 0.7457 | 0.746 |
| 0.4982 | 48.32 | 7200 | 0.5044 | 0.7539 | 0.754 |
| 0.4983 | 49.66 | 7400 | 0.5045 | 0.7507 | 0.751 |
| 0.4981 | 51.01 | 7600 | 0.5038 | 0.7456 | 0.746 |
| 0.4961 | 52.35 | 7800 | 0.5042 | 0.7477 | 0.748 |
| 0.4979 | 53.69 | 8000 | 0.5052 | 0.7482 | 0.749 |
| 0.4952 | 55.03 | 8200 | 0.5036 | 0.7457 | 0.746 |
| 0.4982 | 56.38 | 8400 | 0.5028 | 0.7469 | 0.747 |
| 0.497 | 57.72 | 8600 | 0.5038 | 0.7483 | 0.749 |
| 0.4963 | 59.06 | 8800 | 0.5029 | 0.7483 | 0.749 |
| 0.4952 | 60.4 | 9000 | 0.5030 | 0.7448 | 0.745 |
| 0.4966 | 61.74 | 9200 | 0.5034 | 0.7483 | 0.749 |
| 0.5011 | 63.09 | 9400 | 0.5028 | 0.7475 | 0.748 |
| 0.4959 | 64.43 | 9600 | 0.5032 | 0.7465 | 0.747 |
| 0.4991 | 65.77 | 9800 | 0.5031 | 0.7466 | 0.747 |
| 0.4941 | 67.11 | 10000 | 0.5033 | 0.7475 | 0.748 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_2-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T17:09:11+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_tf\_2-seqsight\_65536\_512\_47M-L1\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4711
* F1 Score: 0.7738
* Accuracy: 0.774
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
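The card leaves this section empty; a minimal, hedged sketch for a Llama-style causal LM is shown below. The repository id is taken from this entry's id field, and the presence of a chat template is an assumption — nothing in the card confirms it.

```python
# Hedged sketch: assumes a standard Llama-style causal LM with a chat template.
# Neither detail is documented in this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "cilantro9246/hq6ceip"  # repository id from this entry's id field

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```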
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | cilantro9246/hq6ceip | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T17:09:36+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4753
- F1 Score: 0.7840
- Accuracy: 0.785
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
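The hyperparameters listed above map one-to-one onto the standard `transformers.TrainingArguments`; a hedged sketch of an equivalent configuration follows. The actual training script is not published with the card, so this is an approximation, not the authors' code.

```python
# Approximate TrainingArguments matching the hyperparameter list above.
# The real training script is not included in this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_tf_2-seqsight_65536_512_47M-L32_f",
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=128,  # train_batch_size: 128
    per_device_eval_batch_size=128,   # eval_batch_size: 128
    seed=42,                          # seed: 42
    adam_beta1=0.9,                   # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # and epsilon=1e-08
    lr_scheduler_type="linear",       # lr_scheduler_type: linear
    max_steps=10_000,                 # training_steps: 10000
)
```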
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5645 | 1.34 | 200 | 0.5230 | 0.7288 | 0.73 |
| 0.5304 | 2.68 | 400 | 0.5167 | 0.746 | 0.746 |
| 0.5202 | 4.03 | 600 | 0.5055 | 0.7610 | 0.761 |
| 0.5133 | 5.37 | 800 | 0.5065 | 0.7580 | 0.758 |
| 0.5071 | 6.71 | 1000 | 0.5095 | 0.7491 | 0.75 |
| 0.4993 | 8.05 | 1200 | 0.5024 | 0.7465 | 0.747 |
| 0.4951 | 9.4 | 1400 | 0.5096 | 0.7463 | 0.748 |
| 0.4902 | 10.74 | 1600 | 0.4929 | 0.752 | 0.752 |
| 0.4884 | 12.08 | 1800 | 0.4936 | 0.7540 | 0.754 |
| 0.4826 | 13.42 | 2000 | 0.4938 | 0.7550 | 0.755 |
| 0.4799 | 14.77 | 2200 | 0.5030 | 0.7482 | 0.751 |
| 0.4717 | 16.11 | 2400 | 0.5020 | 0.7490 | 0.749 |
| 0.4703 | 17.45 | 2600 | 0.4984 | 0.7536 | 0.755 |
| 0.462 | 18.79 | 2800 | 0.4910 | 0.7581 | 0.759 |
| 0.4576 | 20.13 | 3000 | 0.4936 | 0.7671 | 0.768 |
| 0.4564 | 21.48 | 3200 | 0.5030 | 0.7569 | 0.757 |
| 0.4556 | 22.82 | 3400 | 0.4965 | 0.7550 | 0.755 |
| 0.4503 | 24.16 | 3600 | 0.4917 | 0.7635 | 0.764 |
| 0.4425 | 25.5 | 3800 | 0.5048 | 0.7516 | 0.752 |
| 0.444 | 26.85 | 4000 | 0.4995 | 0.7573 | 0.758 |
| 0.441 | 28.19 | 4200 | 0.4975 | 0.7599 | 0.76 |
| 0.4366 | 29.53 | 4400 | 0.5035 | 0.7527 | 0.753 |
| 0.431 | 30.87 | 4600 | 0.4948 | 0.7528 | 0.753 |
| 0.4288 | 32.21 | 4800 | 0.5166 | 0.7485 | 0.749 |
| 0.4289 | 33.56 | 5000 | 0.5092 | 0.7538 | 0.754 |
| 0.4244 | 34.9 | 5200 | 0.5031 | 0.7500 | 0.75 |
| 0.4203 | 36.24 | 5400 | 0.4992 | 0.7547 | 0.755 |
| 0.4212 | 37.58 | 5600 | 0.4963 | 0.7619 | 0.762 |
| 0.4151 | 38.93 | 5800 | 0.5031 | 0.7586 | 0.759 |
| 0.4103 | 40.27 | 6000 | 0.5090 | 0.7517 | 0.752 |
| 0.4087 | 41.61 | 6200 | 0.5000 | 0.7530 | 0.753 |
| 0.413 | 42.95 | 6400 | 0.5046 | 0.7549 | 0.755 |
| 0.4031 | 44.3 | 6600 | 0.5112 | 0.7500 | 0.75 |
| 0.4049 | 45.64 | 6800 | 0.5135 | 0.7478 | 0.748 |
| 0.4038 | 46.98 | 7000 | 0.5129 | 0.7549 | 0.755 |
| 0.3993 | 48.32 | 7200 | 0.5133 | 0.7470 | 0.747 |
| 0.3966 | 49.66 | 7400 | 0.5064 | 0.7550 | 0.755 |
| 0.3959 | 51.01 | 7600 | 0.5116 | 0.7549 | 0.755 |
| 0.3894 | 52.35 | 7800 | 0.5182 | 0.7580 | 0.758 |
| 0.3944 | 53.69 | 8000 | 0.5128 | 0.7529 | 0.753 |
| 0.386 | 55.03 | 8200 | 0.5210 | 0.7460 | 0.746 |
| 0.388 | 56.38 | 8400 | 0.5143 | 0.7560 | 0.756 |
| 0.3881 | 57.72 | 8600 | 0.5146 | 0.7540 | 0.754 |
| 0.3851 | 59.06 | 8800 | 0.5129 | 0.7590 | 0.759 |
| 0.3856 | 60.4 | 9000 | 0.5232 | 0.7550 | 0.755 |
| 0.3835 | 61.74 | 9200 | 0.5139 | 0.752 | 0.752 |
| 0.3853 | 63.09 | 9400 | 0.5165 | 0.7510 | 0.751 |
| 0.3805 | 64.43 | 9600 | 0.5156 | 0.7549 | 0.755 |
| 0.3854 | 65.77 | 9800 | 0.5193 | 0.7550 | 0.755 |
| 0.3776 | 67.11 | 10000 | 0.5180 | 0.756 | 0.756 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_2-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T17:09:51+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_tf\_2-seqsight\_65536\_512\_47M-L32\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4753
* F1 Score: 0.7840
* Accuracy: 0.785
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4667
- F1 Score: 0.7770
- Accuracy: 0.778
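Since this repository contains only a PEFT adapter, inference requires loading the base `seqsight` checkpoint and attaching the adapter on top; a hedged sketch is below. The sequence-classification head, the binary label count, and `trust_remote_code=True` are all assumptions — the card does not document how the base model should be instantiated.

```python
# Hedged sketch: load the base checkpoint, then attach this PEFT adapter.
# The classification head, num_labels=2, and trust_remote_code are assumptions.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_2-seqsight_65536_512_47M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # hypothetical DNA input
predicted_class = model(**inputs).logits.argmax(dim=-1)
print(predicted_class)
```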
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5716 | 1.34 | 200 | 0.5302 | 0.7204 | 0.722 |
| 0.5353 | 2.68 | 400 | 0.5215 | 0.7350 | 0.735 |
| 0.5271 | 4.03 | 600 | 0.5118 | 0.7479 | 0.748 |
| 0.5221 | 5.37 | 800 | 0.5087 | 0.7499 | 0.75 |
| 0.5185 | 6.71 | 1000 | 0.5089 | 0.7599 | 0.76 |
| 0.5108 | 8.05 | 1200 | 0.5104 | 0.7449 | 0.745 |
| 0.5082 | 9.4 | 1400 | 0.5107 | 0.7445 | 0.746 |
| 0.5057 | 10.74 | 1600 | 0.5023 | 0.7518 | 0.752 |
| 0.505 | 12.08 | 1800 | 0.5077 | 0.7450 | 0.745 |
| 0.5005 | 13.42 | 2000 | 0.5044 | 0.7449 | 0.745 |
| 0.4996 | 14.77 | 2200 | 0.5082 | 0.7424 | 0.744 |
| 0.4936 | 16.11 | 2400 | 0.5090 | 0.7490 | 0.749 |
| 0.4946 | 17.45 | 2600 | 0.5053 | 0.7499 | 0.751 |
| 0.4885 | 18.79 | 2800 | 0.4999 | 0.7503 | 0.751 |
| 0.4859 | 20.13 | 3000 | 0.4994 | 0.7555 | 0.756 |
| 0.4861 | 21.48 | 3200 | 0.5075 | 0.7540 | 0.754 |
| 0.4876 | 22.82 | 3400 | 0.5025 | 0.7569 | 0.757 |
| 0.4833 | 24.16 | 3600 | 0.4986 | 0.7566 | 0.757 |
| 0.4774 | 25.5 | 3800 | 0.5025 | 0.7534 | 0.754 |
| 0.4819 | 26.85 | 4000 | 0.4993 | 0.7562 | 0.757 |
| 0.4783 | 28.19 | 4200 | 0.4959 | 0.762 | 0.762 |
| 0.4776 | 29.53 | 4400 | 0.5019 | 0.7580 | 0.758 |
| 0.4741 | 30.87 | 4600 | 0.4985 | 0.7639 | 0.764 |
| 0.4736 | 32.21 | 4800 | 0.5055 | 0.7564 | 0.757 |
| 0.4752 | 33.56 | 5000 | 0.4988 | 0.7518 | 0.752 |
| 0.4704 | 34.9 | 5200 | 0.5015 | 0.7589 | 0.759 |
| 0.4689 | 36.24 | 5400 | 0.4975 | 0.7686 | 0.769 |
| 0.4718 | 37.58 | 5600 | 0.4931 | 0.7547 | 0.755 |
| 0.4679 | 38.93 | 5800 | 0.4966 | 0.7587 | 0.759 |
| 0.4662 | 40.27 | 6000 | 0.4934 | 0.7608 | 0.761 |
| 0.4645 | 41.61 | 6200 | 0.4942 | 0.7520 | 0.752 |
| 0.4709 | 42.95 | 6400 | 0.4969 | 0.7609 | 0.761 |
| 0.4622 | 44.3 | 6600 | 0.4993 | 0.7540 | 0.754 |
| 0.4634 | 45.64 | 6800 | 0.4978 | 0.7520 | 0.752 |
| 0.4634 | 46.98 | 7000 | 0.4974 | 0.75 | 0.75 |
| 0.4618 | 48.32 | 7200 | 0.4976 | 0.7510 | 0.751 |
| 0.4599 | 49.66 | 7400 | 0.4945 | 0.7498 | 0.75 |
| 0.4604 | 51.01 | 7600 | 0.4957 | 0.7470 | 0.747 |
| 0.4562 | 52.35 | 7800 | 0.4983 | 0.7568 | 0.757 |
| 0.4611 | 53.69 | 8000 | 0.4957 | 0.7445 | 0.745 |
| 0.4548 | 55.03 | 8200 | 0.4944 | 0.7449 | 0.745 |
| 0.4581 | 56.38 | 8400 | 0.4942 | 0.7450 | 0.745 |
| 0.4591 | 57.72 | 8600 | 0.4934 | 0.7466 | 0.747 |
| 0.4543 | 59.06 | 8800 | 0.4927 | 0.7517 | 0.752 |
| 0.4563 | 60.4 | 9000 | 0.4961 | 0.7530 | 0.753 |
| 0.4566 | 61.74 | 9200 | 0.4936 | 0.7478 | 0.748 |
| 0.4584 | 63.09 | 9400 | 0.4943 | 0.7508 | 0.751 |
| 0.4518 | 64.43 | 9600 | 0.4950 | 0.7487 | 0.749 |
| 0.4596 | 65.77 | 9800 | 0.4949 | 0.7509 | 0.751 |
| 0.452 | 67.11 | 10000 | 0.4949 | 0.7498 | 0.75 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_tf_2-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T17:09:51+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_tf\_2-seqsight\_65536\_512\_47M-L8\_f
==========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4667
* F1 Score: 0.7770
* Accuracy: 0.778
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null |
## Llamacpp Quantizations of DuckyBlender/racist-phi3
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2783">b2783</a> for quantization.
Original model: https://huggingface.co/DuckyBlender/racist-phi3
| {"language": ["en"], "tags": ["racist", "nsfw", "not-for-all-audiences"], "datasets": ["DuckyBlender/racist-inputoutput"]} | DuckyBlender/racist-phi3-GGUF | null | [
"racist",
"nsfw",
"not-for-all-audiences",
"en",
"dataset:DuckyBlender/racist-inputoutput",
"region:us"
] | null | 2024-05-03T17:10:18+00:00 | [] | [
"en"
] | TAGS
#racist #nsfw #not-for-all-audiences #en #dataset-DuckyBlender/racist-inputoutput #region-us
|
## Llamacpp Quantizations of DuckyBlender/racist-phi3
Using <a href="URL release <a href="URL for quantization.
Original model: URL
| [
"## Llamacpp Quantizations of DuckyBlender/racist-phi3\n\nUsing <a href=\"URL release <a href=\"URL for quantization.\n\nOriginal model: URL"
] | [
"TAGS\n#racist #nsfw #not-for-all-audiences #en #dataset-DuckyBlender/racist-inputoutput #region-us \n",
"## Llamacpp Quantizations of DuckyBlender/racist-phi3\n\nUsing <a href=\"URL release <a href=\"URL for quantization.\n\nOriginal model: URL"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_virus_covid-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7826
- F1 Score: 0.3230
- Accuracy: 0.3274
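The F1 Score and Accuracy values reported here (and in the table below) can be produced with a small `compute_metrics` callback; a hedged sketch follows. Macro averaging for F1 is an assumption — the card does not state which averaging was used for this multi-class task.

```python
# Hedged sketch of the compute_metrics callback behind the F1/Accuracy columns.
# Macro-averaged F1 is an assumption; the card does not specify the averaging.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, predictions, average="macro"),
        "accuracy": accuracy_score(labels, predictions),
    }
```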
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 2.185 | 0.35 | 200 | 2.1838 | 0.0863 | 0.1327 |
| 2.1807 | 0.7 | 400 | 2.1804 | 0.0877 | 0.1363 |
| 2.1748 | 1.05 | 600 | 2.1744 | 0.1189 | 0.1503 |
| 2.1706 | 1.4 | 800 | 2.1665 | 0.0993 | 0.1460 |
| 2.1628 | 1.75 | 1000 | 2.1550 | 0.1339 | 0.1621 |
| 2.1571 | 2.09 | 1200 | 2.1498 | 0.1296 | 0.1679 |
| 2.1487 | 2.44 | 1400 | 2.1426 | 0.1481 | 0.1654 |
| 2.1395 | 2.79 | 1600 | 2.1151 | 0.1770 | 0.1963 |
| 2.1143 | 3.14 | 1800 | 2.0605 | 0.1929 | 0.2197 |
| 2.0738 | 3.49 | 2000 | 2.0280 | 0.2069 | 0.2267 |
| 2.056 | 3.84 | 2200 | 1.9958 | 0.2303 | 0.2449 |
| 2.0299 | 4.19 | 2400 | 1.9701 | 0.2333 | 0.2469 |
| 2.0076 | 4.54 | 2600 | 1.9480 | 0.2426 | 0.2569 |
| 2.0016 | 4.89 | 2800 | 1.9330 | 0.2555 | 0.2660 |
| 1.9859 | 5.24 | 3000 | 1.9220 | 0.2567 | 0.2687 |
| 1.9754 | 5.58 | 3200 | 1.9137 | 0.2599 | 0.2701 |
| 1.9647 | 5.93 | 3400 | 1.8988 | 0.2645 | 0.2757 |
| 1.9619 | 6.28 | 3600 | 1.8909 | 0.2744 | 0.2805 |
| 1.9479 | 6.63 | 3800 | 1.8845 | 0.2699 | 0.2856 |
| 1.9448 | 6.98 | 4000 | 1.8778 | 0.2759 | 0.2870 |
| 1.9406 | 7.33 | 4200 | 1.8704 | 0.2794 | 0.2935 |
| 1.9341 | 7.68 | 4400 | 1.8636 | 0.2925 | 0.2979 |
| 1.9291 | 8.03 | 4600 | 1.8638 | 0.2861 | 0.2937 |
| 1.9248 | 8.38 | 4800 | 1.8564 | 0.2829 | 0.2965 |
| 1.9284 | 8.73 | 5000 | 1.8568 | 0.2824 | 0.2948 |
| 1.9183 | 9.08 | 5200 | 1.8473 | 0.2914 | 0.2941 |
| 1.9162 | 9.42 | 5400 | 1.8449 | 0.2834 | 0.3003 |
| 1.9152 | 9.77 | 5600 | 1.8363 | 0.2969 | 0.3089 |
| 1.9113 | 10.12 | 5800 | 1.8348 | 0.3011 | 0.3086 |
| 1.9133 | 10.47 | 6000 | 1.8321 | 0.2902 | 0.2989 |
| 1.9053 | 10.82 | 6200 | 1.8315 | 0.3019 | 0.3072 |
| 1.8974 | 11.17 | 6400 | 1.8236 | 0.3025 | 0.3066 |
| 1.9014 | 11.52 | 6600 | 1.8163 | 0.2985 | 0.3068 |
| 1.898 | 11.87 | 6800 | 1.8117 | 0.3064 | 0.3160 |
| 1.8863 | 12.22 | 7000 | 1.8083 | 0.3052 | 0.3127 |
| 1.8874 | 12.57 | 7200 | 1.8044 | 0.3067 | 0.3119 |
| 1.8863 | 12.91 | 7400 | 1.8006 | 0.3120 | 0.3189 |
| 1.8767 | 13.26 | 7600 | 1.7952 | 0.3067 | 0.3126 |
| 1.8833 | 13.61 | 7800 | 1.7948 | 0.3050 | 0.3098 |
| 1.8797 | 13.96 | 8000 | 1.7895 | 0.3114 | 0.3176 |
| 1.8645 | 14.31 | 8200 | 1.7869 | 0.3120 | 0.3194 |
| 1.8744 | 14.66 | 8400 | 1.7856 | 0.3198 | 0.3239 |
| 1.8649 | 15.01 | 8600 | 1.7839 | 0.3153 | 0.3206 |
| 1.8736 | 15.36 | 8800 | 1.7824 | 0.3191 | 0.3225 |
| 1.8607 | 15.71 | 9000 | 1.7825 | 0.3132 | 0.3192 |
| 1.8676 | 16.06 | 9200 | 1.7815 | 0.3143 | 0.3202 |
| 1.8671 | 16.4 | 9400 | 1.7803 | 0.3181 | 0.3230 |
| 1.8645 | 16.75 | 9600 | 1.7794 | 0.3183 | 0.3220 |
| 1.8659 | 17.1 | 9800 | 1.7795 | 0.3168 | 0.3220 |
| 1.8662 | 17.45 | 10000 | 1.7790 | 0.3174 | 0.3224 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_virus_covid-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_virus_covid-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T17:10:19+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_virus\_covid-seqsight\_65536\_512\_47M-L1\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset.
It achieves the following results on the evaluation set:
* Loss: 1.7826
* F1 Score: 0.3230
* Accuracy: 0.3274
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
GritLM-7B - bnb 8bits
- Model creator: https://huggingface.co/GritLM/
- Original model: https://huggingface.co/GritLM/GritLM-7B/
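An 8-bit bitsandbytes load of the linked checkpoint is the operation this repackaging corresponds to; a minimal sketch is below. It targets the original `GritLM/GritLM-7B` id because the exact id of this pre-quantized repository is not spelled out in the header, so treat it as an illustration rather than the canonical loading code.

```python
# Hedged sketch: load the original GritLM-7B in 8-bit via bitsandbytes,
# which is what this "bnb 8bits" repackaging provides ahead of time.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "GritLM/GritLM-7B"  # original model linked above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
```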
Original model description:
---
pipeline_tag: text-generation
inference: true
license: apache-2.0
datasets:
- GritLM/tulu2
tags:
- mteb
model-index:
- name: GritLM-7B
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 81.17910447761194
- type: ap
value: 46.26260671758199
- type: f1
value: 75.44565719934167
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.5161
- type: ap
value: 94.79131981460425
- type: f1
value: 96.51506148413065
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 57.806000000000004
- type: f1
value: 56.78350156257903
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.478
- type: map_at_10
value: 54.955
- type: map_at_100
value: 54.955
- type: map_at_1000
value: 54.955
- type: map_at_3
value: 50.888999999999996
- type: map_at_5
value: 53.349999999999994
- type: mrr_at_1
value: 39.757999999999996
- type: mrr_at_10
value: 55.449000000000005
- type: mrr_at_100
value: 55.449000000000005
- type: mrr_at_1000
value: 55.449000000000005
- type: mrr_at_3
value: 51.37500000000001
- type: mrr_at_5
value: 53.822
- type: ndcg_at_1
value: 38.478
- type: ndcg_at_10
value: 63.239999999999995
- type: ndcg_at_100
value: 63.239999999999995
- type: ndcg_at_1000
value: 63.239999999999995
- type: ndcg_at_3
value: 54.935
- type: ndcg_at_5
value: 59.379000000000005
- type: precision_at_1
value: 38.478
- type: precision_at_10
value: 8.933
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.089
- type: precision_at_3
value: 22.214
- type: precision_at_5
value: 15.491
- type: recall_at_1
value: 38.478
- type: recall_at_10
value: 89.331
- type: recall_at_100
value: 89.331
- type: recall_at_1000
value: 89.331
- type: recall_at_3
value: 66.643
- type: recall_at_5
value: 77.45400000000001
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 51.67144081472449
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 48.11256154264126
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.33801955487878
- type: mrr
value: 80.71549487754474
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.1935203751726
- type: cos_sim_spearman
value: 86.35497970498659
- type: euclidean_pearson
value: 85.46910708503744
- type: euclidean_spearman
value: 85.13928935405485
- type: manhattan_pearson
value: 85.68373836333303
- type: manhattan_spearman
value: 85.40013867117746
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 88.46753246753248
- type: f1
value: 88.43006344981134
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 40.86793640310432
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 39.80291334130727
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.421
- type: map_at_10
value: 52.349000000000004
- type: map_at_100
value: 52.349000000000004
- type: map_at_1000
value: 52.349000000000004
- type: map_at_3
value: 48.17
- type: map_at_5
value: 50.432
- type: mrr_at_1
value: 47.353
- type: mrr_at_10
value: 58.387
- type: mrr_at_100
value: 58.387
- type: mrr_at_1000
value: 58.387
- type: mrr_at_3
value: 56.199
- type: mrr_at_5
value: 57.487
- type: ndcg_at_1
value: 47.353
- type: ndcg_at_10
value: 59.202
- type: ndcg_at_100
value: 58.848
- type: ndcg_at_1000
value: 58.831999999999994
- type: ndcg_at_3
value: 54.112
- type: ndcg_at_5
value: 56.312
- type: precision_at_1
value: 47.353
- type: precision_at_10
value: 11.459
- type: precision_at_100
value: 1.146
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 26.133
- type: precision_at_5
value: 18.627
- type: recall_at_1
value: 38.421
- type: recall_at_10
value: 71.89
- type: recall_at_100
value: 71.89
- type: recall_at_1000
value: 71.89
- type: recall_at_3
value: 56.58
- type: recall_at_5
value: 63.125
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.025999999999996
- type: map_at_10
value: 50.590999999999994
- type: map_at_100
value: 51.99700000000001
- type: map_at_1000
value: 52.11599999999999
- type: map_at_3
value: 47.435
- type: map_at_5
value: 49.236000000000004
- type: mrr_at_1
value: 48.28
- type: mrr_at_10
value: 56.814
- type: mrr_at_100
value: 57.446
- type: mrr_at_1000
value: 57.476000000000006
- type: mrr_at_3
value: 54.958
- type: mrr_at_5
value: 56.084999999999994
- type: ndcg_at_1
value: 48.28
- type: ndcg_at_10
value: 56.442
- type: ndcg_at_100
value: 60.651999999999994
- type: ndcg_at_1000
value: 62.187000000000005
- type: ndcg_at_3
value: 52.866
- type: ndcg_at_5
value: 54.515
- type: precision_at_1
value: 48.28
- type: precision_at_10
value: 10.586
- type: precision_at_100
value: 1.6310000000000002
- type: precision_at_1000
value: 0.20600000000000002
- type: precision_at_3
value: 25.945
- type: precision_at_5
value: 18.076
- type: recall_at_1
value: 38.025999999999996
- type: recall_at_10
value: 66.11399999999999
- type: recall_at_100
value: 83.339
- type: recall_at_1000
value: 92.413
- type: recall_at_3
value: 54.493
- type: recall_at_5
value: 59.64699999999999
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 47.905
- type: map_at_10
value: 61.58
- type: map_at_100
value: 62.605
- type: map_at_1000
value: 62.637
- type: map_at_3
value: 58.074000000000005
- type: map_at_5
value: 60.260000000000005
- type: mrr_at_1
value: 54.42
- type: mrr_at_10
value: 64.847
- type: mrr_at_100
value: 65.403
- type: mrr_at_1000
value: 65.41900000000001
- type: mrr_at_3
value: 62.675000000000004
- type: mrr_at_5
value: 64.101
- type: ndcg_at_1
value: 54.42
- type: ndcg_at_10
value: 67.394
- type: ndcg_at_100
value: 70.846
- type: ndcg_at_1000
value: 71.403
- type: ndcg_at_3
value: 62.025
- type: ndcg_at_5
value: 65.032
- type: precision_at_1
value: 54.42
- type: precision_at_10
value: 10.646
- type: precision_at_100
value: 1.325
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 27.398
- type: precision_at_5
value: 18.796
- type: recall_at_1
value: 47.905
- type: recall_at_10
value: 80.84599999999999
- type: recall_at_100
value: 95.078
- type: recall_at_1000
value: 98.878
- type: recall_at_3
value: 67.05600000000001
- type: recall_at_5
value: 74.261
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.745
- type: map_at_10
value: 41.021
- type: map_at_100
value: 41.021
- type: map_at_1000
value: 41.021
- type: map_at_3
value: 37.714999999999996
- type: map_at_5
value: 39.766
- type: mrr_at_1
value: 33.559
- type: mrr_at_10
value: 43.537
- type: mrr_at_100
value: 43.537
- type: mrr_at_1000
value: 43.537
- type: mrr_at_3
value: 40.546
- type: mrr_at_5
value: 42.439
- type: ndcg_at_1
value: 33.559
- type: ndcg_at_10
value: 46.781
- type: ndcg_at_100
value: 46.781
- type: ndcg_at_1000
value: 46.781
- type: ndcg_at_3
value: 40.516000000000005
- type: ndcg_at_5
value: 43.957
- type: precision_at_1
value: 33.559
- type: precision_at_10
value: 7.198
- type: precision_at_100
value: 0.72
- type: precision_at_1000
value: 0.07200000000000001
- type: precision_at_3
value: 17.1
- type: precision_at_5
value: 12.316
- type: recall_at_1
value: 30.745
- type: recall_at_10
value: 62.038000000000004
- type: recall_at_100
value: 62.038000000000004
- type: recall_at_1000
value: 62.038000000000004
- type: recall_at_3
value: 45.378
- type: recall_at_5
value: 53.580000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.637999999999998
- type: map_at_10
value: 31.05
- type: map_at_100
value: 31.05
- type: map_at_1000
value: 31.05
- type: map_at_3
value: 27.628000000000004
- type: map_at_5
value: 29.767
- type: mrr_at_1
value: 25.0
- type: mrr_at_10
value: 36.131
- type: mrr_at_100
value: 36.131
- type: mrr_at_1000
value: 36.131
- type: mrr_at_3
value: 33.333
- type: mrr_at_5
value: 35.143
- type: ndcg_at_1
value: 25.0
- type: ndcg_at_10
value: 37.478
- type: ndcg_at_100
value: 37.469
- type: ndcg_at_1000
value: 37.469
- type: ndcg_at_3
value: 31.757999999999996
- type: ndcg_at_5
value: 34.821999999999996
- type: precision_at_1
value: 25.0
- type: precision_at_10
value: 7.188999999999999
- type: precision_at_100
value: 0.719
- type: precision_at_1000
value: 0.07200000000000001
- type: precision_at_3
value: 15.837000000000002
- type: precision_at_5
value: 11.841
- type: recall_at_1
value: 19.637999999999998
- type: recall_at_10
value: 51.836000000000006
- type: recall_at_100
value: 51.836000000000006
- type: recall_at_1000
value: 51.836000000000006
- type: recall_at_3
value: 36.384
- type: recall_at_5
value: 43.964
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 34.884
- type: map_at_10
value: 47.88
- type: map_at_100
value: 47.88
- type: map_at_1000
value: 47.88
- type: map_at_3
value: 43.85
- type: map_at_5
value: 46.414
- type: mrr_at_1
value: 43.022
- type: mrr_at_10
value: 53.569
- type: mrr_at_100
value: 53.569
- type: mrr_at_1000
value: 53.569
- type: mrr_at_3
value: 51.075
- type: mrr_at_5
value: 52.725
- type: ndcg_at_1
value: 43.022
- type: ndcg_at_10
value: 54.461000000000006
- type: ndcg_at_100
value: 54.388000000000005
- type: ndcg_at_1000
value: 54.388000000000005
- type: ndcg_at_3
value: 48.864999999999995
- type: ndcg_at_5
value: 52.032000000000004
- type: precision_at_1
value: 43.022
- type: precision_at_10
value: 9.885
- type: precision_at_100
value: 0.988
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 23.612
- type: precision_at_5
value: 16.997
- type: recall_at_1
value: 34.884
- type: recall_at_10
value: 68.12899999999999
- type: recall_at_100
value: 68.12899999999999
- type: recall_at_1000
value: 68.12899999999999
- type: recall_at_3
value: 52.428
- type: recall_at_5
value: 60.662000000000006
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.588
- type: map_at_10
value: 43.85
- type: map_at_100
value: 45.317
- type: map_at_1000
value: 45.408
- type: map_at_3
value: 39.73
- type: map_at_5
value: 42.122
- type: mrr_at_1
value: 38.927
- type: mrr_at_10
value: 49.582
- type: mrr_at_100
value: 50.39
- type: mrr_at_1000
value: 50.426
- type: mrr_at_3
value: 46.518
- type: mrr_at_5
value: 48.271
- type: ndcg_at_1
value: 38.927
- type: ndcg_at_10
value: 50.605999999999995
- type: ndcg_at_100
value: 56.22200000000001
- type: ndcg_at_1000
value: 57.724
- type: ndcg_at_3
value: 44.232
- type: ndcg_at_5
value: 47.233999999999995
- type: precision_at_1
value: 38.927
- type: precision_at_10
value: 9.429
- type: precision_at_100
value: 1.435
- type: precision_at_1000
value: 0.172
- type: precision_at_3
value: 21.271
- type: precision_at_5
value: 15.434000000000001
- type: recall_at_1
value: 31.588
- type: recall_at_10
value: 64.836
- type: recall_at_100
value: 88.066
- type: recall_at_1000
value: 97.748
- type: recall_at_3
value: 47.128
- type: recall_at_5
value: 54.954
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.956083333333336
- type: map_at_10
value: 43.33483333333333
- type: map_at_100
value: 44.64883333333333
- type: map_at_1000
value: 44.75
- type: map_at_3
value: 39.87741666666666
- type: map_at_5
value: 41.86766666666667
- type: mrr_at_1
value: 38.06341666666667
- type: mrr_at_10
value: 47.839666666666666
- type: mrr_at_100
value: 48.644000000000005
- type: mrr_at_1000
value: 48.68566666666667
- type: mrr_at_3
value: 45.26358333333334
- type: mrr_at_5
value: 46.790000000000006
- type: ndcg_at_1
value: 38.06341666666667
- type: ndcg_at_10
value: 49.419333333333334
- type: ndcg_at_100
value: 54.50166666666667
- type: ndcg_at_1000
value: 56.161166666666674
- type: ndcg_at_3
value: 43.982416666666666
- type: ndcg_at_5
value: 46.638083333333334
- type: precision_at_1
value: 38.06341666666667
- type: precision_at_10
value: 8.70858333333333
- type: precision_at_100
value: 1.327
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 20.37816666666667
- type: precision_at_5
value: 14.516333333333334
- type: recall_at_1
value: 31.956083333333336
- type: recall_at_10
value: 62.69458333333334
- type: recall_at_100
value: 84.46433333333334
- type: recall_at_1000
value: 95.58449999999999
- type: recall_at_3
value: 47.52016666666666
- type: recall_at_5
value: 54.36066666666666
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.912
- type: map_at_10
value: 38.291
- type: map_at_100
value: 39.44
- type: map_at_1000
value: 39.528
- type: map_at_3
value: 35.638
- type: map_at_5
value: 37.218
- type: mrr_at_1
value: 32.822
- type: mrr_at_10
value: 41.661
- type: mrr_at_100
value: 42.546
- type: mrr_at_1000
value: 42.603
- type: mrr_at_3
value: 39.238
- type: mrr_at_5
value: 40.726
- type: ndcg_at_1
value: 32.822
- type: ndcg_at_10
value: 43.373
- type: ndcg_at_100
value: 48.638
- type: ndcg_at_1000
value: 50.654999999999994
- type: ndcg_at_3
value: 38.643
- type: ndcg_at_5
value: 41.126000000000005
- type: precision_at_1
value: 32.822
- type: precision_at_10
value: 6.8709999999999996
- type: precision_at_100
value: 1.032
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 16.82
- type: precision_at_5
value: 11.718
- type: recall_at_1
value: 28.912
- type: recall_at_10
value: 55.376999999999995
- type: recall_at_100
value: 79.066
- type: recall_at_1000
value: 93.664
- type: recall_at_3
value: 42.569
- type: recall_at_5
value: 48.719
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.181
- type: map_at_10
value: 31.462
- type: map_at_100
value: 32.73
- type: map_at_1000
value: 32.848
- type: map_at_3
value: 28.57
- type: map_at_5
value: 30.182
- type: mrr_at_1
value: 27.185
- type: mrr_at_10
value: 35.846000000000004
- type: mrr_at_100
value: 36.811
- type: mrr_at_1000
value: 36.873
- type: mrr_at_3
value: 33.437
- type: mrr_at_5
value: 34.813
- type: ndcg_at_1
value: 27.185
- type: ndcg_at_10
value: 36.858000000000004
- type: ndcg_at_100
value: 42.501
- type: ndcg_at_1000
value: 44.945
- type: ndcg_at_3
value: 32.066
- type: ndcg_at_5
value: 34.29
- type: precision_at_1
value: 27.185
- type: precision_at_10
value: 6.752
- type: precision_at_100
value: 1.111
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 15.290000000000001
- type: precision_at_5
value: 11.004999999999999
- type: recall_at_1
value: 22.181
- type: recall_at_10
value: 48.513
- type: recall_at_100
value: 73.418
- type: recall_at_1000
value: 90.306
- type: recall_at_3
value: 35.003
- type: recall_at_5
value: 40.876000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.934999999999995
- type: map_at_10
value: 44.727
- type: map_at_100
value: 44.727
- type: map_at_1000
value: 44.727
- type: map_at_3
value: 40.918
- type: map_at_5
value: 42.961
- type: mrr_at_1
value: 39.646
- type: mrr_at_10
value: 48.898
- type: mrr_at_100
value: 48.898
- type: mrr_at_1000
value: 48.898
- type: mrr_at_3
value: 45.896
- type: mrr_at_5
value: 47.514
- type: ndcg_at_1
value: 39.646
- type: ndcg_at_10
value: 50.817
- type: ndcg_at_100
value: 50.803
- type: ndcg_at_1000
value: 50.803
- type: ndcg_at_3
value: 44.507999999999996
- type: ndcg_at_5
value: 47.259
- type: precision_at_1
value: 39.646
- type: precision_at_10
value: 8.759
- type: precision_at_100
value: 0.876
- type: precision_at_1000
value: 0.08800000000000001
- type: precision_at_3
value: 20.274
- type: precision_at_5
value: 14.366000000000001
- type: recall_at_1
value: 33.934999999999995
- type: recall_at_10
value: 65.037
- type: recall_at_100
value: 65.037
- type: recall_at_1000
value: 65.037
- type: recall_at_3
value: 47.439
- type: recall_at_5
value: 54.567
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.058
- type: map_at_10
value: 43.137
- type: map_at_100
value: 43.137
- type: map_at_1000
value: 43.137
- type: map_at_3
value: 39.882
- type: map_at_5
value: 41.379
- type: mrr_at_1
value: 38.933
- type: mrr_at_10
value: 48.344
- type: mrr_at_100
value: 48.344
- type: mrr_at_1000
value: 48.344
- type: mrr_at_3
value: 45.652
- type: mrr_at_5
value: 46.877
- type: ndcg_at_1
value: 38.933
- type: ndcg_at_10
value: 49.964
- type: ndcg_at_100
value: 49.242000000000004
- type: ndcg_at_1000
value: 49.222
- type: ndcg_at_3
value: 44.605
- type: ndcg_at_5
value: 46.501999999999995
- type: precision_at_1
value: 38.933
- type: precision_at_10
value: 9.427000000000001
- type: precision_at_100
value: 0.943
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 20.685000000000002
- type: precision_at_5
value: 14.585
- type: recall_at_1
value: 32.058
- type: recall_at_10
value: 63.074
- type: recall_at_100
value: 63.074
- type: recall_at_1000
value: 63.074
- type: recall_at_3
value: 47.509
- type: recall_at_5
value: 52.455
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.029000000000003
- type: map_at_10
value: 34.646
- type: map_at_100
value: 34.646
- type: map_at_1000
value: 34.646
- type: map_at_3
value: 31.456
- type: map_at_5
value: 33.138
- type: mrr_at_1
value: 28.281
- type: mrr_at_10
value: 36.905
- type: mrr_at_100
value: 36.905
- type: mrr_at_1000
value: 36.905
- type: mrr_at_3
value: 34.011
- type: mrr_at_5
value: 35.638
- type: ndcg_at_1
value: 28.281
- type: ndcg_at_10
value: 40.159
- type: ndcg_at_100
value: 40.159
- type: ndcg_at_1000
value: 40.159
- type: ndcg_at_3
value: 33.995
- type: ndcg_at_5
value: 36.836999999999996
- type: precision_at_1
value: 28.281
- type: precision_at_10
value: 6.358999999999999
- type: precision_at_100
value: 0.636
- type: precision_at_1000
value: 0.064
- type: precision_at_3
value: 14.233
- type: precision_at_5
value: 10.314
- type: recall_at_1
value: 26.029000000000003
- type: recall_at_10
value: 55.08
- type: recall_at_100
value: 55.08
- type: recall_at_1000
value: 55.08
- type: recall_at_3
value: 38.487
- type: recall_at_5
value: 45.308
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.842999999999998
- type: map_at_10
value: 22.101000000000003
- type: map_at_100
value: 24.319
- type: map_at_1000
value: 24.51
- type: map_at_3
value: 18.372
- type: map_at_5
value: 20.323
- type: mrr_at_1
value: 27.948
- type: mrr_at_10
value: 40.321
- type: mrr_at_100
value: 41.262
- type: mrr_at_1000
value: 41.297
- type: mrr_at_3
value: 36.558
- type: mrr_at_5
value: 38.824999999999996
- type: ndcg_at_1
value: 27.948
- type: ndcg_at_10
value: 30.906
- type: ndcg_at_100
value: 38.986
- type: ndcg_at_1000
value: 42.136
- type: ndcg_at_3
value: 24.911
- type: ndcg_at_5
value: 27.168999999999997
- type: precision_at_1
value: 27.948
- type: precision_at_10
value: 9.798
- type: precision_at_100
value: 1.8399999999999999
- type: precision_at_1000
value: 0.243
- type: precision_at_3
value: 18.328
- type: precision_at_5
value: 14.502
- type: recall_at_1
value: 12.842999999999998
- type: recall_at_10
value: 37.245
- type: recall_at_100
value: 64.769
- type: recall_at_1000
value: 82.055
- type: recall_at_3
value: 23.159
- type: recall_at_5
value: 29.113
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.934000000000001
- type: map_at_10
value: 21.915000000000003
- type: map_at_100
value: 21.915000000000003
- type: map_at_1000
value: 21.915000000000003
- type: map_at_3
value: 14.623
- type: map_at_5
value: 17.841
- type: mrr_at_1
value: 71.25
- type: mrr_at_10
value: 78.994
- type: mrr_at_100
value: 78.994
- type: mrr_at_1000
value: 78.994
- type: mrr_at_3
value: 77.208
- type: mrr_at_5
value: 78.55799999999999
- type: ndcg_at_1
value: 60.62499999999999
- type: ndcg_at_10
value: 46.604
- type: ndcg_at_100
value: 35.653
- type: ndcg_at_1000
value: 35.531
- type: ndcg_at_3
value: 50.605
- type: ndcg_at_5
value: 48.730000000000004
- type: precision_at_1
value: 71.25
- type: precision_at_10
value: 37.75
- type: precision_at_100
value: 3.775
- type: precision_at_1000
value: 0.377
- type: precision_at_3
value: 54.417
- type: precision_at_5
value: 48.15
- type: recall_at_1
value: 8.934000000000001
- type: recall_at_10
value: 28.471000000000004
- type: recall_at_100
value: 28.471000000000004
- type: recall_at_1000
value: 28.471000000000004
- type: recall_at_3
value: 16.019
- type: recall_at_5
value: 21.410999999999998
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.81
- type: f1
value: 47.987573380720114
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 66.81899999999999
- type: map_at_10
value: 78.034
- type: map_at_100
value: 78.034
- type: map_at_1000
value: 78.034
- type: map_at_3
value: 76.43100000000001
- type: map_at_5
value: 77.515
- type: mrr_at_1
value: 71.542
- type: mrr_at_10
value: 81.638
- type: mrr_at_100
value: 81.638
- type: mrr_at_1000
value: 81.638
- type: mrr_at_3
value: 80.403
- type: mrr_at_5
value: 81.256
- type: ndcg_at_1
value: 71.542
- type: ndcg_at_10
value: 82.742
- type: ndcg_at_100
value: 82.741
- type: ndcg_at_1000
value: 82.741
- type: ndcg_at_3
value: 80.039
- type: ndcg_at_5
value: 81.695
- type: precision_at_1
value: 71.542
- type: precision_at_10
value: 10.387
- type: precision_at_100
value: 1.039
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 31.447999999999997
- type: precision_at_5
value: 19.91
- type: recall_at_1
value: 66.81899999999999
- type: recall_at_10
value: 93.372
- type: recall_at_100
value: 93.372
- type: recall_at_1000
value: 93.372
- type: recall_at_3
value: 86.33
- type: recall_at_5
value: 90.347
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.158
- type: map_at_10
value: 52.017
- type: map_at_100
value: 54.259
- type: map_at_1000
value: 54.367
- type: map_at_3
value: 45.738
- type: map_at_5
value: 49.283
- type: mrr_at_1
value: 57.87
- type: mrr_at_10
value: 66.215
- type: mrr_at_100
value: 66.735
- type: mrr_at_1000
value: 66.75
- type: mrr_at_3
value: 64.043
- type: mrr_at_5
value: 65.116
- type: ndcg_at_1
value: 57.87
- type: ndcg_at_10
value: 59.946999999999996
- type: ndcg_at_100
value: 66.31099999999999
- type: ndcg_at_1000
value: 67.75999999999999
- type: ndcg_at_3
value: 55.483000000000004
- type: ndcg_at_5
value: 56.891000000000005
- type: precision_at_1
value: 57.87
- type: precision_at_10
value: 16.497
- type: precision_at_100
value: 2.321
- type: precision_at_1000
value: 0.258
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 27.067999999999998
- type: recall_at_1
value: 31.158
- type: recall_at_10
value: 67.381
- type: recall_at_100
value: 89.464
- type: recall_at_1000
value: 97.989
- type: recall_at_3
value: 50.553000000000004
- type: recall_at_5
value: 57.824
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 42.073
- type: map_at_10
value: 72.418
- type: map_at_100
value: 73.175
- type: map_at_1000
value: 73.215
- type: map_at_3
value: 68.791
- type: map_at_5
value: 71.19
- type: mrr_at_1
value: 84.146
- type: mrr_at_10
value: 88.994
- type: mrr_at_100
value: 89.116
- type: mrr_at_1000
value: 89.12
- type: mrr_at_3
value: 88.373
- type: mrr_at_5
value: 88.82
- type: ndcg_at_1
value: 84.146
- type: ndcg_at_10
value: 79.404
- type: ndcg_at_100
value: 81.83200000000001
- type: ndcg_at_1000
value: 82.524
- type: ndcg_at_3
value: 74.595
- type: ndcg_at_5
value: 77.474
- type: precision_at_1
value: 84.146
- type: precision_at_10
value: 16.753999999999998
- type: precision_at_100
value: 1.8599999999999999
- type: precision_at_1000
value: 0.19499999999999998
- type: precision_at_3
value: 48.854
- type: precision_at_5
value: 31.579
- type: recall_at_1
value: 42.073
- type: recall_at_10
value: 83.768
- type: recall_at_100
value: 93.018
- type: recall_at_1000
value: 97.481
- type: recall_at_3
value: 73.282
- type: recall_at_5
value: 78.947
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 94.9968
- type: ap
value: 92.93892195862824
- type: f1
value: 94.99327998213761
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.698
- type: map_at_10
value: 34.585
- type: map_at_100
value: 35.782000000000004
- type: map_at_1000
value: 35.825
- type: map_at_3
value: 30.397999999999996
- type: map_at_5
value: 32.72
- type: mrr_at_1
value: 22.192
- type: mrr_at_10
value: 35.085
- type: mrr_at_100
value: 36.218
- type: mrr_at_1000
value: 36.256
- type: mrr_at_3
value: 30.986000000000004
- type: mrr_at_5
value: 33.268
- type: ndcg_at_1
value: 22.192
- type: ndcg_at_10
value: 41.957
- type: ndcg_at_100
value: 47.658
- type: ndcg_at_1000
value: 48.697
- type: ndcg_at_3
value: 33.433
- type: ndcg_at_5
value: 37.551
- type: precision_at_1
value: 22.192
- type: precision_at_10
value: 6.781
- type: precision_at_100
value: 0.963
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 14.365
- type: precision_at_5
value: 10.713000000000001
- type: recall_at_1
value: 21.698
- type: recall_at_10
value: 64.79
- type: recall_at_100
value: 91.071
- type: recall_at_1000
value: 98.883
- type: recall_at_3
value: 41.611
- type: recall_at_5
value: 51.459999999999994
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.15823073415413
- type: f1
value: 96.00362034963248
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.12722298221614
- type: f1
value: 70.46888967516227
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.77673167451245
- type: f1
value: 77.60202561132175
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 82.09145931405514
- type: f1
value: 81.7701921473406
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 36.52153488185864
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 36.80090398444147
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.807141746058605
- type: mrr
value: 32.85025611455029
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.920999999999999
- type: map_at_10
value: 16.049
- type: map_at_100
value: 16.049
- type: map_at_1000
value: 16.049
- type: map_at_3
value: 11.865
- type: map_at_5
value: 13.657
- type: mrr_at_1
value: 53.87
- type: mrr_at_10
value: 62.291
- type: mrr_at_100
value: 62.291
- type: mrr_at_1000
value: 62.291
- type: mrr_at_3
value: 60.681
- type: mrr_at_5
value: 61.61
- type: ndcg_at_1
value: 51.23799999999999
- type: ndcg_at_10
value: 40.892
- type: ndcg_at_100
value: 26.951999999999998
- type: ndcg_at_1000
value: 26.474999999999998
- type: ndcg_at_3
value: 46.821
- type: ndcg_at_5
value: 44.333
- type: precision_at_1
value: 53.251000000000005
- type: precision_at_10
value: 30.124000000000002
- type: precision_at_100
value: 3.012
- type: precision_at_1000
value: 0.301
- type: precision_at_3
value: 43.55
- type: precision_at_5
value: 38.266
- type: recall_at_1
value: 6.920999999999999
- type: recall_at_10
value: 20.852
- type: recall_at_100
value: 20.852
- type: recall_at_1000
value: 20.852
- type: recall_at_3
value: 13.628000000000002
- type: recall_at_5
value: 16.273
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 46.827999999999996
- type: map_at_10
value: 63.434000000000005
- type: map_at_100
value: 63.434000000000005
- type: map_at_1000
value: 63.434000000000005
- type: map_at_3
value: 59.794000000000004
- type: map_at_5
value: 62.08
- type: mrr_at_1
value: 52.288999999999994
- type: mrr_at_10
value: 65.95
- type: mrr_at_100
value: 65.95
- type: mrr_at_1000
value: 65.95
- type: mrr_at_3
value: 63.413
- type: mrr_at_5
value: 65.08
- type: ndcg_at_1
value: 52.288999999999994
- type: ndcg_at_10
value: 70.301
- type: ndcg_at_100
value: 70.301
- type: ndcg_at_1000
value: 70.301
- type: ndcg_at_3
value: 63.979
- type: ndcg_at_5
value: 67.582
- type: precision_at_1
value: 52.288999999999994
- type: precision_at_10
value: 10.576
- type: precision_at_100
value: 1.058
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 28.177000000000003
- type: precision_at_5
value: 19.073
- type: recall_at_1
value: 46.827999999999996
- type: recall_at_10
value: 88.236
- type: recall_at_100
value: 88.236
- type: recall_at_1000
value: 88.236
- type: recall_at_3
value: 72.371
- type: recall_at_5
value: 80.56
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.652
- type: map_at_10
value: 85.953
- type: map_at_100
value: 85.953
- type: map_at_1000
value: 85.953
- type: map_at_3
value: 83.05399999999999
- type: map_at_5
value: 84.89
- type: mrr_at_1
value: 82.42
- type: mrr_at_10
value: 88.473
- type: mrr_at_100
value: 88.473
- type: mrr_at_1000
value: 88.473
- type: mrr_at_3
value: 87.592
- type: mrr_at_5
value: 88.211
- type: ndcg_at_1
value: 82.44
- type: ndcg_at_10
value: 89.467
- type: ndcg_at_100
value: 89.33
- type: ndcg_at_1000
value: 89.33
- type: ndcg_at_3
value: 86.822
- type: ndcg_at_5
value: 88.307
- type: precision_at_1
value: 82.44
- type: precision_at_10
value: 13.616
- type: precision_at_100
value: 1.362
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 38.117000000000004
- type: precision_at_5
value: 25.05
- type: recall_at_1
value: 71.652
- type: recall_at_10
value: 96.224
- type: recall_at_100
value: 96.224
- type: recall_at_1000
value: 96.224
- type: recall_at_3
value: 88.571
- type: recall_at_5
value: 92.812
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 61.295010338050474
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 67.26380819328142
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.683
- type: map_at_10
value: 14.924999999999999
- type: map_at_100
value: 17.532
- type: map_at_1000
value: 17.875
- type: map_at_3
value: 10.392
- type: map_at_5
value: 12.592
- type: mrr_at_1
value: 28.000000000000004
- type: mrr_at_10
value: 39.951
- type: mrr_at_100
value: 41.025
- type: mrr_at_1000
value: 41.056
- type: mrr_at_3
value: 36.317
- type: mrr_at_5
value: 38.412
- type: ndcg_at_1
value: 28.000000000000004
- type: ndcg_at_10
value: 24.410999999999998
- type: ndcg_at_100
value: 33.79
- type: ndcg_at_1000
value: 39.035
- type: ndcg_at_3
value: 22.845
- type: ndcg_at_5
value: 20.080000000000002
- type: precision_at_1
value: 28.000000000000004
- type: precision_at_10
value: 12.790000000000001
- type: precision_at_100
value: 2.633
- type: precision_at_1000
value: 0.388
- type: precision_at_3
value: 21.367
- type: precision_at_5
value: 17.7
- type: recall_at_1
value: 5.683
- type: recall_at_10
value: 25.91
- type: recall_at_100
value: 53.443
- type: recall_at_1000
value: 78.73
- type: recall_at_3
value: 13.003
- type: recall_at_5
value: 17.932000000000002
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.677978681023
- type: cos_sim_spearman
value: 83.13093441058189
- type: euclidean_pearson
value: 83.35535759341572
- type: euclidean_spearman
value: 83.42583744219611
- type: manhattan_pearson
value: 83.2243124045889
- type: manhattan_spearman
value: 83.39801618652632
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 81.68960206569666
- type: cos_sim_spearman
value: 77.3368966488535
- type: euclidean_pearson
value: 77.62828980560303
- type: euclidean_spearman
value: 76.77951481444651
- type: manhattan_pearson
value: 77.88637240839041
- type: manhattan_spearman
value: 77.22157841466188
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.18745821650724
- type: cos_sim_spearman
value: 85.04423285574542
- type: euclidean_pearson
value: 85.46604816931023
- type: euclidean_spearman
value: 85.5230593932974
- type: manhattan_pearson
value: 85.57912805986261
- type: manhattan_spearman
value: 85.65955905111873
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.6715333300355
- type: cos_sim_spearman
value: 82.9058522514908
- type: euclidean_pearson
value: 83.9640357424214
- type: euclidean_spearman
value: 83.60415457472637
- type: manhattan_pearson
value: 84.05621005853469
- type: manhattan_spearman
value: 83.87077724707746
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.82422928098886
- type: cos_sim_spearman
value: 88.12660311894628
- type: euclidean_pearson
value: 87.50974805056555
- type: euclidean_spearman
value: 87.91957275596677
- type: manhattan_pearson
value: 87.74119404878883
- type: manhattan_spearman
value: 88.2808922165719
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.80605838552093
- type: cos_sim_spearman
value: 86.24123388765678
- type: euclidean_pearson
value: 85.32648347339814
- type: euclidean_spearman
value: 85.60046671950158
- type: manhattan_pearson
value: 85.53800168487811
- type: manhattan_spearman
value: 85.89542420480763
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.87540978988132
- type: cos_sim_spearman
value: 90.12715295099461
- type: euclidean_pearson
value: 91.61085993525275
- type: euclidean_spearman
value: 91.31835942311758
- type: manhattan_pearson
value: 91.57500202032934
- type: manhattan_spearman
value: 91.1790925526635
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 69.87136205329556
- type: cos_sim_spearman
value: 68.6253154635078
- type: euclidean_pearson
value: 68.91536015034222
- type: euclidean_spearman
value: 67.63744649352542
- type: manhattan_pearson
value: 69.2000713045275
- type: manhattan_spearman
value: 68.16002901587316
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.21849551039082
- type: cos_sim_spearman
value: 85.6392959372461
- type: euclidean_pearson
value: 85.92050852609488
- type: euclidean_spearman
value: 85.97205649009734
- type: manhattan_pearson
value: 86.1031154802254
- type: manhattan_spearman
value: 86.26791155517466
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.83953958636627
- type: mrr
value: 96.71167612344082
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 64.994
- type: map_at_10
value: 74.763
- type: map_at_100
value: 75.127
- type: map_at_1000
value: 75.143
- type: map_at_3
value: 71.824
- type: map_at_5
value: 73.71
- type: mrr_at_1
value: 68.333
- type: mrr_at_10
value: 75.749
- type: mrr_at_100
value: 75.922
- type: mrr_at_1000
value: 75.938
- type: mrr_at_3
value: 73.556
- type: mrr_at_5
value: 74.739
- type: ndcg_at_1
value: 68.333
- type: ndcg_at_10
value: 79.174
- type: ndcg_at_100
value: 80.41
- type: ndcg_at_1000
value: 80.804
- type: ndcg_at_3
value: 74.361
- type: ndcg_at_5
value: 76.861
- type: precision_at_1
value: 68.333
- type: precision_at_10
value: 10.333
- type: precision_at_100
value: 1.0999999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.778
- type: precision_at_5
value: 19.067
- type: recall_at_1
value: 64.994
- type: recall_at_10
value: 91.822
- type: recall_at_100
value: 97.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 78.878
- type: recall_at_5
value: 85.172
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.72079207920792
- type: cos_sim_ap
value: 93.00265215525152
- type: cos_sim_f1
value: 85.06596306068602
- type: cos_sim_precision
value: 90.05586592178771
- type: cos_sim_recall
value: 80.60000000000001
- type: dot_accuracy
value: 99.66039603960397
- type: dot_ap
value: 91.22371407479089
- type: dot_f1
value: 82.34693877551021
- type: dot_precision
value: 84.0625
- type: dot_recall
value: 80.7
- type: euclidean_accuracy
value: 99.71881188118812
- type: euclidean_ap
value: 92.88449963304728
- type: euclidean_f1
value: 85.19480519480518
- type: euclidean_precision
value: 88.64864864864866
- type: euclidean_recall
value: 82.0
- type: manhattan_accuracy
value: 99.73267326732673
- type: manhattan_ap
value: 93.23055393056883
- type: manhattan_f1
value: 85.88957055214725
- type: manhattan_precision
value: 87.86610878661088
- type: manhattan_recall
value: 84.0
- type: max_accuracy
value: 99.73267326732673
- type: max_ap
value: 93.23055393056883
- type: max_f1
value: 85.88957055214725
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 77.3305735900358
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 41.32967136540674
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.95514866379359
- type: mrr
value: 56.95423245055598
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.783007208997144
- type: cos_sim_spearman
value: 30.373444721540533
- type: dot_pearson
value: 29.210604111143905
- type: dot_spearman
value: 29.98809758085659
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.234
- type: map_at_10
value: 1.894
- type: map_at_100
value: 1.894
- type: map_at_1000
value: 1.894
- type: map_at_3
value: 0.636
- type: map_at_5
value: 1.0
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.667
- type: mrr_at_100
value: 93.667
- type: mrr_at_1000
value: 93.667
- type: mrr_at_3
value: 93.667
- type: mrr_at_5
value: 93.667
- type: ndcg_at_1
value: 85.0
- type: ndcg_at_10
value: 74.798
- type: ndcg_at_100
value: 16.462
- type: ndcg_at_1000
value: 7.0889999999999995
- type: ndcg_at_3
value: 80.754
- type: ndcg_at_5
value: 77.319
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 78.0
- type: precision_at_100
value: 7.8
- type: precision_at_1000
value: 0.7799999999999999
- type: precision_at_3
value: 83.333
- type: precision_at_5
value: 80.80000000000001
- type: recall_at_1
value: 0.234
- type: recall_at_10
value: 2.093
- type: recall_at_100
value: 2.093
- type: recall_at_1000
value: 2.093
- type: recall_at_3
value: 0.662
- type: recall_at_5
value: 1.0739999999999998
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.703
- type: map_at_10
value: 10.866000000000001
- type: map_at_100
value: 10.866000000000001
- type: map_at_1000
value: 10.866000000000001
- type: map_at_3
value: 5.909
- type: map_at_5
value: 7.35
- type: mrr_at_1
value: 36.735
- type: mrr_at_10
value: 53.583000000000006
- type: mrr_at_100
value: 53.583000000000006
- type: mrr_at_1000
value: 53.583000000000006
- type: mrr_at_3
value: 49.32
- type: mrr_at_5
value: 51.769
- type: ndcg_at_1
value: 34.694
- type: ndcg_at_10
value: 27.926000000000002
- type: ndcg_at_100
value: 22.701
- type: ndcg_at_1000
value: 22.701
- type: ndcg_at_3
value: 32.073
- type: ndcg_at_5
value: 28.327999999999996
- type: precision_at_1
value: 36.735
- type: precision_at_10
value: 24.694
- type: precision_at_100
value: 2.469
- type: precision_at_1000
value: 0.247
- type: precision_at_3
value: 31.973000000000003
- type: precision_at_5
value: 26.939
- type: recall_at_1
value: 2.703
- type: recall_at_10
value: 17.702
- type: recall_at_100
value: 17.702
- type: recall_at_1000
value: 17.702
- type: recall_at_3
value: 7.208
- type: recall_at_5
value: 9.748999999999999
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.79960000000001
- type: ap
value: 15.467565415565815
- type: f1
value: 55.28639823443618
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.7792869269949
- type: f1
value: 65.08597154774318
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 55.70352297774293
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 88.27561542588067
- type: cos_sim_ap
value: 81.08262141256193
- type: cos_sim_f1
value: 73.82341501361338
- type: cos_sim_precision
value: 72.5720112159062
- type: cos_sim_recall
value: 75.11873350923483
- type: dot_accuracy
value: 86.66030875603504
- type: dot_ap
value: 76.6052349228621
- type: dot_f1
value: 70.13897280966768
- type: dot_precision
value: 64.70457079152732
- type: dot_recall
value: 76.56992084432717
- type: euclidean_accuracy
value: 88.37098408535495
- type: euclidean_ap
value: 81.12515230092113
- type: euclidean_f1
value: 74.10338225909379
- type: euclidean_precision
value: 71.76761433868974
- type: euclidean_recall
value: 76.59630606860158
- type: manhattan_accuracy
value: 88.34118137926924
- type: manhattan_ap
value: 80.95751834536561
- type: manhattan_f1
value: 73.9119496855346
- type: manhattan_precision
value: 70.625
- type: manhattan_recall
value: 77.5197889182058
- type: max_accuracy
value: 88.37098408535495
- type: max_ap
value: 81.12515230092113
- type: max_f1
value: 74.10338225909379
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.79896767182831
- type: cos_sim_ap
value: 87.40071784061065
- type: cos_sim_f1
value: 79.87753144712087
- type: cos_sim_precision
value: 76.67304015296367
- type: cos_sim_recall
value: 83.3615645210964
- type: dot_accuracy
value: 88.95486474948578
- type: dot_ap
value: 86.00227979119943
- type: dot_f1
value: 78.54601474525914
- type: dot_precision
value: 75.00525394045535
- type: dot_recall
value: 82.43763473975977
- type: euclidean_accuracy
value: 89.7892653393876
- type: euclidean_ap
value: 87.42174706480819
- type: euclidean_f1
value: 80.07283321194465
- type: euclidean_precision
value: 75.96738529574351
- type: euclidean_recall
value: 84.6473668001232
- type: manhattan_accuracy
value: 89.8474793340319
- type: manhattan_ap
value: 87.47814292587448
- type: manhattan_f1
value: 80.15461150280949
- type: manhattan_precision
value: 74.88798234468
- type: manhattan_recall
value: 86.21804742839544
- type: max_accuracy
value: 89.8474793340319
- type: max_ap
value: 87.47814292587448
- type: max_f1
value: 80.15461150280949
---
# Model Summary
> GritLM is a generative representational instruction-tuned language model. It unifies text representation (embedding) and text generation in a single model, achieving state-of-the-art performance on both types of tasks.
- **Repository:** [ContextualAI/gritlm](https://github.com/ContextualAI/gritlm)
- **Paper:** https://arxiv.org/abs/2402.09906
- **Logs:** https://wandb.ai/muennighoff/gritlm/runs/0uui712t/overview
- **Script:** https://github.com/ContextualAI/gritlm/blob/main/scripts/training/train_gritlm_7b.sh
| Model | Description |
|-------|-------------|
| [GritLM 7B](https://hf.co/GritLM/GritLM-7B) | Mistral 7B finetuned using GRIT |
| [GritLM 8x7B](https://hf.co/GritLM/GritLM-8x7B) | Mixtral 8x7B finetuned using GRIT |
# Use
The model usage is documented [here](https://github.com/ContextualAI/gritlm?tab=readme-ov-file#inference).
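The snippet below is a minimal sketch (not taken from the original card) of loading this 8-bit checkpoint for generation with `transformers`; it assumes the quantized repo keeps the GritLM tokenizer and chat template. Embedding (representation) mode uses a dedicated prompt format and pooling, for which the linked GritLM repository remains the authoritative reference.
```python
# Minimal, hedged sketch: load the 8-bit quantized checkpoint and run greedy generation.
# Assumptions: this repo id resolves with transformers and ships the GritLM chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/GritLM_-_GritLM-7B-8bits"  # this quantized repo (assumption)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Generative mode: format the prompt with the chat template, then decode greedily.
messages = [{"role": "user", "content": "Summarize what generative representational instruction tuning does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs.shape[1]:], skip_special_tokens=True))
```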
# Citation
```bibtex
@misc{muennighoff2024generative,
title={Generative Representational Instruction Tuning},
author={Niklas Muennighoff and Hongjin Su and Liang Wang and Nan Yang and Furu Wei and Tao Yu and Amanpreet Singh and Douwe Kiela},
year={2024},
eprint={2402.09906},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {} | RichardErkhov/GritLM_-_GritLM-7B-8bits | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"custom_code",
"arxiv:2402.09906",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-05-03T17:10:22+00:00 | [
"2402.09906"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #conversational #custom_code #arxiv-2402.09906 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
GritLM-7B - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
pipeline\_tag: text-generation
inference: true
license: apache-2.0
datasets:
* GritLM/tulu2
tags:
* mteb
model-index:
* name: GritLM-7B
results:
+ task:
type: Classification
dataset:
type: mteb/amazon\_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 81.17910447761194
- type: ap
value: 46.26260671758199
- type: f1
value: 75.44565719934167
+ task:
type: Classification
dataset:
type: mteb/amazon\_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.5161
- type: ap
value: 94.79131981460425
- type: f1
value: 96.51506148413065
+ task:
type: Classification
dataset:
type: mteb/amazon\_reviews\_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 57.806000000000004
- type: f1
value: 56.78350156257903
+ task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 38.478
- type: map\_at\_10
value: 54.955
- type: map\_at\_100
value: 54.955
- type: map\_at\_1000
value: 54.955
- type: map\_at\_3
value: 50.888999999999996
- type: map\_at\_5
value: 53.349999999999994
- type: mrr\_at\_1
value: 39.757999999999996
- type: mrr\_at\_10
value: 55.449000000000005
- type: mrr\_at\_100
value: 55.449000000000005
- type: mrr\_at\_1000
value: 55.449000000000005
- type: mrr\_at\_3
value: 51.37500000000001
- type: mrr\_at\_5
value: 53.822
- type: ndcg\_at\_1
value: 38.478
- type: ndcg\_at\_10
value: 63.239999999999995
- type: ndcg\_at\_100
value: 63.239999999999995
- type: ndcg\_at\_1000
value: 63.239999999999995
- type: ndcg\_at\_3
value: 54.935
- type: ndcg\_at\_5
value: 59.379000000000005
- type: precision\_at\_1
value: 38.478
- type: precision\_at\_10
value: 8.933
- type: precision\_at\_100
value: 0.893
- type: precision\_at\_1000
value: 0.089
- type: precision\_at\_3
value: 22.214
- type: precision\_at\_5
value: 15.491
- type: recall\_at\_1
value: 38.478
- type: recall\_at\_10
value: 89.331
- type: recall\_at\_100
value: 89.331
- type: recall\_at\_1000
value: 89.331
- type: recall\_at\_3
value: 66.643
- type: recall\_at\_5
value: 77.45400000000001
+ task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v\_measure
value: 51.67144081472449
+ task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v\_measure
value: 48.11256154264126
+ task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.33801955487878
- type: mrr
value: 80.71549487754474
+ task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos\_sim\_pearson
value: 88.1935203751726
- type: cos\_sim\_spearman
value: 86.35497970498659
- type: euclidean\_pearson
value: 85.46910708503744
- type: euclidean\_spearman
value: 85.13928935405485
- type: manhattan\_pearson
value: 85.68373836333303
- type: manhattan\_spearman
value: 85.40013867117746
+ task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 88.46753246753248
- type: f1
value: 88.43006344981134
+ task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v\_measure
value: 40.86793640310432
+ task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v\_measure
value: 39.80291334130727
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 38.421
- type: map\_at\_10
value: 52.349000000000004
- type: map\_at\_100
value: 52.349000000000004
- type: map\_at\_1000
value: 52.349000000000004
- type: map\_at\_3
value: 48.17
- type: map\_at\_5
value: 50.432
- type: mrr\_at\_1
value: 47.353
- type: mrr\_at\_10
value: 58.387
- type: mrr\_at\_100
value: 58.387
- type: mrr\_at\_1000
value: 58.387
- type: mrr\_at\_3
value: 56.199
- type: mrr\_at\_5
value: 57.487
- type: ndcg\_at\_1
value: 47.353
- type: ndcg\_at\_10
value: 59.202
- type: ndcg\_at\_100
value: 58.848
- type: ndcg\_at\_1000
value: 58.831999999999994
- type: ndcg\_at\_3
value: 54.112
- type: ndcg\_at\_5
value: 56.312
- type: precision\_at\_1
value: 47.353
- type: precision\_at\_10
value: 11.459
- type: precision\_at\_100
value: 1.146
- type: precision\_at\_1000
value: 0.11499999999999999
- type: precision\_at\_3
value: 26.133
- type: precision\_at\_5
value: 18.627
- type: recall\_at\_1
value: 38.421
- type: recall\_at\_10
value: 71.89
- type: recall\_at\_100
value: 71.89
- type: recall\_at\_1000
value: 71.89
- type: recall\_at\_3
value: 56.58
- type: recall\_at\_5
value: 63.125
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 38.025999999999996
- type: map\_at\_10
value: 50.590999999999994
- type: map\_at\_100
value: 51.99700000000001
- type: map\_at\_1000
value: 52.11599999999999
- type: map\_at\_3
value: 47.435
- type: map\_at\_5
value: 49.236000000000004
- type: mrr\_at\_1
value: 48.28
- type: mrr\_at\_10
value: 56.814
- type: mrr\_at\_100
value: 57.446
- type: mrr\_at\_1000
value: 57.476000000000006
- type: mrr\_at\_3
value: 54.958
- type: mrr\_at\_5
value: 56.084999999999994
- type: ndcg\_at\_1
value: 48.28
- type: ndcg\_at\_10
value: 56.442
- type: ndcg\_at\_100
value: 60.651999999999994
- type: ndcg\_at\_1000
value: 62.187000000000005
- type: ndcg\_at\_3
value: 52.866
- type: ndcg\_at\_5
value: 54.515
- type: precision\_at\_1
value: 48.28
- type: precision\_at\_10
value: 10.586
- type: precision\_at\_100
value: 1.6310000000000002
- type: precision\_at\_1000
value: 0.20600000000000002
- type: precision\_at\_3
value: 25.945
- type: precision\_at\_5
value: 18.076
- type: recall\_at\_1
value: 38.025999999999996
- type: recall\_at\_10
value: 66.11399999999999
- type: recall\_at\_100
value: 83.339
- type: recall\_at\_1000
value: 92.413
- type: recall\_at\_3
value: 54.493
- type: recall\_at\_5
value: 59.64699999999999
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 47.905
- type: map\_at\_10
value: 61.58
- type: map\_at\_100
value: 62.605
- type: map\_at\_1000
value: 62.637
- type: map\_at\_3
value: 58.074000000000005
- type: map\_at\_5
value: 60.260000000000005
- type: mrr\_at\_1
value: 54.42
- type: mrr\_at\_10
value: 64.847
- type: mrr\_at\_100
value: 65.403
- type: mrr\_at\_1000
value: 65.41900000000001
- type: mrr\_at\_3
value: 62.675000000000004
- type: mrr\_at\_5
value: 64.101
- type: ndcg\_at\_1
value: 54.42
- type: ndcg\_at\_10
value: 67.394
- type: ndcg\_at\_100
value: 70.846
- type: ndcg\_at\_1000
value: 71.403
- type: ndcg\_at\_3
value: 62.025
- type: ndcg\_at\_5
value: 65.032
- type: precision\_at\_1
value: 54.42
- type: precision\_at\_10
value: 10.646
- type: precision\_at\_100
value: 1.325
- type: precision\_at\_1000
value: 0.13999999999999999
- type: precision\_at\_3
value: 27.398
- type: precision\_at\_5
value: 18.796
- type: recall\_at\_1
value: 47.905
- type: recall\_at\_10
value: 80.84599999999999
- type: recall\_at\_100
value: 95.078
- type: recall\_at\_1000
value: 98.878
- type: recall\_at\_3
value: 67.05600000000001
- type: recall\_at\_5
value: 74.261
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 30.745
- type: map\_at\_10
value: 41.021
- type: map\_at\_100
value: 41.021
- type: map\_at\_1000
value: 41.021
- type: map\_at\_3
value: 37.714999999999996
- type: map\_at\_5
value: 39.766
- type: mrr\_at\_1
value: 33.559
- type: mrr\_at\_10
value: 43.537
- type: mrr\_at\_100
value: 43.537
- type: mrr\_at\_1000
value: 43.537
- type: mrr\_at\_3
value: 40.546
- type: mrr\_at\_5
value: 42.439
- type: ndcg\_at\_1
value: 33.559
- type: ndcg\_at\_10
value: 46.781
- type: ndcg\_at\_100
value: 46.781
- type: ndcg\_at\_1000
value: 46.781
- type: ndcg\_at\_3
value: 40.516000000000005
- type: ndcg\_at\_5
value: 43.957
- type: precision\_at\_1
value: 33.559
- type: precision\_at\_10
value: 7.198
- type: precision\_at\_100
value: 0.72
- type: precision\_at\_1000
value: 0.07200000000000001
- type: precision\_at\_3
value: 17.1
- type: precision\_at\_5
value: 12.316
- type: recall\_at\_1
value: 30.745
- type: recall\_at\_10
value: 62.038000000000004
- type: recall\_at\_100
value: 62.038000000000004
- type: recall\_at\_1000
value: 62.038000000000004
- type: recall\_at\_3
value: 45.378
- type: recall\_at\_5
value: 53.580000000000005
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 19.637999999999998
- type: map\_at\_10
value: 31.05
- type: map\_at\_100
value: 31.05
- type: map\_at\_1000
value: 31.05
- type: map\_at\_3
value: 27.628000000000004
- type: map\_at\_5
value: 29.767
- type: mrr\_at\_1
value: 25.0
- type: mrr\_at\_10
value: 36.131
- type: mrr\_at\_100
value: 36.131
- type: mrr\_at\_1000
value: 36.131
- type: mrr\_at\_3
value: 33.333
- type: mrr\_at\_5
value: 35.143
- type: ndcg\_at\_1
value: 25.0
- type: ndcg\_at\_10
value: 37.478
- type: ndcg\_at\_100
value: 37.469
- type: ndcg\_at\_1000
value: 37.469
- type: ndcg\_at\_3
value: 31.757999999999996
- type: ndcg\_at\_5
value: 34.821999999999996
- type: precision\_at\_1
value: 25.0
- type: precision\_at\_10
value: 7.188999999999999
- type: precision\_at\_100
value: 0.719
- type: precision\_at\_1000
value: 0.07200000000000001
- type: precision\_at\_3
value: 15.837000000000002
- type: precision\_at\_5
value: 11.841
- type: recall\_at\_1
value: 19.637999999999998
- type: recall\_at\_10
value: 51.836000000000006
- type: recall\_at\_100
value: 51.836000000000006
- type: recall\_at\_1000
value: 51.836000000000006
- type: recall\_at\_3
value: 36.384
- type: recall\_at\_5
value: 43.964
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 34.884
- type: map\_at\_10
value: 47.88
- type: map\_at\_100
value: 47.88
- type: map\_at\_1000
value: 47.88
- type: map\_at\_3
value: 43.85
- type: map\_at\_5
value: 46.414
- type: mrr\_at\_1
value: 43.022
- type: mrr\_at\_10
value: 53.569
- type: mrr\_at\_100
value: 53.569
- type: mrr\_at\_1000
value: 53.569
- type: mrr\_at\_3
value: 51.075
- type: mrr\_at\_5
value: 52.725
- type: ndcg\_at\_1
value: 43.022
- type: ndcg\_at\_10
value: 54.461000000000006
- type: ndcg\_at\_100
value: 54.388000000000005
- type: ndcg\_at\_1000
value: 54.388000000000005
- type: ndcg\_at\_3
value: 48.864999999999995
- type: ndcg\_at\_5
value: 52.032000000000004
- type: precision\_at\_1
value: 43.022
- type: precision\_at\_10
value: 9.885
- type: precision\_at\_100
value: 0.988
- type: precision\_at\_1000
value: 0.099
- type: precision\_at\_3
value: 23.612
- type: precision\_at\_5
value: 16.997
- type: recall\_at\_1
value: 34.884
- type: recall\_at\_10
value: 68.12899999999999
- type: recall\_at\_100
value: 68.12899999999999
- type: recall\_at\_1000
value: 68.12899999999999
- type: recall\_at\_3
value: 52.428
- type: recall\_at\_5
value: 60.662000000000006
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 31.588
- type: map\_at\_10
value: 43.85
- type: map\_at\_100
value: 45.317
- type: map\_at\_1000
value: 45.408
- type: map\_at\_3
value: 39.73
- type: map\_at\_5
value: 42.122
- type: mrr\_at\_1
value: 38.927
- type: mrr\_at\_10
value: 49.582
- type: mrr\_at\_100
value: 50.39
- type: mrr\_at\_1000
value: 50.426
- type: mrr\_at\_3
value: 46.518
- type: mrr\_at\_5
value: 48.271
- type: ndcg\_at\_1
value: 38.927
- type: ndcg\_at\_10
value: 50.605999999999995
- type: ndcg\_at\_100
value: 56.22200000000001
- type: ndcg\_at\_1000
value: 57.724
- type: ndcg\_at\_3
value: 44.232
- type: ndcg\_at\_5
value: 47.233999999999995
- type: precision\_at\_1
value: 38.927
- type: precision\_at\_10
value: 9.429
- type: precision\_at\_100
value: 1.435
- type: precision\_at\_1000
value: 0.172
- type: precision\_at\_3
value: 21.271
- type: precision\_at\_5
value: 15.434000000000001
- type: recall\_at\_1
value: 31.588
- type: recall\_at\_10
value: 64.836
- type: recall\_at\_100
value: 88.066
- type: recall\_at\_1000
value: 97.748
- type: recall\_at\_3
value: 47.128
- type: recall\_at\_5
value: 54.954
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 31.956083333333336
- type: map\_at\_10
value: 43.33483333333333
- type: map\_at\_100
value: 44.64883333333333
- type: map\_at\_1000
value: 44.75
- type: map\_at\_3
value: 39.87741666666666
- type: map\_at\_5
value: 41.86766666666667
- type: mrr\_at\_1
value: 38.06341666666667
- type: mrr\_at\_10
value: 47.839666666666666
- type: mrr\_at\_100
value: 48.644000000000005
- type: mrr\_at\_1000
value: 48.68566666666667
- type: mrr\_at\_3
value: 45.26358333333334
- type: mrr\_at\_5
value: 46.790000000000006
- type: ndcg\_at\_1
value: 38.06341666666667
- type: ndcg\_at\_10
value: 49.419333333333334
- type: ndcg\_at\_100
value: 54.50166666666667
- type: ndcg\_at\_1000
value: 56.161166666666674
- type: ndcg\_at\_3
value: 43.982416666666666
- type: ndcg\_at\_5
value: 46.638083333333334
- type: precision\_at\_1
value: 38.06341666666667
- type: precision\_at\_10
value: 8.70858333333333
- type: precision\_at\_100
value: 1.327
- type: precision\_at\_1000
value: 0.165
- type: precision\_at\_3
value: 20.37816666666667
- type: precision\_at\_5
value: 14.516333333333334
- type: recall\_at\_1
value: 31.956083333333336
- type: recall\_at\_10
value: 62.69458333333334
- type: recall\_at\_100
value: 84.46433333333334
- type: recall\_at\_1000
value: 95.58449999999999
- type: recall\_at\_3
value: 47.52016666666666
- type: recall\_at\_5
value: 54.36066666666666
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 28.912
- type: map\_at\_10
value: 38.291
- type: map\_at\_100
value: 39.44
- type: map\_at\_1000
value: 39.528
- type: map\_at\_3
value: 35.638
- type: map\_at\_5
value: 37.218
- type: mrr\_at\_1
value: 32.822
- type: mrr\_at\_10
value: 41.661
- type: mrr\_at\_100
value: 42.546
- type: mrr\_at\_1000
value: 42.603
- type: mrr\_at\_3
value: 39.238
- type: mrr\_at\_5
value: 40.726
- type: ndcg\_at\_1
value: 32.822
- type: ndcg\_at\_10
value: 43.373
- type: ndcg\_at\_100
value: 48.638
- type: ndcg\_at\_1000
value: 50.654999999999994
- type: ndcg\_at\_3
value: 38.643
- type: ndcg\_at\_5
value: 41.126000000000005
- type: precision\_at\_1
value: 32.822
- type: precision\_at\_10
value: 6.8709999999999996
- type: precision\_at\_100
value: 1.032
- type: precision\_at\_1000
value: 0.128
- type: precision\_at\_3
value: 16.82
- type: precision\_at\_5
value: 11.718
- type: recall\_at\_1
value: 28.912
- type: recall\_at\_10
value: 55.376999999999995
- type: recall\_at\_100
value: 79.066
- type: recall\_at\_1000
value: 93.664
- type: recall\_at\_3
value: 42.569
- type: recall\_at\_5
value: 48.719
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 22.181
- type: map\_at\_10
value: 31.462
- type: map\_at\_100
value: 32.73
- type: map\_at\_1000
value: 32.848
- type: map\_at\_3
value: 28.57
- type: map\_at\_5
value: 30.182
- type: mrr\_at\_1
value: 27.185
- type: mrr\_at\_10
value: 35.846000000000004
- type: mrr\_at\_100
value: 36.811
- type: mrr\_at\_1000
value: 36.873
- type: mrr\_at\_3
value: 33.437
- type: mrr\_at\_5
value: 34.813
- type: ndcg\_at\_1
value: 27.185
- type: ndcg\_at\_10
value: 36.858000000000004
- type: ndcg\_at\_100
value: 42.501
- type: ndcg\_at\_1000
value: 44.945
- type: ndcg\_at\_3
value: 32.066
- type: ndcg\_at\_5
value: 34.29
- type: precision\_at\_1
value: 27.185
- type: precision\_at\_10
value: 6.752
- type: precision\_at\_100
value: 1.111
- type: precision\_at\_1000
value: 0.151
- type: precision\_at\_3
value: 15.290000000000001
- type: precision\_at\_5
value: 11.004999999999999
- type: recall\_at\_1
value: 22.181
- type: recall\_at\_10
value: 48.513
- type: recall\_at\_100
value: 73.418
- type: recall\_at\_1000
value: 90.306
- type: recall\_at\_3
value: 35.003
- type: recall\_at\_5
value: 40.876000000000005
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 33.934999999999995
- type: map\_at\_10
value: 44.727
- type: map\_at\_100
value: 44.727
- type: map\_at\_1000
value: 44.727
- type: map\_at\_3
value: 40.918
- type: map\_at\_5
value: 42.961
- type: mrr\_at\_1
value: 39.646
- type: mrr\_at\_10
value: 48.898
- type: mrr\_at\_100
value: 48.898
- type: mrr\_at\_1000
value: 48.898
- type: mrr\_at\_3
value: 45.896
- type: mrr\_at\_5
value: 47.514
- type: ndcg\_at\_1
value: 39.646
- type: ndcg\_at\_10
value: 50.817
- type: ndcg\_at\_100
value: 50.803
- type: ndcg\_at\_1000
value: 50.803
- type: ndcg\_at\_3
value: 44.507999999999996
- type: ndcg\_at\_5
value: 47.259
- type: precision\_at\_1
value: 39.646
- type: precision\_at\_10
value: 8.759
- type: precision\_at\_100
value: 0.876
- type: precision\_at\_1000
value: 0.08800000000000001
- type: precision\_at\_3
value: 20.274
- type: precision\_at\_5
value: 14.366000000000001
- type: recall\_at\_1
value: 33.934999999999995
- type: recall\_at\_10
value: 65.037
- type: recall\_at\_100
value: 65.037
- type: recall\_at\_1000
value: 65.037
- type: recall\_at\_3
value: 47.439
- type: recall\_at\_5
value: 54.567
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 32.058
- type: map\_at\_10
value: 43.137
- type: map\_at\_100
value: 43.137
- type: map\_at\_1000
value: 43.137
- type: map\_at\_3
value: 39.882
- type: map\_at\_5
value: 41.379
- type: mrr\_at\_1
value: 38.933
- type: mrr\_at\_10
value: 48.344
- type: mrr\_at\_100
value: 48.344
- type: mrr\_at\_1000
value: 48.344
- type: mrr\_at\_3
value: 45.652
- type: mrr\_at\_5
value: 46.877
- type: ndcg\_at\_1
value: 38.933
- type: ndcg\_at\_10
value: 49.964
- type: ndcg\_at\_100
value: 49.242000000000004
- type: ndcg\_at\_1000
value: 49.222
- type: ndcg\_at\_3
value: 44.605
- type: ndcg\_at\_5
value: 46.501999999999995
- type: precision\_at\_1
value: 38.933
- type: precision\_at\_10
value: 9.427000000000001
- type: precision\_at\_100
value: 0.943
- type: precision\_at\_1000
value: 0.094
- type: precision\_at\_3
value: 20.685000000000002
- type: precision\_at\_5
value: 14.585
- type: recall\_at\_1
value: 32.058
- type: recall\_at\_10
value: 63.074
- type: recall\_at\_100
value: 63.074
- type: recall\_at\_1000
value: 63.074
- type: recall\_at\_3
value: 47.509
- type: recall\_at\_5
value: 52.455
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 26.029000000000003
- type: map\_at\_10
value: 34.646
- type: map\_at\_100
value: 34.646
- type: map\_at\_1000
value: 34.646
- type: map\_at\_3
value: 31.456
- type: map\_at\_5
value: 33.138
- type: mrr\_at\_1
value: 28.281
- type: mrr\_at\_10
value: 36.905
- type: mrr\_at\_100
value: 36.905
- type: mrr\_at\_1000
value: 36.905
- type: mrr\_at\_3
value: 34.011
- type: mrr\_at\_5
value: 35.638
- type: ndcg\_at\_1
value: 28.281
- type: ndcg\_at\_10
value: 40.159
- type: ndcg\_at\_100
value: 40.159
- type: ndcg\_at\_1000
value: 40.159
- type: ndcg\_at\_3
value: 33.995
- type: ndcg\_at\_5
value: 36.836999999999996
- type: precision\_at\_1
value: 28.281
- type: precision\_at\_10
value: 6.358999999999999
- type: precision\_at\_100
value: 0.636
- type: precision\_at\_1000
value: 0.064
- type: precision\_at\_3
value: 14.233
- type: precision\_at\_5
value: 10.314
- type: recall\_at\_1
value: 26.029000000000003
- type: recall\_at\_10
value: 55.08
- type: recall\_at\_100
value: 55.08
- type: recall\_at\_1000
value: 55.08
- type: recall\_at\_3
value: 38.487
- type: recall\_at\_5
value: 45.308
+ task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 12.842999999999998
- type: map\_at\_10
value: 22.101000000000003
- type: map\_at\_100
value: 24.319
- type: map\_at\_1000
value: 24.51
- type: map\_at\_3
value: 18.372
- type: map\_at\_5
value: 20.323
- type: mrr\_at\_1
value: 27.948
- type: mrr\_at\_10
value: 40.321
- type: mrr\_at\_100
value: 41.262
- type: mrr\_at\_1000
value: 41.297
- type: mrr\_at\_3
value: 36.558
- type: mrr\_at\_5
value: 38.824999999999996
- type: ndcg\_at\_1
value: 27.948
- type: ndcg\_at\_10
value: 30.906
- type: ndcg\_at\_100
value: 38.986
- type: ndcg\_at\_1000
value: 42.136
- type: ndcg\_at\_3
value: 24.911
- type: ndcg\_at\_5
value: 27.168999999999997
- type: precision\_at\_1
value: 27.948
- type: precision\_at\_10
value: 9.798
- type: precision\_at\_100
value: 1.8399999999999999
- type: precision\_at\_1000
value: 0.243
- type: precision\_at\_3
value: 18.328
- type: precision\_at\_5
value: 14.502
- type: recall\_at\_1
value: 12.842999999999998
- type: recall\_at\_10
value: 37.245
- type: recall\_at\_100
value: 64.769
- type: recall\_at\_1000
value: 82.055
- type: recall\_at\_3
value: 23.159
- type: recall\_at\_5
value: 29.113
+ task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 8.934000000000001
- type: map\_at\_10
value: 21.915000000000003
- type: map\_at\_100
value: 21.915000000000003
- type: map\_at\_1000
value: 21.915000000000003
- type: map\_at\_3
value: 14.623
- type: map\_at\_5
value: 17.841
- type: mrr\_at\_1
value: 71.25
- type: mrr\_at\_10
value: 78.994
- type: mrr\_at\_100
value: 78.994
- type: mrr\_at\_1000
value: 78.994
- type: mrr\_at\_3
value: 77.208
- type: mrr\_at\_5
value: 78.55799999999999
- type: ndcg\_at\_1
value: 60.62499999999999
- type: ndcg\_at\_10
value: 46.604
- type: ndcg\_at\_100
value: 35.653
- type: ndcg\_at\_1000
value: 35.531
- type: ndcg\_at\_3
value: 50.605
- type: ndcg\_at\_5
value: 48.730000000000004
- type: precision\_at\_1
value: 71.25
- type: precision\_at\_10
value: 37.75
- type: precision\_at\_100
value: 3.775
- type: precision\_at\_1000
value: 0.377
- type: precision\_at\_3
value: 54.417
- type: precision\_at\_5
value: 48.15
- type: recall\_at\_1
value: 8.934000000000001
- type: recall\_at\_10
value: 28.471000000000004
- type: recall\_at\_100
value: 28.471000000000004
- type: recall\_at\_1000
value: 28.471000000000004
- type: recall\_at\_3
value: 16.019
- type: recall\_at\_5
value: 21.410999999999998
+ task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.81
- type: f1
value: 47.987573380720114
+ task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 66.81899999999999
- type: map\_at\_10
value: 78.034
- type: map\_at\_100
value: 78.034
- type: map\_at\_1000
value: 78.034
- type: map\_at\_3
value: 76.43100000000001
- type: map\_at\_5
value: 77.515
- type: mrr\_at\_1
value: 71.542
- type: mrr\_at\_10
value: 81.638
- type: mrr\_at\_100
value: 81.638
- type: mrr\_at\_1000
value: 81.638
- type: mrr\_at\_3
value: 80.403
- type: mrr\_at\_5
value: 81.256
- type: ndcg\_at\_1
value: 71.542
- type: ndcg\_at\_10
value: 82.742
- type: ndcg\_at\_100
value: 82.741
- type: ndcg\_at\_1000
value: 82.741
- type: ndcg\_at\_3
value: 80.039
- type: ndcg\_at\_5
value: 81.695
- type: precision\_at\_1
value: 71.542
- type: precision\_at\_10
value: 10.387
- type: precision\_at\_100
value: 1.039
- type: precision\_at\_1000
value: 0.104
- type: precision\_at\_3
value: 31.447999999999997
- type: precision\_at\_5
value: 19.91
- type: recall\_at\_1
value: 66.81899999999999
- type: recall\_at\_10
value: 93.372
- type: recall\_at\_100
value: 93.372
- type: recall\_at\_1000
value: 93.372
- type: recall\_at\_3
value: 86.33
- type: recall\_at\_5
value: 90.347
+ task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 31.158
- type: map\_at\_10
value: 52.017
- type: map\_at\_100
value: 54.259
- type: map\_at\_1000
value: 54.367
- type: map\_at\_3
value: 45.738
- type: map\_at\_5
value: 49.283
- type: mrr\_at\_1
value: 57.87
- type: mrr\_at\_10
value: 66.215
- type: mrr\_at\_100
value: 66.735
- type: mrr\_at\_1000
value: 66.75
- type: mrr\_at\_3
value: 64.043
- type: mrr\_at\_5
value: 65.116
- type: ndcg\_at\_1
value: 57.87
- type: ndcg\_at\_10
value: 59.946999999999996
- type: ndcg\_at\_100
value: 66.31099999999999
- type: ndcg\_at\_1000
value: 67.75999999999999
- type: ndcg\_at\_3
value: 55.483000000000004
- type: ndcg\_at\_5
value: 56.891000000000005
- type: precision\_at\_1
value: 57.87
- type: precision\_at\_10
value: 16.497
- type: precision\_at\_100
value: 2.321
- type: precision\_at\_1000
value: 0.258
- type: precision\_at\_3
value: 37.14
- type: precision\_at\_5
value: 27.067999999999998
- type: recall\_at\_1
value: 31.158
- type: recall\_at\_10
value: 67.381
- type: recall\_at\_100
value: 89.464
- type: recall\_at\_1000
value: 97.989
- type: recall\_at\_3
value: 50.553000000000004
- type: recall\_at\_5
value: 57.824
+ task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 42.073
- type: map\_at\_10
value: 72.418
- type: map\_at\_100
value: 73.175
- type: map\_at\_1000
value: 73.215
- type: map\_at\_3
value: 68.791
- type: map\_at\_5
value: 71.19
- type: mrr\_at\_1
value: 84.146
- type: mrr\_at\_10
value: 88.994
- type: mrr\_at\_100
value: 89.116
- type: mrr\_at\_1000
value: 89.12
- type: mrr\_at\_3
value: 88.373
- type: mrr\_at\_5
value: 88.82
- type: ndcg\_at\_1
value: 84.146
- type: ndcg\_at\_10
value: 79.404
- type: ndcg\_at\_100
value: 81.83200000000001
- type: ndcg\_at\_1000
value: 82.524
- type: ndcg\_at\_3
value: 74.595
- type: ndcg\_at\_5
value: 77.474
- type: precision\_at\_1
value: 84.146
- type: precision\_at\_10
value: 16.753999999999998
- type: precision\_at\_100
value: 1.8599999999999999
- type: precision\_at\_1000
value: 0.19499999999999998
- type: precision\_at\_3
value: 48.854
- type: precision\_at\_5
value: 31.579
- type: recall\_at\_1
value: 42.073
- type: recall\_at\_10
value: 83.768
- type: recall\_at\_100
value: 93.018
- type: recall\_at\_1000
value: 97.481
- type: recall\_at\_3
value: 73.282
- type: recall\_at\_5
value: 78.947
+ task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 94.9968
- type: ap
value: 92.93892195862824
- type: f1
value: 94.99327998213761
+ task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map\_at\_1
value: 21.698
- type: map\_at\_10
value: 34.585
- type: map\_at\_100
value: 35.782000000000004
- type: map\_at\_1000
value: 35.825
- type: map\_at\_3
value: 30.397999999999996
- type: map\_at\_5
value: 32.72
- type: mrr\_at\_1
value: 22.192
- type: mrr\_at\_10
value: 35.085
- type: mrr\_at\_100
value: 36.218
- type: mrr\_at\_1000
value: 36.256
- type: mrr\_at\_3
value: 30.986000000000004
- type: mrr\_at\_5
value: 33.268
- type: ndcg\_at\_1
value: 22.192
- type: ndcg\_at\_10
value: 41.957
- type: ndcg\_at\_100
value: 47.658
- type: ndcg\_at\_1000
value: 48.697
- type: ndcg\_at\_3
value: 33.433
- type: ndcg\_at\_5
value: 37.551
- type: precision\_at\_1
value: 22.192
- type: precision\_at\_10
value: 6.781
- type: precision\_at\_100
value: 0.963
- type: precision\_at\_1000
value: 0.105
- type: precision\_at\_3
value: 14.365
- type: precision\_at\_5
value: 10.713000000000001
- type: recall\_at\_1
value: 21.698
- type: recall\_at\_10
value: 64.79
- type: recall\_at\_100
value: 91.071
- type: recall\_at\_1000
value: 98.883
- type: recall\_at\_3
value: 41.611
- type: recall\_at\_5
value: 51.459999999999994
+ task:
type: Classification
dataset:
type: mteb/mtop\_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.15823073415413
- type: f1
value: 96.00362034963248
+ task:
type: Classification
dataset:
type: mteb/mtop\_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.12722298221614
- type: f1
value: 70.46888967516227
+ task:
type: Classification
dataset:
type: mteb/amazon\_massive\_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.77673167451245
- type: f1
value: 77.60202561132175
+ task:
type: Classification
dataset:
type: mteb/amazon\_massive\_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 82.09145931405514
- type: f1
value: 81.7701921473406
+ task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v\_measure
value: 36.52153488185864
+ task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v\_measure
value: 36.80090398444147
+ task:
type: Reranking
dataset:
type: mteb/mind\_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.807141746058605
- type: mrr
value: 32.85025611455029
+ task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 6.920999999999999
- type: map\_at\_10
value: 16.049
- type: map\_at\_100
value: 16.049
- type: map\_at\_1000
value: 16.049
- type: map\_at\_3
value: 11.865
- type: map\_at\_5
value: 13.657
- type: mrr\_at\_1
value: 53.87
- type: mrr\_at\_10
value: 62.291
- type: mrr\_at\_100
value: 62.291
- type: mrr\_at\_1000
value: 62.291
- type: mrr\_at\_3
value: 60.681
- type: mrr\_at\_5
value: 61.61
- type: ndcg\_at\_1
value: 51.23799999999999
- type: ndcg\_at\_10
value: 40.892
- type: ndcg\_at\_100
value: 26.951999999999998
- type: ndcg\_at\_1000
value: 26.474999999999998
- type: ndcg\_at\_3
value: 46.821
- type: ndcg\_at\_5
value: 44.333
- type: precision\_at\_1
value: 53.251000000000005
- type: precision\_at\_10
value: 30.124000000000002
- type: precision\_at\_100
value: 3.012
- type: precision\_at\_1000
value: 0.301
- type: precision\_at\_3
value: 43.55
- type: precision\_at\_5
value: 38.266
- type: recall\_at\_1
value: 6.920999999999999
- type: recall\_at\_10
value: 20.852
- type: recall\_at\_100
value: 20.852
- type: recall\_at\_1000
value: 20.852
- type: recall\_at\_3
value: 13.628000000000002
- type: recall\_at\_5
value: 16.273
+ task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 46.827999999999996
- type: map\_at\_10
value: 63.434000000000005
- type: map\_at\_100
value: 63.434000000000005
- type: map\_at\_1000
value: 63.434000000000005
- type: map\_at\_3
value: 59.794000000000004
- type: map\_at\_5
value: 62.08
- type: mrr\_at\_1
value: 52.288999999999994
- type: mrr\_at\_10
value: 65.95
- type: mrr\_at\_100
value: 65.95
- type: mrr\_at\_1000
value: 65.95
- type: mrr\_at\_3
value: 63.413
- type: mrr\_at\_5
value: 65.08
- type: ndcg\_at\_1
value: 52.288999999999994
- type: ndcg\_at\_10
value: 70.301
- type: ndcg\_at\_100
value: 70.301
- type: ndcg\_at\_1000
value: 70.301
- type: ndcg\_at\_3
value: 63.979
- type: ndcg\_at\_5
value: 67.582
- type: precision\_at\_1
value: 52.288999999999994
- type: precision\_at\_10
value: 10.576
- type: precision\_at\_100
value: 1.058
- type: precision\_at\_1000
value: 0.106
- type: precision\_at\_3
value: 28.177000000000003
- type: precision\_at\_5
value: 19.073
- type: recall\_at\_1
value: 46.827999999999996
- type: recall\_at\_10
value: 88.236
- type: recall\_at\_100
value: 88.236
- type: recall\_at\_1000
value: 88.236
- type: recall\_at\_3
value: 72.371
- type: recall\_at\_5
value: 80.56
+ task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 71.652
- type: map\_at\_10
value: 85.953
- type: map\_at\_100
value: 85.953
- type: map\_at\_1000
value: 85.953
- type: map\_at\_3
value: 83.05399999999999
- type: map\_at\_5
value: 84.89
- type: mrr\_at\_1
value: 82.42
- type: mrr\_at\_10
value: 88.473
- type: mrr\_at\_100
value: 88.473
- type: mrr\_at\_1000
value: 88.473
- type: mrr\_at\_3
value: 87.592
- type: mrr\_at\_5
value: 88.211
- type: ndcg\_at\_1
value: 82.44
- type: ndcg\_at\_10
value: 89.467
- type: ndcg\_at\_100
value: 89.33
- type: ndcg\_at\_1000
value: 89.33
- type: ndcg\_at\_3
value: 86.822
- type: ndcg\_at\_5
value: 88.307
- type: precision\_at\_1
value: 82.44
- type: precision\_at\_10
value: 13.616
- type: precision\_at\_100
value: 1.362
- type: precision\_at\_1000
value: 0.136
- type: precision\_at\_3
value: 38.117000000000004
- type: precision\_at\_5
value: 25.05
- type: recall\_at\_1
value: 71.652
- type: recall\_at\_10
value: 96.224
- type: recall\_at\_100
value: 96.224
- type: recall\_at\_1000
value: 96.224
- type: recall\_at\_3
value: 88.571
- type: recall\_at\_5
value: 92.812
+ task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v\_measure
value: 61.295010338050474
+ task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v\_measure
value: 67.26380819328142
+ task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 5.683
- type: map\_at\_10
value: 14.924999999999999
- type: map\_at\_100
value: 17.532
- type: map\_at\_1000
value: 17.875
- type: map\_at\_3
value: 10.392
- type: map\_at\_5
value: 12.592
- type: mrr\_at\_1
value: 28.000000000000004
- type: mrr\_at\_10
value: 39.951
- type: mrr\_at\_100
value: 41.025
- type: mrr\_at\_1000
value: 41.056
- type: mrr\_at\_3
value: 36.317
- type: mrr\_at\_5
value: 38.412
- type: ndcg\_at\_1
value: 28.000000000000004
- type: ndcg\_at\_10
value: 24.410999999999998
- type: ndcg\_at\_100
value: 33.79
- type: ndcg\_at\_1000
value: 39.035
- type: ndcg\_at\_3
value: 22.845
- type: ndcg\_at\_5
value: 20.080000000000002
- type: precision\_at\_1
value: 28.000000000000004
- type: precision\_at\_10
value: 12.790000000000001
- type: precision\_at\_100
value: 2.633
- type: precision\_at\_1000
value: 0.388
- type: precision\_at\_3
value: 21.367
- type: precision\_at\_5
value: 17.7
- type: recall\_at\_1
value: 5.683
- type: recall\_at\_10
value: 25.91
- type: recall\_at\_100
value: 53.443
- type: recall\_at\_1000
value: 78.73
- type: recall\_at\_3
value: 13.003
- type: recall\_at\_5
value: 17.932000000000002
+ task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos\_sim\_pearson
value: 84.677978681023
- type: cos\_sim\_spearman
value: 83.13093441058189
- type: euclidean\_pearson
value: 83.35535759341572
- type: euclidean\_spearman
value: 83.42583744219611
- type: manhattan\_pearson
value: 83.2243124045889
- type: manhattan\_spearman
value: 83.39801618652632
+ task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos\_sim\_pearson
value: 81.68960206569666
- type: cos\_sim\_spearman
value: 77.3368966488535
- type: euclidean\_pearson
value: 77.62828980560303
- type: euclidean\_spearman
value: 76.77951481444651
- type: manhattan\_pearson
value: 77.88637240839041
- type: manhattan\_spearman
value: 77.22157841466188
+ task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos\_sim\_pearson
value: 84.18745821650724
- type: cos\_sim\_spearman
value: 85.04423285574542
- type: euclidean\_pearson
value: 85.46604816931023
- type: euclidean\_spearman
value: 85.5230593932974
- type: manhattan\_pearson
value: 85.57912805986261
- type: manhattan\_spearman
value: 85.65955905111873
+ task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos\_sim\_pearson
value: 83.6715333300355
- type: cos\_sim\_spearman
value: 82.9058522514908
- type: euclidean\_pearson
value: 83.9640357424214
- type: euclidean\_spearman
value: 83.60415457472637
- type: manhattan\_pearson
value: 84.05621005853469
- type: manhattan\_spearman
value: 83.87077724707746
+ task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos\_sim\_pearson
value: 87.82422928098886
- type: cos\_sim\_spearman
value: 88.12660311894628
- type: euclidean\_pearson
value: 87.50974805056555
- type: euclidean\_spearman
value: 87.91957275596677
- type: manhattan\_pearson
value: 87.74119404878883
- type: manhattan\_spearman
value: 88.2808922165719
+ task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos\_sim\_pearson
value: 84.80605838552093
- type: cos\_sim\_spearman
value: 86.24123388765678
- type: euclidean\_pearson
value: 85.32648347339814
- type: euclidean\_spearman
value: 85.60046671950158
- type: manhattan\_pearson
value: 85.53800168487811
- type: manhattan\_spearman
value: 85.89542420480763
+ task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos\_sim\_pearson
value: 89.87540978988132
- type: cos\_sim\_spearman
value: 90.12715295099461
- type: euclidean\_pearson
value: 91.61085993525275
- type: euclidean\_spearman
value: 91.31835942311758
- type: manhattan\_pearson
value: 91.57500202032934
- type: manhattan\_spearman
value: 91.1790925526635
+ task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos\_sim\_pearson
value: 69.87136205329556
- type: cos\_sim\_spearman
value: 68.6253154635078
- type: euclidean\_pearson
value: 68.91536015034222
- type: euclidean\_spearman
value: 67.63744649352542
- type: manhattan\_pearson
value: 69.2000713045275
- type: manhattan\_spearman
value: 68.16002901587316
+ task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos\_sim\_pearson
value: 85.21849551039082
- type: cos\_sim\_spearman
value: 85.6392959372461
- type: euclidean\_pearson
value: 85.92050852609488
- type: euclidean\_spearman
value: 85.97205649009734
- type: manhattan\_pearson
value: 86.1031154802254
- type: manhattan\_spearman
value: 86.26791155517466
+ task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.83953958636627
- type: mrr
value: 96.71167612344082
+ task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 64.994
- type: map\_at\_10
value: 74.763
- type: map\_at\_100
value: 75.127
- type: map\_at\_1000
value: 75.143
- type: map\_at\_3
value: 71.824
- type: map\_at\_5
value: 73.71
- type: mrr\_at\_1
value: 68.333
- type: mrr\_at\_10
value: 75.749
- type: mrr\_at\_100
value: 75.922
- type: mrr\_at\_1000
value: 75.938
- type: mrr\_at\_3
value: 73.556
- type: mrr\_at\_5
value: 74.739
- type: ndcg\_at\_1
value: 68.333
- type: ndcg\_at\_10
value: 79.174
- type: ndcg\_at\_100
value: 80.41
- type: ndcg\_at\_1000
value: 80.804
- type: ndcg\_at\_3
value: 74.361
- type: ndcg\_at\_5
value: 76.861
- type: precision\_at\_1
value: 68.333
- type: precision\_at\_10
value: 10.333
- type: precision\_at\_100
value: 1.0999999999999999
- type: precision\_at\_1000
value: 0.11299999999999999
- type: precision\_at\_3
value: 28.778
- type: precision\_at\_5
value: 19.067
- type: recall\_at\_1
value: 64.994
- type: recall\_at\_10
value: 91.822
- type: recall\_at\_100
value: 97.0
- type: recall\_at\_1000
value: 100.0
- type: recall\_at\_3
value: 78.878
- type: recall\_at\_5
value: 85.172
+ task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos\_sim\_accuracy
value: 99.72079207920792
- type: cos\_sim\_ap
value: 93.00265215525152
- type: cos\_sim\_f1
value: 85.06596306068602
- type: cos\_sim\_precision
value: 90.05586592178771
- type: cos\_sim\_recall
value: 80.60000000000001
- type: dot\_accuracy
value: 99.66039603960397
- type: dot\_ap
value: 91.22371407479089
- type: dot\_f1
value: 82.34693877551021
- type: dot\_precision
value: 84.0625
- type: dot\_recall
value: 80.7
- type: euclidean\_accuracy
value: 99.71881188118812
- type: euclidean\_ap
value: 92.88449963304728
- type: euclidean\_f1
value: 85.19480519480518
- type: euclidean\_precision
value: 88.64864864864866
- type: euclidean\_recall
value: 82.0
- type: manhattan\_accuracy
value: 99.73267326732673
- type: manhattan\_ap
value: 93.23055393056883
- type: manhattan\_f1
value: 85.88957055214725
- type: manhattan\_precision
value: 87.86610878661088
- type: manhattan\_recall
value: 84.0
- type: max\_accuracy
value: 99.73267326732673
- type: max\_ap
value: 93.23055393056883
- type: max\_f1
value: 85.88957055214725
+ task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v\_measure
value: 77.3305735900358
+ task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v\_measure
value: 41.32967136540674
+ task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.95514866379359
- type: mrr
value: 56.95423245055598
+ task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos\_sim\_pearson
value: 30.783007208997144
- type: cos\_sim\_spearman
value: 30.373444721540533
- type: dot\_pearson
value: 29.210604111143905
- type: dot\_spearman
value: 29.98809758085659
+ task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 0.234
- type: map\_at\_10
value: 1.894
- type: map\_at\_100
value: 1.894
- type: map\_at\_1000
value: 1.894
- type: map\_at\_3
value: 0.636
- type: map\_at\_5
value: 1.0
- type: mrr\_at\_1
value: 88.0
- type: mrr\_at\_10
value: 93.667
- type: mrr\_at\_100
value: 93.667
- type: mrr\_at\_1000
value: 93.667
- type: mrr\_at\_3
value: 93.667
- type: mrr\_at\_5
value: 93.667
- type: ndcg\_at\_1
value: 85.0
- type: ndcg\_at\_10
value: 74.798
- type: ndcg\_at\_100
value: 16.462
- type: ndcg\_at\_1000
value: 7.0889999999999995
- type: ndcg\_at\_3
value: 80.754
- type: ndcg\_at\_5
value: 77.319
- type: precision\_at\_1
value: 88.0
- type: precision\_at\_10
value: 78.0
- type: precision\_at\_100
value: 7.8
- type: precision\_at\_1000
value: 0.7799999999999999
- type: precision\_at\_3
value: 83.333
- type: precision\_at\_5
value: 80.80000000000001
- type: recall\_at\_1
value: 0.234
- type: recall\_at\_10
value: 2.093
- type: recall\_at\_100
value: 2.093
- type: recall\_at\_1000
value: 2.093
- type: recall\_at\_3
value: 0.662
- type: recall\_at\_5
value: 1.0739999999999998
+ task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 2.703
- type: map\_at\_10
value: 10.866000000000001
- type: map\_at\_100
value: 10.866000000000001
- type: map\_at\_1000
value: 10.866000000000001
- type: map\_at\_3
value: 5.909
- type: map\_at\_5
value: 7.35
- type: mrr\_at\_1
value: 36.735
- type: mrr\_at\_10
value: 53.583000000000006
- type: mrr\_at\_100
value: 53.583000000000006
- type: mrr\_at\_1000
value: 53.583000000000006
- type: mrr\_at\_3
value: 49.32
- type: mrr\_at\_5
value: 51.769
- type: ndcg\_at\_1
value: 34.694
- type: ndcg\_at\_10
value: 27.926000000000002
- type: ndcg\_at\_100
value: 22.701
- type: ndcg\_at\_1000
value: 22.701
- type: ndcg\_at\_3
value: 32.073
- type: ndcg\_at\_5
value: 28.327999999999996
- type: precision\_at\_1
value: 36.735
- type: precision\_at\_10
value: 24.694
- type: precision\_at\_100
value: 2.469
- type: precision\_at\_1000
value: 0.247
- type: precision\_at\_3
value: 31.973000000000003
- type: precision\_at\_5
value: 26.939
- type: recall\_at\_1
value: 2.703
- type: recall\_at\_10
value: 17.702
- type: recall\_at\_100
value: 17.702
- type: recall\_at\_1000
value: 17.702
- type: recall\_at\_3
value: 7.208
- type: recall\_at\_5
value: 9.748999999999999
+ task:
type: Classification
dataset:
type: mteb/toxic\_conversations\_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.79960000000001
- type: ap
value: 15.467565415565815
- type: f1
value: 55.28639823443618
+ task:
type: Classification
dataset:
type: mteb/tweet\_sentiment\_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.7792869269949
- type: f1
value: 65.08597154774318
+ task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v\_measure
value: 55.70352297774293
+ task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos\_sim\_accuracy
value: 88.27561542588067
- type: cos\_sim\_ap
value: 81.08262141256193
- type: cos\_sim\_f1
value: 73.82341501361338
- type: cos\_sim\_precision
value: 72.5720112159062
- type: cos\_sim\_recall
value: 75.11873350923483
- type: dot\_accuracy
value: 86.66030875603504
- type: dot\_ap
value: 76.6052349228621
- type: dot\_f1
value: 70.13897280966768
- type: dot\_precision
value: 64.70457079152732
- type: dot\_recall
value: 76.56992084432717
- type: euclidean\_accuracy
value: 88.37098408535495
- type: euclidean\_ap
value: 81.12515230092113
- type: euclidean\_f1
value: 74.10338225909379
- type: euclidean\_precision
value: 71.76761433868974
- type: euclidean\_recall
value: 76.59630606860158
- type: manhattan\_accuracy
value: 88.34118137926924
- type: manhattan\_ap
value: 80.95751834536561
- type: manhattan\_f1
value: 73.9119496855346
- type: manhattan\_precision
value: 70.625
- type: manhattan\_recall
value: 77.5197889182058
- type: max\_accuracy
value: 88.37098408535495
- type: max\_ap
value: 81.12515230092113
- type: max\_f1
value: 74.10338225909379
+ task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos\_sim\_accuracy
value: 89.79896767182831
- type: cos\_sim\_ap
value: 87.40071784061065
- type: cos\_sim\_f1
value: 79.87753144712087
- type: cos\_sim\_precision
value: 76.67304015296367
- type: cos\_sim\_recall
value: 83.3615645210964
- type: dot\_accuracy
value: 88.95486474948578
- type: dot\_ap
value: 86.00227979119943
- type: dot\_f1
value: 78.54601474525914
- type: dot\_precision
value: 75.00525394045535
- type: dot\_recall
value: 82.43763473975977
- type: euclidean\_accuracy
value: 89.7892653393876
- type: euclidean\_ap
value: 87.42174706480819
- type: euclidean\_f1
value: 80.07283321194465
- type: euclidean\_precision
value: 75.96738529574351
- type: euclidean\_recall
value: 84.6473668001232
- type: manhattan\_accuracy
value: 89.8474793340319
- type: manhattan\_ap
value: 87.47814292587448
- type: manhattan\_f1
value: 80.15461150280949
- type: manhattan\_precision
value: 74.88798234468
- type: manhattan\_recall
value: 86.21804742839544
- type: max\_accuracy
value: 89.8474793340319
- type: max\_ap
value: 87.47814292587448
- type: max\_f1
value: 80.15461150280949
---
Model Summary
=============
> GritLM is a generative representational instruction-tuned language model. It unifies text representation (embedding) and text generation into a single model, achieving state-of-the-art performance on both types of tasks.
* Repository: ContextualAI/gritlm
* Paper: URL
* Logs: URL
* Script: URL
Use
===
The model usage is documented here.
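The linked documentation covers both the embedding and the generation side of the model. As a rough, unofficial illustration of the embedding side only, the sketch below uses plain `transformers` with masked mean pooling over the last hidden state; the repository id, the pooling choice, and the absence of an instruction prefix are all assumptions made for illustration, so follow the official GritLM documentation for the exact usage.

```python
# Hedged sketch: sentence embeddings with plain transformers.
# MODEL_ID is a placeholder -- substitute this model's actual repository id.
# Mean pooling and the missing instruction prefix are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "org/gritlm-checkpoint"  # placeholder, not a real repository name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID, torch_dtype=torch.float16, device_map="auto")

texts = ["Generative representational instruction tuning unifies embedding and generation."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(model.device)

with torch.no_grad():
    hidden = model(**batch).last_hidden_state        # (batch, seq_len, hidden_dim)
    mask = batch["attention_mask"].unsqueeze(-1)     # (batch, seq_len, 1)
    embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # masked mean pooling

print(embeddings.shape)  # one vector per input text
```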
| [] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #custom_code #arxiv-2402.09906 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_virus_covid-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2499
- F1 Score: 0.5474
- Accuracy: 0.5351
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
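These values correspond one-to-one to fields of the Hugging Face `TrainingArguments`. The sketch below shows roughly how such a run could be configured; the output directory and the Trainer wiring are placeholders rather than details taken from this card.

```python
# Rough sketch of the hyperparameters above expressed as transformers.TrainingArguments.
# Only the listed values come from this card; output_dir and the Trainer wiring are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gue_virus_covid_seqsight_65536_512_47M_L32",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
# These arguments would then be handed to a Trainer together with the PEFT-wrapped
# base model and the GUE_virus_covid train/validation splits.
```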
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 2.1844 | 0.35 | 200 | 2.1790 | 0.0846 | 0.1394 |
| 2.1709 | 0.7 | 400 | 2.1527 | 0.1291 | 0.1645 |
| 2.1059 | 1.05 | 600 | 2.0039 | 0.2082 | 0.2292 |
| 1.9838 | 1.4 | 800 | 1.8829 | 0.2394 | 0.2690 |
| 1.8962 | 1.75 | 1000 | 1.8143 | 0.2801 | 0.2901 |
| 1.8521 | 2.09 | 1200 | 1.7949 | 0.2914 | 0.3109 |
| 1.8118 | 2.44 | 1400 | 1.7158 | 0.3344 | 0.3469 |
| 1.7736 | 2.79 | 1600 | 1.6691 | 0.3469 | 0.3676 |
| 1.7313 | 3.14 | 1800 | 1.6238 | 0.3670 | 0.3842 |
| 1.6996 | 3.49 | 2000 | 1.6052 | 0.3787 | 0.3851 |
| 1.6784 | 3.84 | 2200 | 1.5802 | 0.3885 | 0.3936 |
| 1.6454 | 4.19 | 2400 | 1.5568 | 0.3987 | 0.3977 |
| 1.6235 | 4.54 | 2600 | 1.5292 | 0.4096 | 0.4159 |
| 1.6131 | 4.89 | 2800 | 1.5245 | 0.4141 | 0.4209 |
| 1.5953 | 5.24 | 3000 | 1.4982 | 0.4287 | 0.4345 |
| 1.5705 | 5.58 | 3200 | 1.4806 | 0.4525 | 0.4462 |
| 1.5505 | 5.93 | 3400 | 1.4619 | 0.4395 | 0.4443 |
| 1.5402 | 6.28 | 3600 | 1.4492 | 0.4668 | 0.4506 |
| 1.5208 | 6.63 | 3800 | 1.4306 | 0.4571 | 0.4609 |
| 1.5061 | 6.98 | 4000 | 1.4279 | 0.4644 | 0.4623 |
| 1.4925 | 7.33 | 4200 | 1.4147 | 0.4805 | 0.4701 |
| 1.4772 | 7.68 | 4400 | 1.4055 | 0.4787 | 0.4696 |
| 1.4782 | 8.03 | 4600 | 1.3983 | 0.4738 | 0.4700 |
| 1.4524 | 8.38 | 4800 | 1.3893 | 0.4867 | 0.4829 |
| 1.4546 | 8.73 | 5000 | 1.3800 | 0.4816 | 0.4738 |
| 1.4394 | 9.08 | 5200 | 1.3782 | 0.4942 | 0.4775 |
| 1.4326 | 9.42 | 5400 | 1.3631 | 0.4857 | 0.4853 |
| 1.4264 | 9.77 | 5600 | 1.3457 | 0.4992 | 0.4932 |
| 1.4145 | 10.12 | 5800 | 1.3439 | 0.5071 | 0.4976 |
| 1.4115 | 10.47 | 6000 | 1.3366 | 0.5073 | 0.4972 |
| 1.3942 | 10.82 | 6200 | 1.3286 | 0.5113 | 0.4964 |
| 1.3797 | 11.17 | 6400 | 1.3205 | 0.5109 | 0.5029 |
| 1.3778 | 11.52 | 6600 | 1.3173 | 0.5186 | 0.5041 |
| 1.3805 | 11.87 | 6800 | 1.3090 | 0.5161 | 0.5040 |
| 1.3645 | 12.22 | 7000 | 1.3017 | 0.5267 | 0.5171 |
| 1.3628 | 12.57 | 7200 | 1.3015 | 0.5149 | 0.5061 |
| 1.3597 | 12.91 | 7400 | 1.2982 | 0.5236 | 0.5075 |
| 1.3554 | 13.26 | 7600 | 1.2894 | 0.5229 | 0.5130 |
| 1.3392 | 13.61 | 7800 | 1.2850 | 0.5326 | 0.5183 |
| 1.3441 | 13.96 | 8000 | 1.2806 | 0.5313 | 0.5182 |
| 1.3317 | 14.31 | 8200 | 1.2782 | 0.5332 | 0.5193 |
| 1.3369 | 14.66 | 8400 | 1.2731 | 0.5326 | 0.5220 |
| 1.3337 | 15.01 | 8600 | 1.2732 | 0.5297 | 0.5226 |
| 1.3336 | 15.36 | 8800 | 1.2696 | 0.5409 | 0.5279 |
| 1.3161 | 15.71 | 9000 | 1.2714 | 0.5357 | 0.5248 |
| 1.3329 | 16.06 | 9200 | 1.2696 | 0.5347 | 0.5242 |
| 1.3261 | 16.4 | 9400 | 1.2665 | 0.5363 | 0.5287 |
| 1.3228 | 16.75 | 9600 | 1.2668 | 0.5374 | 0.5252 |
| 1.3258 | 17.1 | 9800 | 1.2662 | 0.5395 | 0.5280 |
| 1.3289 | 17.45 | 10000 | 1.2655 | 0.5392 | 0.5277 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_virus_covid-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_virus_covid-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T17:10:57+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_virus\_covid-seqsight\_65536\_512\_47M-L32\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2499
* F1 Score: 0.5474
* Accuracy: 0.5351
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
token-classification | spacy | | Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.7.4,<3.8.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (11 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `College Name`, `Companies worked at`, `Degree`, `Designation`, `Email Address`, `Graduation Year`, `Location`, `Name`, `Skills`, `UNKNOWN`, `Years of Experience` |
</details>
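Assuming the packaged pipeline has been installed (for example from the wheel built with `spacy package`), a minimal way to run it over resume-style text might look like the sketch below; the sample sentence is invented for illustration.

```python
# Minimal sketch: load the packaged pipeline and print the entities it predicts.
# Assumes the package is installed so that spacy.load("en_pipeline") resolves;
# otherwise pass the path to the pipeline directory instead of the name.
import spacy

nlp = spacy.load("en_pipeline")
doc = nlp("Jane Doe, Software Engineer at Acme Corp, B.Sc. from MIT, skilled in Python and SQL.")

for ent in doc.ents:
    # Labels come from the scheme above, e.g. Name, Designation, Companies worked at, Skills
    print(f"{ent.label_:<20} {ent.text}")
```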
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 57.19 |
| `ENTS_P` | 60.75 |
| `ENTS_R` | 54.02 |
| `TRANSFORMER_LOSS` | 480458.92 |
| `NER_LOSS` | 1538225.13 | | {"language": ["en"], "tags": ["spacy", "token-classification"]} | prof144/en_pipeline | null | [
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] | null | 2024-05-03T17:10:59+00:00 | [] | [
"en"
] | TAGS
#spacy #token-classification #en #model-index #region-us
|
### Label Scheme
View label scheme (11 labels for 1 components)
### Accuracy
| [
"### Label Scheme\n\n\n\nView label scheme (11 labels for 1 components)",
"### Accuracy"
] | [
"TAGS\n#spacy #token-classification #en #model-index #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (11 labels for 1 components)",
"### Accuracy"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_virus_covid-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4679
- F1 Score: 0.4607
- Accuracy: 0.4564
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 2.1848 | 0.35 | 200 | 2.1827 | 0.0810 | 0.1389 |
| 2.1771 | 0.7 | 400 | 2.1699 | 0.1086 | 0.1417 |
| 2.1622 | 1.05 | 600 | 2.1455 | 0.1413 | 0.1689 |
| 2.1255 | 1.4 | 800 | 2.0468 | 0.1962 | 0.2244 |
| 2.0333 | 1.75 | 1000 | 1.9384 | 0.2227 | 0.2599 |
| 1.9701 | 2.09 | 1200 | 1.8946 | 0.2381 | 0.2709 |
| 1.9252 | 2.44 | 1400 | 1.8326 | 0.2954 | 0.3059 |
| 1.8922 | 2.79 | 1600 | 1.7987 | 0.2895 | 0.3096 |
| 1.8635 | 3.14 | 1800 | 1.7673 | 0.2891 | 0.3158 |
| 1.8315 | 3.49 | 2000 | 1.7372 | 0.3193 | 0.3337 |
| 1.8216 | 3.84 | 2200 | 1.7192 | 0.3306 | 0.3421 |
| 1.7967 | 4.19 | 2400 | 1.6913 | 0.3757 | 0.3702 |
| 1.7827 | 4.54 | 2600 | 1.6755 | 0.3676 | 0.3740 |
| 1.7731 | 4.89 | 2800 | 1.6627 | 0.3750 | 0.3800 |
| 1.7636 | 5.24 | 3000 | 1.6477 | 0.3810 | 0.3869 |
| 1.7467 | 5.58 | 3200 | 1.6318 | 0.3942 | 0.3981 |
| 1.7298 | 5.93 | 3400 | 1.6335 | 0.3720 | 0.3806 |
| 1.7237 | 6.28 | 3600 | 1.6197 | 0.3891 | 0.3901 |
| 1.7099 | 6.63 | 3800 | 1.5950 | 0.4002 | 0.4086 |
| 1.6962 | 6.98 | 4000 | 1.5889 | 0.4084 | 0.4094 |
| 1.6871 | 7.33 | 4200 | 1.5824 | 0.4067 | 0.4116 |
| 1.6794 | 7.68 | 4400 | 1.5680 | 0.4223 | 0.4211 |
| 1.6816 | 8.03 | 4600 | 1.5706 | 0.4142 | 0.4126 |
| 1.6575 | 8.38 | 4800 | 1.5548 | 0.4110 | 0.4181 |
| 1.6688 | 8.73 | 5000 | 1.5507 | 0.4238 | 0.4271 |
| 1.6537 | 9.08 | 5200 | 1.5434 | 0.4284 | 0.4202 |
| 1.6549 | 9.42 | 5400 | 1.5424 | 0.4228 | 0.4244 |
| 1.6383 | 9.77 | 5600 | 1.5232 | 0.4264 | 0.4319 |
| 1.6347 | 10.12 | 5800 | 1.5260 | 0.4333 | 0.4294 |
| 1.6299 | 10.47 | 6000 | 1.5217 | 0.4366 | 0.4297 |
| 1.6276 | 10.82 | 6200 | 1.5146 | 0.4402 | 0.4307 |
| 1.6149 | 11.17 | 6400 | 1.5198 | 0.4366 | 0.4309 |
| 1.6118 | 11.52 | 6600 | 1.5046 | 0.4404 | 0.4319 |
| 1.6157 | 11.87 | 6800 | 1.5022 | 0.4437 | 0.4384 |
| 1.6018 | 12.22 | 7000 | 1.4951 | 0.4450 | 0.4370 |
| 1.5977 | 12.57 | 7200 | 1.4887 | 0.4440 | 0.4403 |
| 1.5986 | 12.91 | 7400 | 1.4909 | 0.4491 | 0.4399 |
| 1.5961 | 13.26 | 7600 | 1.4830 | 0.4442 | 0.4374 |
| 1.5912 | 13.61 | 7800 | 1.4843 | 0.4468 | 0.4355 |
| 1.585 | 13.96 | 8000 | 1.4802 | 0.4520 | 0.4471 |
| 1.5771 | 14.31 | 8200 | 1.4751 | 0.4550 | 0.4488 |
| 1.584 | 14.66 | 8400 | 1.4684 | 0.4564 | 0.4475 |
| 1.5823 | 15.01 | 8600 | 1.4734 | 0.4526 | 0.4475 |
| 1.593 | 15.36 | 8800 | 1.4694 | 0.4581 | 0.4493 |
| 1.5742 | 15.71 | 9000 | 1.4690 | 0.4541 | 0.4465 |
| 1.5807 | 16.06 | 9200 | 1.4676 | 0.4575 | 0.4491 |
| 1.5771 | 16.4 | 9400 | 1.4680 | 0.4541 | 0.4472 |
| 1.5728 | 16.75 | 9600 | 1.4663 | 0.4590 | 0.4504 |
| 1.5805 | 17.1 | 9800 | 1.4656 | 0.4607 | 0.4529 |
| 1.5809 | 17.45 | 10000 | 1.4646 | 0.4602 | 0.4528 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_virus_covid-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_virus_covid-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T17:11:02+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_virus\_covid-seqsight\_65536\_512\_47M-L8\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4679
* F1 Score: 0.4607
* Accuracy: 0.4564
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_tata-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4633
- F1 Score: 0.8019
- Accuracy: 0.8026
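Because this is a PEFT adapter on top of the seqsight base model rather than a standalone checkpoint, it has to be attached to that base model at load time. A minimal sketch follows; the two-label sequence-classification head (and whether `trust_remote_code` is needed for the base model) are assumptions, so check the adapter and base-model configurations before relying on them.

```python
# Hedged sketch: attach the adapter to its base model for inference.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_4096_512_15M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)  # binary task assumed
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")  # illustrative DNA-like input
logits = model(**inputs).logits
```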
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6143 | 5.13 | 200 | 0.5429 | 0.7210 | 0.7308 |
| 0.4939 | 10.26 | 400 | 0.4784 | 0.7652 | 0.7651 |
| 0.4597 | 15.38 | 600 | 0.4630 | 0.7815 | 0.7814 |
| 0.4424 | 20.51 | 800 | 0.4467 | 0.7942 | 0.7945 |
| 0.429 | 25.64 | 1000 | 0.4458 | 0.7912 | 0.7912 |
| 0.4193 | 30.77 | 1200 | 0.4435 | 0.8027 | 0.8026 |
| 0.4116 | 35.9 | 1400 | 0.4424 | 0.7992 | 0.7993 |
| 0.4052 | 41.03 | 1600 | 0.4467 | 0.7970 | 0.7977 |
| 0.4001 | 46.15 | 1800 | 0.4446 | 0.7945 | 0.7945 |
| 0.3919 | 51.28 | 2000 | 0.4406 | 0.8009 | 0.8010 |
| 0.3874 | 56.41 | 2200 | 0.4495 | 0.8066 | 0.8075 |
| 0.3804 | 61.54 | 2400 | 0.4465 | 0.8024 | 0.8026 |
| 0.3733 | 66.67 | 2600 | 0.4572 | 0.8039 | 0.8042 |
| 0.3732 | 71.79 | 2800 | 0.4552 | 0.8054 | 0.8059 |
| 0.3721 | 76.92 | 3000 | 0.4549 | 0.7799 | 0.7798 |
| 0.3669 | 82.05 | 3200 | 0.4633 | 0.7893 | 0.7896 |
| 0.3633 | 87.18 | 3400 | 0.4594 | 0.7881 | 0.7879 |
| 0.3587 | 92.31 | 3600 | 0.4601 | 0.7993 | 0.7993 |
| 0.3569 | 97.44 | 3800 | 0.4608 | 0.7961 | 0.7961 |
| 0.3474 | 102.56 | 4000 | 0.4729 | 0.7912 | 0.7912 |
| 0.3523 | 107.69 | 4200 | 0.4651 | 0.7929 | 0.7928 |
| 0.3502 | 112.82 | 4400 | 0.4641 | 0.7896 | 0.7896 |
| 0.3427 | 117.95 | 4600 | 0.4727 | 0.7896 | 0.7896 |
| 0.3428 | 123.08 | 4800 | 0.4731 | 0.7946 | 0.7945 |
| 0.3407 | 128.21 | 5000 | 0.4764 | 0.7927 | 0.7928 |
| 0.3418 | 133.33 | 5200 | 0.4797 | 0.7893 | 0.7896 |
| 0.3346 | 138.46 | 5400 | 0.4938 | 0.7925 | 0.7928 |
| 0.3348 | 143.59 | 5600 | 0.4862 | 0.7957 | 0.7961 |
| 0.3364 | 148.72 | 5800 | 0.4881 | 0.7908 | 0.7912 |
| 0.3329 | 153.85 | 6000 | 0.4877 | 0.7860 | 0.7863 |
| 0.3306 | 158.97 | 6200 | 0.4849 | 0.7878 | 0.7879 |
| 0.3292 | 164.1 | 6400 | 0.4915 | 0.7939 | 0.7945 |
| 0.3262 | 169.23 | 6600 | 0.4810 | 0.7863 | 0.7863 |
| 0.3294 | 174.36 | 6800 | 0.4848 | 0.7911 | 0.7912 |
| 0.3258 | 179.49 | 7000 | 0.4976 | 0.7908 | 0.7912 |
| 0.3258 | 184.62 | 7200 | 0.5007 | 0.7986 | 0.7993 |
| 0.3236 | 189.74 | 7400 | 0.4985 | 0.7878 | 0.7879 |
| 0.3199 | 194.87 | 7600 | 0.5001 | 0.7878 | 0.7879 |
| 0.3197 | 200.0 | 7800 | 0.5024 | 0.7876 | 0.7879 |
| 0.3227 | 205.13 | 8000 | 0.4944 | 0.7877 | 0.7879 |
| 0.3174 | 210.26 | 8200 | 0.4960 | 0.7863 | 0.7863 |
| 0.3199 | 215.38 | 8400 | 0.4989 | 0.7862 | 0.7863 |
| 0.3156 | 220.51 | 8600 | 0.5035 | 0.7893 | 0.7896 |
| 0.3171 | 225.64 | 8800 | 0.5018 | 0.7879 | 0.7879 |
| 0.3179 | 230.77 | 9000 | 0.5001 | 0.7895 | 0.7896 |
| 0.3152 | 235.9 | 9200 | 0.4989 | 0.7895 | 0.7896 |
| 0.3189 | 241.03 | 9400 | 0.5018 | 0.7911 | 0.7912 |
| 0.3144 | 246.15 | 9600 | 0.5024 | 0.7895 | 0.7896 |
| 0.3203 | 251.28 | 9800 | 0.5003 | 0.7895 | 0.7896 |
| 0.3167 | 256.41 | 10000 | 0.5005 | 0.7895 | 0.7896 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:11:27+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_300\_tata-seqsight\_4096\_512\_15M-L1\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4633
* F1 Score: 0.8019
* Accuracy: 0.8026
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_tata-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4525
- F1 Score: 0.8104
- Accuracy: 0.8108
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5575 | 5.13 | 200 | 0.4922 | 0.7656 | 0.7700 |
| 0.4488 | 10.26 | 400 | 0.4478 | 0.7979 | 0.7977 |
| 0.4178 | 15.38 | 600 | 0.4400 | 0.8073 | 0.8075 |
| 0.399 | 20.51 | 800 | 0.4380 | 0.7994 | 0.7993 |
| 0.3815 | 25.64 | 1000 | 0.4497 | 0.8044 | 0.8042 |
| 0.3664 | 30.77 | 1200 | 0.4466 | 0.8008 | 0.8010 |
| 0.3513 | 35.9 | 1400 | 0.4605 | 0.8009 | 0.8010 |
| 0.3398 | 41.03 | 1600 | 0.4883 | 0.8029 | 0.8042 |
| 0.3316 | 46.15 | 1800 | 0.4697 | 0.7992 | 0.7993 |
| 0.3172 | 51.28 | 2000 | 0.4807 | 0.7990 | 0.7993 |
| 0.3078 | 56.41 | 2200 | 0.4928 | 0.8010 | 0.8010 |
| 0.2977 | 61.54 | 2400 | 0.4936 | 0.8027 | 0.8026 |
| 0.2837 | 66.67 | 2600 | 0.5377 | 0.7967 | 0.7977 |
| 0.28 | 71.79 | 2800 | 0.5258 | 0.7924 | 0.7928 |
| 0.2724 | 76.92 | 3000 | 0.5418 | 0.7943 | 0.7945 |
| 0.2668 | 82.05 | 3200 | 0.5509 | 0.7865 | 0.7879 |
| 0.256 | 87.18 | 3400 | 0.5541 | 0.8010 | 0.8010 |
| 0.2487 | 92.31 | 3600 | 0.5716 | 0.7987 | 0.7993 |
| 0.2461 | 97.44 | 3800 | 0.5703 | 0.7832 | 0.7847 |
| 0.2357 | 102.56 | 4000 | 0.5745 | 0.7926 | 0.7928 |
| 0.2345 | 107.69 | 4200 | 0.5881 | 0.7893 | 0.7896 |
| 0.2332 | 112.82 | 4400 | 0.5964 | 0.7787 | 0.7798 |
| 0.2222 | 117.95 | 4600 | 0.6121 | 0.7961 | 0.7961 |
| 0.2141 | 123.08 | 4800 | 0.6155 | 0.7897 | 0.7896 |
| 0.2133 | 128.21 | 5000 | 0.6218 | 0.7945 | 0.7945 |
| 0.2121 | 133.33 | 5200 | 0.6485 | 0.7872 | 0.7879 |
| 0.2051 | 138.46 | 5400 | 0.6307 | 0.7910 | 0.7912 |
| 0.1996 | 143.59 | 5600 | 0.6425 | 0.7929 | 0.7928 |
| 0.1976 | 148.72 | 5800 | 0.6696 | 0.7994 | 0.7993 |
| 0.1967 | 153.85 | 6000 | 0.6575 | 0.7873 | 0.7879 |
| 0.1901 | 158.97 | 6200 | 0.6697 | 0.7816 | 0.7814 |
| 0.1896 | 164.1 | 6400 | 0.6617 | 0.7943 | 0.7945 |
| 0.1824 | 169.23 | 6600 | 0.6753 | 0.7977 | 0.7977 |
| 0.1858 | 174.36 | 6800 | 0.6642 | 0.7959 | 0.7961 |
| 0.1762 | 179.49 | 7000 | 0.6973 | 0.7942 | 0.7945 |
| 0.1769 | 184.62 | 7200 | 0.7137 | 0.7921 | 0.7928 |
| 0.1769 | 189.74 | 7400 | 0.7157 | 0.7911 | 0.7912 |
| 0.1709 | 194.87 | 7600 | 0.7214 | 0.7878 | 0.7879 |
| 0.1749 | 200.0 | 7800 | 0.7159 | 0.7894 | 0.7896 |
| 0.1717 | 205.13 | 8000 | 0.7236 | 0.7863 | 0.7863 |
| 0.1698 | 210.26 | 8200 | 0.7168 | 0.7911 | 0.7912 |
| 0.1669 | 215.38 | 8400 | 0.7280 | 0.7862 | 0.7863 |
| 0.1685 | 220.51 | 8600 | 0.7279 | 0.7843 | 0.7847 |
| 0.1626 | 225.64 | 8800 | 0.7365 | 0.7895 | 0.7896 |
| 0.1678 | 230.77 | 9000 | 0.7328 | 0.7895 | 0.7896 |
| 0.1628 | 235.9 | 9200 | 0.7431 | 0.7912 | 0.7912 |
| 0.1676 | 241.03 | 9400 | 0.7286 | 0.7877 | 0.7879 |
| 0.1602 | 246.15 | 9600 | 0.7438 | 0.7844 | 0.7847 |
| 0.1668 | 251.28 | 9800 | 0.7388 | 0.7894 | 0.7896 |
| 0.1609 | 256.41 | 10000 | 0.7400 | 0.7894 | 0.7896 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:11:52+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_300\_tata-seqsight\_4096\_512\_15M-L8\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4525
* F1 Score: 0.8104
* Accuracy: 0.8108
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_tata-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4529
- F1 Score: 0.8189
- Accuracy: 0.8189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.525 | 5.13 | 200 | 0.4598 | 0.7888 | 0.7912 |
| 0.4258 | 10.26 | 400 | 0.4665 | 0.7872 | 0.7879 |
| 0.3858 | 15.38 | 600 | 0.4548 | 0.7944 | 0.7945 |
| 0.3567 | 20.51 | 800 | 0.4949 | 0.7881 | 0.7896 |
| 0.3289 | 25.64 | 1000 | 0.4710 | 0.7994 | 0.7993 |
| 0.3034 | 30.77 | 1200 | 0.4920 | 0.7770 | 0.7781 |
| 0.2771 | 35.9 | 1400 | 0.5520 | 0.7913 | 0.7912 |
| 0.2585 | 41.03 | 1600 | 0.5391 | 0.7790 | 0.7798 |
| 0.2376 | 46.15 | 1800 | 0.5667 | 0.7839 | 0.7847 |
| 0.2163 | 51.28 | 2000 | 0.6376 | 0.7881 | 0.7879 |
| 0.2005 | 56.41 | 2200 | 0.6994 | 0.7927 | 0.7928 |
| 0.1804 | 61.54 | 2400 | 0.7399 | 0.7848 | 0.7847 |
| 0.1633 | 66.67 | 2600 | 0.8005 | 0.7694 | 0.7700 |
| 0.1578 | 71.79 | 2800 | 0.8019 | 0.7723 | 0.7732 |
| 0.1423 | 76.92 | 3000 | 0.8350 | 0.7480 | 0.7488 |
| 0.1326 | 82.05 | 3200 | 0.7942 | 0.7535 | 0.7537 |
| 0.1223 | 87.18 | 3400 | 0.9037 | 0.7633 | 0.7635 |
| 0.1147 | 92.31 | 3600 | 0.9318 | 0.7583 | 0.7586 |
| 0.1092 | 97.44 | 3800 | 0.9013 | 0.7675 | 0.7684 |
| 0.1027 | 102.56 | 4000 | 0.9575 | 0.7649 | 0.7651 |
| 0.0978 | 107.69 | 4200 | 0.9630 | 0.7778 | 0.7781 |
| 0.0934 | 112.82 | 4400 | 0.9373 | 0.7747 | 0.7749 |
| 0.0825 | 117.95 | 4600 | 1.0492 | 0.7668 | 0.7667 |
| 0.083 | 123.08 | 4800 | 1.1142 | 0.7633 | 0.7635 |
| 0.0819 | 128.21 | 5000 | 1.0054 | 0.7750 | 0.7749 |
| 0.0807 | 133.33 | 5200 | 1.0625 | 0.7741 | 0.7749 |
| 0.0732 | 138.46 | 5400 | 1.0712 | 0.7658 | 0.7667 |
| 0.0694 | 143.59 | 5600 | 1.0530 | 0.7679 | 0.7684 |
| 0.0686 | 148.72 | 5800 | 1.0695 | 0.7726 | 0.7732 |
| 0.0683 | 153.85 | 6000 | 1.0801 | 0.7783 | 0.7781 |
| 0.0603 | 158.97 | 6200 | 1.1614 | 0.7749 | 0.7749 |
| 0.0629 | 164.1 | 6400 | 1.0910 | 0.7748 | 0.7749 |
| 0.0597 | 169.23 | 6600 | 1.0800 | 0.7764 | 0.7765 |
| 0.0612 | 174.36 | 6800 | 1.1113 | 0.7620 | 0.7618 |
| 0.0553 | 179.49 | 7000 | 1.1382 | 0.7781 | 0.7781 |
| 0.0561 | 184.62 | 7200 | 1.1173 | 0.7763 | 0.7765 |
| 0.0553 | 189.74 | 7400 | 1.1237 | 0.7701 | 0.7700 |
| 0.0503 | 194.87 | 7600 | 1.1780 | 0.7799 | 0.7798 |
| 0.05 | 200.0 | 7800 | 1.2119 | 0.7668 | 0.7667 |
| 0.0483 | 205.13 | 8000 | 1.2256 | 0.7733 | 0.7732 |
| 0.0485 | 210.26 | 8200 | 1.2152 | 0.7797 | 0.7798 |
| 0.0508 | 215.38 | 8400 | 1.1864 | 0.7779 | 0.7781 |
| 0.0476 | 220.51 | 8600 | 1.2031 | 0.7857 | 0.7863 |
| 0.0453 | 225.64 | 8800 | 1.2366 | 0.7830 | 0.7830 |
| 0.0451 | 230.77 | 9000 | 1.2441 | 0.7782 | 0.7781 |
| 0.0443 | 235.9 | 9200 | 1.2473 | 0.7812 | 0.7814 |
| 0.0473 | 241.03 | 9400 | 1.2117 | 0.7764 | 0.7765 |
| 0.0442 | 246.15 | 9600 | 1.2430 | 0.7846 | 0.7847 |
| 0.0428 | 251.28 | 9800 | 1.2550 | 0.7829 | 0.7830 |
| 0.0434 | 256.41 | 10000 | 1.2525 | 0.7829 | 0.7830 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:12:26+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_300\_tata-seqsight\_4096\_512\_15M-L32\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4529
* F1 Score: 0.8189
* Accuracy: 0.8189
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** jirawan-chro
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
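A minimal way to try the adapter is sketched below using plain `transformers` + `peft`; this assumes the repository stores a LoRA adapter on top of the quantized base model named above (if it instead contains merged weights, load it directly with `AutoModelForCausalLM`).

```python
# Hedged sketch: load the 4-bit base model and attach this LoRA adapter.
# Assumes the repo holds an adapter; bitsandbytes is required for the 4-bit base checkpoint.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3-8b-bnb-4bit"
adapter_id = "jirawan-chro/lora_model"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Explain LoRA fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```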
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | jirawan-chro/lora_model | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:14:19+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: jirawan-chro
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: jirawan-chro\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: jirawan-chro\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_notata-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1268
- F1 Score: 0.9512
- Accuracy: 0.9512
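Because this repository holds a PEFT adapter rather than full model weights, inference typically means attaching the adapter to the base checkpoint named above. The snippet below is a hedged sketch: the sequence-classification head, label count, and `trust_remote_code` flag are assumptions, not details confirmed by this card.

```python
# Hedged usage sketch: attach the adapter in this repo to its base model.
# Task head, num_labels, and trust_remote_code are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_4096_512_15M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id).eval()

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))
```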
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.3546 | 0.6 | 200 | 0.1798 | 0.9257 | 0.9258 |
| 0.1903 | 1.2 | 400 | 0.1564 | 0.9370 | 0.9371 |
| 0.1734 | 1.81 | 600 | 0.1403 | 0.9463 | 0.9463 |
| 0.1569 | 2.41 | 800 | 0.1374 | 0.9468 | 0.9469 |
| 0.1515 | 3.01 | 1000 | 0.1296 | 0.9510 | 0.9510 |
| 0.1465 | 3.61 | 1200 | 0.1263 | 0.9521 | 0.9521 |
| 0.1441 | 4.22 | 1400 | 0.1236 | 0.9529 | 0.9529 |
| 0.1372 | 4.82 | 1600 | 0.1209 | 0.9538 | 0.9538 |
| 0.1365 | 5.42 | 1800 | 0.1239 | 0.9504 | 0.9504 |
| 0.1312 | 6.02 | 2000 | 0.1234 | 0.9520 | 0.9520 |
| 0.1302 | 6.63 | 2200 | 0.1171 | 0.9550 | 0.9550 |
| 0.1309 | 7.23 | 2400 | 0.1162 | 0.9546 | 0.9546 |
| 0.1263 | 7.83 | 2600 | 0.1171 | 0.9533 | 0.9533 |
| 0.1285 | 8.43 | 2800 | 0.1189 | 0.9536 | 0.9536 |
| 0.1298 | 9.04 | 3000 | 0.1164 | 0.9563 | 0.9563 |
| 0.126 | 9.64 | 3200 | 0.1199 | 0.9533 | 0.9533 |
| 0.1268 | 10.24 | 3400 | 0.1154 | 0.9585 | 0.9585 |
| 0.1243 | 10.84 | 3600 | 0.1142 | 0.9566 | 0.9567 |
| 0.1214 | 11.45 | 3800 | 0.1139 | 0.9555 | 0.9555 |
| 0.1228 | 12.05 | 4000 | 0.1134 | 0.9576 | 0.9576 |
| 0.1223 | 12.65 | 4200 | 0.1137 | 0.9557 | 0.9557 |
| 0.1237 | 13.25 | 4400 | 0.1122 | 0.9557 | 0.9557 |
| 0.1207 | 13.86 | 4600 | 0.1118 | 0.9563 | 0.9563 |
| 0.1225 | 14.46 | 4800 | 0.1122 | 0.9572 | 0.9572 |
| 0.1182 | 15.06 | 5000 | 0.1109 | 0.9580 | 0.9580 |
| 0.1191 | 15.66 | 5200 | 0.1114 | 0.9565 | 0.9565 |
| 0.1218 | 16.27 | 5400 | 0.1102 | 0.9580 | 0.9580 |
| 0.1179 | 16.87 | 5600 | 0.1105 | 0.9561 | 0.9561 |
| 0.1165 | 17.47 | 5800 | 0.1104 | 0.9563 | 0.9563 |
| 0.1219 | 18.07 | 6000 | 0.1094 | 0.9580 | 0.9580 |
| 0.1189 | 18.67 | 6200 | 0.1086 | 0.9583 | 0.9584 |
| 0.1187 | 19.28 | 6400 | 0.1089 | 0.9576 | 0.9576 |
| 0.1128 | 19.88 | 6600 | 0.1102 | 0.9583 | 0.9584 |
| 0.1209 | 20.48 | 6800 | 0.1097 | 0.9580 | 0.9580 |
| 0.115 | 21.08 | 7000 | 0.1088 | 0.9576 | 0.9576 |
| 0.1127 | 21.69 | 7200 | 0.1103 | 0.9565 | 0.9565 |
| 0.1147 | 22.29 | 7400 | 0.1116 | 0.9567 | 0.9567 |
| 0.1183 | 22.89 | 7600 | 0.1086 | 0.9567 | 0.9567 |
| 0.1137 | 23.49 | 7800 | 0.1083 | 0.9576 | 0.9576 |
| 0.1158 | 24.1 | 8000 | 0.1084 | 0.9593 | 0.9593 |
| 0.1133 | 24.7 | 8200 | 0.1080 | 0.9584 | 0.9584 |
| 0.1132 | 25.3 | 8400 | 0.1082 | 0.9580 | 0.9580 |
| 0.1129 | 25.9 | 8600 | 0.1081 | 0.9574 | 0.9574 |
| 0.1149 | 26.51 | 8800 | 0.1079 | 0.9572 | 0.9572 |
| 0.1137 | 27.11 | 9000 | 0.1075 | 0.9578 | 0.9578 |
| 0.1135 | 27.71 | 9200 | 0.1078 | 0.9593 | 0.9593 |
| 0.1092 | 28.31 | 9400 | 0.1081 | 0.9589 | 0.9589 |
| 0.1183 | 28.92 | 9600 | 0.1074 | 0.9576 | 0.9576 |
| 0.11 | 29.52 | 9800 | 0.1076 | 0.9578 | 0.9578 |
| 0.1162 | 30.12 | 10000 | 0.1075 | 0.9582 | 0.9582 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:15:04+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_300\_notata-seqsight\_4096\_512\_15M-L1\_f
===========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1268
* F1 Score: 0.9512
* Accuracy: 0.9512
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_all-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4321
- F1 Score: 0.7993
- Accuracy: 0.7993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6099 | 0.54 | 200 | 0.5123 | 0.7516 | 0.7517 |
| 0.5027 | 1.08 | 400 | 0.4814 | 0.7731 | 0.7735 |
| 0.4759 | 1.62 | 600 | 0.4652 | 0.7825 | 0.7826 |
| 0.466 | 2.16 | 800 | 0.4640 | 0.7823 | 0.7823 |
| 0.4622 | 2.7 | 1000 | 0.4578 | 0.7863 | 0.7863 |
| 0.459 | 3.24 | 1200 | 0.4556 | 0.7879 | 0.7880 |
| 0.4549 | 3.78 | 1400 | 0.4535 | 0.7881 | 0.7882 |
| 0.452 | 4.32 | 1600 | 0.4580 | 0.7896 | 0.7897 |
| 0.4519 | 4.86 | 1800 | 0.4566 | 0.7884 | 0.7887 |
| 0.4485 | 5.41 | 2000 | 0.4530 | 0.7923 | 0.7924 |
| 0.444 | 5.95 | 2200 | 0.4512 | 0.7893 | 0.7894 |
| 0.4465 | 6.49 | 2400 | 0.4478 | 0.7910 | 0.7910 |
| 0.4425 | 7.03 | 2600 | 0.4493 | 0.7905 | 0.7909 |
| 0.4443 | 7.57 | 2800 | 0.4481 | 0.7972 | 0.7973 |
| 0.4371 | 8.11 | 3000 | 0.4472 | 0.7958 | 0.7959 |
| 0.4398 | 8.65 | 3200 | 0.4437 | 0.7951 | 0.7951 |
| 0.4411 | 9.19 | 3400 | 0.4442 | 0.7939 | 0.7939 |
| 0.4368 | 9.73 | 3600 | 0.4501 | 0.7915 | 0.7919 |
| 0.4404 | 10.27 | 3800 | 0.4432 | 0.7947 | 0.7948 |
| 0.4346 | 10.81 | 4000 | 0.4449 | 0.7969 | 0.7970 |
| 0.436 | 11.35 | 4200 | 0.4438 | 0.7951 | 0.7953 |
| 0.435 | 11.89 | 4400 | 0.4437 | 0.7951 | 0.7953 |
| 0.4315 | 12.43 | 4600 | 0.4426 | 0.7954 | 0.7954 |
| 0.4327 | 12.97 | 4800 | 0.4431 | 0.7972 | 0.7973 |
| 0.4332 | 13.51 | 5000 | 0.4484 | 0.7893 | 0.7900 |
| 0.4322 | 14.05 | 5200 | 0.4408 | 0.7967 | 0.7968 |
| 0.4306 | 14.59 | 5400 | 0.4414 | 0.7986 | 0.7986 |
| 0.4301 | 15.14 | 5600 | 0.4410 | 0.7986 | 0.7986 |
| 0.4322 | 15.68 | 5800 | 0.4412 | 0.7955 | 0.7956 |
| 0.4238 | 16.22 | 6000 | 0.4422 | 0.7962 | 0.7963 |
| 0.4305 | 16.76 | 6200 | 0.4392 | 0.7962 | 0.7963 |
| 0.4333 | 17.3 | 6400 | 0.4398 | 0.7960 | 0.7961 |
| 0.4277 | 17.84 | 6600 | 0.4423 | 0.7937 | 0.7939 |
| 0.4271 | 18.38 | 6800 | 0.4429 | 0.7940 | 0.7943 |
| 0.4266 | 18.92 | 7000 | 0.4394 | 0.7950 | 0.7951 |
| 0.4217 | 19.46 | 7200 | 0.4408 | 0.7952 | 0.7953 |
| 0.4336 | 20.0 | 7400 | 0.4388 | 0.7985 | 0.7985 |
| 0.4299 | 20.54 | 7600 | 0.4405 | 0.7940 | 0.7941 |
| 0.4257 | 21.08 | 7800 | 0.4399 | 0.7946 | 0.7948 |
| 0.4269 | 21.62 | 8000 | 0.4372 | 0.7964 | 0.7965 |
| 0.4254 | 22.16 | 8200 | 0.4375 | 0.7976 | 0.7976 |
| 0.4316 | 22.7 | 8400 | 0.4386 | 0.7939 | 0.7941 |
| 0.4249 | 23.24 | 8600 | 0.4363 | 0.7980 | 0.7980 |
| 0.4243 | 23.78 | 8800 | 0.4377 | 0.7971 | 0.7971 |
| 0.42 | 24.32 | 9000 | 0.4383 | 0.7985 | 0.7985 |
| 0.426 | 24.86 | 9200 | 0.4372 | 0.7973 | 0.7973 |
| 0.4327 | 25.41 | 9400 | 0.4373 | 0.7966 | 0.7966 |
| 0.4198 | 25.95 | 9600 | 0.4382 | 0.7973 | 0.7973 |
| 0.4274 | 26.49 | 9800 | 0.4382 | 0.7957 | 0.7958 |
| 0.4227 | 27.03 | 10000 | 0.4381 | 0.7966 | 0.7966 |
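The F1 Score and Accuracy columns in this table are the kind of values a `compute_metrics` callback returns to the Trainer at each evaluation step. One plausible implementation is sketched below; the binary task and macro averaging are assumptions rather than documented choices.

```python
# Plausible compute_metrics producing the F1/accuracy columns above.
# Averaging mode ("macro") is an assumption.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),
        "accuracy": accuracy_score(labels, preds),
    }
```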
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:15:44+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_core\_all-seqsight\_4096\_512\_15M-L1\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4321
* F1 Score: 0.7993
* Accuracy: 0.7993
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_all-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4073
- F1 Score: 0.8143
- Accuracy: 0.8144
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
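With `lr_scheduler_type: linear` and no warmup listed, the learning rate presumably decays linearly from 5e-4 to 0 over the 10000 optimizer steps. A hedged sketch of how that schedule is typically constructed follows; the stand-in module is only there to make the snippet runnable.

```python
# Sketch of the linear decay implied by the hyperparameters above.
import torch
from transformers import get_scheduler

model = torch.nn.Linear(8, 2)  # stand-in module; the real model is the PEFT-wrapped base
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_scheduler(
    "linear",
    optimizer=optimizer,
    num_warmup_steps=0,        # no warmup listed in this card
    num_training_steps=10_000,
)
# After each optimizer.step(), scheduler.step() moves the LR linearly toward 0 by step 10000.
```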
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.532 | 0.54 | 200 | 0.4612 | 0.7853 | 0.7853 |
| 0.458 | 1.08 | 400 | 0.4718 | 0.7885 | 0.7894 |
| 0.4427 | 1.62 | 600 | 0.4458 | 0.7929 | 0.7929 |
| 0.4349 | 2.16 | 800 | 0.4436 | 0.7970 | 0.7971 |
| 0.4323 | 2.7 | 1000 | 0.4393 | 0.7956 | 0.7958 |
| 0.4271 | 3.24 | 1200 | 0.4347 | 0.7961 | 0.7965 |
| 0.4219 | 3.78 | 1400 | 0.4377 | 0.7934 | 0.7939 |
| 0.4204 | 4.32 | 1600 | 0.4353 | 0.8010 | 0.8010 |
| 0.4198 | 4.86 | 1800 | 0.4322 | 0.7973 | 0.7976 |
| 0.4127 | 5.41 | 2000 | 0.4312 | 0.8001 | 0.8003 |
| 0.4133 | 5.95 | 2200 | 0.4320 | 0.8046 | 0.8046 |
| 0.4152 | 6.49 | 2400 | 0.4262 | 0.8016 | 0.8017 |
| 0.4079 | 7.03 | 2600 | 0.4236 | 0.8015 | 0.8015 |
| 0.4079 | 7.57 | 2800 | 0.4268 | 0.8023 | 0.8024 |
| 0.404 | 8.11 | 3000 | 0.4295 | 0.8001 | 0.8003 |
| 0.404 | 8.65 | 3200 | 0.4209 | 0.8054 | 0.8056 |
| 0.4043 | 9.19 | 3400 | 0.4243 | 0.8071 | 0.8071 |
| 0.4024 | 9.73 | 3600 | 0.4302 | 0.8033 | 0.8037 |
| 0.4022 | 10.27 | 3800 | 0.4269 | 0.8034 | 0.8037 |
| 0.4006 | 10.81 | 4000 | 0.4304 | 0.8042 | 0.8042 |
| 0.3963 | 11.35 | 4200 | 0.4246 | 0.8036 | 0.8039 |
| 0.3959 | 11.89 | 4400 | 0.4254 | 0.8037 | 0.8041 |
| 0.3943 | 12.43 | 4600 | 0.4254 | 0.8029 | 0.8029 |
| 0.3912 | 12.97 | 4800 | 0.4262 | 0.8036 | 0.8037 |
| 0.3924 | 13.51 | 5000 | 0.4351 | 0.7990 | 0.8 |
| 0.3908 | 14.05 | 5200 | 0.4232 | 0.8079 | 0.8079 |
| 0.3875 | 14.59 | 5400 | 0.4218 | 0.8084 | 0.8084 |
| 0.3879 | 15.14 | 5600 | 0.4291 | 0.8074 | 0.8074 |
| 0.3881 | 15.68 | 5800 | 0.4278 | 0.8037 | 0.8041 |
| 0.3809 | 16.22 | 6000 | 0.4286 | 0.8042 | 0.8044 |
| 0.3888 | 16.76 | 6200 | 0.4171 | 0.8088 | 0.8090 |
| 0.3879 | 17.3 | 6400 | 0.4229 | 0.8070 | 0.8073 |
| 0.3836 | 17.84 | 6600 | 0.4255 | 0.8047 | 0.8049 |
| 0.3787 | 18.38 | 6800 | 0.4352 | 0.7976 | 0.7986 |
| 0.3789 | 18.92 | 7000 | 0.4214 | 0.8086 | 0.8088 |
| 0.376 | 19.46 | 7200 | 0.4231 | 0.8084 | 0.8084 |
| 0.3864 | 20.0 | 7400 | 0.4186 | 0.8082 | 0.8083 |
| 0.3789 | 20.54 | 7600 | 0.4243 | 0.8051 | 0.8054 |
| 0.3781 | 21.08 | 7800 | 0.4221 | 0.8074 | 0.8076 |
| 0.3781 | 21.62 | 8000 | 0.4171 | 0.8080 | 0.8081 |
| 0.3727 | 22.16 | 8200 | 0.4221 | 0.8067 | 0.8069 |
| 0.3811 | 22.7 | 8400 | 0.4233 | 0.8069 | 0.8073 |
| 0.3725 | 23.24 | 8600 | 0.4180 | 0.8096 | 0.8096 |
| 0.3732 | 23.78 | 8800 | 0.4205 | 0.8070 | 0.8071 |
| 0.3704 | 24.32 | 9000 | 0.4216 | 0.8077 | 0.8078 |
| 0.3744 | 24.86 | 9200 | 0.4196 | 0.8060 | 0.8061 |
| 0.3814 | 25.41 | 9400 | 0.4205 | 0.8075 | 0.8076 |
| 0.367 | 25.95 | 9600 | 0.4235 | 0.8074 | 0.8076 |
| 0.372 | 26.49 | 9800 | 0.4239 | 0.8061 | 0.8063 |
| 0.372 | 27.03 | 10000 | 0.4234 | 0.8076 | 0.8078 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:15:44+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_core\_all-seqsight\_4096\_512\_15M-L32\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4073
* F1 Score: 0.8143
* Accuracy: 0.8144
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
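No snippet is provided by the card. Since the repository is tagged as a Llama-style text-generation model, a generic, hedged starting point might look like this; the prompt and dtype choices are placeholders, and the model's intended usage is not documented here.

```python
# Hedged, generic starting point for a Llama-style text-generation checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="golf2248/u4t42vm",  # this repository
    torch_dtype="auto",
    device_map="auto",
)
print(generator("Hello, my name is", max_new_tokens=32)[0]["generated_text"])
```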
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/u4t42vm | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T17:15:50+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama3_on_scigen_fixedprompt_server
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 30
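The effective batch size follows from 4 per-device samples × 64 accumulation steps = 256, as listed above. Since the card is tagged `trl`/`sft` with a PEFT adapter, training plausibly went through `SFTTrainer`; the outline below is a hedged sketch (the dataset, text field, and LoRA settings are assumptions, and the exact `SFTTrainer` signature varies across trl versions).

```python
# Hedged outline of an SFT run matching the hyperparameters above; details are assumptions.
from datasets import Dataset
from transformers import TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

# Stand-in dataset; the real training data is not documented in this card.
train_dataset = Dataset.from_dict({"text": ["Example instruction and answer."]})

args = TrainingArguments(
    output_dir="llama3_scigen_sft",      # placeholder
    learning_rate=1e-6,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=64,      # 4 x 64 = 256 effective batch size
    num_train_epochs=30,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    seed=42,
)
trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # adapter settings are assumptions
)
trainer.train()
```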
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "Llama3_on_scigen_fixedprompt_server", "results": []}]} | moetezsa/Llama3_on_scigen_fixedprompt_server | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | 2024-05-03T17:16:06+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us
|
# Llama3_on_scigen_fixedprompt_server
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 30
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# Llama3_on_scigen_fixedprompt_server\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 64\n- total_train_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 30",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us \n",
"# Llama3_on_scigen_fixedprompt_server\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 64\n- total_train_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 30",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
feature-extraction | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
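The card leaves this section blank. For an XLM-RoBERTa checkpoint tagged for feature extraction, a hedged sketch of pulling sentence embeddings is shown below; mean pooling is an assumption about how the vectors are meant to be used, not something this card specifies.

```python
# Hedged feature-extraction sketch; the pooling strategy is an assumption.
import torch
from transformers import AutoTokenizer, AutoModel

repo = "stvhuang/rcr-run-5pqr6lwp-90396-master-0_20240402T105012-ep46"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo).eval()

batch = tokenizer(["a sample sentence"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state         # (batch, seq, dim)
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)     # mean-pooled sentence vectors
print(embeddings.shape)
```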
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | stvhuang/rcr-run-5pqr6lwp-90396-master-0_20240402T105012-ep46 | null | [
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:16:13+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #xlm-roberta #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #xlm-roberta #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
GritLM-7B - GGUF
- Model creator: https://huggingface.co/GritLM/
- Original model: https://huggingface.co/GritLM/GritLM-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [GritLM-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [GritLM-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [GritLM-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [GritLM-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [GritLM-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [GritLM-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [GritLM-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [GritLM-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [GritLM-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [GritLM-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [GritLM-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [GritLM-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [GritLM-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [GritLM-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [GritLM-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [GritLM-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [GritLM-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [GritLM-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [GritLM-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [GritLM-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [GritLM-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q6_K.gguf) | Q6_K | 5.53GB |
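Any of the files above can be run locally with llama.cpp-compatible tooling. Below is a hedged example using the `llama-cpp-python` bindings; the quant choice and context length are arbitrary, and the prompt is a plain completion — consult the original model description for GritLM's chat/embedding formats.

```python
# Hedged local-inference example using llama-cpp-python with one of the quants above.
# File name and context length are arbitrary choices, not recommendations.
from llama_cpp import Llama

llm = Llama(
    model_path="GritLM-7B.Q4_K_M.gguf",  # path to a file downloaded from this repo
    n_ctx=4096,
)
out = llm("Generative representational instruction tuning is", max_tokens=96)
print(out["choices"][0]["text"])
```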
Original model description:
---
pipeline_tag: text-generation
inference: true
license: apache-2.0
datasets:
- GritLM/tulu2
tags:
- mteb
model-index:
- name: GritLM-7B
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 81.17910447761194
- type: ap
value: 46.26260671758199
- type: f1
value: 75.44565719934167
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.5161
- type: ap
value: 94.79131981460425
- type: f1
value: 96.51506148413065
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 57.806000000000004
- type: f1
value: 56.78350156257903
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.478
- type: map_at_10
value: 54.955
- type: map_at_100
value: 54.955
- type: map_at_1000
value: 54.955
- type: map_at_3
value: 50.888999999999996
- type: map_at_5
value: 53.349999999999994
- type: mrr_at_1
value: 39.757999999999996
- type: mrr_at_10
value: 55.449000000000005
- type: mrr_at_100
value: 55.449000000000005
- type: mrr_at_1000
value: 55.449000000000005
- type: mrr_at_3
value: 51.37500000000001
- type: mrr_at_5
value: 53.822
- type: ndcg_at_1
value: 38.478
- type: ndcg_at_10
value: 63.239999999999995
- type: ndcg_at_100
value: 63.239999999999995
- type: ndcg_at_1000
value: 63.239999999999995
- type: ndcg_at_3
value: 54.935
- type: ndcg_at_5
value: 59.379000000000005
- type: precision_at_1
value: 38.478
- type: precision_at_10
value: 8.933
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.089
- type: precision_at_3
value: 22.214
- type: precision_at_5
value: 15.491
- type: recall_at_1
value: 38.478
- type: recall_at_10
value: 89.331
- type: recall_at_100
value: 89.331
- type: recall_at_1000
value: 89.331
- type: recall_at_3
value: 66.643
- type: recall_at_5
value: 77.45400000000001
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 51.67144081472449
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 48.11256154264126
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.33801955487878
- type: mrr
value: 80.71549487754474
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.1935203751726
- type: cos_sim_spearman
value: 86.35497970498659
- type: euclidean_pearson
value: 85.46910708503744
- type: euclidean_spearman
value: 85.13928935405485
- type: manhattan_pearson
value: 85.68373836333303
- type: manhattan_spearman
value: 85.40013867117746
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 88.46753246753248
- type: f1
value: 88.43006344981134
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 40.86793640310432
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 39.80291334130727
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.421
- type: map_at_10
value: 52.349000000000004
- type: map_at_100
value: 52.349000000000004
- type: map_at_1000
value: 52.349000000000004
- type: map_at_3
value: 48.17
- type: map_at_5
value: 50.432
- type: mrr_at_1
value: 47.353
- type: mrr_at_10
value: 58.387
- type: mrr_at_100
value: 58.387
- type: mrr_at_1000
value: 58.387
- type: mrr_at_3
value: 56.199
- type: mrr_at_5
value: 57.487
- type: ndcg_at_1
value: 47.353
- type: ndcg_at_10
value: 59.202
- type: ndcg_at_100
value: 58.848
- type: ndcg_at_1000
value: 58.831999999999994
- type: ndcg_at_3
value: 54.112
- type: ndcg_at_5
value: 56.312
- type: precision_at_1
value: 47.353
- type: precision_at_10
value: 11.459
- type: precision_at_100
value: 1.146
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 26.133
- type: precision_at_5
value: 18.627
- type: recall_at_1
value: 38.421
- type: recall_at_10
value: 71.89
- type: recall_at_100
value: 71.89
- type: recall_at_1000
value: 71.89
- type: recall_at_3
value: 56.58
- type: recall_at_5
value: 63.125
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.025999999999996
- type: map_at_10
value: 50.590999999999994
- type: map_at_100
value: 51.99700000000001
- type: map_at_1000
value: 52.11599999999999
- type: map_at_3
value: 47.435
- type: map_at_5
value: 49.236000000000004
- type: mrr_at_1
value: 48.28
- type: mrr_at_10
value: 56.814
- type: mrr_at_100
value: 57.446
- type: mrr_at_1000
value: 57.476000000000006
- type: mrr_at_3
value: 54.958
- type: mrr_at_5
value: 56.084999999999994
- type: ndcg_at_1
value: 48.28
- type: ndcg_at_10
value: 56.442
- type: ndcg_at_100
value: 60.651999999999994
- type: ndcg_at_1000
value: 62.187000000000005
- type: ndcg_at_3
value: 52.866
- type: ndcg_at_5
value: 54.515
- type: precision_at_1
value: 48.28
- type: precision_at_10
value: 10.586
- type: precision_at_100
value: 1.6310000000000002
- type: precision_at_1000
value: 0.20600000000000002
- type: precision_at_3
value: 25.945
- type: precision_at_5
value: 18.076
- type: recall_at_1
value: 38.025999999999996
- type: recall_at_10
value: 66.11399999999999
- type: recall_at_100
value: 83.339
- type: recall_at_1000
value: 92.413
- type: recall_at_3
value: 54.493
- type: recall_at_5
value: 59.64699999999999
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 47.905
- type: map_at_10
value: 61.58
- type: map_at_100
value: 62.605
- type: map_at_1000
value: 62.637
- type: map_at_3
value: 58.074000000000005
- type: map_at_5
value: 60.260000000000005
- type: mrr_at_1
value: 54.42
- type: mrr_at_10
value: 64.847
- type: mrr_at_100
value: 65.403
- type: mrr_at_1000
value: 65.41900000000001
- type: mrr_at_3
value: 62.675000000000004
- type: mrr_at_5
value: 64.101
- type: ndcg_at_1
value: 54.42
- type: ndcg_at_10
value: 67.394
- type: ndcg_at_100
value: 70.846
- type: ndcg_at_1000
value: 71.403
- type: ndcg_at_3
value: 62.025
- type: ndcg_at_5
value: 65.032
- type: precision_at_1
value: 54.42
- type: precision_at_10
value: 10.646
- type: precision_at_100
value: 1.325
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 27.398
- type: precision_at_5
value: 18.796
- type: recall_at_1
value: 47.905
- type: recall_at_10
value: 80.84599999999999
- type: recall_at_100
value: 95.078
- type: recall_at_1000
value: 98.878
- type: recall_at_3
value: 67.05600000000001
- type: recall_at_5
value: 74.261
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.745
- type: map_at_10
value: 41.021
- type: map_at_100
value: 41.021
- type: map_at_1000
value: 41.021
- type: map_at_3
value: 37.714999999999996
- type: map_at_5
value: 39.766
- type: mrr_at_1
value: 33.559
- type: mrr_at_10
value: 43.537
- type: mrr_at_100
value: 43.537
- type: mrr_at_1000
value: 43.537
- type: mrr_at_3
value: 40.546
- type: mrr_at_5
value: 42.439
- type: ndcg_at_1
value: 33.559
- type: ndcg_at_10
value: 46.781
- type: ndcg_at_100
value: 46.781
- type: ndcg_at_1000
value: 46.781
- type: ndcg_at_3
value: 40.516000000000005
- type: ndcg_at_5
value: 43.957
- type: precision_at_1
value: 33.559
- type: precision_at_10
value: 7.198
- type: precision_at_100
value: 0.72
- type: precision_at_1000
value: 0.07200000000000001
- type: precision_at_3
value: 17.1
- type: precision_at_5
value: 12.316
- type: recall_at_1
value: 30.745
- type: recall_at_10
value: 62.038000000000004
- type: recall_at_100
value: 62.038000000000004
- type: recall_at_1000
value: 62.038000000000004
- type: recall_at_3
value: 45.378
- type: recall_at_5
value: 53.580000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.637999999999998
- type: map_at_10
value: 31.05
- type: map_at_100
value: 31.05
- type: map_at_1000
value: 31.05
- type: map_at_3
value: 27.628000000000004
- type: map_at_5
value: 29.767
- type: mrr_at_1
value: 25.0
- type: mrr_at_10
value: 36.131
- type: mrr_at_100
value: 36.131
- type: mrr_at_1000
value: 36.131
- type: mrr_at_3
value: 33.333
- type: mrr_at_5
value: 35.143
- type: ndcg_at_1
value: 25.0
- type: ndcg_at_10
value: 37.478
- type: ndcg_at_100
value: 37.469
- type: ndcg_at_1000
value: 37.469
- type: ndcg_at_3
value: 31.757999999999996
- type: ndcg_at_5
value: 34.821999999999996
- type: precision_at_1
value: 25.0
- type: precision_at_10
value: 7.188999999999999
- type: precision_at_100
value: 0.719
- type: precision_at_1000
value: 0.07200000000000001
- type: precision_at_3
value: 15.837000000000002
- type: precision_at_5
value: 11.841
- type: recall_at_1
value: 19.637999999999998
- type: recall_at_10
value: 51.836000000000006
- type: recall_at_100
value: 51.836000000000006
- type: recall_at_1000
value: 51.836000000000006
- type: recall_at_3
value: 36.384
- type: recall_at_5
value: 43.964
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 34.884
- type: map_at_10
value: 47.88
- type: map_at_100
value: 47.88
- type: map_at_1000
value: 47.88
- type: map_at_3
value: 43.85
- type: map_at_5
value: 46.414
- type: mrr_at_1
value: 43.022
- type: mrr_at_10
value: 53.569
- type: mrr_at_100
value: 53.569
- type: mrr_at_1000
value: 53.569
- type: mrr_at_3
value: 51.075
- type: mrr_at_5
value: 52.725
- type: ndcg_at_1
value: 43.022
- type: ndcg_at_10
value: 54.461000000000006
- type: ndcg_at_100
value: 54.388000000000005
- type: ndcg_at_1000
value: 54.388000000000005
- type: ndcg_at_3
value: 48.864999999999995
- type: ndcg_at_5
value: 52.032000000000004
- type: precision_at_1
value: 43.022
- type: precision_at_10
value: 9.885
- type: precision_at_100
value: 0.988
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 23.612
- type: precision_at_5
value: 16.997
- type: recall_at_1
value: 34.884
- type: recall_at_10
value: 68.12899999999999
- type: recall_at_100
value: 68.12899999999999
- type: recall_at_1000
value: 68.12899999999999
- type: recall_at_3
value: 52.428
- type: recall_at_5
value: 60.662000000000006
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.588
- type: map_at_10
value: 43.85
- type: map_at_100
value: 45.317
- type: map_at_1000
value: 45.408
- type: map_at_3
value: 39.73
- type: map_at_5
value: 42.122
- type: mrr_at_1
value: 38.927
- type: mrr_at_10
value: 49.582
- type: mrr_at_100
value: 50.39
- type: mrr_at_1000
value: 50.426
- type: mrr_at_3
value: 46.518
- type: mrr_at_5
value: 48.271
- type: ndcg_at_1
value: 38.927
- type: ndcg_at_10
value: 50.605999999999995
- type: ndcg_at_100
value: 56.22200000000001
- type: ndcg_at_1000
value: 57.724
- type: ndcg_at_3
value: 44.232
- type: ndcg_at_5
value: 47.233999999999995
- type: precision_at_1
value: 38.927
- type: precision_at_10
value: 9.429
- type: precision_at_100
value: 1.435
- type: precision_at_1000
value: 0.172
- type: precision_at_3
value: 21.271
- type: precision_at_5
value: 15.434000000000001
- type: recall_at_1
value: 31.588
- type: recall_at_10
value: 64.836
- type: recall_at_100
value: 88.066
- type: recall_at_1000
value: 97.748
- type: recall_at_3
value: 47.128
- type: recall_at_5
value: 54.954
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.956083333333336
- type: map_at_10
value: 43.33483333333333
- type: map_at_100
value: 44.64883333333333
- type: map_at_1000
value: 44.75
- type: map_at_3
value: 39.87741666666666
- type: map_at_5
value: 41.86766666666667
- type: mrr_at_1
value: 38.06341666666667
- type: mrr_at_10
value: 47.839666666666666
- type: mrr_at_100
value: 48.644000000000005
- type: mrr_at_1000
value: 48.68566666666667
- type: mrr_at_3
value: 45.26358333333334
- type: mrr_at_5
value: 46.790000000000006
- type: ndcg_at_1
value: 38.06341666666667
- type: ndcg_at_10
value: 49.419333333333334
- type: ndcg_at_100
value: 54.50166666666667
- type: ndcg_at_1000
value: 56.161166666666674
- type: ndcg_at_3
value: 43.982416666666666
- type: ndcg_at_5
value: 46.638083333333334
- type: precision_at_1
value: 38.06341666666667
- type: precision_at_10
value: 8.70858333333333
- type: precision_at_100
value: 1.327
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 20.37816666666667
- type: precision_at_5
value: 14.516333333333334
- type: recall_at_1
value: 31.956083333333336
- type: recall_at_10
value: 62.69458333333334
- type: recall_at_100
value: 84.46433333333334
- type: recall_at_1000
value: 95.58449999999999
- type: recall_at_3
value: 47.52016666666666
- type: recall_at_5
value: 54.36066666666666
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.912
- type: map_at_10
value: 38.291
- type: map_at_100
value: 39.44
- type: map_at_1000
value: 39.528
- type: map_at_3
value: 35.638
- type: map_at_5
value: 37.218
- type: mrr_at_1
value: 32.822
- type: mrr_at_10
value: 41.661
- type: mrr_at_100
value: 42.546
- type: mrr_at_1000
value: 42.603
- type: mrr_at_3
value: 39.238
- type: mrr_at_5
value: 40.726
- type: ndcg_at_1
value: 32.822
- type: ndcg_at_10
value: 43.373
- type: ndcg_at_100
value: 48.638
- type: ndcg_at_1000
value: 50.654999999999994
- type: ndcg_at_3
value: 38.643
- type: ndcg_at_5
value: 41.126000000000005
- type: precision_at_1
value: 32.822
- type: precision_at_10
value: 6.8709999999999996
- type: precision_at_100
value: 1.032
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 16.82
- type: precision_at_5
value: 11.718
- type: recall_at_1
value: 28.912
- type: recall_at_10
value: 55.376999999999995
- type: recall_at_100
value: 79.066
- type: recall_at_1000
value: 93.664
- type: recall_at_3
value: 42.569
- type: recall_at_5
value: 48.719
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.181
- type: map_at_10
value: 31.462
- type: map_at_100
value: 32.73
- type: map_at_1000
value: 32.848
- type: map_at_3
value: 28.57
- type: map_at_5
value: 30.182
- type: mrr_at_1
value: 27.185
- type: mrr_at_10
value: 35.846000000000004
- type: mrr_at_100
value: 36.811
- type: mrr_at_1000
value: 36.873
- type: mrr_at_3
value: 33.437
- type: mrr_at_5
value: 34.813
- type: ndcg_at_1
value: 27.185
- type: ndcg_at_10
value: 36.858000000000004
- type: ndcg_at_100
value: 42.501
- type: ndcg_at_1000
value: 44.945
- type: ndcg_at_3
value: 32.066
- type: ndcg_at_5
value: 34.29
- type: precision_at_1
value: 27.185
- type: precision_at_10
value: 6.752
- type: precision_at_100
value: 1.111
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 15.290000000000001
- type: precision_at_5
value: 11.004999999999999
- type: recall_at_1
value: 22.181
- type: recall_at_10
value: 48.513
- type: recall_at_100
value: 73.418
- type: recall_at_1000
value: 90.306
- type: recall_at_3
value: 35.003
- type: recall_at_5
value: 40.876000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.934999999999995
- type: map_at_10
value: 44.727
- type: map_at_100
value: 44.727
- type: map_at_1000
value: 44.727
- type: map_at_3
value: 40.918
- type: map_at_5
value: 42.961
- type: mrr_at_1
value: 39.646
- type: mrr_at_10
value: 48.898
- type: mrr_at_100
value: 48.898
- type: mrr_at_1000
value: 48.898
- type: mrr_at_3
value: 45.896
- type: mrr_at_5
value: 47.514
- type: ndcg_at_1
value: 39.646
- type: ndcg_at_10
value: 50.817
- type: ndcg_at_100
value: 50.803
- type: ndcg_at_1000
value: 50.803
- type: ndcg_at_3
value: 44.507999999999996
- type: ndcg_at_5
value: 47.259
- type: precision_at_1
value: 39.646
- type: precision_at_10
value: 8.759
- type: precision_at_100
value: 0.876
- type: precision_at_1000
value: 0.08800000000000001
- type: precision_at_3
value: 20.274
- type: precision_at_5
value: 14.366000000000001
- type: recall_at_1
value: 33.934999999999995
- type: recall_at_10
value: 65.037
- type: recall_at_100
value: 65.037
- type: recall_at_1000
value: 65.037
- type: recall_at_3
value: 47.439
- type: recall_at_5
value: 54.567
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.058
- type: map_at_10
value: 43.137
- type: map_at_100
value: 43.137
- type: map_at_1000
value: 43.137
- type: map_at_3
value: 39.882
- type: map_at_5
value: 41.379
- type: mrr_at_1
value: 38.933
- type: mrr_at_10
value: 48.344
- type: mrr_at_100
value: 48.344
- type: mrr_at_1000
value: 48.344
- type: mrr_at_3
value: 45.652
- type: mrr_at_5
value: 46.877
- type: ndcg_at_1
value: 38.933
- type: ndcg_at_10
value: 49.964
- type: ndcg_at_100
value: 49.242000000000004
- type: ndcg_at_1000
value: 49.222
- type: ndcg_at_3
value: 44.605
- type: ndcg_at_5
value: 46.501999999999995
- type: precision_at_1
value: 38.933
- type: precision_at_10
value: 9.427000000000001
- type: precision_at_100
value: 0.943
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 20.685000000000002
- type: precision_at_5
value: 14.585
- type: recall_at_1
value: 32.058
- type: recall_at_10
value: 63.074
- type: recall_at_100
value: 63.074
- type: recall_at_1000
value: 63.074
- type: recall_at_3
value: 47.509
- type: recall_at_5
value: 52.455
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.029000000000003
- type: map_at_10
value: 34.646
- type: map_at_100
value: 34.646
- type: map_at_1000
value: 34.646
- type: map_at_3
value: 31.456
- type: map_at_5
value: 33.138
- type: mrr_at_1
value: 28.281
- type: mrr_at_10
value: 36.905
- type: mrr_at_100
value: 36.905
- type: mrr_at_1000
value: 36.905
- type: mrr_at_3
value: 34.011
- type: mrr_at_5
value: 35.638
- type: ndcg_at_1
value: 28.281
- type: ndcg_at_10
value: 40.159
- type: ndcg_at_100
value: 40.159
- type: ndcg_at_1000
value: 40.159
- type: ndcg_at_3
value: 33.995
- type: ndcg_at_5
value: 36.836999999999996
- type: precision_at_1
value: 28.281
- type: precision_at_10
value: 6.358999999999999
- type: precision_at_100
value: 0.636
- type: precision_at_1000
value: 0.064
- type: precision_at_3
value: 14.233
- type: precision_at_5
value: 10.314
- type: recall_at_1
value: 26.029000000000003
- type: recall_at_10
value: 55.08
- type: recall_at_100
value: 55.08
- type: recall_at_1000
value: 55.08
- type: recall_at_3
value: 38.487
- type: recall_at_5
value: 45.308
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.842999999999998
- type: map_at_10
value: 22.101000000000003
- type: map_at_100
value: 24.319
- type: map_at_1000
value: 24.51
- type: map_at_3
value: 18.372
- type: map_at_5
value: 20.323
- type: mrr_at_1
value: 27.948
- type: mrr_at_10
value: 40.321
- type: mrr_at_100
value: 41.262
- type: mrr_at_1000
value: 41.297
- type: mrr_at_3
value: 36.558
- type: mrr_at_5
value: 38.824999999999996
- type: ndcg_at_1
value: 27.948
- type: ndcg_at_10
value: 30.906
- type: ndcg_at_100
value: 38.986
- type: ndcg_at_1000
value: 42.136
- type: ndcg_at_3
value: 24.911
- type: ndcg_at_5
value: 27.168999999999997
- type: precision_at_1
value: 27.948
- type: precision_at_10
value: 9.798
- type: precision_at_100
value: 1.8399999999999999
- type: precision_at_1000
value: 0.243
- type: precision_at_3
value: 18.328
- type: precision_at_5
value: 14.502
- type: recall_at_1
value: 12.842999999999998
- type: recall_at_10
value: 37.245
- type: recall_at_100
value: 64.769
- type: recall_at_1000
value: 82.055
- type: recall_at_3
value: 23.159
- type: recall_at_5
value: 29.113
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.934000000000001
- type: map_at_10
value: 21.915000000000003
- type: map_at_100
value: 21.915000000000003
- type: map_at_1000
value: 21.915000000000003
- type: map_at_3
value: 14.623
- type: map_at_5
value: 17.841
- type: mrr_at_1
value: 71.25
- type: mrr_at_10
value: 78.994
- type: mrr_at_100
value: 78.994
- type: mrr_at_1000
value: 78.994
- type: mrr_at_3
value: 77.208
- type: mrr_at_5
value: 78.55799999999999
- type: ndcg_at_1
value: 60.62499999999999
- type: ndcg_at_10
value: 46.604
- type: ndcg_at_100
value: 35.653
- type: ndcg_at_1000
value: 35.531
- type: ndcg_at_3
value: 50.605
- type: ndcg_at_5
value: 48.730000000000004
- type: precision_at_1
value: 71.25
- type: precision_at_10
value: 37.75
- type: precision_at_100
value: 3.775
- type: precision_at_1000
value: 0.377
- type: precision_at_3
value: 54.417
- type: precision_at_5
value: 48.15
- type: recall_at_1
value: 8.934000000000001
- type: recall_at_10
value: 28.471000000000004
- type: recall_at_100
value: 28.471000000000004
- type: recall_at_1000
value: 28.471000000000004
- type: recall_at_3
value: 16.019
- type: recall_at_5
value: 21.410999999999998
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.81
- type: f1
value: 47.987573380720114
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 66.81899999999999
- type: map_at_10
value: 78.034
- type: map_at_100
value: 78.034
- type: map_at_1000
value: 78.034
- type: map_at_3
value: 76.43100000000001
- type: map_at_5
value: 77.515
- type: mrr_at_1
value: 71.542
- type: mrr_at_10
value: 81.638
- type: mrr_at_100
value: 81.638
- type: mrr_at_1000
value: 81.638
- type: mrr_at_3
value: 80.403
- type: mrr_at_5
value: 81.256
- type: ndcg_at_1
value: 71.542
- type: ndcg_at_10
value: 82.742
- type: ndcg_at_100
value: 82.741
- type: ndcg_at_1000
value: 82.741
- type: ndcg_at_3
value: 80.039
- type: ndcg_at_5
value: 81.695
- type: precision_at_1
value: 71.542
- type: precision_at_10
value: 10.387
- type: precision_at_100
value: 1.039
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 31.447999999999997
- type: precision_at_5
value: 19.91
- type: recall_at_1
value: 66.81899999999999
- type: recall_at_10
value: 93.372
- type: recall_at_100
value: 93.372
- type: recall_at_1000
value: 93.372
- type: recall_at_3
value: 86.33
- type: recall_at_5
value: 90.347
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.158
- type: map_at_10
value: 52.017
- type: map_at_100
value: 54.259
- type: map_at_1000
value: 54.367
- type: map_at_3
value: 45.738
- type: map_at_5
value: 49.283
- type: mrr_at_1
value: 57.87
- type: mrr_at_10
value: 66.215
- type: mrr_at_100
value: 66.735
- type: mrr_at_1000
value: 66.75
- type: mrr_at_3
value: 64.043
- type: mrr_at_5
value: 65.116
- type: ndcg_at_1
value: 57.87
- type: ndcg_at_10
value: 59.946999999999996
- type: ndcg_at_100
value: 66.31099999999999
- type: ndcg_at_1000
value: 67.75999999999999
- type: ndcg_at_3
value: 55.483000000000004
- type: ndcg_at_5
value: 56.891000000000005
- type: precision_at_1
value: 57.87
- type: precision_at_10
value: 16.497
- type: precision_at_100
value: 2.321
- type: precision_at_1000
value: 0.258
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 27.067999999999998
- type: recall_at_1
value: 31.158
- type: recall_at_10
value: 67.381
- type: recall_at_100
value: 89.464
- type: recall_at_1000
value: 97.989
- type: recall_at_3
value: 50.553000000000004
- type: recall_at_5
value: 57.824
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 42.073
- type: map_at_10
value: 72.418
- type: map_at_100
value: 73.175
- type: map_at_1000
value: 73.215
- type: map_at_3
value: 68.791
- type: map_at_5
value: 71.19
- type: mrr_at_1
value: 84.146
- type: mrr_at_10
value: 88.994
- type: mrr_at_100
value: 89.116
- type: mrr_at_1000
value: 89.12
- type: mrr_at_3
value: 88.373
- type: mrr_at_5
value: 88.82
- type: ndcg_at_1
value: 84.146
- type: ndcg_at_10
value: 79.404
- type: ndcg_at_100
value: 81.83200000000001
- type: ndcg_at_1000
value: 82.524
- type: ndcg_at_3
value: 74.595
- type: ndcg_at_5
value: 77.474
- type: precision_at_1
value: 84.146
- type: precision_at_10
value: 16.753999999999998
- type: precision_at_100
value: 1.8599999999999999
- type: precision_at_1000
value: 0.19499999999999998
- type: precision_at_3
value: 48.854
- type: precision_at_5
value: 31.579
- type: recall_at_1
value: 42.073
- type: recall_at_10
value: 83.768
- type: recall_at_100
value: 93.018
- type: recall_at_1000
value: 97.481
- type: recall_at_3
value: 73.282
- type: recall_at_5
value: 78.947
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 94.9968
- type: ap
value: 92.93892195862824
- type: f1
value: 94.99327998213761
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.698
- type: map_at_10
value: 34.585
- type: map_at_100
value: 35.782000000000004
- type: map_at_1000
value: 35.825
- type: map_at_3
value: 30.397999999999996
- type: map_at_5
value: 32.72
- type: mrr_at_1
value: 22.192
- type: mrr_at_10
value: 35.085
- type: mrr_at_100
value: 36.218
- type: mrr_at_1000
value: 36.256
- type: mrr_at_3
value: 30.986000000000004
- type: mrr_at_5
value: 33.268
- type: ndcg_at_1
value: 22.192
- type: ndcg_at_10
value: 41.957
- type: ndcg_at_100
value: 47.658
- type: ndcg_at_1000
value: 48.697
- type: ndcg_at_3
value: 33.433
- type: ndcg_at_5
value: 37.551
- type: precision_at_1
value: 22.192
- type: precision_at_10
value: 6.781
- type: precision_at_100
value: 0.963
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 14.365
- type: precision_at_5
value: 10.713000000000001
- type: recall_at_1
value: 21.698
- type: recall_at_10
value: 64.79
- type: recall_at_100
value: 91.071
- type: recall_at_1000
value: 98.883
- type: recall_at_3
value: 41.611
- type: recall_at_5
value: 51.459999999999994
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.15823073415413
- type: f1
value: 96.00362034963248
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.12722298221614
- type: f1
value: 70.46888967516227
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.77673167451245
- type: f1
value: 77.60202561132175
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 82.09145931405514
- type: f1
value: 81.7701921473406
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 36.52153488185864
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 36.80090398444147
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.807141746058605
- type: mrr
value: 32.85025611455029
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.920999999999999
- type: map_at_10
value: 16.049
- type: map_at_100
value: 16.049
- type: map_at_1000
value: 16.049
- type: map_at_3
value: 11.865
- type: map_at_5
value: 13.657
- type: mrr_at_1
value: 53.87
- type: mrr_at_10
value: 62.291
- type: mrr_at_100
value: 62.291
- type: mrr_at_1000
value: 62.291
- type: mrr_at_3
value: 60.681
- type: mrr_at_5
value: 61.61
- type: ndcg_at_1
value: 51.23799999999999
- type: ndcg_at_10
value: 40.892
- type: ndcg_at_100
value: 26.951999999999998
- type: ndcg_at_1000
value: 26.474999999999998
- type: ndcg_at_3
value: 46.821
- type: ndcg_at_5
value: 44.333
- type: precision_at_1
value: 53.251000000000005
- type: precision_at_10
value: 30.124000000000002
- type: precision_at_100
value: 3.012
- type: precision_at_1000
value: 0.301
- type: precision_at_3
value: 43.55
- type: precision_at_5
value: 38.266
- type: recall_at_1
value: 6.920999999999999
- type: recall_at_10
value: 20.852
- type: recall_at_100
value: 20.852
- type: recall_at_1000
value: 20.852
- type: recall_at_3
value: 13.628000000000002
- type: recall_at_5
value: 16.273
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 46.827999999999996
- type: map_at_10
value: 63.434000000000005
- type: map_at_100
value: 63.434000000000005
- type: map_at_1000
value: 63.434000000000005
- type: map_at_3
value: 59.794000000000004
- type: map_at_5
value: 62.08
- type: mrr_at_1
value: 52.288999999999994
- type: mrr_at_10
value: 65.95
- type: mrr_at_100
value: 65.95
- type: mrr_at_1000
value: 65.95
- type: mrr_at_3
value: 63.413
- type: mrr_at_5
value: 65.08
- type: ndcg_at_1
value: 52.288999999999994
- type: ndcg_at_10
value: 70.301
- type: ndcg_at_100
value: 70.301
- type: ndcg_at_1000
value: 70.301
- type: ndcg_at_3
value: 63.979
- type: ndcg_at_5
value: 67.582
- type: precision_at_1
value: 52.288999999999994
- type: precision_at_10
value: 10.576
- type: precision_at_100
value: 1.058
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 28.177000000000003
- type: precision_at_5
value: 19.073
- type: recall_at_1
value: 46.827999999999996
- type: recall_at_10
value: 88.236
- type: recall_at_100
value: 88.236
- type: recall_at_1000
value: 88.236
- type: recall_at_3
value: 72.371
- type: recall_at_5
value: 80.56
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.652
- type: map_at_10
value: 85.953
- type: map_at_100
value: 85.953
- type: map_at_1000
value: 85.953
- type: map_at_3
value: 83.05399999999999
- type: map_at_5
value: 84.89
- type: mrr_at_1
value: 82.42
- type: mrr_at_10
value: 88.473
- type: mrr_at_100
value: 88.473
- type: mrr_at_1000
value: 88.473
- type: mrr_at_3
value: 87.592
- type: mrr_at_5
value: 88.211
- type: ndcg_at_1
value: 82.44
- type: ndcg_at_10
value: 89.467
- type: ndcg_at_100
value: 89.33
- type: ndcg_at_1000
value: 89.33
- type: ndcg_at_3
value: 86.822
- type: ndcg_at_5
value: 88.307
- type: precision_at_1
value: 82.44
- type: precision_at_10
value: 13.616
- type: precision_at_100
value: 1.362
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 38.117000000000004
- type: precision_at_5
value: 25.05
- type: recall_at_1
value: 71.652
- type: recall_at_10
value: 96.224
- type: recall_at_100
value: 96.224
- type: recall_at_1000
value: 96.224
- type: recall_at_3
value: 88.571
- type: recall_at_5
value: 92.812
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 61.295010338050474
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 67.26380819328142
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.683
- type: map_at_10
value: 14.924999999999999
- type: map_at_100
value: 17.532
- type: map_at_1000
value: 17.875
- type: map_at_3
value: 10.392
- type: map_at_5
value: 12.592
- type: mrr_at_1
value: 28.000000000000004
- type: mrr_at_10
value: 39.951
- type: mrr_at_100
value: 41.025
- type: mrr_at_1000
value: 41.056
- type: mrr_at_3
value: 36.317
- type: mrr_at_5
value: 38.412
- type: ndcg_at_1
value: 28.000000000000004
- type: ndcg_at_10
value: 24.410999999999998
- type: ndcg_at_100
value: 33.79
- type: ndcg_at_1000
value: 39.035
- type: ndcg_at_3
value: 22.845
- type: ndcg_at_5
value: 20.080000000000002
- type: precision_at_1
value: 28.000000000000004
- type: precision_at_10
value: 12.790000000000001
- type: precision_at_100
value: 2.633
- type: precision_at_1000
value: 0.388
- type: precision_at_3
value: 21.367
- type: precision_at_5
value: 17.7
- type: recall_at_1
value: 5.683
- type: recall_at_10
value: 25.91
- type: recall_at_100
value: 53.443
- type: recall_at_1000
value: 78.73
- type: recall_at_3
value: 13.003
- type: recall_at_5
value: 17.932000000000002
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.677978681023
- type: cos_sim_spearman
value: 83.13093441058189
- type: euclidean_pearson
value: 83.35535759341572
- type: euclidean_spearman
value: 83.42583744219611
- type: manhattan_pearson
value: 83.2243124045889
- type: manhattan_spearman
value: 83.39801618652632
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 81.68960206569666
- type: cos_sim_spearman
value: 77.3368966488535
- type: euclidean_pearson
value: 77.62828980560303
- type: euclidean_spearman
value: 76.77951481444651
- type: manhattan_pearson
value: 77.88637240839041
- type: manhattan_spearman
value: 77.22157841466188
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.18745821650724
- type: cos_sim_spearman
value: 85.04423285574542
- type: euclidean_pearson
value: 85.46604816931023
- type: euclidean_spearman
value: 85.5230593932974
- type: manhattan_pearson
value: 85.57912805986261
- type: manhattan_spearman
value: 85.65955905111873
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.6715333300355
- type: cos_sim_spearman
value: 82.9058522514908
- type: euclidean_pearson
value: 83.9640357424214
- type: euclidean_spearman
value: 83.60415457472637
- type: manhattan_pearson
value: 84.05621005853469
- type: manhattan_spearman
value: 83.87077724707746
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.82422928098886
- type: cos_sim_spearman
value: 88.12660311894628
- type: euclidean_pearson
value: 87.50974805056555
- type: euclidean_spearman
value: 87.91957275596677
- type: manhattan_pearson
value: 87.74119404878883
- type: manhattan_spearman
value: 88.2808922165719
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.80605838552093
- type: cos_sim_spearman
value: 86.24123388765678
- type: euclidean_pearson
value: 85.32648347339814
- type: euclidean_spearman
value: 85.60046671950158
- type: manhattan_pearson
value: 85.53800168487811
- type: manhattan_spearman
value: 85.89542420480763
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.87540978988132
- type: cos_sim_spearman
value: 90.12715295099461
- type: euclidean_pearson
value: 91.61085993525275
- type: euclidean_spearman
value: 91.31835942311758
- type: manhattan_pearson
value: 91.57500202032934
- type: manhattan_spearman
value: 91.1790925526635
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 69.87136205329556
- type: cos_sim_spearman
value: 68.6253154635078
- type: euclidean_pearson
value: 68.91536015034222
- type: euclidean_spearman
value: 67.63744649352542
- type: manhattan_pearson
value: 69.2000713045275
- type: manhattan_spearman
value: 68.16002901587316
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.21849551039082
- type: cos_sim_spearman
value: 85.6392959372461
- type: euclidean_pearson
value: 85.92050852609488
- type: euclidean_spearman
value: 85.97205649009734
- type: manhattan_pearson
value: 86.1031154802254
- type: manhattan_spearman
value: 86.26791155517466
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.83953958636627
- type: mrr
value: 96.71167612344082
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 64.994
- type: map_at_10
value: 74.763
- type: map_at_100
value: 75.127
- type: map_at_1000
value: 75.143
- type: map_at_3
value: 71.824
- type: map_at_5
value: 73.71
- type: mrr_at_1
value: 68.333
- type: mrr_at_10
value: 75.749
- type: mrr_at_100
value: 75.922
- type: mrr_at_1000
value: 75.938
- type: mrr_at_3
value: 73.556
- type: mrr_at_5
value: 74.739
- type: ndcg_at_1
value: 68.333
- type: ndcg_at_10
value: 79.174
- type: ndcg_at_100
value: 80.41
- type: ndcg_at_1000
value: 80.804
- type: ndcg_at_3
value: 74.361
- type: ndcg_at_5
value: 76.861
- type: precision_at_1
value: 68.333
- type: precision_at_10
value: 10.333
- type: precision_at_100
value: 1.0999999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.778
- type: precision_at_5
value: 19.067
- type: recall_at_1
value: 64.994
- type: recall_at_10
value: 91.822
- type: recall_at_100
value: 97.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 78.878
- type: recall_at_5
value: 85.172
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.72079207920792
- type: cos_sim_ap
value: 93.00265215525152
- type: cos_sim_f1
value: 85.06596306068602
- type: cos_sim_precision
value: 90.05586592178771
- type: cos_sim_recall
value: 80.60000000000001
- type: dot_accuracy
value: 99.66039603960397
- type: dot_ap
value: 91.22371407479089
- type: dot_f1
value: 82.34693877551021
- type: dot_precision
value: 84.0625
- type: dot_recall
value: 80.7
- type: euclidean_accuracy
value: 99.71881188118812
- type: euclidean_ap
value: 92.88449963304728
- type: euclidean_f1
value: 85.19480519480518
- type: euclidean_precision
value: 88.64864864864866
- type: euclidean_recall
value: 82.0
- type: manhattan_accuracy
value: 99.73267326732673
- type: manhattan_ap
value: 93.23055393056883
- type: manhattan_f1
value: 85.88957055214725
- type: manhattan_precision
value: 87.86610878661088
- type: manhattan_recall
value: 84.0
- type: max_accuracy
value: 99.73267326732673
- type: max_ap
value: 93.23055393056883
- type: max_f1
value: 85.88957055214725
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 77.3305735900358
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 41.32967136540674
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.95514866379359
- type: mrr
value: 56.95423245055598
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.783007208997144
- type: cos_sim_spearman
value: 30.373444721540533
- type: dot_pearson
value: 29.210604111143905
- type: dot_spearman
value: 29.98809758085659
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.234
- type: map_at_10
value: 1.894
- type: map_at_100
value: 1.894
- type: map_at_1000
value: 1.894
- type: map_at_3
value: 0.636
- type: map_at_5
value: 1.0
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.667
- type: mrr_at_100
value: 93.667
- type: mrr_at_1000
value: 93.667
- type: mrr_at_3
value: 93.667
- type: mrr_at_5
value: 93.667
- type: ndcg_at_1
value: 85.0
- type: ndcg_at_10
value: 74.798
- type: ndcg_at_100
value: 16.462
- type: ndcg_at_1000
value: 7.0889999999999995
- type: ndcg_at_3
value: 80.754
- type: ndcg_at_5
value: 77.319
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 78.0
- type: precision_at_100
value: 7.8
- type: precision_at_1000
value: 0.7799999999999999
- type: precision_at_3
value: 83.333
- type: precision_at_5
value: 80.80000000000001
- type: recall_at_1
value: 0.234
- type: recall_at_10
value: 2.093
- type: recall_at_100
value: 2.093
- type: recall_at_1000
value: 2.093
- type: recall_at_3
value: 0.662
- type: recall_at_5
value: 1.0739999999999998
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.703
- type: map_at_10
value: 10.866000000000001
- type: map_at_100
value: 10.866000000000001
- type: map_at_1000
value: 10.866000000000001
- type: map_at_3
value: 5.909
- type: map_at_5
value: 7.35
- type: mrr_at_1
value: 36.735
- type: mrr_at_10
value: 53.583000000000006
- type: mrr_at_100
value: 53.583000000000006
- type: mrr_at_1000
value: 53.583000000000006
- type: mrr_at_3
value: 49.32
- type: mrr_at_5
value: 51.769
- type: ndcg_at_1
value: 34.694
- type: ndcg_at_10
value: 27.926000000000002
- type: ndcg_at_100
value: 22.701
- type: ndcg_at_1000
value: 22.701
- type: ndcg_at_3
value: 32.073
- type: ndcg_at_5
value: 28.327999999999996
- type: precision_at_1
value: 36.735
- type: precision_at_10
value: 24.694
- type: precision_at_100
value: 2.469
- type: precision_at_1000
value: 0.247
- type: precision_at_3
value: 31.973000000000003
- type: precision_at_5
value: 26.939
- type: recall_at_1
value: 2.703
- type: recall_at_10
value: 17.702
- type: recall_at_100
value: 17.702
- type: recall_at_1000
value: 17.702
- type: recall_at_3
value: 7.208
- type: recall_at_5
value: 9.748999999999999
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.79960000000001
- type: ap
value: 15.467565415565815
- type: f1
value: 55.28639823443618
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.7792869269949
- type: f1
value: 65.08597154774318
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 55.70352297774293
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 88.27561542588067
- type: cos_sim_ap
value: 81.08262141256193
- type: cos_sim_f1
value: 73.82341501361338
- type: cos_sim_precision
value: 72.5720112159062
- type: cos_sim_recall
value: 75.11873350923483
- type: dot_accuracy
value: 86.66030875603504
- type: dot_ap
value: 76.6052349228621
- type: dot_f1
value: 70.13897280966768
- type: dot_precision
value: 64.70457079152732
- type: dot_recall
value: 76.56992084432717
- type: euclidean_accuracy
value: 88.37098408535495
- type: euclidean_ap
value: 81.12515230092113
- type: euclidean_f1
value: 74.10338225909379
- type: euclidean_precision
value: 71.76761433868974
- type: euclidean_recall
value: 76.59630606860158
- type: manhattan_accuracy
value: 88.34118137926924
- type: manhattan_ap
value: 80.95751834536561
- type: manhattan_f1
value: 73.9119496855346
- type: manhattan_precision
value: 70.625
- type: manhattan_recall
value: 77.5197889182058
- type: max_accuracy
value: 88.37098408535495
- type: max_ap
value: 81.12515230092113
- type: max_f1
value: 74.10338225909379
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.79896767182831
- type: cos_sim_ap
value: 87.40071784061065
- type: cos_sim_f1
value: 79.87753144712087
- type: cos_sim_precision
value: 76.67304015296367
- type: cos_sim_recall
value: 83.3615645210964
- type: dot_accuracy
value: 88.95486474948578
- type: dot_ap
value: 86.00227979119943
- type: dot_f1
value: 78.54601474525914
- type: dot_precision
value: 75.00525394045535
- type: dot_recall
value: 82.43763473975977
- type: euclidean_accuracy
value: 89.7892653393876
- type: euclidean_ap
value: 87.42174706480819
- type: euclidean_f1
value: 80.07283321194465
- type: euclidean_precision
value: 75.96738529574351
- type: euclidean_recall
value: 84.6473668001232
- type: manhattan_accuracy
value: 89.8474793340319
- type: manhattan_ap
value: 87.47814292587448
- type: manhattan_f1
value: 80.15461150280949
- type: manhattan_precision
value: 74.88798234468
- type: manhattan_recall
value: 86.21804742839544
- type: max_accuracy
value: 89.8474793340319
- type: max_ap
value: 87.47814292587448
- type: max_f1
value: 80.15461150280949
---
# Model Summary
> GritLM is a generative representational instruction-tuned language model. It unifies text representation (embedding) and text generation in a single model, achieving state-of-the-art performance on both types of tasks.
- **Repository:** [ContextualAI/gritlm](https://github.com/ContextualAI/gritlm)
- **Paper:** https://arxiv.org/abs/2402.09906
- **Logs:** https://wandb.ai/muennighoff/gritlm/runs/0uui712t/overview
- **Script:** https://github.com/ContextualAI/gritlm/blob/main/scripts/training/train_gritlm_7b.sh
| Model | Description |
|-------|-------------|
| [GritLM 7B](https://hf.co/GritLM/GritLM-7B) | Mistral 7B finetuned using GRIT |
| [GritLM 8x7B](https://hf.co/GritLM/GritLM-8x7B) | Mixtral 8x7B finetuned using GRIT |
# Use
The model usage is documented [here](https://github.com/ContextualAI/gritlm?tab=readme-ov-file#inference).
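For quick experimentation, a minimal sketch along the following lines should work with the plain `transformers` API. It is not the official interface: the `<|user|>`/`<|assistant|>` prompt template and the mean pooling used for the embedding pass are simplifying assumptions here, and the linked repository's `gritlm` package is the documented way to get both generation and properly pooled, instruction-formatted embeddings.
```python
# Minimal sketch (not the official API): load GritLM-7B with Hugging Face
# transformers, run one generation and one rough embedding pass.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GritLM/GritLM-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Mistral tokenizers often lack a pad token
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# --- Generation (prompt template is an assumption; see the repo docs) ---
prompt = "<|user|>\nWrite a haiku about embeddings.\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))

# --- Embedding (rough stand-in: mean-pool the last hidden state) ---
texts = ["GritLM unifies embedding and generation.", "A sentence to embed."]
enc = tokenizer(texts, padding=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    hidden = model(**enc, output_hidden_states=True).hidden_states[-1]
mask = enc["attention_mask"].unsqueeze(-1)
emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # masked mean pooling
emb = torch.nn.functional.normalize(emb, p=2, dim=1)  # cosine-ready
print(emb.shape)
```
For retrieval-quality embeddings, prefer the documented package, which applies the instruction format and pooling the model was trained with.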
# Citation
```bibtex
@misc{muennighoff2024generative,
title={Generative Representational Instruction Tuning},
author={Niklas Muennighoff and Hongjin Su and Liang Wang and Nan Yang and Furu Wei and Tao Yu and Amanpreet Singh and Douwe Kiela},
year={2024},
eprint={2402.09906},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {} | RichardErkhov/GritLM_-_GritLM-7B-gguf | null | [
"gguf",
"region:us"
] | null | 2024-05-03T17:18:16+00:00 | [] | [] | TAGS
#gguf #region-us
| Quantization made by Richard Erkhov. Github · Discord · Request more models
GritLM-7B - GGUF
* Model creator: URL
* Original model: URL
| Name | Quant method | Size |
|------|--------------|------|
| GritLM-7B.Q2\_K.gguf | Q2\_K | 2.53GB |
| GritLM-7B.IQ3\_XS.gguf | IQ3\_XS | 2.81GB |
| GritLM-7B.IQ3\_S.gguf | IQ3\_S | 2.96GB |
| GritLM-7B.Q3\_K\_S.gguf | Q3\_K\_S | 2.95GB |
| GritLM-7B.IQ3\_M.gguf | IQ3\_M | 3.06GB |
| GritLM-7B.Q3\_K.gguf | Q3\_K | 3.28GB |
| GritLM-7B.Q3\_K\_M.gguf | Q3\_K\_M | 3.28GB |
| GritLM-7B.Q3\_K\_L.gguf | Q3\_K\_L | 3.56GB |
| GritLM-7B.IQ4\_XS.gguf | IQ4\_XS | 3.67GB |
| GritLM-7B.Q4\_0.gguf | Q4\_0 | 3.83GB |
| GritLM-7B.IQ4\_NL.gguf | IQ4\_NL | 3.87GB |
| GritLM-7B.Q4\_K\_S.gguf | Q4\_K\_S | 3.86GB |
| GritLM-7B.Q4\_K.gguf | Q4\_K | 4.07GB |
| GritLM-7B.Q4\_K\_M.gguf | Q4\_K\_M | 4.07GB |
| GritLM-7B.Q4\_1.gguf | Q4\_1 | 4.24GB |
| GritLM-7B.Q5\_0.gguf | Q5\_0 | 4.65GB |
| GritLM-7B.Q5\_K\_S.gguf | Q5\_K\_S | 4.65GB |
| GritLM-7B.Q5\_K.gguf | Q5\_K | 4.78GB |
| GritLM-7B.Q5\_K\_M.gguf | Q5\_K\_M | 4.78GB |
| GritLM-7B.Q5\_1.gguf | Q5\_1 | 5.07GB |
| GritLM-7B.Q6\_K.gguf | Q6\_K | 5.53GB |
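Any of the files above can be run locally with a GGUF-capable runtime. A sketch using `llama-cpp-python` is shown below; the filename, context size, and prompt template are assumptions to adapt to your setup.
```python
# Sketch: run one of the listed GGUF quants with llama-cpp-python
# (pip install llama-cpp-python). Adjust the path to where you downloaded the file.
from llama_cpp import Llama

llm = Llama(
    model_path="./GritLM-7B.Q4_K_M.gguf",  # any quant from the table works
    n_ctx=4096,        # context window (assumption; raise for longer prompts)
    n_gpu_layers=-1,   # offload all layers to GPU if available; 0 for CPU-only
)

out = llm(
    "<|user|>\nSummarize what GritLM is in one sentence.\n<|assistant|>\n",
    max_tokens=128,
    temperature=0.0,
)
print(out["choices"][0]["text"])
```
Smaller quants (Q2/Q3) trade answer quality for memory; the Q4\_K\_M and Q5\_K\_M files are common middle-ground choices.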
Original model description:
---------------------------
pipeline\_tag: text-generation
inference: true
license: apache-2.0
datasets:
* GritLM/tulu2
tags:
* mteb
model-index:
* name: GritLM-7B
results:
+ task:
type: Classification
dataset:
type: mteb/amazon\_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 81.17910447761194
- type: ap
value: 46.26260671758199
- type: f1
value: 75.44565719934167
+ task:
type: Classification
dataset:
type: mteb/amazon\_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.5161
- type: ap
value: 94.79131981460425
- type: f1
value: 96.51506148413065
+ task:
type: Classification
dataset:
type: mteb/amazon\_reviews\_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 57.806000000000004
- type: f1
value: 56.78350156257903
+ task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 38.478
- type: map\_at\_10
value: 54.955
- type: map\_at\_100
value: 54.955
- type: map\_at\_1000
value: 54.955
- type: map\_at\_3
value: 50.888999999999996
- type: map\_at\_5
value: 53.349999999999994
- type: mrr\_at\_1
value: 39.757999999999996
- type: mrr\_at\_10
value: 55.449000000000005
- type: mrr\_at\_100
value: 55.449000000000005
- type: mrr\_at\_1000
value: 55.449000000000005
- type: mrr\_at\_3
value: 51.37500000000001
- type: mrr\_at\_5
value: 53.822
- type: ndcg\_at\_1
value: 38.478
- type: ndcg\_at\_10
value: 63.239999999999995
- type: ndcg\_at\_100
value: 63.239999999999995
- type: ndcg\_at\_1000
value: 63.239999999999995
- type: ndcg\_at\_3
value: 54.935
- type: ndcg\_at\_5
value: 59.379000000000005
- type: precision\_at\_1
value: 38.478
- type: precision\_at\_10
value: 8.933
- type: precision\_at\_100
value: 0.893
- type: precision\_at\_1000
value: 0.089
- type: precision\_at\_3
value: 22.214
- type: precision\_at\_5
value: 15.491
- type: recall\_at\_1
value: 38.478
- type: recall\_at\_10
value: 89.331
- type: recall\_at\_100
value: 89.331
- type: recall\_at\_1000
value: 89.331
- type: recall\_at\_3
value: 66.643
- type: recall\_at\_5
value: 77.45400000000001
+ task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v\_measure
value: 51.67144081472449
+ task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v\_measure
value: 48.11256154264126
+ task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.33801955487878
- type: mrr
value: 80.71549487754474
+ task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos\_sim\_pearson
value: 88.1935203751726
- type: cos\_sim\_spearman
value: 86.35497970498659
- type: euclidean\_pearson
value: 85.46910708503744
- type: euclidean\_spearman
value: 85.13928935405485
- type: manhattan\_pearson
value: 85.68373836333303
- type: manhattan\_spearman
value: 85.40013867117746
+ task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 88.46753246753248
- type: f1
value: 88.43006344981134
+ task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v\_measure
value: 40.86793640310432
+ task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v\_measure
value: 39.80291334130727
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 38.421
- type: map\_at\_10
value: 52.349000000000004
- type: map\_at\_100
value: 52.349000000000004
- type: map\_at\_1000
value: 52.349000000000004
- type: map\_at\_3
value: 48.17
- type: map\_at\_5
value: 50.432
- type: mrr\_at\_1
value: 47.353
- type: mrr\_at\_10
value: 58.387
- type: mrr\_at\_100
value: 58.387
- type: mrr\_at\_1000
value: 58.387
- type: mrr\_at\_3
value: 56.199
- type: mrr\_at\_5
value: 57.487
- type: ndcg\_at\_1
value: 47.353
- type: ndcg\_at\_10
value: 59.202
- type: ndcg\_at\_100
value: 58.848
- type: ndcg\_at\_1000
value: 58.831999999999994
- type: ndcg\_at\_3
value: 54.112
- type: ndcg\_at\_5
value: 56.312
- type: precision\_at\_1
value: 47.353
- type: precision\_at\_10
value: 11.459
- type: precision\_at\_100
value: 1.146
- type: precision\_at\_1000
value: 0.11499999999999999
- type: precision\_at\_3
value: 26.133
- type: precision\_at\_5
value: 18.627
- type: recall\_at\_1
value: 38.421
- type: recall\_at\_10
value: 71.89
- type: recall\_at\_100
value: 71.89
- type: recall\_at\_1000
value: 71.89
- type: recall\_at\_3
value: 56.58
- type: recall\_at\_5
value: 63.125
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 38.025999999999996
- type: map\_at\_10
value: 50.590999999999994
- type: map\_at\_100
value: 51.99700000000001
- type: map\_at\_1000
value: 52.11599999999999
- type: map\_at\_3
value: 47.435
- type: map\_at\_5
value: 49.236000000000004
- type: mrr\_at\_1
value: 48.28
- type: mrr\_at\_10
value: 56.814
- type: mrr\_at\_100
value: 57.446
- type: mrr\_at\_1000
value: 57.476000000000006
- type: mrr\_at\_3
value: 54.958
- type: mrr\_at\_5
value: 56.084999999999994
- type: ndcg\_at\_1
value: 48.28
- type: ndcg\_at\_10
value: 56.442
- type: ndcg\_at\_100
value: 60.651999999999994
- type: ndcg\_at\_1000
value: 62.187000000000005
- type: ndcg\_at\_3
value: 52.866
- type: ndcg\_at\_5
value: 54.515
- type: precision\_at\_1
value: 48.28
- type: precision\_at\_10
value: 10.586
- type: precision\_at\_100
value: 1.6310000000000002
- type: precision\_at\_1000
value: 0.20600000000000002
- type: precision\_at\_3
value: 25.945
- type: precision\_at\_5
value: 18.076
- type: recall\_at\_1
value: 38.025999999999996
- type: recall\_at\_10
value: 66.11399999999999
- type: recall\_at\_100
value: 83.339
- type: recall\_at\_1000
value: 92.413
- type: recall\_at\_3
value: 54.493
- type: recall\_at\_5
value: 59.64699999999999
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 47.905
- type: map\_at\_10
value: 61.58
- type: map\_at\_100
value: 62.605
- type: map\_at\_1000
value: 62.637
- type: map\_at\_3
value: 58.074000000000005
- type: map\_at\_5
value: 60.260000000000005
- type: mrr\_at\_1
value: 54.42
- type: mrr\_at\_10
value: 64.847
- type: mrr\_at\_100
value: 65.403
- type: mrr\_at\_1000
value: 65.41900000000001
- type: mrr\_at\_3
value: 62.675000000000004
- type: mrr\_at\_5
value: 64.101
- type: ndcg\_at\_1
value: 54.42
- type: ndcg\_at\_10
value: 67.394
- type: ndcg\_at\_100
value: 70.846
- type: ndcg\_at\_1000
value: 71.403
- type: ndcg\_at\_3
value: 62.025
- type: ndcg\_at\_5
value: 65.032
- type: precision\_at\_1
value: 54.42
- type: precision\_at\_10
value: 10.646
- type: precision\_at\_100
value: 1.325
- type: precision\_at\_1000
value: 0.13999999999999999
- type: precision\_at\_3
value: 27.398
- type: precision\_at\_5
value: 18.796
- type: recall\_at\_1
value: 47.905
- type: recall\_at\_10
value: 80.84599999999999
- type: recall\_at\_100
value: 95.078
- type: recall\_at\_1000
value: 98.878
- type: recall\_at\_3
value: 67.05600000000001
- type: recall\_at\_5
value: 74.261
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 30.745
- type: map\_at\_10
value: 41.021
- type: map\_at\_100
value: 41.021
- type: map\_at\_1000
value: 41.021
- type: map\_at\_3
value: 37.714999999999996
- type: map\_at\_5
value: 39.766
- type: mrr\_at\_1
value: 33.559
- type: mrr\_at\_10
value: 43.537
- type: mrr\_at\_100
value: 43.537
- type: mrr\_at\_1000
value: 43.537
- type: mrr\_at\_3
value: 40.546
- type: mrr\_at\_5
value: 42.439
- type: ndcg\_at\_1
value: 33.559
- type: ndcg\_at\_10
value: 46.781
- type: ndcg\_at\_100
value: 46.781
- type: ndcg\_at\_1000
value: 46.781
- type: ndcg\_at\_3
value: 40.516000000000005
- type: ndcg\_at\_5
value: 43.957
- type: precision\_at\_1
value: 33.559
- type: precision\_at\_10
value: 7.198
- type: precision\_at\_100
value: 0.72
- type: precision\_at\_1000
value: 0.07200000000000001
- type: precision\_at\_3
value: 17.1
- type: precision\_at\_5
value: 12.316
- type: recall\_at\_1
value: 30.745
- type: recall\_at\_10
value: 62.038000000000004
- type: recall\_at\_100
value: 62.038000000000004
- type: recall\_at\_1000
value: 62.038000000000004
- type: recall\_at\_3
value: 45.378
- type: recall\_at\_5
value: 53.580000000000005
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 19.637999999999998
- type: map\_at\_10
value: 31.05
- type: map\_at\_100
value: 31.05
- type: map\_at\_1000
value: 31.05
- type: map\_at\_3
value: 27.628000000000004
- type: map\_at\_5
value: 29.767
- type: mrr\_at\_1
value: 25.0
- type: mrr\_at\_10
value: 36.131
- type: mrr\_at\_100
value: 36.131
- type: mrr\_at\_1000
value: 36.131
- type: mrr\_at\_3
value: 33.333
- type: mrr\_at\_5
value: 35.143
- type: ndcg\_at\_1
value: 25.0
- type: ndcg\_at\_10
value: 37.478
- type: ndcg\_at\_100
value: 37.469
- type: ndcg\_at\_1000
value: 37.469
- type: ndcg\_at\_3
value: 31.757999999999996
- type: ndcg\_at\_5
value: 34.821999999999996
- type: precision\_at\_1
value: 25.0
- type: precision\_at\_10
value: 7.188999999999999
- type: precision\_at\_100
value: 0.719
- type: precision\_at\_1000
value: 0.07200000000000001
- type: precision\_at\_3
value: 15.837000000000002
- type: precision\_at\_5
value: 11.841
- type: recall\_at\_1
value: 19.637999999999998
- type: recall\_at\_10
value: 51.836000000000006
- type: recall\_at\_100
value: 51.836000000000006
- type: recall\_at\_1000
value: 51.836000000000006
- type: recall\_at\_3
value: 36.384
- type: recall\_at\_5
value: 43.964
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 34.884
- type: map\_at\_10
value: 47.88
- type: map\_at\_100
value: 47.88
- type: map\_at\_1000
value: 47.88
- type: map\_at\_3
value: 43.85
- type: map\_at\_5
value: 46.414
- type: mrr\_at\_1
value: 43.022
- type: mrr\_at\_10
value: 53.569
- type: mrr\_at\_100
value: 53.569
- type: mrr\_at\_1000
value: 53.569
- type: mrr\_at\_3
value: 51.075
- type: mrr\_at\_5
value: 52.725
- type: ndcg\_at\_1
value: 43.022
- type: ndcg\_at\_10
value: 54.461000000000006
- type: ndcg\_at\_100
value: 54.388000000000005
- type: ndcg\_at\_1000
value: 54.388000000000005
- type: ndcg\_at\_3
value: 48.864999999999995
- type: ndcg\_at\_5
value: 52.032000000000004
- type: precision\_at\_1
value: 43.022
- type: precision\_at\_10
value: 9.885
- type: precision\_at\_100
value: 0.988
- type: precision\_at\_1000
value: 0.099
- type: precision\_at\_3
value: 23.612
- type: precision\_at\_5
value: 16.997
- type: recall\_at\_1
value: 34.884
- type: recall\_at\_10
value: 68.12899999999999
- type: recall\_at\_100
value: 68.12899999999999
- type: recall\_at\_1000
value: 68.12899999999999
- type: recall\_at\_3
value: 52.428
- type: recall\_at\_5
value: 60.662000000000006
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 31.588
- type: map\_at\_10
value: 43.85
- type: map\_at\_100
value: 45.317
- type: map\_at\_1000
value: 45.408
- type: map\_at\_3
value: 39.73
- type: map\_at\_5
value: 42.122
- type: mrr\_at\_1
value: 38.927
- type: mrr\_at\_10
value: 49.582
- type: mrr\_at\_100
value: 50.39
- type: mrr\_at\_1000
value: 50.426
- type: mrr\_at\_3
value: 46.518
- type: mrr\_at\_5
value: 48.271
- type: ndcg\_at\_1
value: 38.927
- type: ndcg\_at\_10
value: 50.605999999999995
- type: ndcg\_at\_100
value: 56.22200000000001
- type: ndcg\_at\_1000
value: 57.724
- type: ndcg\_at\_3
value: 44.232
- type: ndcg\_at\_5
value: 47.233999999999995
- type: precision\_at\_1
value: 38.927
- type: precision\_at\_10
value: 9.429
- type: precision\_at\_100
value: 1.435
- type: precision\_at\_1000
value: 0.172
- type: precision\_at\_3
value: 21.271
- type: precision\_at\_5
value: 15.434000000000001
- type: recall\_at\_1
value: 31.588
- type: recall\_at\_10
value: 64.836
- type: recall\_at\_100
value: 88.066
- type: recall\_at\_1000
value: 97.748
- type: recall\_at\_3
value: 47.128
- type: recall\_at\_5
value: 54.954
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 31.956083333333336
- type: map\_at\_10
value: 43.33483333333333
- type: map\_at\_100
value: 44.64883333333333
- type: map\_at\_1000
value: 44.75
- type: map\_at\_3
value: 39.87741666666666
- type: map\_at\_5
value: 41.86766666666667
- type: mrr\_at\_1
value: 38.06341666666667
- type: mrr\_at\_10
value: 47.839666666666666
- type: mrr\_at\_100
value: 48.644000000000005
- type: mrr\_at\_1000
value: 48.68566666666667
- type: mrr\_at\_3
value: 45.26358333333334
- type: mrr\_at\_5
value: 46.790000000000006
- type: ndcg\_at\_1
value: 38.06341666666667
- type: ndcg\_at\_10
value: 49.419333333333334
- type: ndcg\_at\_100
value: 54.50166666666667
- type: ndcg\_at\_1000
value: 56.161166666666674
- type: ndcg\_at\_3
value: 43.982416666666666
- type: ndcg\_at\_5
value: 46.638083333333334
- type: precision\_at\_1
value: 38.06341666666667
- type: precision\_at\_10
value: 8.70858333333333
- type: precision\_at\_100
value: 1.327
- type: precision\_at\_1000
value: 0.165
- type: precision\_at\_3
value: 20.37816666666667
- type: precision\_at\_5
value: 14.516333333333334
- type: recall\_at\_1
value: 31.956083333333336
- type: recall\_at\_10
value: 62.69458333333334
- type: recall\_at\_100
value: 84.46433333333334
- type: recall\_at\_1000
value: 95.58449999999999
- type: recall\_at\_3
value: 47.52016666666666
- type: recall\_at\_5
value: 54.36066666666666
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 28.912
- type: map\_at\_10
value: 38.291
- type: map\_at\_100
value: 39.44
- type: map\_at\_1000
value: 39.528
- type: map\_at\_3
value: 35.638
- type: map\_at\_5
value: 37.218
- type: mrr\_at\_1
value: 32.822
- type: mrr\_at\_10
value: 41.661
- type: mrr\_at\_100
value: 42.546
- type: mrr\_at\_1000
value: 42.603
- type: mrr\_at\_3
value: 39.238
- type: mrr\_at\_5
value: 40.726
- type: ndcg\_at\_1
value: 32.822
- type: ndcg\_at\_10
value: 43.373
- type: ndcg\_at\_100
value: 48.638
- type: ndcg\_at\_1000
value: 50.654999999999994
- type: ndcg\_at\_3
value: 38.643
- type: ndcg\_at\_5
value: 41.126000000000005
- type: precision\_at\_1
value: 32.822
- type: precision\_at\_10
value: 6.8709999999999996
- type: precision\_at\_100
value: 1.032
- type: precision\_at\_1000
value: 0.128
- type: precision\_at\_3
value: 16.82
- type: precision\_at\_5
value: 11.718
- type: recall\_at\_1
value: 28.912
- type: recall\_at\_10
value: 55.376999999999995
- type: recall\_at\_100
value: 79.066
- type: recall\_at\_1000
value: 93.664
- type: recall\_at\_3
value: 42.569
- type: recall\_at\_5
value: 48.719
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 22.181
- type: map\_at\_10
value: 31.462
- type: map\_at\_100
value: 32.73
- type: map\_at\_1000
value: 32.848
- type: map\_at\_3
value: 28.57
- type: map\_at\_5
value: 30.182
- type: mrr\_at\_1
value: 27.185
- type: mrr\_at\_10
value: 35.846000000000004
- type: mrr\_at\_100
value: 36.811
- type: mrr\_at\_1000
value: 36.873
- type: mrr\_at\_3
value: 33.437
- type: mrr\_at\_5
value: 34.813
- type: ndcg\_at\_1
value: 27.185
- type: ndcg\_at\_10
value: 36.858000000000004
- type: ndcg\_at\_100
value: 42.501
- type: ndcg\_at\_1000
value: 44.945
- type: ndcg\_at\_3
value: 32.066
- type: ndcg\_at\_5
value: 34.29
- type: precision\_at\_1
value: 27.185
- type: precision\_at\_10
value: 6.752
- type: precision\_at\_100
value: 1.111
- type: precision\_at\_1000
value: 0.151
- type: precision\_at\_3
value: 15.290000000000001
- type: precision\_at\_5
value: 11.004999999999999
- type: recall\_at\_1
value: 22.181
- type: recall\_at\_10
value: 48.513
- type: recall\_at\_100
value: 73.418
- type: recall\_at\_1000
value: 90.306
- type: recall\_at\_3
value: 35.003
- type: recall\_at\_5
value: 40.876000000000005
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 33.934999999999995
- type: map\_at\_10
value: 44.727
- type: map\_at\_100
value: 44.727
- type: map\_at\_1000
value: 44.727
- type: map\_at\_3
value: 40.918
- type: map\_at\_5
value: 42.961
- type: mrr\_at\_1
value: 39.646
- type: mrr\_at\_10
value: 48.898
- type: mrr\_at\_100
value: 48.898
- type: mrr\_at\_1000
value: 48.898
- type: mrr\_at\_3
value: 45.896
- type: mrr\_at\_5
value: 47.514
- type: ndcg\_at\_1
value: 39.646
- type: ndcg\_at\_10
value: 50.817
- type: ndcg\_at\_100
value: 50.803
- type: ndcg\_at\_1000
value: 50.803
- type: ndcg\_at\_3
value: 44.507999999999996
- type: ndcg\_at\_5
value: 47.259
- type: precision\_at\_1
value: 39.646
- type: precision\_at\_10
value: 8.759
- type: precision\_at\_100
value: 0.876
- type: precision\_at\_1000
value: 0.08800000000000001
- type: precision\_at\_3
value: 20.274
- type: precision\_at\_5
value: 14.366000000000001
- type: recall\_at\_1
value: 33.934999999999995
- type: recall\_at\_10
value: 65.037
- type: recall\_at\_100
value: 65.037
- type: recall\_at\_1000
value: 65.037
- type: recall\_at\_3
value: 47.439
- type: recall\_at\_5
value: 54.567
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 32.058
- type: map\_at\_10
value: 43.137
- type: map\_at\_100
value: 43.137
- type: map\_at\_1000
value: 43.137
- type: map\_at\_3
value: 39.882
- type: map\_at\_5
value: 41.379
- type: mrr\_at\_1
value: 38.933
- type: mrr\_at\_10
value: 48.344
- type: mrr\_at\_100
value: 48.344
- type: mrr\_at\_1000
value: 48.344
- type: mrr\_at\_3
value: 45.652
- type: mrr\_at\_5
value: 46.877
- type: ndcg\_at\_1
value: 38.933
- type: ndcg\_at\_10
value: 49.964
- type: ndcg\_at\_100
value: 49.242000000000004
- type: ndcg\_at\_1000
value: 49.222
- type: ndcg\_at\_3
value: 44.605
- type: ndcg\_at\_5
value: 46.501999999999995
- type: precision\_at\_1
value: 38.933
- type: precision\_at\_10
value: 9.427000000000001
- type: precision\_at\_100
value: 0.943
- type: precision\_at\_1000
value: 0.094
- type: precision\_at\_3
value: 20.685000000000002
- type: precision\_at\_5
value: 14.585
- type: recall\_at\_1
value: 32.058
- type: recall\_at\_10
value: 63.074
- type: recall\_at\_100
value: 63.074
- type: recall\_at\_1000
value: 63.074
- type: recall\_at\_3
value: 47.509
- type: recall\_at\_5
value: 52.455
+ task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 26.029000000000003
- type: map\_at\_10
value: 34.646
- type: map\_at\_100
value: 34.646
- type: map\_at\_1000
value: 34.646
- type: map\_at\_3
value: 31.456
- type: map\_at\_5
value: 33.138
- type: mrr\_at\_1
value: 28.281
- type: mrr\_at\_10
value: 36.905
- type: mrr\_at\_100
value: 36.905
- type: mrr\_at\_1000
value: 36.905
- type: mrr\_at\_3
value: 34.011
- type: mrr\_at\_5
value: 35.638
- type: ndcg\_at\_1
value: 28.281
- type: ndcg\_at\_10
value: 40.159
- type: ndcg\_at\_100
value: 40.159
- type: ndcg\_at\_1000
value: 40.159
- type: ndcg\_at\_3
value: 33.995
- type: ndcg\_at\_5
value: 36.836999999999996
- type: precision\_at\_1
value: 28.281
- type: precision\_at\_10
value: 6.358999999999999
- type: precision\_at\_100
value: 0.636
- type: precision\_at\_1000
value: 0.064
- type: precision\_at\_3
value: 14.233
- type: precision\_at\_5
value: 10.314
- type: recall\_at\_1
value: 26.029000000000003
- type: recall\_at\_10
value: 55.08
- type: recall\_at\_100
value: 55.08
- type: recall\_at\_1000
value: 55.08
- type: recall\_at\_3
value: 38.487
- type: recall\_at\_5
value: 45.308
+ task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 12.842999999999998
- type: map\_at\_10
value: 22.101000000000003
- type: map\_at\_100
value: 24.319
- type: map\_at\_1000
value: 24.51
- type: map\_at\_3
value: 18.372
- type: map\_at\_5
value: 20.323
- type: mrr\_at\_1
value: 27.948
- type: mrr\_at\_10
value: 40.321
- type: mrr\_at\_100
value: 41.262
- type: mrr\_at\_1000
value: 41.297
- type: mrr\_at\_3
value: 36.558
- type: mrr\_at\_5
value: 38.824999999999996
- type: ndcg\_at\_1
value: 27.948
- type: ndcg\_at\_10
value: 30.906
- type: ndcg\_at\_100
value: 38.986
- type: ndcg\_at\_1000
value: 42.136
- type: ndcg\_at\_3
value: 24.911
- type: ndcg\_at\_5
value: 27.168999999999997
- type: precision\_at\_1
value: 27.948
- type: precision\_at\_10
value: 9.798
- type: precision\_at\_100
value: 1.8399999999999999
- type: precision\_at\_1000
value: 0.243
- type: precision\_at\_3
value: 18.328
- type: precision\_at\_5
value: 14.502
- type: recall\_at\_1
value: 12.842999999999998
- type: recall\_at\_10
value: 37.245
- type: recall\_at\_100
value: 64.769
- type: recall\_at\_1000
value: 82.055
- type: recall\_at\_3
value: 23.159
- type: recall\_at\_5
value: 29.113
+ task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 8.934000000000001
- type: map\_at\_10
value: 21.915000000000003
- type: map\_at\_100
value: 21.915000000000003
- type: map\_at\_1000
value: 21.915000000000003
- type: map\_at\_3
value: 14.623
- type: map\_at\_5
value: 17.841
- type: mrr\_at\_1
value: 71.25
- type: mrr\_at\_10
value: 78.994
- type: mrr\_at\_100
value: 78.994
- type: mrr\_at\_1000
value: 78.994
- type: mrr\_at\_3
value: 77.208
- type: mrr\_at\_5
value: 78.55799999999999
- type: ndcg\_at\_1
value: 60.62499999999999
- type: ndcg\_at\_10
value: 46.604
- type: ndcg\_at\_100
value: 35.653
- type: ndcg\_at\_1000
value: 35.531
- type: ndcg\_at\_3
value: 50.605
- type: ndcg\_at\_5
value: 48.730000000000004
- type: precision\_at\_1
value: 71.25
- type: precision\_at\_10
value: 37.75
- type: precision\_at\_100
value: 3.775
- type: precision\_at\_1000
value: 0.377
- type: precision\_at\_3
value: 54.417
- type: precision\_at\_5
value: 48.15
- type: recall\_at\_1
value: 8.934000000000001
- type: recall\_at\_10
value: 28.471000000000004
- type: recall\_at\_100
value: 28.471000000000004
- type: recall\_at\_1000
value: 28.471000000000004
- type: recall\_at\_3
value: 16.019
- type: recall\_at\_5
value: 21.410999999999998
+ task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.81
- type: f1
value: 47.987573380720114
+ task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 66.81899999999999
- type: map\_at\_10
value: 78.034
- type: map\_at\_100
value: 78.034
- type: map\_at\_1000
value: 78.034
- type: map\_at\_3
value: 76.43100000000001
- type: map\_at\_5
value: 77.515
- type: mrr\_at\_1
value: 71.542
- type: mrr\_at\_10
value: 81.638
- type: mrr\_at\_100
value: 81.638
- type: mrr\_at\_1000
value: 81.638
- type: mrr\_at\_3
value: 80.403
- type: mrr\_at\_5
value: 81.256
- type: ndcg\_at\_1
value: 71.542
- type: ndcg\_at\_10
value: 82.742
- type: ndcg\_at\_100
value: 82.741
- type: ndcg\_at\_1000
value: 82.741
- type: ndcg\_at\_3
value: 80.039
- type: ndcg\_at\_5
value: 81.695
- type: precision\_at\_1
value: 71.542
- type: precision\_at\_10
value: 10.387
- type: precision\_at\_100
value: 1.039
- type: precision\_at\_1000
value: 0.104
- type: precision\_at\_3
value: 31.447999999999997
- type: precision\_at\_5
value: 19.91
- type: recall\_at\_1
value: 66.81899999999999
- type: recall\_at\_10
value: 93.372
- type: recall\_at\_100
value: 93.372
- type: recall\_at\_1000
value: 93.372
- type: recall\_at\_3
value: 86.33
- type: recall\_at\_5
value: 90.347
+ task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 31.158
- type: map\_at\_10
value: 52.017
- type: map\_at\_100
value: 54.259
- type: map\_at\_1000
value: 54.367
- type: map\_at\_3
value: 45.738
- type: map\_at\_5
value: 49.283
- type: mrr\_at\_1
value: 57.87
- type: mrr\_at\_10
value: 66.215
- type: mrr\_at\_100
value: 66.735
- type: mrr\_at\_1000
value: 66.75
- type: mrr\_at\_3
value: 64.043
- type: mrr\_at\_5
value: 65.116
- type: ndcg\_at\_1
value: 57.87
- type: ndcg\_at\_10
value: 59.946999999999996
- type: ndcg\_at\_100
value: 66.31099999999999
- type: ndcg\_at\_1000
value: 67.75999999999999
- type: ndcg\_at\_3
value: 55.483000000000004
- type: ndcg\_at\_5
value: 56.891000000000005
- type: precision\_at\_1
value: 57.87
- type: precision\_at\_10
value: 16.497
- type: precision\_at\_100
value: 2.321
- type: precision\_at\_1000
value: 0.258
- type: precision\_at\_3
value: 37.14
- type: precision\_at\_5
value: 27.067999999999998
- type: recall\_at\_1
value: 31.158
- type: recall\_at\_10
value: 67.381
- type: recall\_at\_100
value: 89.464
- type: recall\_at\_1000
value: 97.989
- type: recall\_at\_3
value: 50.553000000000004
- type: recall\_at\_5
value: 57.824
+ task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 42.073
- type: map\_at\_10
value: 72.418
- type: map\_at\_100
value: 73.175
- type: map\_at\_1000
value: 73.215
- type: map\_at\_3
value: 68.791
- type: map\_at\_5
value: 71.19
- type: mrr\_at\_1
value: 84.146
- type: mrr\_at\_10
value: 88.994
- type: mrr\_at\_100
value: 89.116
- type: mrr\_at\_1000
value: 89.12
- type: mrr\_at\_3
value: 88.373
- type: mrr\_at\_5
value: 88.82
- type: ndcg\_at\_1
value: 84.146
- type: ndcg\_at\_10
value: 79.404
- type: ndcg\_at\_100
value: 81.83200000000001
- type: ndcg\_at\_1000
value: 82.524
- type: ndcg\_at\_3
value: 74.595
- type: ndcg\_at\_5
value: 77.474
- type: precision\_at\_1
value: 84.146
- type: precision\_at\_10
value: 16.753999999999998
- type: precision\_at\_100
value: 1.8599999999999999
- type: precision\_at\_1000
value: 0.19499999999999998
- type: precision\_at\_3
value: 48.854
- type: precision\_at\_5
value: 31.579
- type: recall\_at\_1
value: 42.073
- type: recall\_at\_10
value: 83.768
- type: recall\_at\_100
value: 93.018
- type: recall\_at\_1000
value: 97.481
- type: recall\_at\_3
value: 73.282
- type: recall\_at\_5
value: 78.947
+ task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 94.9968
- type: ap
value: 92.93892195862824
- type: f1
value: 94.99327998213761
+ task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map\_at\_1
value: 21.698
- type: map\_at\_10
value: 34.585
- type: map\_at\_100
value: 35.782000000000004
- type: map\_at\_1000
value: 35.825
- type: map\_at\_3
value: 30.397999999999996
- type: map\_at\_5
value: 32.72
- type: mrr\_at\_1
value: 22.192
- type: mrr\_at\_10
value: 35.085
- type: mrr\_at\_100
value: 36.218
- type: mrr\_at\_1000
value: 36.256
- type: mrr\_at\_3
value: 30.986000000000004
- type: mrr\_at\_5
value: 33.268
- type: ndcg\_at\_1
value: 22.192
- type: ndcg\_at\_10
value: 41.957
- type: ndcg\_at\_100
value: 47.658
- type: ndcg\_at\_1000
value: 48.697
- type: ndcg\_at\_3
value: 33.433
- type: ndcg\_at\_5
value: 37.551
- type: precision\_at\_1
value: 22.192
- type: precision\_at\_10
value: 6.781
- type: precision\_at\_100
value: 0.963
- type: precision\_at\_1000
value: 0.105
- type: precision\_at\_3
value: 14.365
- type: precision\_at\_5
value: 10.713000000000001
- type: recall\_at\_1
value: 21.698
- type: recall\_at\_10
value: 64.79
- type: recall\_at\_100
value: 91.071
- type: recall\_at\_1000
value: 98.883
- type: recall\_at\_3
value: 41.611
- type: recall\_at\_5
value: 51.459999999999994
+ task:
type: Classification
dataset:
type: mteb/mtop\_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.15823073415413
- type: f1
value: 96.00362034963248
+ task:
type: Classification
dataset:
type: mteb/mtop\_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.12722298221614
- type: f1
value: 70.46888967516227
+ task:
type: Classification
dataset:
type: mteb/amazon\_massive\_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.77673167451245
- type: f1
value: 77.60202561132175
+ task:
type: Classification
dataset:
type: mteb/amazon\_massive\_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 82.09145931405514
- type: f1
value: 81.7701921473406
+ task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v\_measure
value: 36.52153488185864
+ task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v\_measure
value: 36.80090398444147
+ task:
type: Reranking
dataset:
type: mteb/mind\_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.807141746058605
- type: mrr
value: 32.85025611455029
+ task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 6.920999999999999
- type: map\_at\_10
value: 16.049
- type: map\_at\_100
value: 16.049
- type: map\_at\_1000
value: 16.049
- type: map\_at\_3
value: 11.865
- type: map\_at\_5
value: 13.657
- type: mrr\_at\_1
value: 53.87
- type: mrr\_at\_10
value: 62.291
- type: mrr\_at\_100
value: 62.291
- type: mrr\_at\_1000
value: 62.291
- type: mrr\_at\_3
value: 60.681
- type: mrr\_at\_5
value: 61.61
- type: ndcg\_at\_1
value: 51.23799999999999
- type: ndcg\_at\_10
value: 40.892
- type: ndcg\_at\_100
value: 26.951999999999998
- type: ndcg\_at\_1000
value: 26.474999999999998
- type: ndcg\_at\_3
value: 46.821
- type: ndcg\_at\_5
value: 44.333
- type: precision\_at\_1
value: 53.251000000000005
- type: precision\_at\_10
value: 30.124000000000002
- type: precision\_at\_100
value: 3.012
- type: precision\_at\_1000
value: 0.301
- type: precision\_at\_3
value: 43.55
- type: precision\_at\_5
value: 38.266
- type: recall\_at\_1
value: 6.920999999999999
- type: recall\_at\_10
value: 20.852
- type: recall\_at\_100
value: 20.852
- type: recall\_at\_1000
value: 20.852
- type: recall\_at\_3
value: 13.628000000000002
- type: recall\_at\_5
value: 16.273
+ task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 46.827999999999996
- type: map\_at\_10
value: 63.434000000000005
- type: map\_at\_100
value: 63.434000000000005
- type: map\_at\_1000
value: 63.434000000000005
- type: map\_at\_3
value: 59.794000000000004
- type: map\_at\_5
value: 62.08
- type: mrr\_at\_1
value: 52.288999999999994
- type: mrr\_at\_10
value: 65.95
- type: mrr\_at\_100
value: 65.95
- type: mrr\_at\_1000
value: 65.95
- type: mrr\_at\_3
value: 63.413
- type: mrr\_at\_5
value: 65.08
- type: ndcg\_at\_1
value: 52.288999999999994
- type: ndcg\_at\_10
value: 70.301
- type: ndcg\_at\_100
value: 70.301
- type: ndcg\_at\_1000
value: 70.301
- type: ndcg\_at\_3
value: 63.979
- type: ndcg\_at\_5
value: 67.582
- type: precision\_at\_1
value: 52.288999999999994
- type: precision\_at\_10
value: 10.576
- type: precision\_at\_100
value: 1.058
- type: precision\_at\_1000
value: 0.106
- type: precision\_at\_3
value: 28.177000000000003
- type: precision\_at\_5
value: 19.073
- type: recall\_at\_1
value: 46.827999999999996
- type: recall\_at\_10
value: 88.236
- type: recall\_at\_100
value: 88.236
- type: recall\_at\_1000
value: 88.236
- type: recall\_at\_3
value: 72.371
- type: recall\_at\_5
value: 80.56
+ task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 71.652
- type: map\_at\_10
value: 85.953
- type: map\_at\_100
value: 85.953
- type: map\_at\_1000
value: 85.953
- type: map\_at\_3
value: 83.05399999999999
- type: map\_at\_5
value: 84.89
- type: mrr\_at\_1
value: 82.42
- type: mrr\_at\_10
value: 88.473
- type: mrr\_at\_100
value: 88.473
- type: mrr\_at\_1000
value: 88.473
- type: mrr\_at\_3
value: 87.592
- type: mrr\_at\_5
value: 88.211
- type: ndcg\_at\_1
value: 82.44
- type: ndcg\_at\_10
value: 89.467
- type: ndcg\_at\_100
value: 89.33
- type: ndcg\_at\_1000
value: 89.33
- type: ndcg\_at\_3
value: 86.822
- type: ndcg\_at\_5
value: 88.307
- type: precision\_at\_1
value: 82.44
- type: precision\_at\_10
value: 13.616
- type: precision\_at\_100
value: 1.362
- type: precision\_at\_1000
value: 0.136
- type: precision\_at\_3
value: 38.117000000000004
- type: precision\_at\_5
value: 25.05
- type: recall\_at\_1
value: 71.652
- type: recall\_at\_10
value: 96.224
- type: recall\_at\_100
value: 96.224
- type: recall\_at\_1000
value: 96.224
- type: recall\_at\_3
value: 88.571
- type: recall\_at\_5
value: 92.812
+ task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v\_measure
value: 61.295010338050474
+ task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v\_measure
value: 67.26380819328142
+ task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 5.683
- type: map\_at\_10
value: 14.924999999999999
- type: map\_at\_100
value: 17.532
- type: map\_at\_1000
value: 17.875
- type: map\_at\_3
value: 10.392
- type: map\_at\_5
value: 12.592
- type: mrr\_at\_1
value: 28.000000000000004
- type: mrr\_at\_10
value: 39.951
- type: mrr\_at\_100
value: 41.025
- type: mrr\_at\_1000
value: 41.056
- type: mrr\_at\_3
value: 36.317
- type: mrr\_at\_5
value: 38.412
- type: ndcg\_at\_1
value: 28.000000000000004
- type: ndcg\_at\_10
value: 24.410999999999998
- type: ndcg\_at\_100
value: 33.79
- type: ndcg\_at\_1000
value: 39.035
- type: ndcg\_at\_3
value: 22.845
- type: ndcg\_at\_5
value: 20.080000000000002
- type: precision\_at\_1
value: 28.000000000000004
- type: precision\_at\_10
value: 12.790000000000001
- type: precision\_at\_100
value: 2.633
- type: precision\_at\_1000
value: 0.388
- type: precision\_at\_3
value: 21.367
- type: precision\_at\_5
value: 17.7
- type: recall\_at\_1
value: 5.683
- type: recall\_at\_10
value: 25.91
- type: recall\_at\_100
value: 53.443
- type: recall\_at\_1000
value: 78.73
- type: recall\_at\_3
value: 13.003
- type: recall\_at\_5
value: 17.932000000000002
+ task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos\_sim\_pearson
value: 84.677978681023
- type: cos\_sim\_spearman
value: 83.13093441058189
- type: euclidean\_pearson
value: 83.35535759341572
- type: euclidean\_spearman
value: 83.42583744219611
- type: manhattan\_pearson
value: 83.2243124045889
- type: manhattan\_spearman
value: 83.39801618652632
+ task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos\_sim\_pearson
value: 81.68960206569666
- type: cos\_sim\_spearman
value: 77.3368966488535
- type: euclidean\_pearson
value: 77.62828980560303
- type: euclidean\_spearman
value: 76.77951481444651
- type: manhattan\_pearson
value: 77.88637240839041
- type: manhattan\_spearman
value: 77.22157841466188
+ task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos\_sim\_pearson
value: 84.18745821650724
- type: cos\_sim\_spearman
value: 85.04423285574542
- type: euclidean\_pearson
value: 85.46604816931023
- type: euclidean\_spearman
value: 85.5230593932974
- type: manhattan\_pearson
value: 85.57912805986261
- type: manhattan\_spearman
value: 85.65955905111873
+ task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos\_sim\_pearson
value: 83.6715333300355
- type: cos\_sim\_spearman
value: 82.9058522514908
- type: euclidean\_pearson
value: 83.9640357424214
- type: euclidean\_spearman
value: 83.60415457472637
- type: manhattan\_pearson
value: 84.05621005853469
- type: manhattan\_spearman
value: 83.87077724707746
+ task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos\_sim\_pearson
value: 87.82422928098886
- type: cos\_sim\_spearman
value: 88.12660311894628
- type: euclidean\_pearson
value: 87.50974805056555
- type: euclidean\_spearman
value: 87.91957275596677
- type: manhattan\_pearson
value: 87.74119404878883
- type: manhattan\_spearman
value: 88.2808922165719
+ task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos\_sim\_pearson
value: 84.80605838552093
- type: cos\_sim\_spearman
value: 86.24123388765678
- type: euclidean\_pearson
value: 85.32648347339814
- type: euclidean\_spearman
value: 85.60046671950158
- type: manhattan\_pearson
value: 85.53800168487811
- type: manhattan\_spearman
value: 85.89542420480763
+ task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos\_sim\_pearson
value: 89.87540978988132
- type: cos\_sim\_spearman
value: 90.12715295099461
- type: euclidean\_pearson
value: 91.61085993525275
- type: euclidean\_spearman
value: 91.31835942311758
- type: manhattan\_pearson
value: 91.57500202032934
- type: manhattan\_spearman
value: 91.1790925526635
+ task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos\_sim\_pearson
value: 69.87136205329556
- type: cos\_sim\_spearman
value: 68.6253154635078
- type: euclidean\_pearson
value: 68.91536015034222
- type: euclidean\_spearman
value: 67.63744649352542
- type: manhattan\_pearson
value: 69.2000713045275
- type: manhattan\_spearman
value: 68.16002901587316
+ task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos\_sim\_pearson
value: 85.21849551039082
- type: cos\_sim\_spearman
value: 85.6392959372461
- type: euclidean\_pearson
value: 85.92050852609488
- type: euclidean\_spearman
value: 85.97205649009734
- type: manhattan\_pearson
value: 86.1031154802254
- type: manhattan\_spearman
value: 86.26791155517466
+ task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.83953958636627
- type: mrr
value: 96.71167612344082
+ task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 64.994
- type: map\_at\_10
value: 74.763
- type: map\_at\_100
value: 75.127
- type: map\_at\_1000
value: 75.143
- type: map\_at\_3
value: 71.824
- type: map\_at\_5
value: 73.71
- type: mrr\_at\_1
value: 68.333
- type: mrr\_at\_10
value: 75.749
- type: mrr\_at\_100
value: 75.922
- type: mrr\_at\_1000
value: 75.938
- type: mrr\_at\_3
value: 73.556
- type: mrr\_at\_5
value: 74.739
- type: ndcg\_at\_1
value: 68.333
- type: ndcg\_at\_10
value: 79.174
- type: ndcg\_at\_100
value: 80.41
- type: ndcg\_at\_1000
value: 80.804
- type: ndcg\_at\_3
value: 74.361
- type: ndcg\_at\_5
value: 76.861
- type: precision\_at\_1
value: 68.333
- type: precision\_at\_10
value: 10.333
- type: precision\_at\_100
value: 1.0999999999999999
- type: precision\_at\_1000
value: 0.11299999999999999
- type: precision\_at\_3
value: 28.778
- type: precision\_at\_5
value: 19.067
- type: recall\_at\_1
value: 64.994
- type: recall\_at\_10
value: 91.822
- type: recall\_at\_100
value: 97.0
- type: recall\_at\_1000
value: 100.0
- type: recall\_at\_3
value: 78.878
- type: recall\_at\_5
value: 85.172
+ task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos\_sim\_accuracy
value: 99.72079207920792
- type: cos\_sim\_ap
value: 93.00265215525152
- type: cos\_sim\_f1
value: 85.06596306068602
- type: cos\_sim\_precision
value: 90.05586592178771
- type: cos\_sim\_recall
value: 80.60000000000001
- type: dot\_accuracy
value: 99.66039603960397
- type: dot\_ap
value: 91.22371407479089
- type: dot\_f1
value: 82.34693877551021
- type: dot\_precision
value: 84.0625
- type: dot\_recall
value: 80.7
- type: euclidean\_accuracy
value: 99.71881188118812
- type: euclidean\_ap
value: 92.88449963304728
- type: euclidean\_f1
value: 85.19480519480518
- type: euclidean\_precision
value: 88.64864864864866
- type: euclidean\_recall
value: 82.0
- type: manhattan\_accuracy
value: 99.73267326732673
- type: manhattan\_ap
value: 93.23055393056883
- type: manhattan\_f1
value: 85.88957055214725
- type: manhattan\_precision
value: 87.86610878661088
- type: manhattan\_recall
value: 84.0
- type: max\_accuracy
value: 99.73267326732673
- type: max\_ap
value: 93.23055393056883
- type: max\_f1
value: 85.88957055214725
+ task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v\_measure
value: 77.3305735900358
+ task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v\_measure
value: 41.32967136540674
+ task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.95514866379359
- type: mrr
value: 56.95423245055598
+ task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos\_sim\_pearson
value: 30.783007208997144
- type: cos\_sim\_spearman
value: 30.373444721540533
- type: dot\_pearson
value: 29.210604111143905
- type: dot\_spearman
value: 29.98809758085659
+ task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 0.234
- type: map\_at\_10
value: 1.894
- type: map\_at\_100
value: 1.894
- type: map\_at\_1000
value: 1.894
- type: map\_at\_3
value: 0.636
- type: map\_at\_5
value: 1.0
- type: mrr\_at\_1
value: 88.0
- type: mrr\_at\_10
value: 93.667
- type: mrr\_at\_100
value: 93.667
- type: mrr\_at\_1000
value: 93.667
- type: mrr\_at\_3
value: 93.667
- type: mrr\_at\_5
value: 93.667
- type: ndcg\_at\_1
value: 85.0
- type: ndcg\_at\_10
value: 74.798
- type: ndcg\_at\_100
value: 16.462
- type: ndcg\_at\_1000
value: 7.0889999999999995
- type: ndcg\_at\_3
value: 80.754
- type: ndcg\_at\_5
value: 77.319
- type: precision\_at\_1
value: 88.0
- type: precision\_at\_10
value: 78.0
- type: precision\_at\_100
value: 7.8
- type: precision\_at\_1000
value: 0.7799999999999999
- type: precision\_at\_3
value: 83.333
- type: precision\_at\_5
value: 80.80000000000001
- type: recall\_at\_1
value: 0.234
- type: recall\_at\_10
value: 2.093
- type: recall\_at\_100
value: 2.093
- type: recall\_at\_1000
value: 2.093
- type: recall\_at\_3
value: 0.662
- type: recall\_at\_5
value: 1.0739999999999998
+ task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map\_at\_1
value: 2.703
- type: map\_at\_10
value: 10.866000000000001
- type: map\_at\_100
value: 10.866000000000001
- type: map\_at\_1000
value: 10.866000000000001
- type: map\_at\_3
value: 5.909
- type: map\_at\_5
value: 7.35
- type: mrr\_at\_1
value: 36.735
- type: mrr\_at\_10
value: 53.583000000000006
- type: mrr\_at\_100
value: 53.583000000000006
- type: mrr\_at\_1000
value: 53.583000000000006
- type: mrr\_at\_3
value: 49.32
- type: mrr\_at\_5
value: 51.769
- type: ndcg\_at\_1
value: 34.694
- type: ndcg\_at\_10
value: 27.926000000000002
- type: ndcg\_at\_100
value: 22.701
- type: ndcg\_at\_1000
value: 22.701
- type: ndcg\_at\_3
value: 32.073
- type: ndcg\_at\_5
value: 28.327999999999996
- type: precision\_at\_1
value: 36.735
- type: precision\_at\_10
value: 24.694
- type: precision\_at\_100
value: 2.469
- type: precision\_at\_1000
value: 0.247
- type: precision\_at\_3
value: 31.973000000000003
- type: precision\_at\_5
value: 26.939
- type: recall\_at\_1
value: 2.703
- type: recall\_at\_10
value: 17.702
- type: recall\_at\_100
value: 17.702
- type: recall\_at\_1000
value: 17.702
- type: recall\_at\_3
value: 7.208
- type: recall\_at\_5
value: 9.748999999999999
+ task:
type: Classification
dataset:
type: mteb/toxic\_conversations\_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.79960000000001
- type: ap
value: 15.467565415565815
- type: f1
value: 55.28639823443618
+ task:
type: Classification
dataset:
type: mteb/tweet\_sentiment\_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.7792869269949
- type: f1
value: 65.08597154774318
+ task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v\_measure
value: 55.70352297774293
+ task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos\_sim\_accuracy
value: 88.27561542588067
- type: cos\_sim\_ap
value: 81.08262141256193
- type: cos\_sim\_f1
value: 73.82341501361338
- type: cos\_sim\_precision
value: 72.5720112159062
- type: cos\_sim\_recall
value: 75.11873350923483
- type: dot\_accuracy
value: 86.66030875603504
- type: dot\_ap
value: 76.6052349228621
- type: dot\_f1
value: 70.13897280966768
- type: dot\_precision
value: 64.70457079152732
- type: dot\_recall
value: 76.56992084432717
- type: euclidean\_accuracy
value: 88.37098408535495
- type: euclidean\_ap
value: 81.12515230092113
- type: euclidean\_f1
value: 74.10338225909379
- type: euclidean\_precision
value: 71.76761433868974
- type: euclidean\_recall
value: 76.59630606860158
- type: manhattan\_accuracy
value: 88.34118137926924
- type: manhattan\_ap
value: 80.95751834536561
- type: manhattan\_f1
value: 73.9119496855346
- type: manhattan\_precision
value: 70.625
- type: manhattan\_recall
value: 77.5197889182058
- type: max\_accuracy
value: 88.37098408535495
- type: max\_ap
value: 81.12515230092113
- type: max\_f1
value: 74.10338225909379
+ task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos\_sim\_accuracy
value: 89.79896767182831
- type: cos\_sim\_ap
value: 87.40071784061065
- type: cos\_sim\_f1
value: 79.87753144712087
- type: cos\_sim\_precision
value: 76.67304015296367
- type: cos\_sim\_recall
value: 83.3615645210964
- type: dot\_accuracy
value: 88.95486474948578
- type: dot\_ap
value: 86.00227979119943
- type: dot\_f1
value: 78.54601474525914
- type: dot\_precision
value: 75.00525394045535
- type: dot\_recall
value: 82.43763473975977
- type: euclidean\_accuracy
value: 89.7892653393876
- type: euclidean\_ap
value: 87.42174706480819
- type: euclidean\_f1
value: 80.07283321194465
- type: euclidean\_precision
value: 75.96738529574351
- type: euclidean\_recall
value: 84.6473668001232
- type: manhattan\_accuracy
value: 89.8474793340319
- type: manhattan\_ap
value: 87.47814292587448
- type: manhattan\_f1
value: 80.15461150280949
- type: manhattan\_precision
value: 74.88798234468
- type: manhattan\_recall
value: 86.21804742839544
- type: max\_accuracy
value: 89.8474793340319
- type: max\_ap
value: 87.47814292587448
- type: max\_f1
value: 80.15461150280949
---
Model Summary
=============
> GritLM is a generative representational instruction-tuned language model. It unifies text representation (embedding) and text generation into a single model, achieving state-of-the-art performance on both types of tasks.
* Repository: ContextualAI/gritlm
* Paper: URL
* Logs: URL
* Script: URL
Use
===
The model usage is documented here.
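Since the usage link above is not reproduced in this card, the snippet below is only a minimal sketch of how a unified embedding-plus-generation checkpoint of this kind is commonly driven through the `transformers` API. The checkpoint id, mean-pooling recipe, and prompt handling are assumptions rather than the official GritLM instructions, and GGUF builds of the model would instead be loaded through a llama.cpp-compatible runtime.

```python
# Minimal sketch (assumptions noted above): one causal LM used both as an
# embedder (mean-pooled hidden states) and as a text generator.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GritLM/GritLM-7B"  # assumption: substitute the actual checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # causal LMs often ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

def embed(texts):
    """Mean-pool the final hidden state over non-padding tokens (one common recipe)."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch, output_hidden_states=True)
    hidden = out.hidden_states[-1]                # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)  # (batch, seq, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

def generate(prompt, max_new_tokens=128):
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```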
| [] | [
"TAGS\n#gguf #region-us \n"
] |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_copa_bert
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0295
- Accuracy: 0.54
- F1: 0.5407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
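For orientation, the block below is a minimal sketch of how these hyperparameters map onto a `transformers` multiple-choice fine-tune. The COPA field names, preprocessing, and the (required) multiple-choice data collator are assumptions for illustration, not the exact code behind this checkpoint.

```python
# Sketch of a BERT multiple-choice setup with the hyperparameters listed above.
from transformers import (
    AutoModelForMultipleChoice,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "google-bert/bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMultipleChoice.from_pretrained(checkpoint)

def preprocess(example):
    # COPA pairs one premise with two candidate choices; the multiple-choice head
    # scores both (premise, choice) encodings jointly. Field names are assumptions.
    premises = [example["premise"], example["premise"]]
    choices = [example["choice1"], example["choice2"]]
    enc = tokenizer(premises, choices, truncation=True)  # two encodings per example
    enc["label"] = example["label"]
    return enc

args = TrainingArguments(
    output_dir="fine_tuned_copa_bert",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    max_steps=400,
    lr_scheduler_type="linear",
    seed=42,
)

# Training additionally needs a data collator that pads each choice to a common
# length (transformers does not ship one by default), e.g.:
# trainer = Trainer(model=model, args=args, train_dataset=tokenized_train,
#                   eval_dataset=tokenized_eval, tokenizer=tokenizer,
#                   data_collator=multiple_choice_collator)
# trainer.train()
```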
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7066 | 1.0 | 50 | 0.6907 | 0.54 | 0.5411 |
| 0.6897 | 2.0 | 100 | 0.6880 | 0.57 | 0.5709 |
| 0.6001 | 3.0 | 150 | 0.7025 | 0.55 | 0.5511 |
| 0.4629 | 4.0 | 200 | 0.7810 | 0.53 | 0.5310 |
| 0.3402 | 5.0 | 250 | 1.0003 | 0.55 | 0.5511 |
| 0.2299 | 6.0 | 300 | 1.0220 | 0.55 | 0.5511 |
| 0.1874 | 7.0 | 350 | 0.9956 | 0.56 | 0.5611 |
| 0.1133 | 8.0 | 400 | 1.0295 | 0.54 | 0.5407 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "fine_tuned_copa_bert", "results": []}]} | lenatr99/fine_tuned_copa_bert | null | [
"transformers",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:18:56+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #multiple-choice #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
| fine\_tuned\_copa\_bert
=======================
This model is a fine-tuned version of google-bert/bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0295
* Accuracy: 0.54
* F1: 0.5407
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #bert #multiple-choice #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
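The card leaves this section blank; the snippet below is only a generic loading pattern for a `transformers` causal LM, not usage confirmed by the model authors. The repository id is taken from this dataset row and the prompt is an arbitrary placeholder.

```python
# Generic sketch only -- the model card itself does not document usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "golf2248/r2igr19"  # assumption: repository id from this dataset row
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```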
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/r2igr19 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T17:20:47+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cosmosDPO_CodeTest
This model is a fine-tuned version of [ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1](https://huggingface.co/ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5271
- Rewards/chosen: -2.6242
- Rewards/rejected: -6.3552
- Rewards/accuracies: 0.2667
- Rewards/margins: 3.7309
- Logps/rejected: -749.125
- Logps/chosen: -350.9360
- Logits/rejected: -5.2606
- Logits/chosen: -4.5085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
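As a pointer to how such a run is usually assembled, the block below is a minimal sketch of a PEFT-LoRA DPO setup with matching hyperparameters. The dataset, LoRA settings, beta, and the `DPOTrainer` call (whose signature varies across `trl` releases) are assumptions, not the actual training script behind this adapter.

```python
# Sketch only: PEFT + TRL direct preference optimization with the settings above.
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32, lora_dropout=0.05)

args = TrainingArguments(
    output_dir="cosmosDPO_v0.1",
    learning_rate=5e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective batch size 128
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,                      # "Native AMP" mixed precision
    seed=42,
)

# trainer = DPOTrainer(
#     model=model,
#     args=args,
#     beta=0.1,                     # assumed preference temperature, not documented
#     train_dataset=preference_ds,  # (prompt, chosen, rejected) pairs -- not shown
#     tokenizer=tokenizer,
#     peft_config=peft_config,
# )
# trainer.train()
```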
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5159 | 1.3072 | 100 | 0.5241 | -0.9182 | -3.2061 | 0.2676 | 2.2879 | -434.2115 | -180.3287 | -4.0572 | -3.5729 |
| 0.5227 | 2.6144 | 200 | 0.5217 | -2.1076 | -5.3791 | 0.2695 | 3.2715 | -651.5153 | -299.2687 | -4.8098 | -4.1931 |
| 0.4937 | 3.9216 | 300 | 0.5271 | -2.6242 | -6.3552 | 0.2667 | 3.7309 | -749.125 | -350.9360 | -5.2606 | -4.5085 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "mit", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1", "model-index": [{"name": "cosmosDPO_v0.1", "results": []}]} | meguzn/cosmosDPO_v0.1 | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1",
"license:mit",
"region:us"
] | null | 2024-05-03T17:21:46+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #dpo #generated_from_trainer #base_model-ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1 #license-mit #region-us
| cosmosDPO\_CodeTest
===================
This model is a fine-tuned version of ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5271
* Rewards/chosen: -2.6242
* Rewards/rejected: -6.3552
* Rewards/accuracies: 0.2667
* Rewards/margins: 3.7309
* Logps/rejected: -749.125
* Logps/chosen: -350.9360
* Logits/rejected: -5.2606
* Logits/chosen: -4.5085
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-06
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #dpo #generated_from_trainer #base_model-ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1 #license-mit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_all-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4181
- F1 Score: 0.8048
- Accuracy: 0.8049
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
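
For readers who want to set up a comparable run, these hyperparameters translate into Hugging Face `TrainingArguments` roughly as follows. This is only a sketch: the LoRA adapter settings and the tokenized GUE dataset preparation are not shown in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_prom_prom_core_all-seqsight_4096_512_15M-L8_f",
    learning_rate=5e-4,              # 0.0005
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,                # "training_steps: 10000"
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",     # assumption: matches the 200-step eval cadence in the table below
    eval_steps=200,
)
```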
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5559 | 0.54 | 200 | 0.4704 | 0.7803 | 0.7804 |
| 0.4687 | 1.08 | 400 | 0.4646 | 0.7875 | 0.7878 |
| 0.4525 | 1.62 | 600 | 0.4497 | 0.7929 | 0.7931 |
| 0.4437 | 2.16 | 800 | 0.4473 | 0.7944 | 0.7944 |
| 0.4405 | 2.7 | 1000 | 0.4449 | 0.7921 | 0.7922 |
| 0.4363 | 3.24 | 1200 | 0.4399 | 0.7961 | 0.7963 |
| 0.4331 | 3.78 | 1400 | 0.4419 | 0.7909 | 0.7912 |
| 0.4313 | 4.32 | 1600 | 0.4438 | 0.7967 | 0.7968 |
| 0.4309 | 4.86 | 1800 | 0.4419 | 0.7937 | 0.7941 |
| 0.4266 | 5.41 | 2000 | 0.4388 | 0.7927 | 0.7929 |
| 0.4243 | 5.95 | 2200 | 0.4391 | 0.7981 | 0.7981 |
| 0.4279 | 6.49 | 2400 | 0.4341 | 0.7965 | 0.7965 |
| 0.4206 | 7.03 | 2600 | 0.4416 | 0.7977 | 0.7981 |
| 0.4231 | 7.57 | 2800 | 0.4348 | 0.7976 | 0.7976 |
| 0.4171 | 8.11 | 3000 | 0.4362 | 0.7944 | 0.7946 |
| 0.419 | 8.65 | 3200 | 0.4297 | 0.8017 | 0.8017 |
| 0.4207 | 9.19 | 3400 | 0.4331 | 0.7992 | 0.7992 |
| 0.418 | 9.73 | 3600 | 0.4378 | 0.7949 | 0.7954 |
| 0.4182 | 10.27 | 3800 | 0.4330 | 0.7982 | 0.7983 |
| 0.4164 | 10.81 | 4000 | 0.4360 | 0.7977 | 0.7978 |
| 0.414 | 11.35 | 4200 | 0.4330 | 0.7973 | 0.7975 |
| 0.4143 | 11.89 | 4400 | 0.4336 | 0.7964 | 0.7966 |
| 0.4115 | 12.43 | 4600 | 0.4335 | 0.8025 | 0.8025 |
| 0.4108 | 12.97 | 4800 | 0.4331 | 0.7990 | 0.7992 |
| 0.4133 | 13.51 | 5000 | 0.4407 | 0.7934 | 0.7943 |
| 0.4114 | 14.05 | 5200 | 0.4303 | 0.8029 | 0.8029 |
| 0.4085 | 14.59 | 5400 | 0.4288 | 0.8022 | 0.8022 |
| 0.4081 | 15.14 | 5600 | 0.4326 | 0.8021 | 0.8022 |
| 0.4096 | 15.68 | 5800 | 0.4334 | 0.7985 | 0.7988 |
| 0.4037 | 16.22 | 6000 | 0.4312 | 0.8023 | 0.8025 |
| 0.4114 | 16.76 | 6200 | 0.4254 | 0.8015 | 0.8015 |
| 0.4119 | 17.3 | 6400 | 0.4278 | 0.8046 | 0.8047 |
| 0.4072 | 17.84 | 6600 | 0.4294 | 0.8014 | 0.8015 |
| 0.4035 | 18.38 | 6800 | 0.4337 | 0.7972 | 0.7978 |
| 0.4047 | 18.92 | 7000 | 0.4277 | 0.8021 | 0.8022 |
| 0.4011 | 19.46 | 7200 | 0.4286 | 0.8035 | 0.8035 |
| 0.4118 | 20.0 | 7400 | 0.4264 | 0.8045 | 0.8046 |
| 0.4066 | 20.54 | 7600 | 0.4286 | 0.8025 | 0.8027 |
| 0.4031 | 21.08 | 7800 | 0.4275 | 0.8038 | 0.8039 |
| 0.4044 | 21.62 | 8000 | 0.4255 | 0.8037 | 0.8037 |
| 0.402 | 22.16 | 8200 | 0.4259 | 0.8040 | 0.8041 |
| 0.4101 | 22.7 | 8400 | 0.4265 | 0.8027 | 0.8029 |
| 0.4006 | 23.24 | 8600 | 0.4249 | 0.8047 | 0.8047 |
| 0.4005 | 23.78 | 8800 | 0.4271 | 0.8038 | 0.8039 |
| 0.3983 | 24.32 | 9000 | 0.4269 | 0.8045 | 0.8046 |
| 0.4017 | 24.86 | 9200 | 0.4259 | 0.8038 | 0.8039 |
| 0.4117 | 25.41 | 9400 | 0.4257 | 0.8043 | 0.8044 |
| 0.3956 | 25.95 | 9600 | 0.4271 | 0.8048 | 0.8049 |
| 0.4029 | 26.49 | 9800 | 0.4272 | 0.8050 | 0.8051 |
| 0.4004 | 27.03 | 10000 | 0.4271 | 0.8046 | 0.8047 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:22:07+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_core\_all-seqsight\_4096\_512\_15M-L8\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4181
* F1 Score: 0.8048
* Accuracy: 0.8049
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_notata-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3823
- F1 Score: 0.8291
- Accuracy: 0.8291
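
The F1 and accuracy numbers reported in this card can be produced with a standard `compute_metrics` callback passed to the Trainer. The exact averaging used for F1 is not stated, so the macro average below is an assumption.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, predictions, average="macro"),  # averaging choice is an assumption
        "accuracy": accuracy_score(labels, predictions),
    }
```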
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5574 | 0.6 | 200 | 0.4140 | 0.8126 | 0.8127 |
| 0.4313 | 1.2 | 400 | 0.3888 | 0.8259 | 0.8259 |
| 0.4159 | 1.81 | 600 | 0.3816 | 0.8268 | 0.8268 |
| 0.4102 | 2.41 | 800 | 0.3759 | 0.8310 | 0.8310 |
| 0.4006 | 3.01 | 1000 | 0.3744 | 0.8299 | 0.8298 |
| 0.3973 | 3.61 | 1200 | 0.3701 | 0.8379 | 0.8379 |
| 0.3992 | 4.22 | 1400 | 0.3703 | 0.8355 | 0.8355 |
| 0.396 | 4.82 | 1600 | 0.3687 | 0.8381 | 0.8381 |
| 0.3877 | 5.42 | 1800 | 0.3757 | 0.8292 | 0.8293 |
| 0.3935 | 6.02 | 2000 | 0.3690 | 0.8383 | 0.8383 |
| 0.3911 | 6.63 | 2200 | 0.3674 | 0.8381 | 0.8381 |
| 0.3879 | 7.23 | 2400 | 0.3679 | 0.8380 | 0.8381 |
| 0.3887 | 7.83 | 2600 | 0.3662 | 0.8396 | 0.8396 |
| 0.3825 | 8.43 | 2800 | 0.3721 | 0.8344 | 0.8349 |
| 0.3879 | 9.04 | 3000 | 0.3663 | 0.8402 | 0.8402 |
| 0.3812 | 9.64 | 3200 | 0.3637 | 0.8397 | 0.8396 |
| 0.3823 | 10.24 | 3400 | 0.3647 | 0.8406 | 0.8406 |
| 0.383 | 10.84 | 3600 | 0.3643 | 0.8400 | 0.8400 |
| 0.3815 | 11.45 | 3800 | 0.3640 | 0.8382 | 0.8381 |
| 0.3804 | 12.05 | 4000 | 0.3629 | 0.8381 | 0.8381 |
| 0.3746 | 12.65 | 4200 | 0.3634 | 0.8382 | 0.8383 |
| 0.3799 | 13.25 | 4400 | 0.3635 | 0.8376 | 0.8376 |
| 0.378 | 13.86 | 4600 | 0.3636 | 0.8400 | 0.8400 |
| 0.3771 | 14.46 | 4800 | 0.3633 | 0.8415 | 0.8415 |
| 0.3741 | 15.06 | 5000 | 0.3615 | 0.8415 | 0.8415 |
| 0.371 | 15.66 | 5200 | 0.3612 | 0.8412 | 0.8412 |
| 0.3728 | 16.27 | 5400 | 0.3642 | 0.8400 | 0.8400 |
| 0.3718 | 16.87 | 5600 | 0.3679 | 0.8361 | 0.8364 |
| 0.3698 | 17.47 | 5800 | 0.3664 | 0.8369 | 0.8372 |
| 0.3758 | 18.07 | 6000 | 0.3624 | 0.8393 | 0.8395 |
| 0.3725 | 18.67 | 6200 | 0.3605 | 0.8412 | 0.8413 |
| 0.3716 | 19.28 | 6400 | 0.3618 | 0.8408 | 0.8408 |
| 0.3703 | 19.88 | 6600 | 0.3613 | 0.8388 | 0.8389 |
| 0.3658 | 20.48 | 6800 | 0.3606 | 0.8409 | 0.8410 |
| 0.3759 | 21.08 | 7000 | 0.3640 | 0.8363 | 0.8366 |
| 0.3748 | 21.69 | 7200 | 0.3612 | 0.8415 | 0.8415 |
| 0.3651 | 22.29 | 7400 | 0.3610 | 0.8399 | 0.8400 |
| 0.3673 | 22.89 | 7600 | 0.3609 | 0.8424 | 0.8425 |
| 0.3681 | 23.49 | 7800 | 0.3622 | 0.8380 | 0.8381 |
| 0.3688 | 24.1 | 8000 | 0.3629 | 0.8393 | 0.8395 |
| 0.3692 | 24.7 | 8200 | 0.3639 | 0.8388 | 0.8391 |
| 0.3645 | 25.3 | 8400 | 0.3642 | 0.8396 | 0.8398 |
| 0.3692 | 25.9 | 8600 | 0.3609 | 0.8422 | 0.8423 |
| 0.3687 | 26.51 | 8800 | 0.3615 | 0.8415 | 0.8415 |
| 0.3671 | 27.11 | 9000 | 0.3610 | 0.8409 | 0.8410 |
| 0.3726 | 27.71 | 9200 | 0.3617 | 0.8399 | 0.8400 |
| 0.3626 | 28.31 | 9400 | 0.3631 | 0.8387 | 0.8389 |
| 0.3658 | 28.92 | 9600 | 0.3618 | 0.8396 | 0.8396 |
| 0.3724 | 29.52 | 9800 | 0.3614 | 0.8392 | 0.8393 |
| 0.3612 | 30.12 | 10000 | 0.3615 | 0.8395 | 0.8396 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:22:07+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_core\_notata-seqsight\_4096\_512\_15M-L8\_f
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3823
* F1 Score: 0.8291
* Accuracy: 0.8291
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
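
Since no snippet is actually provided, the following is only a best-effort sketch. The repository name suggests a QLoRA fine-tune of a Mistral-7B base; if the repo holds a PEFT adapter rather than merged weights, loading would look roughly like this (the prompt is a made-up example, and the base model is resolved from the adapter config):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "hrangel/Mistral_7B_qlora_CoT_Matematicals"

# AutoPeftModelForCausalLM resolves the base model from the adapter config.
# If the repo instead contains merged weights, use AutoModelForCausalLM directly.
model = AutoPeftModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)  # fall back to the base model's tokenizer if missing

prompt = "Solve step by step: what is 12 * 15?"  # example only; the intended prompt format is undocumented
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```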
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | hrangel/Mistral_7B_qlora_CoT_Matematicals | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:23:34+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_notata-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3910
- F1 Score: 0.8233
- Accuracy: 0.8233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6131 | 0.6 | 200 | 0.4693 | 0.7777 | 0.7777 |
| 0.4721 | 1.2 | 400 | 0.4169 | 0.8106 | 0.8106 |
| 0.4443 | 1.81 | 600 | 0.4046 | 0.8150 | 0.8150 |
| 0.4403 | 2.41 | 800 | 0.3968 | 0.8210 | 0.8210 |
| 0.4255 | 3.01 | 1000 | 0.3953 | 0.8213 | 0.8214 |
| 0.4233 | 3.61 | 1200 | 0.3889 | 0.8217 | 0.8217 |
| 0.4223 | 4.22 | 1400 | 0.3869 | 0.8225 | 0.8225 |
| 0.4197 | 4.82 | 1600 | 0.3844 | 0.8223 | 0.8223 |
| 0.4106 | 5.42 | 1800 | 0.3869 | 0.8248 | 0.8249 |
| 0.4124 | 6.02 | 2000 | 0.3819 | 0.8262 | 0.8263 |
| 0.4112 | 6.63 | 2200 | 0.3791 | 0.8285 | 0.8285 |
| 0.407 | 7.23 | 2400 | 0.3801 | 0.8313 | 0.8314 |
| 0.4063 | 7.83 | 2600 | 0.3787 | 0.8288 | 0.8289 |
| 0.4012 | 8.43 | 2800 | 0.3808 | 0.8296 | 0.8298 |
| 0.406 | 9.04 | 3000 | 0.3761 | 0.8322 | 0.8323 |
| 0.3994 | 9.64 | 3200 | 0.3734 | 0.8312 | 0.8312 |
| 0.4003 | 10.24 | 3400 | 0.3750 | 0.8323 | 0.8323 |
| 0.4008 | 10.84 | 3600 | 0.3741 | 0.8336 | 0.8336 |
| 0.3994 | 11.45 | 3800 | 0.3736 | 0.8327 | 0.8327 |
| 0.3982 | 12.05 | 4000 | 0.3729 | 0.8340 | 0.8340 |
| 0.3933 | 12.65 | 4200 | 0.3739 | 0.8342 | 0.8342 |
| 0.3995 | 13.25 | 4400 | 0.3707 | 0.8349 | 0.8349 |
| 0.3967 | 13.86 | 4600 | 0.3721 | 0.8355 | 0.8355 |
| 0.3951 | 14.46 | 4800 | 0.3723 | 0.8351 | 0.8351 |
| 0.3916 | 15.06 | 5000 | 0.3705 | 0.8336 | 0.8336 |
| 0.3907 | 15.66 | 5200 | 0.3703 | 0.8376 | 0.8376 |
| 0.3905 | 16.27 | 5400 | 0.3728 | 0.8355 | 0.8355 |
| 0.3917 | 16.87 | 5600 | 0.3738 | 0.8364 | 0.8366 |
| 0.39 | 17.47 | 5800 | 0.3720 | 0.8365 | 0.8366 |
| 0.3961 | 18.07 | 6000 | 0.3706 | 0.8377 | 0.8378 |
| 0.3917 | 18.67 | 6200 | 0.3694 | 0.8379 | 0.8379 |
| 0.3923 | 19.28 | 6400 | 0.3711 | 0.8374 | 0.8374 |
| 0.389 | 19.88 | 6600 | 0.3690 | 0.8377 | 0.8378 |
| 0.3847 | 20.48 | 6800 | 0.3701 | 0.8371 | 0.8372 |
| 0.3949 | 21.08 | 7000 | 0.3710 | 0.8359 | 0.8361 |
| 0.3961 | 21.69 | 7200 | 0.3680 | 0.8379 | 0.8379 |
| 0.386 | 22.29 | 7400 | 0.3684 | 0.8393 | 0.8393 |
| 0.387 | 22.89 | 7600 | 0.3698 | 0.8378 | 0.8378 |
| 0.388 | 23.49 | 7800 | 0.3683 | 0.8391 | 0.8391 |
| 0.3887 | 24.1 | 8000 | 0.3689 | 0.8381 | 0.8381 |
| 0.3889 | 24.7 | 8200 | 0.3693 | 0.8360 | 0.8361 |
| 0.3844 | 25.3 | 8400 | 0.3699 | 0.8389 | 0.8389 |
| 0.3902 | 25.9 | 8600 | 0.3678 | 0.8398 | 0.8398 |
| 0.3906 | 26.51 | 8800 | 0.3681 | 0.8383 | 0.8383 |
| 0.3874 | 27.11 | 9000 | 0.3682 | 0.8389 | 0.8389 |
| 0.3929 | 27.71 | 9200 | 0.3682 | 0.8393 | 0.8393 |
| 0.3847 | 28.31 | 9400 | 0.3689 | 0.8396 | 0.8396 |
| 0.3874 | 28.92 | 9600 | 0.3684 | 0.8393 | 0.8393 |
| 0.3929 | 29.52 | 9800 | 0.3680 | 0.8391 | 0.8391 |
| 0.3819 | 30.12 | 10000 | 0.3682 | 0.8391 | 0.8391 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:24:03+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_core\_notata-seqsight\_4096\_512\_15M-L1\_f
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3910
* F1 Score: 0.8233
* Accuracy: 0.8233
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
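
No snippet is included above, so here is a minimal, generic example of running this checkpoint as a causal LM with 🤗 Transformers. The chat template and sampling settings are assumptions, not documented behaviour.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "OwOpeepeepoopoo/herewegoagain18"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Hello! What can you do?"}]
# The "conversational" tag suggests a chat template; fall back to plain text if none is defined.
try:
    input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
except Exception:
    input_ids = tokenizer("Hello! What can you do?", return_tensors="pt").input_ids

output = model.generate(input_ids.to(model.device), max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```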
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOpeepeepoopoo/herewegoagain18 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:24:38+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_notata-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3783
- F1 Score: 0.8328
- Accuracy: 0.8329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5299 | 0.6 | 200 | 0.4006 | 0.8167 | 0.8168 |
| 0.4157 | 1.2 | 400 | 0.3794 | 0.8336 | 0.8336 |
| 0.4031 | 1.81 | 600 | 0.3786 | 0.8307 | 0.8308 |
| 0.399 | 2.41 | 800 | 0.3696 | 0.8336 | 0.8336 |
| 0.3916 | 3.01 | 1000 | 0.3681 | 0.8352 | 0.8353 |
| 0.387 | 3.61 | 1200 | 0.3638 | 0.8390 | 0.8391 |
| 0.3907 | 4.22 | 1400 | 0.3662 | 0.8395 | 0.8395 |
| 0.386 | 4.82 | 1600 | 0.3624 | 0.8418 | 0.8419 |
| 0.377 | 5.42 | 1800 | 0.3715 | 0.8339 | 0.8340 |
| 0.3833 | 6.02 | 2000 | 0.3665 | 0.8392 | 0.8393 |
| 0.3794 | 6.63 | 2200 | 0.3616 | 0.8398 | 0.8398 |
| 0.3765 | 7.23 | 2400 | 0.3654 | 0.8416 | 0.8417 |
| 0.3776 | 7.83 | 2600 | 0.3619 | 0.8391 | 0.8391 |
| 0.3695 | 8.43 | 2800 | 0.3655 | 0.8373 | 0.8378 |
| 0.3752 | 9.04 | 3000 | 0.3597 | 0.8442 | 0.8442 |
| 0.368 | 9.64 | 3200 | 0.3595 | 0.8425 | 0.8425 |
| 0.3675 | 10.24 | 3400 | 0.3602 | 0.8417 | 0.8417 |
| 0.3692 | 10.84 | 3600 | 0.3594 | 0.8407 | 0.8408 |
| 0.3657 | 11.45 | 3800 | 0.3580 | 0.8440 | 0.8440 |
| 0.3651 | 12.05 | 4000 | 0.3583 | 0.8419 | 0.8419 |
| 0.3594 | 12.65 | 4200 | 0.3580 | 0.8431 | 0.8432 |
| 0.3633 | 13.25 | 4400 | 0.3588 | 0.8428 | 0.8428 |
| 0.361 | 13.86 | 4600 | 0.3606 | 0.8413 | 0.8413 |
| 0.359 | 14.46 | 4800 | 0.3588 | 0.8434 | 0.8434 |
| 0.3573 | 15.06 | 5000 | 0.3560 | 0.8452 | 0.8453 |
| 0.3505 | 15.66 | 5200 | 0.3603 | 0.8428 | 0.8428 |
| 0.3549 | 16.27 | 5400 | 0.3618 | 0.8434 | 0.8434 |
| 0.3528 | 16.87 | 5600 | 0.3677 | 0.8386 | 0.8391 |
| 0.3501 | 17.47 | 5800 | 0.3639 | 0.8427 | 0.8430 |
| 0.3573 | 18.07 | 6000 | 0.3615 | 0.8446 | 0.8447 |
| 0.3517 | 18.67 | 6200 | 0.3582 | 0.8442 | 0.8444 |
| 0.3509 | 19.28 | 6400 | 0.3615 | 0.8432 | 0.8432 |
| 0.3489 | 19.88 | 6600 | 0.3584 | 0.8425 | 0.8427 |
| 0.3444 | 20.48 | 6800 | 0.3580 | 0.8447 | 0.8447 |
| 0.3544 | 21.08 | 7000 | 0.3644 | 0.8404 | 0.8408 |
| 0.3525 | 21.69 | 7200 | 0.3604 | 0.8423 | 0.8423 |
| 0.3441 | 22.29 | 7400 | 0.3598 | 0.8448 | 0.8449 |
| 0.346 | 22.89 | 7600 | 0.3610 | 0.8424 | 0.8425 |
| 0.346 | 23.49 | 7800 | 0.3613 | 0.8412 | 0.8413 |
| 0.347 | 24.1 | 8000 | 0.3645 | 0.8417 | 0.8419 |
| 0.3462 | 24.7 | 8200 | 0.3650 | 0.8416 | 0.8419 |
| 0.3401 | 25.3 | 8400 | 0.3669 | 0.8421 | 0.8423 |
| 0.3471 | 25.9 | 8600 | 0.3612 | 0.8428 | 0.8428 |
| 0.3451 | 26.51 | 8800 | 0.3618 | 0.8432 | 0.8432 |
| 0.3456 | 27.11 | 9000 | 0.3604 | 0.8432 | 0.8432 |
| 0.3485 | 27.71 | 9200 | 0.3626 | 0.8425 | 0.8427 |
| 0.3388 | 28.31 | 9400 | 0.3632 | 0.8442 | 0.8444 |
| 0.3412 | 28.92 | 9600 | 0.3632 | 0.8420 | 0.8421 |
| 0.3492 | 29.52 | 9800 | 0.3614 | 0.8422 | 0.8423 |
| 0.3355 | 30.12 | 10000 | 0.3620 | 0.8431 | 0.8432 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:24:39+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_core\_notata-seqsight\_4096\_512\_15M-L32\_f
=============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3783
* F1 Score: 0.8328
* Accuracy: 0.8329
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_tata-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4439
- F1 Score: 0.8286
- Accuracy: 0.8287
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.629 | 5.13 | 200 | 0.5857 | 0.7072 | 0.7080 |
| 0.5743 | 10.26 | 400 | 0.5824 | 0.6910 | 0.6949 |
| 0.5505 | 15.38 | 600 | 0.5729 | 0.7040 | 0.7096 |
| 0.53 | 20.51 | 800 | 0.5425 | 0.7238 | 0.7243 |
| 0.5137 | 25.64 | 1000 | 0.5271 | 0.7352 | 0.7357 |
| 0.4924 | 30.77 | 1200 | 0.4966 | 0.7730 | 0.7732 |
| 0.466 | 35.9 | 1400 | 0.4742 | 0.7879 | 0.7879 |
| 0.4452 | 41.03 | 1600 | 0.4655 | 0.7808 | 0.7814 |
| 0.4341 | 46.15 | 1800 | 0.4457 | 0.8010 | 0.8010 |
| 0.4182 | 51.28 | 2000 | 0.4385 | 0.8042 | 0.8042 |
| 0.4107 | 56.41 | 2200 | 0.4363 | 0.8075 | 0.8075 |
| 0.4042 | 61.54 | 2400 | 0.4199 | 0.8074 | 0.8075 |
| 0.3981 | 66.67 | 2600 | 0.4153 | 0.8108 | 0.8108 |
| 0.3883 | 71.79 | 2800 | 0.4141 | 0.8075 | 0.8075 |
| 0.383 | 76.92 | 3000 | 0.4142 | 0.8140 | 0.8140 |
| 0.3755 | 82.05 | 3200 | 0.4044 | 0.8205 | 0.8206 |
| 0.3734 | 87.18 | 3400 | 0.4064 | 0.8222 | 0.8222 |
| 0.3695 | 92.31 | 3600 | 0.4026 | 0.8238 | 0.8238 |
| 0.3625 | 97.44 | 3800 | 0.3999 | 0.8352 | 0.8352 |
| 0.3664 | 102.56 | 4000 | 0.3976 | 0.8303 | 0.8303 |
| 0.3595 | 107.69 | 4200 | 0.3992 | 0.8303 | 0.8303 |
| 0.352 | 112.82 | 4400 | 0.3970 | 0.8303 | 0.8303 |
| 0.347 | 117.95 | 4600 | 0.3906 | 0.8303 | 0.8303 |
| 0.3497 | 123.08 | 4800 | 0.3944 | 0.8351 | 0.8352 |
| 0.3398 | 128.21 | 5000 | 0.3941 | 0.8352 | 0.8352 |
| 0.3432 | 133.33 | 5200 | 0.3897 | 0.8352 | 0.8352 |
| 0.3371 | 138.46 | 5400 | 0.3878 | 0.8369 | 0.8369 |
| 0.3331 | 143.59 | 5600 | 0.3882 | 0.8352 | 0.8352 |
| 0.3377 | 148.72 | 5800 | 0.3883 | 0.8352 | 0.8352 |
| 0.3288 | 153.85 | 6000 | 0.3889 | 0.8352 | 0.8352 |
| 0.3261 | 158.97 | 6200 | 0.3843 | 0.8401 | 0.8401 |
| 0.3284 | 164.1 | 6400 | 0.3902 | 0.8335 | 0.8336 |
| 0.3293 | 169.23 | 6600 | 0.3837 | 0.8384 | 0.8385 |
| 0.3242 | 174.36 | 6800 | 0.3899 | 0.8385 | 0.8385 |
| 0.3263 | 179.49 | 7000 | 0.3861 | 0.8352 | 0.8352 |
| 0.3193 | 184.62 | 7200 | 0.3874 | 0.8434 | 0.8434 |
| 0.3187 | 189.74 | 7400 | 0.3903 | 0.8385 | 0.8385 |
| 0.3201 | 194.87 | 7600 | 0.3908 | 0.8385 | 0.8385 |
| 0.3194 | 200.0 | 7800 | 0.3860 | 0.8466 | 0.8467 |
| 0.3187 | 205.13 | 8000 | 0.3869 | 0.8449 | 0.8450 |
| 0.3163 | 210.26 | 8200 | 0.3877 | 0.8401 | 0.8401 |
| 0.313 | 215.38 | 8400 | 0.3892 | 0.8417 | 0.8418 |
| 0.316 | 220.51 | 8600 | 0.3888 | 0.8385 | 0.8385 |
| 0.3144 | 225.64 | 8800 | 0.3886 | 0.8417 | 0.8418 |
| 0.3124 | 230.77 | 9000 | 0.3866 | 0.8449 | 0.8450 |
| 0.3119 | 235.9 | 9200 | 0.3874 | 0.8417 | 0.8418 |
| 0.3125 | 241.03 | 9400 | 0.3884 | 0.8450 | 0.8450 |
| 0.3151 | 246.15 | 9600 | 0.3868 | 0.8417 | 0.8418 |
| 0.3084 | 251.28 | 9800 | 0.3879 | 0.8417 | 0.8418 |
| 0.3116 | 256.41 | 10000 | 0.3878 | 0.8450 | 0.8450 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:25:18+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_core\_tata-seqsight\_4096\_512\_15M-L1\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4439
* F1 Score: 0.8286
* Accuracy: 0.8287
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
image-segmentation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-raw_img_ready2train_patches
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the raw_img_ready2train_patches dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6829
- Mean Iou: 0.4110
- Mean Accuracy: 0.7629
- Overall Accuracy: 0.7631
- Accuracy Unlabeled: nan
- Accuracy Eczema: 0.7673
- Accuracy Background: 0.7585
- Iou Unlabeled: 0.0
- Iou Eczema: 0.6284
- Iou Background: 0.6047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
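As a rough illustration only (not the training script used here), the hyperparameters above could be expressed with `SegformerForSemanticSegmentation` and `TrainingArguments` as sketched below. The three-class label map is inferred from the metric names in this card, and the dataset objects are placeholders, since the preprocessing is not described.

```python
# Minimal sketch under stated assumptions; not the actual fine-tuning script.
from transformers import (
    SegformerForSemanticSegmentation,
    SegformerImageProcessor,
    Trainer,
    TrainingArguments,
)

id2label = {0: "unlabeled", 1: "eczema", 2: "background"}  # assumed from the metrics above
label2id = {name: idx for idx, name in id2label.items()}

processor = SegformerImageProcessor.from_pretrained("nvidia/mit-b0")
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0",
    num_labels=len(id2label),
    id2label=id2label,
    label2id=label2id,
)

training_args = TrainingArguments(
    output_dir="segformer-b0-finetuned-raw_img_ready2train_patches",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=1,
    seed=42,
    lr_scheduler_type="linear",
)

# train_ds / eval_ds would yield {"pixel_values": ..., "labels": ...} tensors
# produced by `processor`; they are not defined in this card.
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```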
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Eczema | Accuracy Background | Iou Unlabeled | Iou Eczema | Iou Background |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:---------------:|:-------------------:|:-------------:|:----------:|:--------------:|
| 1.0753 | 0.0312 | 5 | 1.0925 | 0.2358 | 0.4682 | 0.4698 | nan | 0.5042 | 0.4322 | 0.0 | 0.3705 | 0.3367 |
| 0.9863 | 0.0625 | 10 | 1.0697 | 0.2994 | 0.6182 | 0.6306 | nan | 0.8979 | 0.3385 | 0.0 | 0.5784 | 0.3198 |
| 1.0056 | 0.0938 | 15 | 1.0377 | 0.3303 | 0.6678 | 0.6792 | nan | 0.9236 | 0.4121 | 0.0 | 0.6064 | 0.3844 |
| 1.0133 | 0.125 | 20 | 1.0006 | 0.3478 | 0.6869 | 0.6950 | nan | 0.8710 | 0.5027 | 0.0 | 0.6008 | 0.4425 |
| 0.9748 | 0.1562 | 25 | 0.9689 | 0.3543 | 0.6947 | 0.7022 | nan | 0.8647 | 0.5246 | 0.0 | 0.6043 | 0.4586 |
| 0.9367 | 0.1875 | 30 | 0.9417 | 0.3566 | 0.6950 | 0.6965 | nan | 0.7290 | 0.6610 | 0.0 | 0.5583 | 0.5114 |
| 0.8363 | 0.2188 | 35 | 0.9118 | 0.3557 | 0.6940 | 0.6959 | nan | 0.7366 | 0.6514 | 0.0 | 0.5600 | 0.5069 |
| 1.1431 | 0.25 | 40 | 0.8830 | 0.3575 | 0.6963 | 0.6989 | nan | 0.7556 | 0.6370 | 0.0 | 0.5686 | 0.5039 |
| 0.7312 | 0.2812 | 45 | 0.8592 | 0.3680 | 0.7098 | 0.7133 | nan | 0.7888 | 0.6307 | 0.0 | 0.5907 | 0.5133 |
| 0.8135 | 0.3125 | 50 | 0.8268 | 0.3559 | 0.6994 | 0.7083 | nan | 0.8992 | 0.4997 | 0.0 | 0.6173 | 0.4505 |
| 0.7528 | 0.3438 | 55 | 0.8110 | 0.3525 | 0.6960 | 0.7053 | nan | 0.9055 | 0.4866 | 0.0 | 0.6162 | 0.4412 |
| 0.8405 | 0.375 | 60 | 0.7967 | 0.3518 | 0.6950 | 0.7041 | nan | 0.9008 | 0.4893 | 0.0 | 0.6140 | 0.4415 |
| 0.7865 | 0.4062 | 65 | 0.7791 | 0.3561 | 0.6992 | 0.7075 | nan | 0.8869 | 0.5116 | 0.0 | 0.6130 | 0.4553 |
| 0.8309 | 0.4375 | 70 | 0.7650 | 0.3652 | 0.7083 | 0.7147 | nan | 0.8512 | 0.5655 | 0.0 | 0.6090 | 0.4864 |
| 0.6775 | 0.4688 | 75 | 0.7615 | 0.3613 | 0.7044 | 0.7115 | nan | 0.8651 | 0.5437 | 0.0 | 0.6102 | 0.4738 |
| 0.7033 | 0.5 | 80 | 0.7498 | 0.3737 | 0.7179 | 0.7227 | nan | 0.8260 | 0.6099 | 0.0 | 0.6087 | 0.5125 |
| 0.8377 | 0.5312 | 85 | 0.7443 | 0.3790 | 0.7243 | 0.7290 | nan | 0.8303 | 0.6184 | 0.0 | 0.6154 | 0.5217 |
| 0.825 | 0.5625 | 90 | 0.7547 | 0.3676 | 0.7125 | 0.7201 | nan | 0.8840 | 0.5411 | 0.0 | 0.6225 | 0.4802 |
| 0.7408 | 0.5938 | 95 | 0.7415 | 0.3767 | 0.7228 | 0.7295 | nan | 0.8747 | 0.5708 | 0.0 | 0.6281 | 0.5021 |
| 0.8087 | 0.625 | 100 | 0.7201 | 0.3926 | 0.7404 | 0.7445 | nan | 0.8318 | 0.6491 | 0.0 | 0.6296 | 0.5483 |
| 0.7146 | 0.6562 | 105 | 0.7096 | 0.4002 | 0.7493 | 0.7520 | nan | 0.8109 | 0.6877 | 0.0 | 0.6307 | 0.5699 |
| 0.6875 | 0.6875 | 110 | 0.7047 | 0.4010 | 0.7502 | 0.7541 | nan | 0.8398 | 0.6606 | 0.0 | 0.6407 | 0.5621 |
| 0.6382 | 0.7188 | 115 | 0.7031 | 0.3982 | 0.7471 | 0.7519 | nan | 0.8543 | 0.6400 | 0.0 | 0.6426 | 0.5521 |
| 0.6551 | 0.75 | 120 | 0.6953 | 0.4018 | 0.7512 | 0.7553 | nan | 0.8450 | 0.6573 | 0.0 | 0.6433 | 0.5621 |
| 0.7074 | 0.7812 | 125 | 0.6912 | 0.4054 | 0.7553 | 0.7583 | nan | 0.8236 | 0.6871 | 0.0 | 0.6402 | 0.5760 |
| 0.768 | 0.8125 | 130 | 0.6866 | 0.4048 | 0.7546 | 0.7579 | nan | 0.8278 | 0.6814 | 0.0 | 0.6410 | 0.5736 |
| 0.7543 | 0.8438 | 135 | 0.6851 | 0.4031 | 0.7526 | 0.7564 | nan | 0.8374 | 0.6679 | 0.0 | 0.6422 | 0.5671 |
| 0.7107 | 0.875 | 140 | 0.6803 | 0.6122 | 0.7586 | 0.7608 | nan | 0.8071 | 0.7101 | nan | 0.6379 | 0.5865 |
| 0.7054 | 0.9062 | 145 | 0.6799 | 0.4098 | 0.7608 | 0.7622 | nan | 0.7924 | 0.7292 | 0.0 | 0.6350 | 0.5943 |
| 1.1302 | 0.9375 | 150 | 0.6801 | 0.4103 | 0.7616 | 0.7626 | nan | 0.7840 | 0.7393 | 0.0 | 0.6330 | 0.5981 |
| 0.6037 | 0.9688 | 155 | 0.6827 | 0.4111 | 0.7628 | 0.7632 | nan | 0.7721 | 0.7534 | 0.0 | 0.6300 | 0.6032 |
| 0.8577 | 1.0 | 160 | 0.6829 | 0.4110 | 0.7629 | 0.7631 | nan | 0.7673 | 0.7585 | 0.0 | 0.6284 | 0.6047 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "other", "tags": ["vision", "image-segmentation", "generated_from_trainer"], "base_model": "nvidia/mit-b0", "model-index": [{"name": "segformer-b0-finetuned-raw_img_ready2train_patches", "results": []}]} | ruisusanofi/segformer-b0-finetuned-raw_img_ready2train_patches | null | [
"transformers",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:27:04+00:00 | [] | [] | TAGS
#transformers #safetensors #segformer #vision #image-segmentation #generated_from_trainer #base_model-nvidia/mit-b0 #license-other #endpoints_compatible #region-us
| segformer-b0-finetuned-raw\_img\_ready2train\_patches
=====================================================
This model is a fine-tuned version of nvidia/mit-b0 on the raw\_img\_ready2train\_patches dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6829
* Mean Iou: 0.4110
* Mean Accuracy: 0.7629
* Overall Accuracy: 0.7631
* Accuracy Unlabeled: nan
* Accuracy Eczema: 0.7673
* Accuracy Background: 0.7585
* Iou Unlabeled: 0.0
* Iou Eczema: 0.6284
* Iou Background: 0.6047
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #segformer #vision #image-segmentation #generated_from_trainer #base_model-nvidia/mit-b0 #license-other #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_tata-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4594
- F1 Score: 0.8271
- Accuracy: 0.8271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
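For orientation only, the hyperparameters listed above map onto a PEFT + `Trainer` setup roughly as in the sketch below. The LoRA settings, the sequence-classification head, and the choice of auto class for the seqsight backbone are all assumptions the card does not specify (loading the backbone may additionally require remote code support).

```python
# Rough sketch, not the authors' code: adapter settings and model head are assumed.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification, TrainingArguments

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_4096_512_15M",
    num_labels=2,  # binary promoter-detection task assumed
)
model = get_peft_model(base, LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16))

training_args = TrainingArguments(
    output_dir="GUE_prom_prom_core_tata-seqsight_4096_512_15M-L8_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    max_steps=10_000,           # "training_steps: 10000" above
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer
)
```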
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6032 | 5.13 | 200 | 0.5634 | 0.7185 | 0.7194 |
| 0.5278 | 10.26 | 400 | 0.5256 | 0.7344 | 0.7357 |
| 0.4699 | 15.38 | 600 | 0.4605 | 0.7856 | 0.7863 |
| 0.4245 | 20.51 | 800 | 0.4272 | 0.7976 | 0.7977 |
| 0.3889 | 25.64 | 1000 | 0.4069 | 0.8155 | 0.8157 |
| 0.3665 | 30.77 | 1200 | 0.3898 | 0.8236 | 0.8238 |
| 0.3478 | 35.9 | 1400 | 0.3919 | 0.8320 | 0.8320 |
| 0.3314 | 41.03 | 1600 | 0.3985 | 0.8265 | 0.8271 |
| 0.3178 | 46.15 | 1800 | 0.3865 | 0.8352 | 0.8352 |
| 0.3045 | 51.28 | 2000 | 0.3880 | 0.8319 | 0.8320 |
| 0.2962 | 56.41 | 2200 | 0.3923 | 0.8434 | 0.8434 |
| 0.2901 | 61.54 | 2400 | 0.3825 | 0.8401 | 0.8401 |
| 0.2789 | 66.67 | 2600 | 0.3828 | 0.8352 | 0.8352 |
| 0.2688 | 71.79 | 2800 | 0.3823 | 0.8367 | 0.8369 |
| 0.2668 | 76.92 | 3000 | 0.3948 | 0.8352 | 0.8352 |
| 0.2553 | 82.05 | 3200 | 0.3873 | 0.8385 | 0.8385 |
| 0.25 | 87.18 | 3400 | 0.3933 | 0.8385 | 0.8385 |
| 0.2466 | 92.31 | 3600 | 0.3986 | 0.8466 | 0.8467 |
| 0.2419 | 97.44 | 3800 | 0.3981 | 0.8465 | 0.8467 |
| 0.2396 | 102.56 | 4000 | 0.3904 | 0.8596 | 0.8597 |
| 0.2347 | 107.69 | 4200 | 0.4066 | 0.8548 | 0.8548 |
| 0.2237 | 112.82 | 4400 | 0.4169 | 0.8548 | 0.8548 |
| 0.2197 | 117.95 | 4600 | 0.4028 | 0.8613 | 0.8613 |
| 0.2178 | 123.08 | 4800 | 0.4289 | 0.8483 | 0.8483 |
| 0.2117 | 128.21 | 5000 | 0.4253 | 0.8499 | 0.8499 |
| 0.2147 | 133.33 | 5200 | 0.4187 | 0.8596 | 0.8597 |
| 0.2068 | 138.46 | 5400 | 0.4218 | 0.8611 | 0.8613 |
| 0.2019 | 143.59 | 5600 | 0.4296 | 0.8466 | 0.8467 |
| 0.2023 | 148.72 | 5800 | 0.4374 | 0.8548 | 0.8548 |
| 0.1959 | 153.85 | 6000 | 0.4354 | 0.8515 | 0.8515 |
| 0.1974 | 158.97 | 6200 | 0.4282 | 0.8564 | 0.8564 |
| 0.1983 | 164.1 | 6400 | 0.4305 | 0.8515 | 0.8515 |
| 0.1928 | 169.23 | 6600 | 0.4352 | 0.8581 | 0.8581 |
| 0.1889 | 174.36 | 6800 | 0.4507 | 0.8532 | 0.8532 |
| 0.1909 | 179.49 | 7000 | 0.4417 | 0.8450 | 0.8450 |
| 0.1855 | 184.62 | 7200 | 0.4481 | 0.8548 | 0.8548 |
| 0.1824 | 189.74 | 7400 | 0.4513 | 0.8564 | 0.8564 |
| 0.1837 | 194.87 | 7600 | 0.4567 | 0.8515 | 0.8515 |
| 0.1841 | 200.0 | 7800 | 0.4383 | 0.8630 | 0.8630 |
| 0.1819 | 205.13 | 8000 | 0.4506 | 0.8532 | 0.8532 |
| 0.1809 | 210.26 | 8200 | 0.4516 | 0.8499 | 0.8499 |
| 0.1753 | 215.38 | 8400 | 0.4639 | 0.8467 | 0.8467 |
| 0.1771 | 220.51 | 8600 | 0.4612 | 0.8548 | 0.8548 |
| 0.1777 | 225.64 | 8800 | 0.4593 | 0.8483 | 0.8483 |
| 0.1723 | 230.77 | 9000 | 0.4591 | 0.8499 | 0.8499 |
| 0.1727 | 235.9 | 9200 | 0.4602 | 0.8467 | 0.8467 |
| 0.1714 | 241.03 | 9400 | 0.4662 | 0.8548 | 0.8548 |
| 0.1739 | 246.15 | 9600 | 0.4643 | 0.8450 | 0.8450 |
| 0.1721 | 251.28 | 9800 | 0.4632 | 0.8532 | 0.8532 |
| 0.1689 | 256.41 | 10000 | 0.4628 | 0.8532 | 0.8532 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:28:17+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_core\_tata-seqsight\_4096\_512\_15M-L8\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4594
* F1 Score: 0.8271
* Accuracy: 0.8271
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_tata-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4640
- F1 Score: 0.8320
- Accuracy: 0.8320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5782 | 5.13 | 200 | 0.5315 | 0.7406 | 0.7406 |
| 0.4665 | 10.26 | 400 | 0.4535 | 0.7895 | 0.7896 |
| 0.3912 | 15.38 | 600 | 0.3940 | 0.8189 | 0.8189 |
| 0.3406 | 20.51 | 800 | 0.3676 | 0.8482 | 0.8483 |
| 0.301 | 25.64 | 1000 | 0.3680 | 0.8676 | 0.8679 |
| 0.2781 | 30.77 | 1200 | 0.3465 | 0.8596 | 0.8597 |
| 0.2586 | 35.9 | 1400 | 0.3497 | 0.8662 | 0.8662 |
| 0.2365 | 41.03 | 1600 | 0.3888 | 0.8575 | 0.8581 |
| 0.2231 | 46.15 | 1800 | 0.3801 | 0.8547 | 0.8548 |
| 0.2111 | 51.28 | 2000 | 0.3956 | 0.8612 | 0.8613 |
| 0.1949 | 56.41 | 2200 | 0.4369 | 0.8532 | 0.8532 |
| 0.1843 | 61.54 | 2400 | 0.4161 | 0.8611 | 0.8613 |
| 0.1706 | 66.67 | 2600 | 0.4586 | 0.8659 | 0.8662 |
| 0.1597 | 71.79 | 2800 | 0.4525 | 0.8679 | 0.8679 |
| 0.1529 | 76.92 | 3000 | 0.4764 | 0.8449 | 0.8450 |
| 0.1405 | 82.05 | 3200 | 0.5161 | 0.8547 | 0.8548 |
| 0.1323 | 87.18 | 3400 | 0.5201 | 0.8662 | 0.8662 |
| 0.1275 | 92.31 | 3600 | 0.5121 | 0.8628 | 0.8630 |
| 0.1212 | 97.44 | 3800 | 0.5360 | 0.8645 | 0.8646 |
| 0.1135 | 102.56 | 4000 | 0.5797 | 0.8595 | 0.8597 |
| 0.11 | 107.69 | 4200 | 0.5665 | 0.8613 | 0.8613 |
| 0.1041 | 112.82 | 4400 | 0.5754 | 0.8597 | 0.8597 |
| 0.1008 | 117.95 | 4600 | 0.5795 | 0.8547 | 0.8548 |
| 0.093 | 123.08 | 4800 | 0.6056 | 0.8630 | 0.8630 |
| 0.0896 | 128.21 | 5000 | 0.6137 | 0.8564 | 0.8564 |
| 0.0883 | 133.33 | 5200 | 0.6119 | 0.8564 | 0.8564 |
| 0.0813 | 138.46 | 5400 | 0.6257 | 0.8629 | 0.8630 |
| 0.0794 | 143.59 | 5600 | 0.6374 | 0.8630 | 0.8630 |
| 0.0781 | 148.72 | 5800 | 0.6801 | 0.8597 | 0.8597 |
| 0.0753 | 153.85 | 6000 | 0.6478 | 0.8580 | 0.8581 |
| 0.0709 | 158.97 | 6200 | 0.6664 | 0.8630 | 0.8630 |
| 0.0725 | 164.1 | 6400 | 0.6262 | 0.8564 | 0.8564 |
| 0.067 | 169.23 | 6600 | 0.6659 | 0.8581 | 0.8581 |
| 0.0632 | 174.36 | 6800 | 0.6947 | 0.8564 | 0.8564 |
| 0.067 | 179.49 | 7000 | 0.6948 | 0.8564 | 0.8564 |
| 0.0627 | 184.62 | 7200 | 0.7080 | 0.8564 | 0.8564 |
| 0.0611 | 189.74 | 7400 | 0.7102 | 0.8548 | 0.8548 |
| 0.0595 | 194.87 | 7600 | 0.7069 | 0.8629 | 0.8630 |
| 0.062 | 200.0 | 7800 | 0.6852 | 0.8646 | 0.8646 |
| 0.0554 | 205.13 | 8000 | 0.7127 | 0.8613 | 0.8613 |
| 0.0596 | 210.26 | 8200 | 0.6846 | 0.8548 | 0.8548 |
| 0.0534 | 215.38 | 8400 | 0.7266 | 0.8597 | 0.8597 |
| 0.0561 | 220.51 | 8600 | 0.7142 | 0.8532 | 0.8532 |
| 0.0517 | 225.64 | 8800 | 0.7146 | 0.8532 | 0.8532 |
| 0.0512 | 230.77 | 9000 | 0.7151 | 0.8564 | 0.8564 |
| 0.0523 | 235.9 | 9200 | 0.6998 | 0.8581 | 0.8581 |
| 0.053 | 241.03 | 9400 | 0.7092 | 0.8662 | 0.8662 |
| 0.0495 | 246.15 | 9600 | 0.7234 | 0.8613 | 0.8613 |
| 0.0514 | 251.28 | 9800 | 0.7236 | 0.8613 | 0.8613 |
| 0.0514 | 256.41 | 10000 | 0.7248 | 0.8597 | 0.8597 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:28:21+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_core\_tata-seqsight\_4096\_512\_15M-L32\_f
===========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4640
* F1 Score: 0.8320
* Accuracy: 0.8320
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# question_classifier
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0621
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 15 | 0.1063 | 1.0 |
| No log | 2.0 | 30 | 0.0621 | 1.0 |
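A hypothetical usage sketch follows; the repository id matches this card, but the label names returned depend on how the classification head was configured during training, which the card does not state.

```python
# Hedged example: output labels depend on the (unspecified) training label mapping.
from transformers import pipeline

classifier = pipeline("text-classification", model="philgrey/question_classifier")
print(classifier("How do I reset my password?"))
# e.g. [{"label": "LABEL_0", "score": 0.98}] -- actual labels and scores will differ
```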
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "question_classifier", "results": []}]} | philgrey/question_classifier | null | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:29:08+00:00 | [] | [] | TAGS
#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| question\_classifier
====================
This model is a fine-tuned version of distilbert/distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0621
* Accuracy: 1.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.1.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.1.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.1.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_all-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2156
- F1 Score: 0.9135
- Accuracy: 0.9135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4293 | 0.54 | 200 | 0.2948 | 0.8828 | 0.8828 |
| 0.3066 | 1.08 | 400 | 0.2688 | 0.8919 | 0.8919 |
| 0.2856 | 1.62 | 600 | 0.2528 | 0.8953 | 0.8953 |
| 0.2612 | 2.16 | 800 | 0.2449 | 0.9021 | 0.9022 |
| 0.2536 | 2.7 | 1000 | 0.2343 | 0.9061 | 0.9061 |
| 0.2466 | 3.24 | 1200 | 0.2309 | 0.9101 | 0.9101 |
| 0.2442 | 3.78 | 1400 | 0.2255 | 0.9123 | 0.9123 |
| 0.2415 | 4.32 | 1600 | 0.2236 | 0.9142 | 0.9142 |
| 0.2313 | 4.86 | 1800 | 0.2214 | 0.9160 | 0.9160 |
| 0.232 | 5.41 | 2000 | 0.2196 | 0.9165 | 0.9166 |
| 0.2312 | 5.95 | 2200 | 0.2174 | 0.9179 | 0.9179 |
| 0.2288 | 6.49 | 2400 | 0.2151 | 0.9184 | 0.9184 |
| 0.2271 | 7.03 | 2600 | 0.2132 | 0.9179 | 0.9179 |
| 0.2222 | 7.57 | 2800 | 0.2103 | 0.9199 | 0.9199 |
| 0.2241 | 8.11 | 3000 | 0.2105 | 0.9206 | 0.9206 |
| 0.2221 | 8.65 | 3200 | 0.2076 | 0.9218 | 0.9218 |
| 0.2162 | 9.19 | 3400 | 0.2091 | 0.9213 | 0.9213 |
| 0.2148 | 9.73 | 3600 | 0.2041 | 0.9235 | 0.9235 |
| 0.2211 | 10.27 | 3800 | 0.2025 | 0.9233 | 0.9233 |
| 0.2149 | 10.81 | 4000 | 0.2022 | 0.9243 | 0.9243 |
| 0.2168 | 11.35 | 4200 | 0.2010 | 0.9241 | 0.9242 |
| 0.2128 | 11.89 | 4400 | 0.2016 | 0.9270 | 0.9270 |
| 0.2117 | 12.43 | 4600 | 0.1994 | 0.9223 | 0.9223 |
| 0.2135 | 12.97 | 4800 | 0.1967 | 0.9280 | 0.9280 |
| 0.2084 | 13.51 | 5000 | 0.1976 | 0.9262 | 0.9262 |
| 0.2139 | 14.05 | 5200 | 0.1957 | 0.9265 | 0.9265 |
| 0.2089 | 14.59 | 5400 | 0.1966 | 0.9260 | 0.9260 |
| 0.2067 | 15.14 | 5600 | 0.1960 | 0.9255 | 0.9255 |
| 0.2062 | 15.68 | 5800 | 0.1948 | 0.9284 | 0.9284 |
| 0.2084 | 16.22 | 6000 | 0.1950 | 0.9253 | 0.9253 |
| 0.2052 | 16.76 | 6200 | 0.1935 | 0.9285 | 0.9285 |
| 0.2056 | 17.3 | 6400 | 0.1949 | 0.9260 | 0.9260 |
| 0.2074 | 17.84 | 6600 | 0.1934 | 0.9258 | 0.9258 |
| 0.2021 | 18.38 | 6800 | 0.1926 | 0.9277 | 0.9277 |
| 0.2082 | 18.92 | 7000 | 0.1913 | 0.9284 | 0.9284 |
| 0.2074 | 19.46 | 7200 | 0.1923 | 0.9282 | 0.9282 |
| 0.2013 | 20.0 | 7400 | 0.1917 | 0.9282 | 0.9282 |
| 0.2033 | 20.54 | 7600 | 0.1910 | 0.9284 | 0.9284 |
| 0.2014 | 21.08 | 7800 | 0.1903 | 0.9294 | 0.9294 |
| 0.2051 | 21.62 | 8000 | 0.1904 | 0.9287 | 0.9287 |
| 0.2025 | 22.16 | 8200 | 0.1903 | 0.9291 | 0.9291 |
| 0.1986 | 22.7 | 8400 | 0.1903 | 0.9282 | 0.9282 |
| 0.2057 | 23.24 | 8600 | 0.1898 | 0.9289 | 0.9289 |
| 0.2012 | 23.78 | 8800 | 0.1893 | 0.9289 | 0.9289 |
| 0.2033 | 24.32 | 9000 | 0.1896 | 0.9294 | 0.9294 |
| 0.2009 | 24.86 | 9200 | 0.1898 | 0.9291 | 0.9291 |
| 0.2009 | 25.41 | 9400 | 0.1902 | 0.9291 | 0.9291 |
| 0.1996 | 25.95 | 9600 | 0.1899 | 0.9289 | 0.9289 |
| 0.2019 | 26.49 | 9800 | 0.1894 | 0.9296 | 0.9296 |
| 0.2001 | 27.03 | 10000 | 0.1895 | 0.9287 | 0.9287 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:29:51+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_300\_all-seqsight\_4096\_512\_15M-L1\_f
========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2156
* F1 Score: 0.9135
* Accuracy: 0.9135
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_all-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1974
- F1 Score: 0.9209
- Accuracy: 0.9209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.3768 | 0.54 | 200 | 0.2589 | 0.8973 | 0.8973 |
| 0.2631 | 1.08 | 400 | 0.2351 | 0.9071 | 0.9071 |
| 0.2483 | 1.62 | 600 | 0.2179 | 0.9130 | 0.9130 |
| 0.2279 | 2.16 | 800 | 0.2134 | 0.9146 | 0.9147 |
| 0.2247 | 2.7 | 1000 | 0.2063 | 0.9203 | 0.9203 |
| 0.2187 | 3.24 | 1200 | 0.2048 | 0.9182 | 0.9182 |
| 0.2199 | 3.78 | 1400 | 0.1983 | 0.9216 | 0.9216 |
| 0.2136 | 4.32 | 1600 | 0.1935 | 0.9238 | 0.9238 |
| 0.2055        | 4.86  | 1800  | 0.1926          | 0.9250   | 0.9250   |
| 0.2062 | 5.41 | 2000 | 0.1902 | 0.9292 | 0.9292 |
| 0.2048 | 5.95 | 2200 | 0.1900 | 0.9240 | 0.9240 |
| 0.2034 | 6.49 | 2400 | 0.1868 | 0.9270 | 0.9270 |
| 0.2024 | 7.03 | 2600 | 0.1869 | 0.9284 | 0.9284 |
| 0.194 | 7.57 | 2800 | 0.1862 | 0.9287 | 0.9287 |
| 0.2 | 8.11 | 3000 | 0.1853 | 0.9302 | 0.9302 |
| 0.1959 | 8.65 | 3200 | 0.1851 | 0.9292 | 0.9292 |
| 0.1885 | 9.19 | 3400 | 0.1864 | 0.9296 | 0.9296 |
| 0.1888 | 9.73 | 3600 | 0.1827 | 0.9280 | 0.9280 |
| 0.1944 | 10.27 | 3800 | 0.1824 | 0.9292 | 0.9292 |
| 0.1895 | 10.81 | 4000 | 0.1819 | 0.9304 | 0.9304 |
| 0.1917 | 11.35 | 4200 | 0.1797 | 0.9306 | 0.9306 |
| 0.1854 | 11.89 | 4400 | 0.1828 | 0.9307 | 0.9307 |
| 0.1873 | 12.43 | 4600 | 0.1790 | 0.9296 | 0.9296 |
| 0.1861 | 12.97 | 4800 | 0.1771 | 0.9314 | 0.9314 |
| 0.1823 | 13.51 | 5000 | 0.1789 | 0.9289 | 0.9289 |
| 0.187 | 14.05 | 5200 | 0.1809 | 0.9280 | 0.9280 |
| 0.1817 | 14.59 | 5400 | 0.1778 | 0.9323 | 0.9323 |
| 0.1801 | 15.14 | 5600 | 0.1776 | 0.9316 | 0.9316 |
| 0.1801 | 15.68 | 5800 | 0.1781 | 0.9304 | 0.9304 |
| 0.179 | 16.22 | 6000 | 0.1787 | 0.9316 | 0.9316 |
| 0.1784 | 16.76 | 6200 | 0.1779 | 0.9296 | 0.9296 |
| 0.1787 | 17.3 | 6400 | 0.1792 | 0.9277 | 0.9277 |
| 0.1794 | 17.84 | 6600 | 0.1755 | 0.9328 | 0.9328 |
| 0.1748 | 18.38 | 6800 | 0.1776 | 0.9294 | 0.9294 |
| 0.1804 | 18.92 | 7000 | 0.1763 | 0.9292 | 0.9292 |
| 0.1802 | 19.46 | 7200 | 0.1765 | 0.9316 | 0.9316 |
| 0.1741 | 20.0 | 7400 | 0.1755 | 0.9326 | 0.9326 |
| 0.1767 | 20.54 | 7600 | 0.1752 | 0.9309 | 0.9309 |
| 0.1739 | 21.08 | 7800 | 0.1747 | 0.9312 | 0.9313 |
| 0.1747 | 21.62 | 8000 | 0.1748 | 0.9311 | 0.9311 |
| 0.1758 | 22.16 | 8200 | 0.1758 | 0.9319 | 0.9319 |
| 0.1724 | 22.7 | 8400 | 0.1738 | 0.9336 | 0.9336 |
| 0.1762 | 23.24 | 8600 | 0.1753 | 0.9306 | 0.9306 |
| 0.1759 | 23.78 | 8800 | 0.1744 | 0.9312 | 0.9313 |
| 0.1751 | 24.32 | 9000 | 0.1756 | 0.9307 | 0.9307 |
| 0.1727 | 24.86 | 9200 | 0.1742 | 0.9318 | 0.9318 |
| 0.1718 | 25.41 | 9400 | 0.1766 | 0.9309 | 0.9309 |
| 0.1719 | 25.95 | 9600 | 0.1750 | 0.9321 | 0.9321 |
| 0.173 | 26.49 | 9800 | 0.1745 | 0.9311 | 0.9311 |
| 0.1729 | 27.03 | 10000 | 0.1746 | 0.9311 | 0.9311 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:30:16+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_300\_all-seqsight\_4096\_512\_15M-L8\_f
========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1974
* F1 Score: 0.9209
* Accuracy: 0.9209
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_all-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1996
- F1 Score: 0.9221
- Accuracy: 0.9221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.3384 | 0.54 | 200 | 0.2347 | 0.9054 | 0.9054 |
| 0.238 | 1.08 | 400 | 0.2090 | 0.9182 | 0.9182 |
| 0.2278 | 1.62 | 600 | 0.1982 | 0.9226 | 0.9226 |
| 0.2116 | 2.16 | 800 | 0.1958 | 0.9222 | 0.9223 |
| 0.2086 | 2.7 | 1000 | 0.1936 | 0.9238 | 0.9238 |
| 0.2038 | 3.24 | 1200 | 0.1907 | 0.9248 | 0.9248 |
| 0.2055 | 3.78 | 1400 | 0.1871 | 0.9264 | 0.9264 |
| 0.1993 | 4.32 | 1600 | 0.1862 | 0.9258 | 0.9258 |
| 0.1938 | 4.86 | 1800 | 0.1810 | 0.9292 | 0.9292 |
| 0.1914 | 5.41 | 2000 | 0.1831 | 0.9319 | 0.9319 |
| 0.1908 | 5.95 | 2200 | 0.1822 | 0.9272 | 0.9272 |
| 0.1883 | 6.49 | 2400 | 0.1779 | 0.9301 | 0.9301 |
| 0.1878 | 7.03 | 2600 | 0.1818 | 0.9321 | 0.9321 |
| 0.1787 | 7.57 | 2800 | 0.1776 | 0.9321 | 0.9321 |
| 0.1846 | 8.11 | 3000 | 0.1798 | 0.9304 | 0.9304 |
| 0.1792 | 8.65 | 3200 | 0.1748 | 0.9321 | 0.9321 |
| 0.1713 | 9.19 | 3400 | 0.1808 | 0.9311 | 0.9311 |
| 0.1723 | 9.73 | 3600 | 0.1742 | 0.9307 | 0.9307 |
| 0.1774 | 10.27 | 3800 | 0.1742 | 0.9311 | 0.9311 |
| 0.1732 | 10.81 | 4000 | 0.1763 | 0.9346 | 0.9346 |
| 0.1724 | 11.35 | 4200 | 0.1725 | 0.9345 | 0.9345 |
| 0.167 | 11.89 | 4400 | 0.1760 | 0.9346 | 0.9346 |
| 0.1691 | 12.43 | 4600 | 0.1716 | 0.9333 | 0.9333 |
| 0.1638 | 12.97 | 4800 | 0.1699 | 0.9311 | 0.9311 |
| 0.1619 | 13.51 | 5000 | 0.1736 | 0.9302 | 0.9302 |
| 0.1661 | 14.05 | 5200 | 0.1766 | 0.9273 | 0.9274 |
| 0.16 | 14.59 | 5400 | 0.1720 | 0.9309 | 0.9309 |
| 0.1591 | 15.14 | 5600 | 0.1725 | 0.9323 | 0.9323 |
| 0.1584 | 15.68 | 5800 | 0.1710 | 0.9318 | 0.9318 |
| 0.1562 | 16.22 | 6000 | 0.1739 | 0.9309 | 0.9309 |
| 0.1552 | 16.76 | 6200 | 0.1748 | 0.9321 | 0.9321 |
| 0.1551 | 17.3 | 6400 | 0.1751 | 0.9309 | 0.9309 |
| 0.1566 | 17.84 | 6600 | 0.1718 | 0.9331 | 0.9331 |
| 0.1509 | 18.38 | 6800 | 0.1730 | 0.9314 | 0.9314 |
| 0.1546 | 18.92 | 7000 | 0.1714 | 0.9331 | 0.9331 |
| 0.1538 | 19.46 | 7200 | 0.1716 | 0.9334 | 0.9334 |
| 0.15 | 20.0 | 7400 | 0.1728 | 0.9339 | 0.9340 |
| 0.1513 | 20.54 | 7600 | 0.1715 | 0.9328 | 0.9328 |
| 0.1485 | 21.08 | 7800 | 0.1698 | 0.9326 | 0.9326 |
| 0.1484 | 21.62 | 8000 | 0.1706 | 0.9326 | 0.9326 |
| 0.1494 | 22.16 | 8200 | 0.1711 | 0.9331 | 0.9331 |
| 0.1448 | 22.7 | 8400 | 0.1689 | 0.9333 | 0.9333 |
| 0.1468 | 23.24 | 8600 | 0.1715 | 0.9323 | 0.9323 |
| 0.1478 | 23.78 | 8800 | 0.1719 | 0.9317 | 0.9318 |
| 0.1448 | 24.32 | 9000 | 0.1721 | 0.9317 | 0.9318 |
| 0.1454 | 24.86 | 9200 | 0.1707 | 0.9331 | 0.9331 |
| 0.1432 | 25.41 | 9400 | 0.1746 | 0.9328 | 0.9328 |
| 0.1437 | 25.95 | 9600 | 0.1727 | 0.9326 | 0.9326 |
| 0.1448 | 26.49 | 9800 | 0.1718 | 0.9328 | 0.9328 |
| 0.1434 | 27.03 | 10000 | 0.1715 | 0.9331 | 0.9331 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:30:56+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_prom\_prom\_300\_all-seqsight\_4096\_512\_15M-L32\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1996
* F1 Score: 0.9221
* Accuracy: 0.9221
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K14ac-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5179
- F1 Score: 0.7401
- Accuracy: 0.7398
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.621 | 0.97 | 200 | 0.5813 | 0.7034 | 0.7014 |
| 0.5829 | 1.93 | 400 | 0.5649 | 0.7244 | 0.7234 |
| 0.5699 | 2.9 | 600 | 0.5881 | 0.7000 | 0.6989 |
| 0.5629 | 3.86 | 800 | 0.5478 | 0.7259 | 0.7277 |
| 0.5513 | 4.83 | 1000 | 0.5659 | 0.7205 | 0.7189 |
| 0.5467 | 5.8 | 1200 | 0.5586 | 0.7257 | 0.7241 |
| 0.5421 | 6.76 | 1400 | 0.5415 | 0.7344 | 0.7325 |
| 0.54 | 7.73 | 1600 | 0.5395 | 0.7347 | 0.7328 |
| 0.5343 | 8.7 | 1800 | 0.5408 | 0.7313 | 0.7295 |
| 0.5335 | 9.66 | 2000 | 0.5432 | 0.7340 | 0.7322 |
| 0.5333 | 10.63 | 2200 | 0.5558 | 0.7252 | 0.7241 |
| 0.5269 | 11.59 | 2400 | 0.5283 | 0.7408 | 0.7392 |
| 0.5308 | 12.56 | 2600 | 0.5436 | 0.7342 | 0.7325 |
| 0.5281 | 13.53 | 2800 | 0.5438 | 0.7280 | 0.7265 |
| 0.5271 | 14.49 | 3000 | 0.5531 | 0.7231 | 0.7222 |
| 0.5225 | 15.46 | 3200 | 0.5235 | 0.7473 | 0.7461 |
| 0.5232 | 16.43 | 3400 | 0.5536 | 0.7240 | 0.7231 |
| 0.5238 | 17.39 | 3600 | 0.5289 | 0.7389 | 0.7371 |
| 0.52 | 18.36 | 3800 | 0.5192 | 0.7531 | 0.7525 |
| 0.5196 | 19.32 | 4000 | 0.5257 | 0.7443 | 0.7425 |
| 0.5165 | 20.29 | 4200 | 0.5332 | 0.7413 | 0.7395 |
| 0.5193 | 21.26 | 4400 | 0.5360 | 0.7372 | 0.7356 |
| 0.5184 | 22.22 | 4600 | 0.5446 | 0.7270 | 0.7259 |
| 0.5189 | 23.19 | 4800 | 0.5232 | 0.7500 | 0.7483 |
| 0.5167 | 24.15 | 5000 | 0.5251 | 0.7461 | 0.7443 |
| 0.5142 | 25.12 | 5200 | 0.5545 | 0.7270 | 0.7262 |
| 0.5155 | 26.09 | 5400 | 0.5322 | 0.7387 | 0.7371 |
| 0.5159 | 27.05 | 5600 | 0.5536 | 0.7217 | 0.7213 |
| 0.5137 | 28.02 | 5800 | 0.5214 | 0.7500 | 0.7483 |
| 0.514 | 28.99 | 6000 | 0.5382 | 0.7318 | 0.7304 |
| 0.5121 | 29.95 | 6200 | 0.5395 | 0.7333 | 0.7319 |
| 0.5146 | 30.92 | 6400 | 0.5213 | 0.7512 | 0.7495 |
| 0.5135 | 31.88 | 6600 | 0.5305 | 0.7396 | 0.7380 |
| 0.509 | 32.85 | 6800 | 0.5327 | 0.7377 | 0.7362 |
| 0.5134 | 33.82 | 7000 | 0.5423 | 0.7309 | 0.7298 |
| 0.51 | 34.78 | 7200 | 0.5412 | 0.7326 | 0.7313 |
| 0.5122 | 35.75 | 7400 | 0.5335 | 0.7362 | 0.7346 |
| 0.508 | 36.71 | 7600 | 0.5288 | 0.7417 | 0.7401 |
| 0.509 | 37.68 | 7800 | 0.5311 | 0.7423 | 0.7407 |
| 0.5105 | 38.65 | 8000 | 0.5237 | 0.7482 | 0.7464 |
| 0.5139 | 39.61 | 8200 | 0.5312 | 0.7398 | 0.7383 |
| 0.5052 | 40.58 | 8400 | 0.5363 | 0.7345 | 0.7331 |
| 0.5068 | 41.55 | 8600 | 0.5293 | 0.7438 | 0.7422 |
| 0.5084 | 42.51 | 8800 | 0.5338 | 0.7380 | 0.7365 |
| 0.5113 | 43.48 | 9000 | 0.5397 | 0.7341 | 0.7328 |
| 0.5068 | 44.44 | 9200 | 0.5338 | 0.7383 | 0.7368 |
| 0.5112 | 45.41 | 9400 | 0.5303 | 0.7402 | 0.7386 |
| 0.504 | 46.38 | 9600 | 0.5351 | 0.7373 | 0.7359 |
| 0.5109 | 47.34 | 9800 | 0.5327 | 0.7380 | 0.7365 |
| 0.5066 | 48.31 | 10000 | 0.5302 | 0.7408 | 0.7392 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:30:56+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K14ac-seqsight\_4096\_512\_15M-L1\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K14ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5179
* F1 Score: 0.7401
* Accuracy: 0.7398
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | amuseix/w2v-bert-2.0-bulgarian-CV17.0-FLEURS | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:31:10+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** eugeniosegala
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
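A minimal way to run the GGUF export locally is with `llama-cpp-python`. The sketch below is illustrative only — the `.gguf` filename, quantisation and sampling settings are assumptions, not taken from this repo:

```python
# Illustrative sketch: running the GGUF export with llama-cpp-python.
# The filename below is hypothetical -- check the repo's file list for the real one.
from llama_cpp import Llama

llm = Llama(
    model_path="model-unsloth.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,                              # Llama 3 context window
    n_gpu_layers=-1,                         # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarise what Unsloth does in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```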
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | eugeniosegala/model | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:33:07+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: eugeniosegala
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: eugeniosegala\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: eugeniosegala\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
<br/><br/>
8bpw/h8 exl2 quantization of [NeverSleep/Llama-3-Lumimaid-8B-v0.1](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) using default exllamav2 calibration dataset.
---
**ORIGINAL CARD:**
## Lumimaid 0.1
<center><div style="width: 100%;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/d3QMaxy3peFTpSlWdWF-k.png" style="display: block; margin: auto;">
</div></center>
This model uses the Llama3 **prompting format**
Llama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.
We also added some non-RP dataset, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.
This model includes the new Luminae dataset from Ikari.
If you consider trying this model please give us some feedback either on the Community tab on hf or on our [Discord Server](https://discord.gg/MtCVRWTZXY).
## Credits:
- Undi
- IkariDev
## Description
This repo contains FP16 files of Lumimaid-8B-v0.1.
Switch: [8B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) - [70B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1) - [70B-alt](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt)
## Training data used:
- [Aesir datasets](https://huggingface.co/MinervaAI)
- [NoRobots](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt)
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP) - 8k ctx
- [toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt)
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
- Luminae-i1 (70B/70B-alt) (i2 did not yet exist when the 70B started training) | Luminae-i2 (8B) (this one gave better results on the 8B) - Ikari's Dataset
- [Squish42/bluemoon-fandom-1-1-rp-cleaned](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned) - 50% (randomly)
- [NobodyExistsOnTheInternet/PIPPAsharegptv2test](https://huggingface.co/datasets/NobodyExistsOnTheInternet/PIPPAsharegptv2test) - 5% (randomly)
- [cgato/SlimOrcaDedupCleaned](https://huggingface.co/datasets/cgato/SlimOrcaDedupCleaned) - 5% (randomly)
- Airoboros (reduced)
- [Capybara](https://huggingface.co/datasets/Undi95/Capybara-ShareGPT/) (reduced)
## Models used (only for 8B)
- Initial LumiMaid 8B Finetune
- Undi95/Llama-3-Unholy-8B-e4
- Undi95/Llama-3-LewdPlay-8B
## Prompt template: Llama3
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
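For models that ship the Llama-3 chat template in their tokenizer config, the same layout can be produced with `apply_chat_template` instead of hand-formatting the special tokens — a minimal sketch, assuming the original repo's tokenizer carries that template; the messages are made-up examples:

```python
# Illustrative sketch: building a Llama-3-format prompt from the tokenizer's chat template.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("NeverSleep/Llama-3-Lumimaid-8B-v0.1")

messages = [
    {"role": "system", "content": "You are a helpful roleplay assistant."},
    {"role": "user", "content": "Describe the tavern we just walked into."},
]

prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should match the <|begin_of_text|>/<|start_header_id|> layout shown above
```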
## Others
Undi: If you want to support us, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek | {"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "nsfw"]} | JayhC/Llama-3-Lumimaid-8B-v0.1-8bpw-h8-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"nsfw",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-05-03T17:33:49+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #not-for-all-audiences #nsfw #conversational #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
|
<br/><br/>
8bpw/h8 exl2 quantization of NeverSleep/Llama-3-Lumimaid-8B-v0.1 using default exllamav2 calibration dataset.
---
ORIGINAL CARD:
## Lumimaid 0.1
<center><div style="width: 100%;">
<img src="URL style="display: block; margin: auto;">
</div></center>
This model uses the Llama3 prompting format
Llama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.
We also added some non-RP dataset, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.
This model includes the new Luminae dataset from Ikari.
If you consider trying this model please give us some feedback either on the Community tab on hf or on our Discord Server.
## Credits:
- Undi
- IkariDev
## Description
This repo contains FP16 files of Lumimaid-8B-v0.1.
Switch: 8B - 70B - 70B-alt
## Training data used:
- Aesir datasets
- NoRobots
- limarp - 8k ctx
- toxic-dpo-v0.1-sharegpt
- ToxicQAFinal
- Luminae-i1 (70B/70B-alt) (i2 did not yet exist when the 70B started training) | Luminae-i2 (8B) (this one gave better results on the 8B) - Ikari's Dataset
- Squish42/bluemoon-fandom-1-1-rp-cleaned - 50% (randomly)
- NobodyExistsOnTheInternet/PIPPAsharegptv2test - 5% (randomly)
- cgato/SlimOrcaDedupCleaned - 5% (randomly)
- Airoboros (reduced)
- Capybara (reduced)
## Models used (only for 8B)
- Initial LumiMaid 8B Finetune
- Undi95/Llama-3-Unholy-8B-e4
- Undi95/Llama-3-LewdPlay-8B
## Prompt template: Llama3
## Others
Undi: If you want to support us, you can here.
IkariDev: Visit my retro/neocities style website please kek | [
"## Lumimaid 0.1\n\n<center><div style=\"width: 100%;\">\n <img src=\"URL style=\"display: block; margin: auto;\">\n</div></center>\n\nThis model uses the Llama3 prompting format\n\nLlama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.\n\nWe also added some non-RP dataset, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.\n\nThis model includes the new Luminae dataset from Ikari.\n\n\nIf you consider trying this model please give us some feedback either on the Community tab on hf or on our Discord Server.",
"## Credits:\n- Undi\n- IkariDev",
"## Description\n\nThis repo contains FP16 files of Lumimaid-8B-v0.1.\n\nSwitch: 8B - 70B - 70B-alt",
"## Training data used:\n- Aesir datasets\n- NoRobots\n- limarp - 8k ctx\n- toxic-dpo-v0.1-sharegpt\n- ToxicQAFinal\n- Luminae-i1 (70B/70B-alt) (i2 was not existing when the 70b started training) | Luminae-i2 (8B) (this one gave better results on the 8b) - Ikari's Dataset\n- Squish42/bluemoon-fandom-1-1-rp-cleaned - 50% (randomly)\n- NobodyExistsOnTheInternet/PIPPAsharegptv2test - 5% (randomly)\n- cgato/SlimOrcaDedupCleaned - 5% (randomly)\n- Airoboros (reduced)\n- Capybara (reduced)",
"## Models used (only for 8B)\n\n- Initial LumiMaid 8B Finetune\n- Undi95/Llama-3-Unholy-8B-e4\n- Undi95/Llama-3-LewdPlay-8B",
"## Prompt template: Llama3",
"## Others\n\nUndi: If you want to support us, you can here.\n\nIkariDev: Visit my retro/neocities style website please kek"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #nsfw #conversational #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"## Lumimaid 0.1\n\n<center><div style=\"width: 100%;\">\n <img src=\"URL style=\"display: block; margin: auto;\">\n</div></center>\n\nThis model uses the Llama3 prompting format\n\nLlama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.\n\nWe also added some non-RP dataset, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.\n\nThis model includes the new Luminae dataset from Ikari.\n\n\nIf you consider trying this model please give us some feedback either on the Community tab on hf or on our Discord Server.",
"## Credits:\n- Undi\n- IkariDev",
"## Description\n\nThis repo contains FP16 files of Lumimaid-8B-v0.1.\n\nSwitch: 8B - 70B - 70B-alt",
"## Training data used:\n- Aesir datasets\n- NoRobots\n- limarp - 8k ctx\n- toxic-dpo-v0.1-sharegpt\n- ToxicQAFinal\n- Luminae-i1 (70B/70B-alt) (i2 was not existing when the 70b started training) | Luminae-i2 (8B) (this one gave better results on the 8b) - Ikari's Dataset\n- Squish42/bluemoon-fandom-1-1-rp-cleaned - 50% (randomly)\n- NobodyExistsOnTheInternet/PIPPAsharegptv2test - 5% (randomly)\n- cgato/SlimOrcaDedupCleaned - 5% (randomly)\n- Airoboros (reduced)\n- Capybara (reduced)",
"## Models used (only for 8B)\n\n- Initial LumiMaid 8B Finetune\n- Undi95/Llama-3-Unholy-8B-e4\n- Undi95/Llama-3-LewdPlay-8B",
"## Prompt template: Llama3",
"## Others\n\nUndi: If you want to support us, you can here.\n\nIkariDev: Visit my retro/neocities style website please kek"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K14ac-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5143
- F1 Score: 0.7518
- Accuracy: 0.7507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6069 | 0.97 | 200 | 0.5639 | 0.7202 | 0.7189 |
| 0.5659 | 1.93 | 400 | 0.5476 | 0.7285 | 0.7271 |
| 0.5461 | 2.9 | 600 | 0.5649 | 0.7130 | 0.7123 |
| 0.5404 | 3.86 | 800 | 0.5287 | 0.7451 | 0.7440 |
| 0.5311 | 4.83 | 1000 | 0.5559 | 0.7271 | 0.7259 |
| 0.5299 | 5.8 | 1200 | 0.5429 | 0.7335 | 0.7319 |
| 0.5247 | 6.76 | 1400 | 0.5236 | 0.7494 | 0.7477 |
| 0.5209 | 7.73 | 1600 | 0.5307 | 0.7500 | 0.7483 |
| 0.5164 | 8.7 | 1800 | 0.5279 | 0.7429 | 0.7413 |
| 0.5144 | 9.66 | 2000 | 0.5352 | 0.7374 | 0.7359 |
| 0.5133 | 10.63 | 2200 | 0.5353 | 0.7373 | 0.7359 |
| 0.5063 | 11.59 | 2400 | 0.5131 | 0.7591 | 0.7576 |
| 0.5103 | 12.56 | 2600 | 0.5332 | 0.7430 | 0.7416 |
| 0.5067 | 13.53 | 2800 | 0.5274 | 0.7448 | 0.7434 |
| 0.5053 | 14.49 | 3000 | 0.5314 | 0.7414 | 0.7401 |
| 0.4984 | 15.46 | 3200 | 0.5152 | 0.7564 | 0.7549 |
| 0.5017 | 16.43 | 3400 | 0.5355 | 0.7398 | 0.7386 |
| 0.5011 | 17.39 | 3600 | 0.5153 | 0.7557 | 0.7540 |
| 0.4956 | 18.36 | 3800 | 0.5074 | 0.7642 | 0.7634 |
| 0.4947 | 19.32 | 4000 | 0.5103 | 0.7619 | 0.7604 |
| 0.4904 | 20.29 | 4200 | 0.5248 | 0.7575 | 0.7558 |
| 0.4948 | 21.26 | 4400 | 0.5249 | 0.7508 | 0.7492 |
| 0.4924 | 22.22 | 4600 | 0.5366 | 0.7369 | 0.7359 |
| 0.4933 | 23.19 | 4800 | 0.5116 | 0.7598 | 0.7582 |
| 0.4892 | 24.15 | 5000 | 0.5158 | 0.7530 | 0.7513 |
| 0.4868 | 25.12 | 5200 | 0.5430 | 0.7402 | 0.7392 |
| 0.4865 | 26.09 | 5400 | 0.5305 | 0.7469 | 0.7455 |
| 0.4888 | 27.05 | 5600 | 0.5468 | 0.7348 | 0.7340 |
| 0.4838 | 28.02 | 5800 | 0.5166 | 0.7548 | 0.7531 |
| 0.4852 | 28.99 | 6000 | 0.5230 | 0.7511 | 0.7495 |
| 0.4821 | 29.95 | 6200 | 0.5328 | 0.7448 | 0.7434 |
| 0.4827 | 30.92 | 6400 | 0.5079 | 0.7651 | 0.7637 |
| 0.4839 | 31.88 | 6600 | 0.5158 | 0.7536 | 0.7519 |
| 0.4765 | 32.85 | 6800 | 0.5259 | 0.7498 | 0.7483 |
| 0.4826 | 33.82 | 7000 | 0.5297 | 0.7448 | 0.7434 |
| 0.4768 | 34.78 | 7200 | 0.5302 | 0.7472 | 0.7458 |
| 0.481 | 35.75 | 7400 | 0.5245 | 0.7505 | 0.7489 |
| 0.4745 | 36.71 | 7600 | 0.5234 | 0.7523 | 0.7507 |
| 0.4762 | 37.68 | 7800 | 0.5197 | 0.7526 | 0.7510 |
| 0.4771 | 38.65 | 8000 | 0.5158 | 0.7521 | 0.7504 |
| 0.4792 | 39.61 | 8200 | 0.5203 | 0.7526 | 0.7510 |
| 0.4711 | 40.58 | 8400 | 0.5316 | 0.7458 | 0.7443 |
| 0.4719 | 41.55 | 8600 | 0.5230 | 0.7523 | 0.7507 |
| 0.4748 | 42.51 | 8800 | 0.5263 | 0.7511 | 0.7495 |
| 0.4772 | 43.48 | 9000 | 0.5299 | 0.7468 | 0.7452 |
| 0.4734 | 44.44 | 9200 | 0.5273 | 0.7502 | 0.7486 |
| 0.478 | 45.41 | 9400 | 0.5242 | 0.7502 | 0.7486 |
| 0.4673 | 46.38 | 9600 | 0.5285 | 0.7480 | 0.7464 |
| 0.4758 | 47.34 | 9800 | 0.5244 | 0.7505 | 0.7489 |
| 0.4699 | 48.31 | 10000 | 0.5224 | 0.7511 | 0.7495 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:33:59+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K14ac-seqsight\_4096\_512\_15M-L8\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K14ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5143
* F1 Score: 0.7518
* Accuracy: 0.7507
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/pf4d6FA7DriRtVq5HCkxd.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/VcZWbW_eZkJAZZ5ricL4B.png)
# Llama-3-Giraffe-70B-Instruct
Abacus.AI presents our longer-necked variant of Llama 3 70B - now with the instruct variant!
This model has an effective context length of approximately 128k.
We have currently trained on ~1.5B tokens.
These are our Needle-in-a-Haystack heatmap results. We are conducting further evals of model efficacy and will update our model card as these come in:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/Z4uUhcjgf1P7EPGQyRLkW.png)
## Training Methodology
The methodology for training uses [PoSE](https://arxiv.org/abs/2309.10400) and dynamic-NTK interpolation.
### NTK-scaling
The scale factor for NTK is 4. Note that we also tried theta-scaling but this did not work as well as NTK scaling in our experiments.
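As a rough illustration (not the actual training code), one common form of NTK-aware scaling stretches the RoPE base frequency rather than the position ids; with a scale factor of 4 and Llama-3-70B's head dimension of 128 it would look roughly like this — the base value of 500000 is Llama 3's default `rope_theta` and is an assumption here:

```python
# Illustrative only: one common formulation of NTK-aware RoPE scaling.
# The exact variant and constants used for training may differ.
def ntk_scaled_rope_base(base: float = 500000.0, scale: float = 4.0, head_dim: int = 128) -> float:
    """Enlarge the RoPE base so ~4x longer contexts map onto the original rotary range."""
    return base * scale ** (head_dim / (head_dim - 2))

print(f"{ntk_scaled_rope_base():.0f}")  # effective rope_theta implied by a 4x scale factor
```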
### PoSE
We utilise Positional Skip-wise Training (PoSE) with the following parameters (a rough sketch of the position-id construction follows the list):
- **Number of Chunks**: 5
- **Max position ID**: 32768
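A rough sketch of the position-id construction implied by these settings — each training sample is split into 5 chunks and random skips are inserted between them so that positions span the full 0–32767 range even though only a short window of tokens is seen (illustrative only, not the actual training code):

```python
# Illustrative PoSE-style position ids: short sequences, long position range.
import random

def pose_position_ids(seq_len: int, num_chunks: int = 5, max_pos: int = 32768) -> list:
    """Split a sequence into chunks and insert random position skips between them,
    so max(position_id) can reach max_pos - 1 while only seq_len tokens are seen."""
    chunk = seq_len // num_chunks
    budget = max_pos - seq_len                                   # positions available to skip
    cuts = sorted(random.randint(0, budget) for _ in range(num_chunks - 1))
    skips = [cuts[0]] + [b - a for a, b in zip(cuts, cuts[1:])]  # skip before chunks 2..n
    ids, pos = [], 0
    for i in range(num_chunks):
        if i > 0:
            pos += skips[i - 1]
        length = chunk if i < num_chunks - 1 else seq_len - chunk * (num_chunks - 1)
        ids.extend(range(pos, pos + length))
        pos += length
    return ids

ids = pose_position_ids(seq_len=8192)
print(len(ids), ids[0], ids[-1])  # 8192 tokens, positions spread over up to 32768 slots
```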
### Data
We use on average ~8K long samples from [RedPajama](https://github.com/togethercomputer/RedPajama-Data).
### Hardware
We train on 8xH100 GPUs with Deepspeed Zero Stage 3.
## Evaluation Methodology
We use the [EasyContext](https://github.com/abacusai/EasyContext/blob/eval_runs/eval_needle.py) implementation of Needle-in-a-Haystack to evaluate Llama-3-Giraffe-70B.
We evaluate with the following parameters:
- **Min context length**: 2000
- **Max context length**: 128000
- **Context interval**: 4000
- **Depth interval**: 0.1
- **Num samples**: 2
- **Rnd number digits**: 7
- **Haystack dir**: PaulGrahamEssays
### Adapter Transfer
We apply the above techniques first to Llama-3-70B-Base, using LoRA on the Q and K weights only. This adapter is then applied to Llama-3-70B-Instruct, and we
release the merged version here. | {"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"} | abacusai/Llama-3-Giraffe-70B-Instruct | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-3",
"conversational",
"en",
"arxiv:2309.10400",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T17:34:02+00:00 | [
"2309.10400"
] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #arxiv-2309.10400 #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
!image/png
!image/png
# Llama-3-Giraffe-70B-Instruct
Abacus.AI presents our longer-necked variant of Llama 3 70B - now with the instruct variant!
This model has an effective context length of approximately 128k.
We have currently trained on ~1.5B tokens.
These are our Needle-in-a-Haystack heatmap results. We are conducting further evals of model efficacy and will update our model card as these come in:
!image/png
## Training Methodology
The methodology for training uses PoSE and dynamic-NTK interpolation.
### NTK-scaling
The scale factor for NTK is 4. Note that we also tried theta-scaling but this did not work as well as NTK scaling in our experiments.
### PoSE
We utilise Positional Skip-wise Training (PoSE) with the following parameters:
- Number of Chunks: 5
- Max position ID: 32768
### Data
We use on average ~8K long samples from RedPajama.
### Hardware
We train on 8xH100 GPUs with Deepspeed Zero Stage 3.
## Evaluation Methodology
We use the EasyContext implementation of Needle-in-a-Haystack to evaluate Llama-3-Giraffe-70B.
We evaluate with the following parameters:
- Min context length: 2000
- Max context length: 128000
- Context interval: 4000
- Depth interval: 0.1
- Num samples: 2
- Rnd number digits: 7
- Haystack dir: PaulGrahamEssays
### Adapter Transfer
We apply the above techniques first to Llama-3-70B-Base, using LoRA on the Q and K weights only. This adapter is then applied to Llama-3-70B-Instruct, and we
release the merged version here. | [
"# Llama-3-Giraffe-70B-Instruct\n\nAbacus.AI presents our longer-necked variant of Llama 3 70B - now with the instruct variant!\n\nThis model has an effective context length of approximately 128k.\n\nWe have currently trained on ~1.5B tokens.\n\nThere are our Needle-in-a-Haystack heatmap results. We are conducting further evals of model efficacy and will update our model card as these come in:\n\n!image/png",
"## Training Methodology\n\nThe methodology for training uses PoSE and dynamic-NTK interpolation.",
"### NTK-scaling\n\nThe scale factor for NTK is 4. Note that we also tried theta-scaling but this did not work as well as NTK scaling in our experiments.",
"### PoSE\n\nWe utilise Positional Skip-wise Training (PoSE) with the following parameters:\n\n- Number of Chunks: 5\n- Max position ID: 32768",
"### Data\n\nWe use on average ~8K long samples from RedPajama.",
"### Hardware\n\nWe train on 8xH100 GPUs with Deepspeed Zero Stage 3.",
"## Evaluation Methodology\n\nWe use the EasyContext implementation of Needle-in-a-Haystack to evaluate Llama-3-Giraffe-70B.\n\nWe evaluate with the following parameters:\n\n- Min context length: 2000\n- Max context length: 128000\n- Context interval: 4000\n- Depth interval: 0.1\n- Num samples: 2\n- Rnd number digits: 7\n- Haystack dir: PaulGrahamEssays",
"### Adapter Transfer\n\nWe apply the above techniques first to Llama-3-70B-Base, using LoRA on the Q and K weights only. This adapter is then applied to Llama-3-70B-Instruct, and we\nrelease the merged version here."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #arxiv-2309.10400 #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Llama-3-Giraffe-70B-Instruct\n\nAbacus.AI presents our longer-necked variant of Llama 3 70B - now with the instruct variant!\n\nThis model has an effective context length of approximately 128k.\n\nWe have currently trained on ~1.5B tokens.\n\nThere are our Needle-in-a-Haystack heatmap results. We are conducting further evals of model efficacy and will update our model card as these come in:\n\n!image/png",
"## Training Methodology\n\nThe methodology for training uses PoSE and dynamic-NTK interpolation.",
"### NTK-scaling\n\nThe scale factor for NTK is 4. Note that we also tried theta-scaling but this did not work as well as NTK scaling in our experiments.",
"### PoSE\n\nWe utilise Positional Skip-wise Training (PoSE) with the following parameters:\n\n- Number of Chunks: 5\n- Max position ID: 32768",
"### Data\n\nWe use on average ~8K long samples from RedPajama.",
"### Hardware\n\nWe train on 8xH100 GPUs with Deepspeed Zero Stage 3.",
"## Evaluation Methodology\n\nWe use the EasyContext implementation of Needle-in-a-Haystack to evaluate Llama-3-Giraffe-70B.\n\nWe evaluate with the following parameters:\n\n- Min context length: 2000\n- Max context length: 128000\n- Context interval: 4000\n- Depth interval: 0.1\n- Num samples: 2\n- Rnd number digits: 7\n- Haystack dir: PaulGrahamEssays",
"### Adapter Transfer\n\nWe apply the above techniques first to Llama-3-70B-Base, using LoRA on the Q and K weights only. This adapter is then applied to Llama-3-70B-Instruct, and we\nrelease the merged version here."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6021
- F1 Score: 0.6656
- Accuracy: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6627 | 1.04 | 200 | 0.6377 | 0.5711 | 0.6331 |
| 0.6293 | 2.08 | 400 | 0.6279 | 0.6475 | 0.6471 |
| 0.6201 | 3.12 | 600 | 0.6184 | 0.6407 | 0.6641 |
| 0.6192 | 4.17 | 800 | 0.6200 | 0.6465 | 0.6536 |
| 0.6176 | 5.21 | 1000 | 0.6203 | 0.6515 | 0.6540 |
| 0.6144 | 6.25 | 1200 | 0.6163 | 0.6488 | 0.6543 |
| 0.609 | 7.29 | 1400 | 0.6247 | 0.6541 | 0.6533 |
| 0.6101 | 8.33 | 1600 | 0.6215 | 0.6542 | 0.6543 |
| 0.6095 | 9.38 | 1800 | 0.6298 | 0.6494 | 0.6471 |
| 0.6088 | 10.42 | 2000 | 0.6181 | 0.6592 | 0.6598 |
| 0.6097 | 11.46 | 2200 | 0.6094 | 0.6533 | 0.6628 |
| 0.6028 | 12.5 | 2400 | 0.6121 | 0.6578 | 0.6621 |
| 0.6016 | 13.54 | 2600 | 0.6112 | 0.6534 | 0.6611 |
| 0.6037 | 14.58 | 2800 | 0.6100 | 0.6536 | 0.6605 |
| 0.6058 | 15.62 | 3000 | 0.6102 | 0.6558 | 0.6621 |
| 0.6011 | 16.67 | 3200 | 0.6148 | 0.6607 | 0.6621 |
| 0.6004 | 17.71 | 3400 | 0.6086 | 0.6574 | 0.6644 |
| 0.6031 | 18.75 | 3600 | 0.6099 | 0.6617 | 0.6660 |
| 0.6016 | 19.79 | 3800 | 0.6130 | 0.6658 | 0.6680 |
| 0.5948 | 20.83 | 4000 | 0.6156 | 0.6632 | 0.6637 |
| 0.6 | 21.88 | 4200 | 0.6166 | 0.6623 | 0.6631 |
| 0.5969 | 22.92 | 4400 | 0.6148 | 0.6644 | 0.6657 |
| 0.5979 | 23.96 | 4600 | 0.6176 | 0.6650 | 0.6650 |
| 0.5961 | 25.0 | 4800 | 0.6084 | 0.6649 | 0.6699 |
| 0.594 | 26.04 | 5000 | 0.6150 | 0.6680 | 0.6689 |
| 0.5947 | 27.08 | 5200 | 0.6137 | 0.6665 | 0.6676 |
| 0.5937 | 28.12 | 5400 | 0.6101 | 0.6647 | 0.6676 |
| 0.5947 | 29.17 | 5600 | 0.6156 | 0.6682 | 0.6683 |
| 0.5904 | 30.21 | 5800 | 0.6164 | 0.6698 | 0.6699 |
| 0.5929 | 31.25 | 6000 | 0.6136 | 0.6693 | 0.6699 |
| 0.5924 | 32.29 | 6200 | 0.6135 | 0.6682 | 0.6689 |
| 0.5925 | 33.33 | 6400 | 0.6170 | 0.6693 | 0.6693 |
| 0.5933 | 34.38 | 6600 | 0.6090 | 0.6683 | 0.6719 |
| 0.5905 | 35.42 | 6800 | 0.6095 | 0.6691 | 0.6722 |
| 0.5904 | 36.46 | 7000 | 0.6083 | 0.6705 | 0.6742 |
| 0.5866 | 37.5 | 7200 | 0.6134 | 0.6711 | 0.6719 |
| 0.5887 | 38.54 | 7400 | 0.6110 | 0.6729 | 0.6748 |
| 0.5927 | 39.58 | 7600 | 0.6105 | 0.6705 | 0.6725 |
| 0.5898 | 40.62 | 7800 | 0.6198 | 0.6666 | 0.6654 |
| 0.5882 | 41.67 | 8000 | 0.6124 | 0.6703 | 0.6709 |
| 0.5878 | 42.71 | 8200 | 0.6088 | 0.6686 | 0.6729 |
| 0.5902 | 43.75 | 8400 | 0.6109 | 0.6714 | 0.6729 |
| 0.5885 | 44.79 | 8600 | 0.6156 | 0.6702 | 0.6699 |
| 0.5862 | 45.83 | 8800 | 0.6122 | 0.6709 | 0.6722 |
| 0.5905 | 46.88 | 9000 | 0.6144 | 0.6695 | 0.6696 |
| 0.5869 | 47.92 | 9200 | 0.6138 | 0.6689 | 0.6693 |
| 0.5888 | 48.96 | 9400 | 0.6124 | 0.6695 | 0.6706 |
| 0.5884 | 50.0 | 9600 | 0.6128 | 0.6682 | 0.6689 |
| 0.5867 | 51.04 | 9800 | 0.6131 | 0.6699 | 0.6706 |
| 0.5862 | 52.08 | 10000 | 0.6128 | 0.6678 | 0.6686 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:34:02+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K4me2-seqsight\_4096\_512\_15M-L1\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6021
* F1 Score: 0.6656
* Accuracy: 0.6667
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_copa_croslo
This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6931
- Accuracy: 0.51
- F1: 0.4857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7257 | 1.0 | 50 | 0.6931 | 0.49 | 0.4838 |
| 0.7001 | 2.0 | 100 | 0.6931 | 0.48 | 0.48 |
| 0.7196 | 3.0 | 150 | 0.6931 | 0.52 | 0.4603 |
| 0.6895 | 4.0 | 200 | 0.6931 | 0.5 | 0.4926 |
| 0.745 | 5.0 | 250 | 0.6931 | 0.46 | 0.4244 |
| 0.7102 | 6.0 | 300 | 0.6931 | 0.5 | 0.4861 |
| 0.7245 | 7.0 | 350 | 0.6931 | 0.55 | 0.5391 |
| 0.7283 | 8.0 | 400 | 0.6931 | 0.51 | 0.4857 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
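A minimal sketch of scoring a COPA-style pair of alternatives with the multiple-choice head — the model id is taken from this record's metadata, while the premise and choices are made-up examples:

```python
# Illustrative sketch: ranking two COPA alternatives with a multiple-choice head.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "lenatr99/fine_tuned_copa_croslo"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

premise = "The man turned on the faucet."
choices = ["Water flowed from the spout.", "The toilet filled with water."]

enc = tok([premise] * len(choices), choices, return_tensors="pt", padding=True)
enc = {k: v.unsqueeze(0) for k, v in enc.items()}   # -> (batch=1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**enc).logits                    # -> (1, num_choices)
print("predicted choice:", int(logits.argmax(dim=-1)))
```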
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "EMBEDDIA/crosloengual-bert", "model-index": [{"name": "fine_tuned_copa_croslo", "results": []}]} | lenatr99/fine_tuned_copa_croslo | null | [
"transformers",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:EMBEDDIA/crosloengual-bert",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:35:12+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #multiple-choice #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #endpoints_compatible #region-us
| fine\_tuned\_copa\_croslo
=========================
This model is a fine-tuned version of EMBEDDIA/crosloengual-bert on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6931
* Accuracy: 0.51
* F1: 0.4857
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #bert #multiple-choice #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | null | <!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Steelskull/L3-Arcania-4x8b
| {} | mradermacher/L3-Arcania-4x8b-GGUF | null | [
"gguf",
"region:us"
] | null | 2024-05-03T17:36:03+00:00 | [] | [] | TAGS
#gguf #region-us
|
static quants of URL
| [] | [
"TAGS\n#gguf #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# loha_fine_tuned_croslo
This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6890
- Accuracy: 0.52
- F1: 0.5212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7072 | 1.0 | 50 | 0.6896 | 0.52 | 0.5212 |
| 0.6973 | 2.0 | 100 | 0.6894 | 0.53 | 0.5312 |
| 0.6988 | 3.0 | 150 | 0.6892 | 0.54 | 0.5411 |
| 0.7016 | 4.0 | 200 | 0.6891 | 0.53 | 0.5312 |
| 0.7034 | 5.0 | 250 | 0.6890 | 0.52 | 0.5212 |
| 0.6978 | 6.0 | 300 | 0.6890 | 0.51 | 0.5112 |
| 0.6965 | 7.0 | 350 | 0.6890 | 0.51 | 0.5112 |
| 0.6907 | 8.0 | 400 | 0.6890 | 0.52 | 0.5212 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "cc-by-4.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "EMBEDDIA/crosloengual-bert", "model-index": [{"name": "loha_fine_tuned_croslo", "results": []}]} | lenatr99/loha_fine_tuned_croslo | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:EMBEDDIA/crosloengual-bert",
"license:cc-by-4.0",
"region:us"
] | null | 2024-05-03T17:36:50+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us
| loha\_fine\_tuned\_croslo
=========================
This model is a fine-tuned version of EMBEDDIA/crosloengual-bert on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6890
* Accuracy: 0.52
* F1: 0.5212
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* PEFT 0.10.1.dev0
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
robotics | null |
# Introduction
This model is currently being tested; further details will be added in the future. For now, see [robot_learning_baselines](https://github.com/peterdavidfagan/robot_learning_baselines). | {"license": "apache-2.0", "datasets": ["peterdavidfagan/transporter_networks"], "pipeline_tag": "robotics"} | peterdavidfagan/transporter_networks | null | [
"tflite",
"robotics",
"dataset:peterdavidfagan/transporter_networks",
"license:apache-2.0",
"region:us"
] | null | 2024-05-03T17:38:29+00:00 | [] | [] | TAGS
#tflite #robotics #dataset-peterdavidfagan/transporter_networks #license-apache-2.0 #region-us
|
# Introduction
This model is currently being tested; further details will be added in the future. For now, see robot_learning_baselines. | [
"# Introduction\n\nThis model is currently being tested, further details to be added in future for now see robot_learning_baselines."
] | [
"TAGS\n#tflite #robotics #dataset-peterdavidfagan/transporter_networks #license-apache-2.0 #region-us \n",
"# Introduction\n\nThis model is currently being tested, further details to be added in future for now see robot_learning_baselines."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | sparvekar/critique_lora_model | null | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:38:51+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# PULI LlumiX 32K instruct (6.74 billion parameters)

Instruct-finetuned version of NYTK/PULI-LlumiX-32K.
## Training platform
[Lightning AI Studio](https://lightning.ai/studios) L4 GPU
## Hyper parameters
- Epoch: 3
- LoRA rank (r): 16
- LoRA alpha: 16
- Lr: 2e-4
- Lr scheduler: cosine
- Optimizer: adamw_8bit
- Weight decay: 0.01
## Dataset
boapps/szurkemarha
In total ~30k instructions were selected.
## Prompt template: ChatML
```
<|im_start|>system
Az alábbiakban egy feladatot leíró utasítás található. Írjál olyan választ, amely megfelelően teljesíti a kérést.<|im_end|>
<|im_start|>user
Ki a legerősebb szuperhős?<|im_end|>
<|im_start|>assistant
A legerősebb szuperhős a Marvel univerzumában Hulk.<|im_end|>
```
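A rough sketch of using this prompt with the adapter is shown below. It is not taken from the training code: the `trust_remote_code` flag, dtype and generation settings are assumptions to verify locally, and only the repository ids come from this card.

```python
# Rough sketch: attach the instruct LoRA adapter to the base model and run the
# ChatML prompt from the template above. Repo ids come from this card; the other
# settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NYTK/PULI-LlumiX-32K"                       # base model named in this card
adapter_id = "ariel-ml/PULI-LlumiX-32K-instruct-lora"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # assumption: the 32K variant may ship custom modelling code
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = (
    "<|im_start|>system\n"
    "Az alábbiakban egy feladatot leíró utasítás található. "
    "Írjál olyan választ, amely megfelelően teljesíti a kérést.<|im_end|>\n"
    "<|im_start|>user\n"
    "Ki a legerősebb szuperhős?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=False))
```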
## Base model
- Trained with OpenChatKit [github](https://github.com/togethercomputer/OpenChatKit)
- The [LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K) model was continually pretrained on a Hungarian dataset
- The model has been extended to a context length of 32K with position interpolation
- Checkpoint: 100 000 steps
## Dataset for continued pretraining
- Hungarian: 7.9 billion words, documents (763K) that exceed 5000 words in length
- English: Long Context QA (2 billion words), BookSum (78 million words)
## Limitations
- max_seq_length = 32 768
- float16
- vocab size: 32 000 | {"language": ["hu", "en"], "license": "llama2", "tags": ["puli", "text-generation-inference", "transformers", "unsloth", "llama", "trl", "finetuned"], "datasets": ["boapps/szurkemarha"], "base_model": "NYTK/PULI-LlumiX-32K", "pipeline_tag": "text-generation"} | ariel-ml/PULI-LlumiX-32K-instruct-lora | null | [
"transformers",
"safetensors",
"puli",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"finetuned",
"text-generation",
"conversational",
"hu",
"en",
"dataset:boapps/szurkemarha",
"base_model:NYTK/PULI-LlumiX-32K",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:39:54+00:00 | [] | [
"hu",
"en"
] | TAGS
#transformers #safetensors #puli #text-generation-inference #unsloth #llama #trl #finetuned #text-generation #conversational #hu #en #dataset-boapps/szurkemarha #base_model-NYTK/PULI-LlumiX-32K #license-llama2 #endpoints_compatible #region-us
|
# PULI LlumiX 32K instruct (6.74 billion parameters)

Instruct-finetuned version of NYTK/PULI-LlumiX-32K.
## Training platform
Lightning AI Studio L4 GPU
## Hyper parameters
- Epoch: 3
- LoRA rank (r): 16
- LoRA alpha: 16
- Lr: 2e-4
- Lr scheduler: cosine
- Optimizer: adamw_8bit
- Weight decay: 0.01
## Dataset
boapps/szurkemarha
In total ~30k instructions were selected.
## Prompt template: ChatML
## Base model
- Trained with OpenChatKit github
- The LLaMA-2-7B-32K model was continually pretrained on a Hungarian dataset
- The model has been extended to a context length of 32K with position interpolation
- Checkpoint: 100 000 steps
## Dataset for continued pretraining
- Hungarian: 7.9 billion words, documents (763K) that exceed 5000 words in length
- English: Long Context QA (2 billion words), BookSum (78 million words)
## Limitations
- max_seq_length = 32 768
- float16
- vocab size: 32 000 | [
"# PULI LlumiX 32K instruct (6.74B billion parameter)\n\nIntruct finetuned version of NYTK/PULI-LlumiX-32K.",
"## Training platform\nLightning AI Studio L4 GPU",
"## Hyper parameters\n\n- Epoch: 3\n- LoRA rank (r): 16\n- LoRA alpha: 16\n- Lr: 2e-4\n- Lr scheduler: cosine\n- Optimizer: adamw_8bit\n- Weight decay: 0.01",
"## Dataset\n\nboapps/szurkemarha\n\nIn total ~30k instructions were selected.",
"## Prompt template: ChatML",
"## Base model\n\n- Trained with OpenChatKit github\n- The LLaMA-2-7B-32K model were continuously pretrained on Hungarian dataset\n- The model has been extended to a context length of 32K with position interpolation\n- Checkpoint: 100 000 steps",
"## Dataset for continued pretraining\n\n- Hungarian: 7.9 billion words, documents (763K) that exceed 5000 words in length\n- English: Long Context QA (2 billion words), BookSum (78 million words)",
"## Limitations\n\n- max_seq_length = 32 768\n- float16\n- vocab size: 32 000"
] | [
"TAGS\n#transformers #safetensors #puli #text-generation-inference #unsloth #llama #trl #finetuned #text-generation #conversational #hu #en #dataset-boapps/szurkemarha #base_model-NYTK/PULI-LlumiX-32K #license-llama2 #endpoints_compatible #region-us \n",
"# PULI LlumiX 32K instruct (6.74B billion parameter)\n\nIntruct finetuned version of NYTK/PULI-LlumiX-32K.",
"## Training platform\nLightning AI Studio L4 GPU",
"## Hyper parameters\n\n- Epoch: 3\n- LoRA rank (r): 16\n- LoRA alpha: 16\n- Lr: 2e-4\n- Lr scheduler: cosine\n- Optimizer: adamw_8bit\n- Weight decay: 0.01",
"## Dataset\n\nboapps/szurkemarha\n\nIn total ~30k instructions were selected.",
"## Prompt template: ChatML",
"## Base model\n\n- Trained with OpenChatKit github\n- The LLaMA-2-7B-32K model were continuously pretrained on Hungarian dataset\n- The model has been extended to a context length of 32K with position interpolation\n- Checkpoint: 100 000 steps",
"## Dataset for continued pretraining\n\n- Hungarian: 7.9 billion words, documents (763K) that exceed 5000 words in length\n- English: Long Context QA (2 billion words), BookSum (78 million words)",
"## Limitations\n\n- max_seq_length = 32 768\n- float16\n- vocab size: 32 000"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/g2rr5al | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T17:40:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# PULI LlumiX 32K instruct (6.74 billion parameters)

Instruct-finetuned version of NYTK/PULI-LlumiX-32K.
## Training platform
[Lightning AI Studio](https://lightning.ai/studios) L4 GPU
## Hyper parameters
- Epoch: 3
- LoRA rank (r): 16
- LoRA alpha: 16
- Lr: 2e-4
- Lr scheduler: cosine
- Optimizer: adamw_8bit
- Weight decay: 0.01
## Dataset
boapps/szurkemarha
In total ~30k instructions were selected.
## Prompt template: ChatML
```
<|im_start|>system
Az alábbiakban egy feladatot leíró utasítás található. Írjál olyan választ, amely megfelelően teljesíti a kérést.<|im_end|>
<|im_start|>user
Ki a legerősebb szuperhős?<|im_end|>
<|im_start|>assistant
A legerősebb szuperhős a Marvel univerzumában Hulk.<|im_end|>
```
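Because this repository contains the merged float16 weights, it can in principle be loaded directly with Transformers. The snippet below is an illustrative sketch rather than official usage code; the `trust_remote_code` flag and generation settings are assumptions to check against your setup.

```python
# Illustrative sketch: load the merged float16 checkpoint and generate from an
# abbreviated ChatML prompt (the full template above also includes a system turn).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ariel-ml/PULI-LlumiX-32K-instruct-f16"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # assumption based on the custom_code tag
)

prompt = "<|im_start|>user\nKi a legerősebb szuperhős?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=False))
```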
## Base model
- Trained with OpenChatKit [github](https://github.com/togethercomputer/OpenChatKit)
- The [LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K) model was continually pretrained on a Hungarian dataset
- The model has been extended to a context length of 32K with position interpolation
- Checkpoint: 100 000 steps
## Dataset for continued pretraining
- Hungarian: 7.9 billion words, documents (763K) that exceed 5000 words in length
- English: Long Context QA (2 billion words), BookSum (78 million words)
## Limitations
- max_seq_length = 32 768
- float16
- vocab size: 32 000 | {"language": ["hu", "en"], "license": "llama2", "tags": ["puli", "text-generation-inference", "transformers", "unsloth", "llama", "trl", "finetuned"], "datasets": ["boapps/szurkemarha"], "base_model": "NYTK/PULI-LlumiX-32K", "pipeline_tag": "text-generation"} | ariel-ml/PULI-LlumiX-32K-instruct-f16 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"puli",
"text-generation-inference",
"unsloth",
"trl",
"finetuned",
"conversational",
"custom_code",
"hu",
"en",
"dataset:boapps/szurkemarha",
"base_model:NYTK/PULI-LlumiX-32K",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:40:22+00:00 | [] | [
"hu",
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #puli #text-generation-inference #unsloth #trl #finetuned #conversational #custom_code #hu #en #dataset-boapps/szurkemarha #base_model-NYTK/PULI-LlumiX-32K #license-llama2 #autotrain_compatible #endpoints_compatible #region-us
|
# PULI LlumiX 32K instruct (6.74 billion parameters)

Instruct-finetuned version of NYTK/PULI-LlumiX-32K.
## Training platform
Lightning AI Studio L4 GPU
## Hyper parameters
- Epoch: 3
- LoRA rank (r): 16
- LoRA alpha: 16
- Lr: 2e-4
- Lr scheduler: cosine
- Optimizer: adamw_8bit
- Weight decay: 0.01
## Dataset
boapps/szurkemarha
In total ~30k instructions were selected.
## Prompt template: ChatML
## Base model
- Trained with OpenChatKit github
- The LLaMA-2-7B-32K model was continually pretrained on a Hungarian dataset
- The model has been extended to a context length of 32K with position interpolation
- Checkpoint: 100 000 steps
## Dataset for continued pretraining
- Hungarian: 7.9 billion words, documents (763K) that exceed 5000 words in length
- English: Long Context QA (2 billion words), BookSum (78 million words)
## Limitations
- max_seq_length = 32 768
- float16
- vocab size: 32 000 | [
"# PULI LlumiX 32K instruct (6.74B billion parameter)\n\nIntruct finetuned version of NYTK/PULI-LlumiX-32K.",
"## Training platform\nLightning AI Studio L4 GPU",
"## Hyper parameters\n\n- Epoch: 3\n- LoRA rank (r): 16\n- LoRA alpha: 16\n- Lr: 2e-4\n- Lr scheduler: cosine\n- Optimizer: adamw_8bit\n- Weight decay: 0.01",
"## Dataset\n\nboapps/szurkemarha\n\nIn total ~30k instructions were selected.",
"## Prompt template: ChatML",
"## Base model\n\n- Trained with OpenChatKit github\n- The LLaMA-2-7B-32K model were continuously pretrained on Hungarian dataset\n- The model has been extended to a context length of 32K with position interpolation\n- Checkpoint: 100 000 steps",
"## Dataset for continued pretraining\n\n- Hungarian: 7.9 billion words, documents (763K) that exceed 5000 words in length\n- English: Long Context QA (2 billion words), BookSum (78 million words)",
"## Limitations\n\n- max_seq_length = 32 768\n- float16\n- vocab size: 32 000"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #puli #text-generation-inference #unsloth #trl #finetuned #conversational #custom_code #hu #en #dataset-boapps/szurkemarha #base_model-NYTK/PULI-LlumiX-32K #license-llama2 #autotrain_compatible #endpoints_compatible #region-us \n",
"# PULI LlumiX 32K instruct (6.74B billion parameter)\n\nIntruct finetuned version of NYTK/PULI-LlumiX-32K.",
"## Training platform\nLightning AI Studio L4 GPU",
"## Hyper parameters\n\n- Epoch: 3\n- LoRA rank (r): 16\n- LoRA alpha: 16\n- Lr: 2e-4\n- Lr scheduler: cosine\n- Optimizer: adamw_8bit\n- Weight decay: 0.01",
"## Dataset\n\nboapps/szurkemarha\n\nIn total ~30k instructions were selected.",
"## Prompt template: ChatML",
"## Base model\n\n- Trained with OpenChatKit github\n- The LLaMA-2-7B-32K model were continuously pretrained on Hungarian dataset\n- The model has been extended to a context length of 32K with position interpolation\n- Checkpoint: 100 000 steps",
"## Dataset for continued pretraining\n\n- Hungarian: 7.9 billion words, documents (763K) that exceed 5000 words in length\n- English: Long Context QA (2 billion words), BookSum (78 million words)",
"## Limitations\n\n- max_seq_length = 32 768\n- float16\n- vocab size: 32 000"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [Jayant9928/orpo_med_v3](https://huggingface.co/Jayant9928/orpo_med_v3) as a base.
### Models Merged
The following models were included in the merge:
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Jayant9928/orpo_med_v3
parameters:
density: 0.53
weight: 0.4
- model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: Jayant9928/orpo_med_v3
tokenizer_source: union
parameters:
int8_mask: true
dtype: float16
```
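Configurations like the one above are typically consumed by mergekit's `mergekit-yaml` command-line entry point. The call below is only a hypothetical sketch: the config filename and output directory are placeholders, and the available flags depend on the installed mergekit version.

```python
# Hypothetical driver for the merge above; assumes mergekit is installed and the
# YAML shown in this card has been saved verbatim as dare_ties.yml.
import subprocess

subprocess.run(
    ["mergekit-yaml", "dare_ties.yml", "./merged-model"],  # paths are placeholders
    check=True,
)
```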
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Jayant9928/orpo_med_v3", "meta-llama/Meta-Llama-3-8B-Instruct"]} | Muhammad2003/Dmitry69 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Jayant9928/orpo_med_v3",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T17:40:29+00:00 | [
"2311.03099",
"2306.01708"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #arxiv-2311.03099 #arxiv-2306.01708 #base_model-Jayant9928/orpo_med_v3 #base_model-meta-llama/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the DARE TIES merge method using Jayant9928/orpo_med_v3 as a base.
### Models Merged
The following models were included in the merge:
* meta-llama/Meta-Llama-3-8B-Instruct
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the DARE TIES merge method using Jayant9928/orpo_med_v3 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B-Instruct",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #arxiv-2311.03099 #arxiv-2306.01708 #base_model-Jayant9928/orpo_med_v3 #base_model-meta-llama/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the DARE TIES merge method using Jayant9928/orpo_med_v3 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B-Instruct",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K14ac-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5137
- F1 Score: 0.7465
- Accuracy: 0.7452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
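For reference, these settings correspond roughly to the Hugging Face `TrainingArguments` sketched below. This is an illustrative reconstruction rather than the actual training script, and the output directory is a placeholder.

```python
# Illustrative reconstruction of the hyperparameters above as TrainingArguments;
# not the original training code. The output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K14ac-seqsight_4096_512_15M-L32_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```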
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5962 | 0.97 | 200 | 0.5550 | 0.7252 | 0.7241 |
| 0.5522 | 1.93 | 400 | 0.5384 | 0.7392 | 0.7377 |
| 0.5347 | 2.9 | 600 | 0.5523 | 0.7224 | 0.7213 |
| 0.5295 | 3.86 | 800 | 0.5225 | 0.7510 | 0.7495 |
| 0.5188 | 4.83 | 1000 | 0.5540 | 0.7280 | 0.7271 |
| 0.5171 | 5.8 | 1200 | 0.5321 | 0.7383 | 0.7368 |
| 0.5098 | 6.76 | 1400 | 0.5174 | 0.7512 | 0.7495 |
| 0.5041 | 7.73 | 1600 | 0.5242 | 0.7454 | 0.7437 |
| 0.499 | 8.7 | 1800 | 0.5289 | 0.7457 | 0.7443 |
| 0.4945 | 9.66 | 2000 | 0.5280 | 0.7488 | 0.7474 |
| 0.4928 | 10.63 | 2200 | 0.5247 | 0.7495 | 0.7480 |
| 0.4843 | 11.59 | 2400 | 0.5053 | 0.7654 | 0.7640 |
| 0.4847 | 12.56 | 2600 | 0.5261 | 0.7461 | 0.7446 |
| 0.4807 | 13.53 | 2800 | 0.5256 | 0.7504 | 0.7489 |
| 0.478 | 14.49 | 3000 | 0.5253 | 0.7434 | 0.7419 |
| 0.4683 | 15.46 | 3200 | 0.5126 | 0.7638 | 0.7622 |
| 0.4695 | 16.43 | 3400 | 0.5248 | 0.7485 | 0.7470 |
| 0.4665 | 17.39 | 3600 | 0.5196 | 0.7578 | 0.7561 |
| 0.4595 | 18.36 | 3800 | 0.5050 | 0.7626 | 0.7619 |
| 0.4571 | 19.32 | 4000 | 0.5115 | 0.7579 | 0.7564 |
| 0.4522 | 20.29 | 4200 | 0.5346 | 0.7557 | 0.7540 |
| 0.4557 | 21.26 | 4400 | 0.5250 | 0.7566 | 0.7549 |
| 0.449 | 22.22 | 4600 | 0.5417 | 0.7443 | 0.7431 |
| 0.4484 | 23.19 | 4800 | 0.5210 | 0.7545 | 0.7528 |
| 0.4437 | 24.15 | 5000 | 0.5327 | 0.7544 | 0.7528 |
| 0.4398 | 25.12 | 5200 | 0.5487 | 0.7435 | 0.7425 |
| 0.4388 | 26.09 | 5400 | 0.5419 | 0.7453 | 0.7440 |
| 0.4372 | 27.05 | 5600 | 0.5656 | 0.7427 | 0.7416 |
| 0.4307 | 28.02 | 5800 | 0.5400 | 0.7533 | 0.7516 |
| 0.429 | 28.99 | 6000 | 0.5285 | 0.7539 | 0.7522 |
| 0.4243 | 29.95 | 6200 | 0.5554 | 0.7452 | 0.7437 |
| 0.4249 | 30.92 | 6400 | 0.5254 | 0.7546 | 0.7534 |
| 0.426 | 31.88 | 6600 | 0.5293 | 0.7494 | 0.7477 |
| 0.4144 | 32.85 | 6800 | 0.5486 | 0.7502 | 0.7486 |
| 0.4206 | 33.82 | 7000 | 0.5444 | 0.7498 | 0.7483 |
| 0.4113 | 34.78 | 7200 | 0.5544 | 0.7529 | 0.7513 |
| 0.4185 | 35.75 | 7400 | 0.5436 | 0.7481 | 0.7464 |
| 0.4096 | 36.71 | 7600 | 0.5489 | 0.7499 | 0.7483 |
| 0.4124 | 37.68 | 7800 | 0.5416 | 0.7554 | 0.7537 |
| 0.4109 | 38.65 | 8000 | 0.5439 | 0.7488 | 0.7470 |
| 0.4081 | 39.61 | 8200 | 0.5420 | 0.7506 | 0.7489 |
| 0.4018 | 40.58 | 8400 | 0.5606 | 0.7492 | 0.7477 |
| 0.4028 | 41.55 | 8600 | 0.5520 | 0.7524 | 0.7507 |
| 0.4059 | 42.51 | 8800 | 0.5511 | 0.7539 | 0.7522 |
| 0.4061 | 43.48 | 9000 | 0.5581 | 0.7514 | 0.7498 |
| 0.4036 | 44.44 | 9200 | 0.5532 | 0.7521 | 0.7504 |
| 0.408 | 45.41 | 9400 | 0.5504 | 0.7551 | 0.7534 |
| 0.3953 | 46.38 | 9600 | 0.5564 | 0.7517 | 0.7501 |
| 0.4054 | 47.34 | 9800 | 0.5496 | 0.7512 | 0.7495 |
| 0.3949 | 48.31 | 10000 | 0.5515 | 0.7527 | 0.7510 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:44:41+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K14ac-seqsight\_4096\_512\_15M-L32\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K14ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5137
* F1 Score: 0.7465
* Accuracy: 0.7452
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | BotCuddles/gemma-2b-it-ft-mental | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T17:44:46+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5973
- F1 Score: 0.6642
- Accuracy: 0.6693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6513 | 1.04 | 200 | 0.6213 | 0.6317 | 0.6549 |
| 0.6199 | 2.08 | 400 | 0.6293 | 0.6443 | 0.6426 |
| 0.6123 | 3.12 | 600 | 0.6108 | 0.6493 | 0.6680 |
| 0.6106 | 4.17 | 800 | 0.6176 | 0.6585 | 0.6588 |
| 0.6064 | 5.21 | 1000 | 0.6064 | 0.6673 | 0.6758 |
| 0.6033 | 6.25 | 1200 | 0.6056 | 0.6679 | 0.6738 |
| 0.5957 | 7.29 | 1400 | 0.6180 | 0.6640 | 0.6624 |
| 0.5959 | 8.33 | 1600 | 0.6186 | 0.6685 | 0.6667 |
| 0.5932 | 9.38 | 1800 | 0.6305 | 0.6574 | 0.6549 |
| 0.5919 | 10.42 | 2000 | 0.6082 | 0.6766 | 0.6774 |
| 0.593 | 11.46 | 2200 | 0.6020 | 0.6772 | 0.6826 |
| 0.5848 | 12.5 | 2400 | 0.6121 | 0.6768 | 0.6758 |
| 0.5841 | 13.54 | 2600 | 0.6098 | 0.6725 | 0.6729 |
| 0.5834 | 14.58 | 2800 | 0.6081 | 0.6715 | 0.6719 |
| 0.5864 | 15.62 | 3000 | 0.6126 | 0.6760 | 0.6751 |
| 0.5818 | 16.67 | 3200 | 0.6155 | 0.6718 | 0.6699 |
| 0.5802 | 17.71 | 3400 | 0.6068 | 0.6744 | 0.6751 |
| 0.5828 | 18.75 | 3600 | 0.6077 | 0.6713 | 0.6719 |
| 0.5803 | 19.79 | 3800 | 0.6130 | 0.6742 | 0.6735 |
| 0.5743 | 20.83 | 4000 | 0.6197 | 0.6699 | 0.6680 |
| 0.5769 | 21.88 | 4200 | 0.6318 | 0.6626 | 0.6601 |
| 0.5746 | 22.92 | 4400 | 0.6185 | 0.6679 | 0.6663 |
| 0.5741 | 23.96 | 4600 | 0.6256 | 0.6661 | 0.6637 |
| 0.5728 | 25.0 | 4800 | 0.6091 | 0.6691 | 0.6693 |
| 0.5694 | 26.04 | 5000 | 0.6206 | 0.6678 | 0.6660 |
| 0.5706 | 27.08 | 5200 | 0.6181 | 0.6659 | 0.6644 |
| 0.5682 | 28.12 | 5400 | 0.6203 | 0.6699 | 0.6680 |
| 0.5684 | 29.17 | 5600 | 0.6188 | 0.6727 | 0.6716 |
| 0.5626 | 30.21 | 5800 | 0.6244 | 0.6680 | 0.6663 |
| 0.5659 | 31.25 | 6000 | 0.6298 | 0.6645 | 0.6621 |
| 0.5652 | 32.29 | 6200 | 0.6119 | 0.6672 | 0.6667 |
| 0.565 | 33.33 | 6400 | 0.6228 | 0.6646 | 0.6628 |
| 0.5636 | 34.38 | 6600 | 0.6187 | 0.6672 | 0.6663 |
| 0.5624 | 35.42 | 6800 | 0.6183 | 0.6671 | 0.6660 |
| 0.5631 | 36.46 | 7000 | 0.6131 | 0.6729 | 0.6729 |
| 0.5575 | 37.5 | 7200 | 0.6277 | 0.6620 | 0.6601 |
| 0.5588 | 38.54 | 7400 | 0.6218 | 0.6689 | 0.6680 |
| 0.5624 | 39.58 | 7600 | 0.6139 | 0.6722 | 0.6722 |
| 0.56 | 40.62 | 7800 | 0.6328 | 0.6586 | 0.6562 |
| 0.5583 | 41.67 | 8000 | 0.6191 | 0.6650 | 0.6634 |
| 0.5563 | 42.71 | 8200 | 0.6189 | 0.6708 | 0.6706 |
| 0.5599 | 43.75 | 8400 | 0.6180 | 0.6674 | 0.6663 |
| 0.5572 | 44.79 | 8600 | 0.6239 | 0.6643 | 0.6624 |
| 0.5543 | 45.83 | 8800 | 0.6204 | 0.6676 | 0.6670 |
| 0.5576 | 46.88 | 9000 | 0.6294 | 0.6597 | 0.6575 |
| 0.5544 | 47.92 | 9200 | 0.6281 | 0.6599 | 0.6579 |
| 0.555 | 48.96 | 9400 | 0.6271 | 0.6637 | 0.6621 |
| 0.555 | 50.0 | 9600 | 0.6273 | 0.6652 | 0.6634 |
| 0.5544 | 51.04 | 9800 | 0.6263 | 0.6641 | 0.6624 |
| 0.5523 | 52.08 | 10000 | 0.6264 | 0.6642 | 0.6624 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:44:48+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K4me2-seqsight\_4096\_512\_15M-L8\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5973
* F1 Score: 0.6642
* Accuracy: 0.6693
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/ox9od86 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T17:44:59+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-fsi-masked-loss
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
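The hyperparameters above map fairly directly onto `transformers.TrainingArguments`. A minimal, hypothetical reconstruction follows; the model, dataset and TRL `SFTTrainer` wiring are omitted because the card does not document them, and the output directory name is a placeholder:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="sft-fsi-masked-loss",   # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,      # 8 GPUs x 4 per device x 8 steps = 256 effective batch
    num_train_epochs=20,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```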
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 17.1473 | 0.5333 | 1 | 18.7762 |
| 17.1473 | 1.6 | 3 | 9.0847 |
| 11.7791 | 2.6667 | 5 | 3.0550 |
| 11.7791 | 3.7333 | 7 | 0.6706 |
| 11.7791 | 4.8 | 9 | 0.6697 |
| 1.4045 | 5.8667 | 11 | 0.5653 |
| 1.4045 | 6.9333 | 13 | 0.4982 |
| 0.6622 | 8.0 | 15 | 0.4756 |
| 0.6622 | 8.5333 | 16 | 0.4777 |
| 0.6622 | 9.6 | 18 | 0.4338 |
| 0.5586 | 10.6667 | 20 | 0.3788 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["trl", "sft", "generated_from_trainer"], "model-index": [{"name": "sft-fsi-masked-loss", "results": []}]} | jamesoneill12/sft-fsi-masked-loss | null | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T17:45:22+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #llama #text-generation #trl #sft #generated_from_trainer #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| sft-fsi-masked-loss
===================
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3788
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 8
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 256
* total\_eval\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 256\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #llama #text-generation #trl #sft #generated_from_trainer #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 256\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
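No usage snippet is provided. Since the tags indicate an M2M100 checkpoint, a minimal translation sketch might look like the following; the repository id comes from this record's metadata, and the source/target languages (Arabic to English) are purely illustrative assumptions, as the card does not state them:

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

repo_id = "Ragab167/m2m_translation_v2"  # repository id from this record's metadata

tokenizer = M2M100Tokenizer.from_pretrained(repo_id)
model = M2M100ForConditionalGeneration.from_pretrained(repo_id)

tokenizer.src_lang = "ar"  # assumed source language
encoded = tokenizer("مرحبا بالعالم", return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.get_lang_id("en"),  # assumed target language
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```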
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ragab167/m2m_translation_v2 | null | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:45:31+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #m2m_100 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #m2m_100 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
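No usage snippet is provided. Given the `llama` and `conversational` tags, a minimal chat-style sketch with the `text-generation` pipeline could look like this; the repository id is taken from this record's metadata, and it assumes the tokenizer ships a chat template and that a recent `transformers` release with chat-pipeline support is installed:

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="cilantro9246/vlrcskl",  # repository id from this record's metadata
    device_map="auto",             # requires `accelerate`
    torch_dtype="auto",
)

messages = [{"role": "user", "content": "Hello! Who are you?"}]
out = pipe(messages, max_new_tokens=64)
print(out[0]["generated_text"])    # full conversation, including the new assistant turn
```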
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | cilantro9246/vlrcskl | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T17:45:32+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<br/><br/>
3bpw/h6 exl2 quantization of [NeverSleep/Llama-3-Lumimaid-70B-v0.1](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1) using the default exllamav2 calibration dataset.
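A rough loading sketch with the `exllamav2` Python API is shown below. It is based on the upstream exllamav2 example scripts rather than anything documented in this repo, so treat the class names and the local path as assumptions and check them against the exllamav2 version you actually have installed:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = "/models/Llama-3-Lumimaid-70B-v0.1-3bpw-h6-exl2"  # local download path (placeholder)

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # lazy cache so the weights can be auto-split across GPUs
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

# arguments: prompt, sampler settings, max new tokens
print(generator.generate_simple("Hello, my name is", settings, 64))
```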
---
**ORIGINAL CARD:**
## Lumimaid 0.1
<center><div style="width: 100%;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/d3QMaxy3peFTpSlWdWF-k.png" style="display: block; margin: auto;">
</div></center>
This model uses the Llama3 **prompting format**
Llama3 trained on our RP datasets; we tried to strike a balance between ERP and RP, not too horny, but just enough.
We also added some non-RP data, making the model less dumb overall. It should work out to roughly a 40%/60% ratio of non-RP to RP+ERP data.
This model includes the new Luminae dataset from Ikari.
If you consider trying this model, please give us some feedback, either on the Community tab on hf or on our [Discord Server](https://discord.gg/MtCVRWTZXY).
## Credits:
- Undi
- IkariDev
## Description
This repo contains FP16 files of Lumimaid-70B-v0.1.
Switch: [8B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) - [70B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1) - [70B-alt](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt)
## Training data used:
- [Aesir datasets](https://huggingface.co/MinervaAI)
- [NoRobots](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt)
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP) - 8k ctx
- [toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt)
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
- Luminae-i1 (70B/70B-alt) (i2 did not exist when the 70B started training) | Luminae-i2 (8B) (this one gave better results on the 8B) - Ikari's Dataset
- [Squish42/bluemoon-fandom-1-1-rp-cleaned](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned) - 50% (randomly)
- [NobodyExistsOnTheInternet/PIPPAsharegptv2test](https://huggingface.co/datasets/NobodyExistsOnTheInternet/PIPPAsharegptv2test) - 5% (randomly)
- [cgato/SlimOrcaDedupCleaned](https://huggingface.co/datasets/cgato/SlimOrcaDedupCleaned) - 5% (randomly)
- Airoboros (reduced)
- [Capybara](https://huggingface.co/datasets/Undi95/Capybara-ShareGPT/) (reduced)
## Models used (only for 8B)
- Initial LumiMaid 8B Finetune
- Undi95/Llama-3-Unholy-8B-e4
- Undi95/Llama-3-LewdPlay-8B
## Prompt template: Llama3
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
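If the tokenizer for this model ships the Llama3 chat template (not verified here), the same prompt can be built programmatically instead of by hand; a small sketch:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NeverSleep/Llama-3-Lumimaid-70B-v0.1")

messages = [
    {"role": "system", "content": "You are a roleplay partner."},  # placeholder system prompt
    {"role": "user", "content": "Hi there!"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should match the header/eot structure shown above
```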
## Others
Undi: If you want to support us, you can do so [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek | {"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "nsfw"]} | JayhC/Llama-3-Lumimaid-70B-v0.1-3bpw-h6-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"nsfw",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"3-bit",
"region:us"
] | null | 2024-05-03T17:45:41+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #not-for-all-audiences #nsfw #conversational #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us
|
<br/><br/>
3bpw/h6 exl2 quantization of NeverSleep/Llama-3-Lumimaid-70B-v0.1 using default exllamav2 calibration dataset.
---
ORIGINAL CARD:
## Lumimaid 0.1
<center><div style="width: 100%;">
<img src="URL style="display: block; margin: auto;">
</div></center>
This model uses the Llama3 prompting format
Llama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.
We also added some non-RP dataset, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.
This model includes the new Luminae dataset from Ikari.
If you consider trying this model please give us some feedback either on the Community tab on hf or on our Discord Server.
## Credits:
- Undi
- IkariDev
## Description
This repo contains FP16 files of Lumimaid-70B-v0.1.
Switch: 8B - 70B - 70B-alt
## Training data used:
- Aesir datasets
- NoRobots
- limarp - 8k ctx
- toxic-dpo-v0.1-sharegpt
- ToxicQAFinal
- Luminae-i1 (70B/70B-alt) (i2 was not existing when the 70b started training) | Luminae-i2 (8B) (this one gave better results on the 8b) - Ikari's Dataset
- Squish42/bluemoon-fandom-1-1-rp-cleaned - 50% (randomly)
- NobodyExistsOnTheInternet/PIPPAsharegptv2test - 5% (randomly)
- cgato/SlimOrcaDedupCleaned - 5% (randomly)
- Airoboros (reduced)
- Capybara (reduced)
## Models used (only for 8B)
- Initial LumiMaid 8B Finetune
- Undi95/Llama-3-Unholy-8B-e4
- Undi95/Llama-3-LewdPlay-8B
## Prompt template: Llama3
## Others
Undi: If you want to support us, you can here.
IkariDev: Visit my retro/neocities style website please kek | [
"## Lumimaid 0.1\n\n<center><div style=\"width: 100%;\">\n <img src=\"URL style=\"display: block; margin: auto;\">\n</div></center>\n\nThis model uses the Llama3 prompting format\n\nLlama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.\n\nWe also added some non-RP dataset, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.\n\nThis model includes the new Luminae dataset from Ikari.\n\n\nIf you consider trying this model please give us some feedback either on the Community tab on hf or on our Discord Server.",
"## Credits:\n- Undi\n- IkariDev",
"## Description\n\nThis repo contains FP16 files of Lumimaid-70B-v0.1.\n\nSwitch: 8B - 70B - 70B-alt",
"## Training data used:\n- Aesir datasets\n- NoRobots\n- limarp - 8k ctx\n- toxic-dpo-v0.1-sharegpt\n- ToxicQAFinal\n- Luminae-i1 (70B/70B-alt) (i2 was not existing when the 70b started training) | Luminae-i2 (8B) (this one gave better results on the 8b) - Ikari's Dataset\n- Squish42/bluemoon-fandom-1-1-rp-cleaned - 50% (randomly)\n- NobodyExistsOnTheInternet/PIPPAsharegptv2test - 5% (randomly)\n- cgato/SlimOrcaDedupCleaned - 5% (randomly)\n- Airoboros (reduced)\n- Capybara (reduced)",
"## Models used (only for 8B)\n\n- Initial LumiMaid 8B Finetune\n- Undi95/Llama-3-Unholy-8B-e4\n- Undi95/Llama-3-LewdPlay-8B",
"## Prompt template: Llama3",
"## Others\n\nUndi: If you want to support us, you can here.\n\nIkariDev: Visit my retro/neocities style website please kek"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #nsfw #conversational #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us \n",
"## Lumimaid 0.1\n\n<center><div style=\"width: 100%;\">\n <img src=\"URL style=\"display: block; margin: auto;\">\n</div></center>\n\nThis model uses the Llama3 prompting format\n\nLlama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.\n\nWe also added some non-RP dataset, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.\n\nThis model includes the new Luminae dataset from Ikari.\n\n\nIf you consider trying this model please give us some feedback either on the Community tab on hf or on our Discord Server.",
"## Credits:\n- Undi\n- IkariDev",
"## Description\n\nThis repo contains FP16 files of Lumimaid-70B-v0.1.\n\nSwitch: 8B - 70B - 70B-alt",
"## Training data used:\n- Aesir datasets\n- NoRobots\n- limarp - 8k ctx\n- toxic-dpo-v0.1-sharegpt\n- ToxicQAFinal\n- Luminae-i1 (70B/70B-alt) (i2 was not existing when the 70b started training) | Luminae-i2 (8B) (this one gave better results on the 8b) - Ikari's Dataset\n- Squish42/bluemoon-fandom-1-1-rp-cleaned - 50% (randomly)\n- NobodyExistsOnTheInternet/PIPPAsharegptv2test - 5% (randomly)\n- cgato/SlimOrcaDedupCleaned - 5% (randomly)\n- Airoboros (reduced)\n- Capybara (reduced)",
"## Models used (only for 8B)\n\n- Initial LumiMaid 8B Finetune\n- Undi95/Llama-3-Unholy-8B-e4\n- Undi95/Llama-3-LewdPlay-8B",
"## Prompt template: Llama3",
"## Others\n\nUndi: If you want to support us, you can here.\n\nIkariDev: Visit my retro/neocities style website please kek"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6006
- F1 Score: 0.6727
- Accuracy: 0.6748
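The card does not show how to load the adapter. A hypothetical sketch with `peft` is below; the head class (`AutoModelForSequenceClassification`), `num_labels=2`, `trust_remote_code=True` and the raw-sequence input format are all assumptions, since neither this card nor the base-model id documents them:

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_4096_512_15M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_4096_512_15M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
print(model(**inputs).logits)
```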
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6452 | 1.04 | 200 | 0.6178 | 0.6311 | 0.6566 |
| 0.6153 | 2.08 | 400 | 0.6301 | 0.6434 | 0.6409 |
| 0.6065 | 3.12 | 600 | 0.6049 | 0.6694 | 0.6804 |
| 0.603 | 4.17 | 800 | 0.6251 | 0.6584 | 0.6559 |
| 0.5966 | 5.21 | 1000 | 0.6064 | 0.6686 | 0.6755 |
| 0.5941 | 6.25 | 1200 | 0.6047 | 0.6722 | 0.6745 |
| 0.5852 | 7.29 | 1400 | 0.6158 | 0.6688 | 0.6673 |
| 0.5845 | 8.33 | 1600 | 0.6197 | 0.6646 | 0.6624 |
| 0.5814 | 9.38 | 1800 | 0.6251 | 0.6585 | 0.6559 |
| 0.5764 | 10.42 | 2000 | 0.6045 | 0.6777 | 0.6804 |
| 0.5778 | 11.46 | 2200 | 0.6041 | 0.6711 | 0.6758 |
| 0.5668 | 12.5 | 2400 | 0.6185 | 0.6726 | 0.6722 |
| 0.5644 | 13.54 | 2600 | 0.6260 | 0.6671 | 0.6657 |
| 0.5642 | 14.58 | 2800 | 0.6139 | 0.6665 | 0.6670 |
| 0.5637 | 15.62 | 3000 | 0.6193 | 0.6636 | 0.6631 |
| 0.5576 | 16.67 | 3200 | 0.6239 | 0.6590 | 0.6579 |
| 0.5523 | 17.71 | 3400 | 0.6274 | 0.6560 | 0.6546 |
| 0.556 | 18.75 | 3600 | 0.6327 | 0.6570 | 0.6553 |
| 0.5516 | 19.79 | 3800 | 0.6394 | 0.6645 | 0.6628 |
| 0.5438 | 20.83 | 4000 | 0.6292 | 0.6633 | 0.6621 |
| 0.5438 | 21.88 | 4200 | 0.6535 | 0.6475 | 0.6448 |
| 0.5386 | 22.92 | 4400 | 0.6413 | 0.6594 | 0.6579 |
| 0.5357 | 23.96 | 4600 | 0.6465 | 0.6519 | 0.6497 |
| 0.5325 | 25.0 | 4800 | 0.6459 | 0.6539 | 0.6517 |
| 0.5274 | 26.04 | 5000 | 0.6459 | 0.6504 | 0.6484 |
| 0.526 | 27.08 | 5200 | 0.6466 | 0.6535 | 0.6520 |
| 0.523 | 28.12 | 5400 | 0.6561 | 0.6495 | 0.6471 |
| 0.5191 | 29.17 | 5600 | 0.6623 | 0.6535 | 0.6514 |
| 0.5115 | 30.21 | 5800 | 0.6637 | 0.6552 | 0.6533 |
| 0.5137 | 31.25 | 6000 | 0.6703 | 0.6423 | 0.6396 |
| 0.5119 | 32.29 | 6200 | 0.6508 | 0.6502 | 0.6487 |
| 0.5088 | 33.33 | 6400 | 0.6721 | 0.6439 | 0.6413 |
| 0.5057 | 34.38 | 6600 | 0.6668 | 0.6495 | 0.6491 |
| 0.5043 | 35.42 | 6800 | 0.6701 | 0.6503 | 0.6481 |
| 0.506 | 36.46 | 7000 | 0.6517 | 0.6510 | 0.6497 |
| 0.4961 | 37.5 | 7200 | 0.6784 | 0.6473 | 0.6452 |
| 0.4929 | 38.54 | 7400 | 0.6843 | 0.6489 | 0.6471 |
| 0.4942 | 39.58 | 7600 | 0.6631 | 0.6505 | 0.6510 |
| 0.4938 | 40.62 | 7800 | 0.6954 | 0.6413 | 0.6386 |
| 0.4898 | 41.67 | 8000 | 0.6708 | 0.6492 | 0.6474 |
| 0.4866 | 42.71 | 8200 | 0.6798 | 0.6518 | 0.6504 |
| 0.4901 | 43.75 | 8400 | 0.6709 | 0.6427 | 0.6413 |
| 0.4866 | 44.79 | 8600 | 0.6799 | 0.6513 | 0.6494 |
| 0.4819 | 45.83 | 8800 | 0.6798 | 0.6502 | 0.6494 |
| 0.4847 | 46.88 | 9000 | 0.6948 | 0.6396 | 0.6370 |
| 0.4809 | 47.92 | 9200 | 0.6960 | 0.6417 | 0.6393 |
| 0.4816 | 48.96 | 9400 | 0.6919 | 0.6486 | 0.6468 |
| 0.482 | 50.0 | 9600 | 0.6903 | 0.6467 | 0.6445 |
| 0.4792 | 51.04 | 9800 | 0.6937 | 0.6468 | 0.6445 |
| 0.4762 | 52.08 | 10000 | 0.6930 | 0.6458 | 0.6435 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:45:43+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K4me2-seqsight\_4096\_512\_15M-L32\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6006
* F1 Score: 0.6727
* Accuracy: 0.6748
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K9ac-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4994
- F1 Score: 0.7632
- Accuracy: 0.7625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
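Outside of the `Trainer` API, the optimizer and schedule above correspond to plain Adam plus a linear decay over 10,000 steps. A self-contained sketch follows; the `torch.nn.Linear` stand-in replaces the actual PEFT-wrapped classifier, which this card does not describe:

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(4, 2)  # stand-in for the real PEFT-wrapped classifier

optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000  # no warmup is listed in the card
)

for step in range(10_000):
    loss = model(torch.randn(128, 4)).sum()  # placeholder batch of size 128
    loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```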
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6178 | 1.15 | 200 | 0.5812 | 0.7080 | 0.7074 |
| 0.5678 | 2.3 | 400 | 0.6382 | 0.6436 | 0.6582 |
| 0.5459 | 3.45 | 600 | 0.5881 | 0.7031 | 0.7071 |
| 0.5401 | 4.6 | 800 | 0.5820 | 0.7005 | 0.7046 |
| 0.5342 | 5.75 | 1000 | 0.5525 | 0.7202 | 0.7200 |
| 0.5258 | 6.9 | 1200 | 0.5593 | 0.7173 | 0.7179 |
| 0.5225 | 8.05 | 1400 | 0.5496 | 0.7250 | 0.7247 |
| 0.5197 | 9.2 | 1600 | 0.5916 | 0.6860 | 0.6923 |
| 0.5155 | 10.34 | 1800 | 0.5553 | 0.7197 | 0.7200 |
| 0.5146 | 11.49 | 2000 | 0.5560 | 0.7191 | 0.7200 |
| 0.508 | 12.64 | 2200 | 0.5824 | 0.7011 | 0.7053 |
| 0.5132 | 13.79 | 2400 | 0.5530 | 0.7193 | 0.7211 |
| 0.506 | 14.94 | 2600 | 0.5556 | 0.7127 | 0.7143 |
| 0.504 | 16.09 | 2800 | 0.5451 | 0.7312 | 0.7316 |
| 0.503 | 17.24 | 3000 | 0.5652 | 0.7205 | 0.7222 |
| 0.4994 | 18.39 | 3200 | 0.5591 | 0.7246 | 0.7262 |
| 0.5011 | 19.54 | 3400 | 0.5456 | 0.7289 | 0.7298 |
| 0.497 | 20.69 | 3600 | 0.5430 | 0.7267 | 0.7269 |
| 0.4967 | 21.84 | 3800 | 0.5407 | 0.7314 | 0.7319 |
| 0.4947 | 22.99 | 4000 | 0.5471 | 0.7285 | 0.7290 |
| 0.4959 | 24.14 | 4200 | 0.5297 | 0.7354 | 0.7352 |
| 0.4894 | 25.29 | 4400 | 0.5519 | 0.7314 | 0.7319 |
| 0.4965 | 26.44 | 4600 | 0.5460 | 0.7324 | 0.7326 |
| 0.4902 | 27.59 | 4800 | 0.5525 | 0.7269 | 0.7280 |
| 0.487 | 28.74 | 5000 | 0.5480 | 0.7240 | 0.7251 |
| 0.4945 | 29.89 | 5200 | 0.5410 | 0.7337 | 0.7341 |
| 0.4869 | 31.03 | 5400 | 0.5507 | 0.7291 | 0.7301 |
| 0.4896 | 32.18 | 5600 | 0.5256 | 0.7396 | 0.7391 |
| 0.4832 | 33.33 | 5800 | 0.5439 | 0.7342 | 0.7344 |
| 0.4921 | 34.48 | 6000 | 0.5405 | 0.7330 | 0.7337 |
| 0.4814 | 35.63 | 6200 | 0.5309 | 0.7376 | 0.7373 |
| 0.4888 | 36.78 | 6400 | 0.5390 | 0.7330 | 0.7334 |
| 0.4837 | 37.93 | 6600 | 0.5416 | 0.7329 | 0.7330 |
| 0.4815 | 39.08 | 6800 | 0.5345 | 0.7384 | 0.7384 |
| 0.4833 | 40.23 | 7000 | 0.5349 | 0.7385 | 0.7384 |
| 0.486 | 41.38 | 7200 | 0.5310 | 0.7382 | 0.7380 |
| 0.483 | 42.53 | 7400 | 0.5359 | 0.7330 | 0.7334 |
| 0.4805 | 43.68 | 7600 | 0.5332 | 0.7385 | 0.7384 |
| 0.4801 | 44.83 | 7800 | 0.5450 | 0.7309 | 0.7316 |
| 0.4821 | 45.98 | 8000 | 0.5359 | 0.7349 | 0.7352 |
| 0.4806 | 47.13 | 8200 | 0.5407 | 0.7325 | 0.7330 |
| 0.4819 | 48.28 | 8400 | 0.5387 | 0.7352 | 0.7355 |
| 0.4829 | 49.43 | 8600 | 0.5323 | 0.7389 | 0.7388 |
| 0.4819 | 50.57 | 8800 | 0.5356 | 0.7366 | 0.7366 |
| 0.48 | 51.72 | 9000 | 0.5423 | 0.7321 | 0.7326 |
| 0.4766 | 52.87 | 9200 | 0.5446 | 0.7321 | 0.7326 |
| 0.4815 | 54.02 | 9400 | 0.5420 | 0.7328 | 0.7334 |
| 0.48 | 55.17 | 9600 | 0.5405 | 0.7329 | 0.7334 |
| 0.476 | 56.32 | 9800 | 0.5379 | 0.7349 | 0.7352 |
| 0.4808 | 57.47 | 10000 | 0.5386 | 0.7342 | 0.7344 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:45:55+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K9ac-seqsight\_4096\_512\_15M-L1\_f
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K9ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4994
* F1 Score: 0.7632
* Accuracy: 0.7625
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K9ac-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4829
- F1 Score: 0.7825
- Accuracy: 0.7819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5985 | 1.15 | 200 | 0.5713 | 0.7153 | 0.7154 |
| 0.5469 | 2.3 | 400 | 0.6351 | 0.6395 | 0.6564 |
| 0.5227 | 3.45 | 600 | 0.5475 | 0.7292 | 0.7294 |
| 0.5141 | 4.6 | 800 | 0.5387 | 0.7369 | 0.7373 |
| 0.5053 | 5.75 | 1000 | 0.5242 | 0.7497 | 0.7492 |
| 0.4998 | 6.9 | 1200 | 0.5305 | 0.7395 | 0.7391 |
| 0.4963 | 8.05 | 1400 | 0.5248 | 0.7393 | 0.7388 |
| 0.4931 | 9.2 | 1600 | 0.5473 | 0.7262 | 0.7283 |
| 0.4876 | 10.34 | 1800 | 0.5194 | 0.7439 | 0.7434 |
| 0.4872 | 11.49 | 2000 | 0.5172 | 0.7510 | 0.7506 |
| 0.4807 | 12.64 | 2200 | 0.5475 | 0.7303 | 0.7319 |
| 0.4855 | 13.79 | 2400 | 0.5089 | 0.7587 | 0.7582 |
| 0.4776 | 14.94 | 2600 | 0.5157 | 0.7514 | 0.7510 |
| 0.4752 | 16.09 | 2800 | 0.5177 | 0.7500 | 0.7499 |
| 0.4758 | 17.24 | 3000 | 0.5201 | 0.7531 | 0.7528 |
| 0.4699 | 18.39 | 3200 | 0.5240 | 0.7504 | 0.7503 |
| 0.4728 | 19.54 | 3400 | 0.5102 | 0.7498 | 0.7496 |
| 0.4662 | 20.69 | 3600 | 0.5063 | 0.7545 | 0.7542 |
| 0.4668 | 21.84 | 3800 | 0.5230 | 0.7458 | 0.7460 |
| 0.4627 | 22.99 | 4000 | 0.5297 | 0.7412 | 0.7416 |
| 0.4655 | 24.14 | 4200 | 0.5121 | 0.7515 | 0.7510 |
| 0.4565 | 25.29 | 4400 | 0.5336 | 0.7506 | 0.7506 |
| 0.463 | 26.44 | 4600 | 0.5167 | 0.7540 | 0.7535 |
| 0.4583 | 27.59 | 4800 | 0.5223 | 0.7470 | 0.7474 |
| 0.4553 | 28.74 | 5000 | 0.5166 | 0.7515 | 0.7513 |
| 0.4595 | 29.89 | 5200 | 0.5159 | 0.7546 | 0.7542 |
| 0.4532 | 31.03 | 5400 | 0.5204 | 0.7508 | 0.7506 |
| 0.4546 | 32.18 | 5600 | 0.5063 | 0.7537 | 0.7531 |
| 0.4474 | 33.33 | 5800 | 0.5128 | 0.7562 | 0.7557 |
| 0.4565 | 34.48 | 6000 | 0.5174 | 0.7511 | 0.7506 |
| 0.4419 | 35.63 | 6200 | 0.5137 | 0.7540 | 0.7535 |
| 0.4492 | 36.78 | 6400 | 0.5112 | 0.7576 | 0.7571 |
| 0.4456 | 37.93 | 6600 | 0.5413 | 0.7403 | 0.7402 |
| 0.4434 | 39.08 | 6800 | 0.5180 | 0.7519 | 0.7513 |
| 0.4448 | 40.23 | 7000 | 0.5249 | 0.7538 | 0.7535 |
| 0.4468 | 41.38 | 7200 | 0.5210 | 0.7503 | 0.7499 |
| 0.444 | 42.53 | 7400 | 0.5156 | 0.7479 | 0.7474 |
| 0.4406 | 43.68 | 7600 | 0.5162 | 0.7490 | 0.7485 |
| 0.4386 | 44.83 | 7800 | 0.5258 | 0.7495 | 0.7492 |
| 0.443 | 45.98 | 8000 | 0.5153 | 0.7486 | 0.7481 |
| 0.4409 | 47.13 | 8200 | 0.5243 | 0.7488 | 0.7485 |
| 0.4412 | 48.28 | 8400 | 0.5204 | 0.7493 | 0.7488 |
| 0.4385 | 49.43 | 8600 | 0.5198 | 0.7501 | 0.7496 |
| 0.4406 | 50.57 | 8800 | 0.5227 | 0.7521 | 0.7517 |
| 0.4391 | 51.72 | 9000 | 0.5283 | 0.7511 | 0.7510 |
| 0.4376 | 52.87 | 9200 | 0.5288 | 0.7484 | 0.7481 |
| 0.4383 | 54.02 | 9400 | 0.5270 | 0.7479 | 0.7478 |
| 0.4378 | 55.17 | 9600 | 0.5240 | 0.7488 | 0.7485 |
| 0.4332 | 56.32 | 9800 | 0.5228 | 0.7514 | 0.7510 |
| 0.4382 | 57.47 | 10000 | 0.5228 | 0.7503 | 0.7499 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:46:02+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K9ac-seqsight\_4096\_512\_15M-L8\_f
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K9ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4829
* F1 Score: 0.7825
* Accuracy: 0.7819
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K9ac-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4683
- F1 Score: 0.7846
- Accuracy: 0.7841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5805 | 1.15 | 200 | 0.5699 | 0.7096 | 0.7121 |
| 0.5311 | 2.3 | 400 | 0.6196 | 0.6574 | 0.6711 |
| 0.5054 | 3.45 | 600 | 0.5329 | 0.7385 | 0.7384 |
| 0.4987 | 4.6 | 800 | 0.5223 | 0.7407 | 0.7406 |
| 0.4901 | 5.75 | 1000 | 0.5191 | 0.7511 | 0.7506 |
| 0.4858 | 6.9 | 1200 | 0.5215 | 0.7479 | 0.7478 |
| 0.4814 | 8.05 | 1400 | 0.5293 | 0.7434 | 0.7431 |
| 0.4752 | 9.2 | 1600 | 0.5324 | 0.7414 | 0.7427 |
| 0.4677 | 10.34 | 1800 | 0.5228 | 0.7472 | 0.7467 |
| 0.4686 | 11.49 | 2000 | 0.5185 | 0.7565 | 0.7560 |
| 0.4585 | 12.64 | 2200 | 0.5343 | 0.7408 | 0.7409 |
| 0.4611 | 13.79 | 2400 | 0.5132 | 0.7546 | 0.7542 |
| 0.4551 | 14.94 | 2600 | 0.5177 | 0.7486 | 0.7481 |
| 0.45 | 16.09 | 2800 | 0.5290 | 0.7467 | 0.7470 |
| 0.4485 | 17.24 | 3000 | 0.5097 | 0.7583 | 0.7578 |
| 0.4414 | 18.39 | 3200 | 0.5293 | 0.7483 | 0.7481 |
| 0.4412 | 19.54 | 3400 | 0.5122 | 0.7461 | 0.7456 |
| 0.4354 | 20.69 | 3600 | 0.5108 | 0.7502 | 0.7499 |
| 0.4326 | 21.84 | 3800 | 0.5305 | 0.7444 | 0.7445 |
| 0.4262 | 22.99 | 4000 | 0.5570 | 0.7396 | 0.7406 |
| 0.4284 | 24.14 | 4200 | 0.5263 | 0.7511 | 0.7506 |
| 0.4186 | 25.29 | 4400 | 0.5468 | 0.7512 | 0.7510 |
| 0.4232 | 26.44 | 4600 | 0.5302 | 0.7490 | 0.7485 |
| 0.4159 | 27.59 | 4800 | 0.5412 | 0.7507 | 0.7506 |
| 0.4109 | 28.74 | 5000 | 0.5274 | 0.7464 | 0.7460 |
| 0.4147 | 29.89 | 5200 | 0.5354 | 0.7479 | 0.7481 |
| 0.4047 | 31.03 | 5400 | 0.5491 | 0.7428 | 0.7427 |
| 0.4047 | 32.18 | 5600 | 0.5310 | 0.7433 | 0.7427 |
| 0.3938 | 33.33 | 5800 | 0.5478 | 0.7511 | 0.7506 |
| 0.4018 | 34.48 | 6000 | 0.5339 | 0.7508 | 0.7503 |
| 0.3872 | 35.63 | 6200 | 0.5474 | 0.7439 | 0.7434 |
| 0.3911 | 36.78 | 6400 | 0.5366 | 0.7428 | 0.7424 |
| 0.3877 | 37.93 | 6600 | 0.5748 | 0.7417 | 0.7413 |
| 0.3853 | 39.08 | 6800 | 0.5557 | 0.7392 | 0.7388 |
| 0.3846 | 40.23 | 7000 | 0.5654 | 0.7439 | 0.7434 |
| 0.3872 | 41.38 | 7200 | 0.5705 | 0.7375 | 0.7373 |
| 0.3829 | 42.53 | 7400 | 0.5605 | 0.7393 | 0.7388 |
| 0.3754 | 43.68 | 7600 | 0.5542 | 0.7450 | 0.7445 |
| 0.3755 | 44.83 | 7800 | 0.5678 | 0.7403 | 0.7398 |
| 0.3758 | 45.98 | 8000 | 0.5571 | 0.7418 | 0.7413 |
| 0.3735 | 47.13 | 8200 | 0.5867 | 0.7398 | 0.7395 |
| 0.3728 | 48.28 | 8400 | 0.5711 | 0.7382 | 0.7377 |
| 0.371 | 49.43 | 8600 | 0.5742 | 0.7407 | 0.7402 |
| 0.3695 | 50.57 | 8800 | 0.5821 | 0.7402 | 0.7398 |
| 0.368 | 51.72 | 9000 | 0.5897 | 0.7393 | 0.7391 |
| 0.3675 | 52.87 | 9200 | 0.5823 | 0.7362 | 0.7359 |
| 0.3668 | 54.02 | 9400 | 0.5857 | 0.7365 | 0.7362 |
| 0.3671 | 55.17 | 9600 | 0.5799 | 0.7396 | 0.7391 |
| 0.3637 | 56.32 | 9800 | 0.5779 | 0.7410 | 0.7406 |
| 0.3655 | 57.47 | 10000 | 0.5769 | 0.7406 | 0.7402 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:46:07+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K9ac-seqsight\_4096\_512\_15M-L32\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K9ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4683
* F1 Score: 0.7846
* Accuracy: 0.7841
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prompt_fine_tuned_CB_bert
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1636
- Accuracy: 0.3182
- F1: 0.1536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
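
The model name and the PEFT dependency suggest prompt tuning on `google-bert/bert-base-uncased`. The sketch below is hypothetical: the number of virtual tokens, `num_labels=3` (assuming "CB" refers to the three-class CommitmentBank task), and the output directory are all assumptions, not details taken from this card.

```python
# Hypothetical prompt-tuning setup reproducing the hyperparameters above.
# num_virtual_tokens, num_labels and output_dir are assumptions.
from transformers import AutoModelForSequenceClassification, TrainingArguments
from peft import PromptTuningConfig, TaskType, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-uncased", num_labels=3
)
peft_config = PromptTuningConfig(task_type=TaskType.SEQ_CLS, num_virtual_tokens=20)
model = get_peft_model(base, peft_config)

training_args = TrainingArguments(
    output_dir="prompt_fine_tuned_CB_bert",  # assumption
    learning_rate=2e-5,             # learning_rate: 2e-05
    per_device_train_batch_size=1,  # train_batch_size: 1
    per_device_eval_batch_size=1,   # eval_batch_size: 1
    seed=42,
    lr_scheduler_type="linear",
    max_steps=400,                  # training_steps: 400
)
```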
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "prompt_fine_tuned_CB_bert", "results": []}]} | lenatr99/prompt_fine_tuned_CB_bert | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-05-03T17:48:19+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #region-us
|
# prompt_fine_tuned_CB_bert
This model is a fine-tuned version of google-bert/bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1636
- Accuracy: 0.3182
- F1: 0.1536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# prompt_fine_tuned_CB_bert\n\nThis model is a fine-tuned version of google-bert/bert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.1636\n- Accuracy: 0.3182\n- F1: 0.1536",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 400",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.1\n- Pytorch 2.3.0\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-google-bert/bert-base-uncased #license-apache-2.0 #region-us \n",
"# prompt_fine_tuned_CB_bert\n\nThis model is a fine-tuned version of google-bert/bert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.1636\n- Accuracy: 0.3182\n- F1: 0.1536",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 400",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.1\n- Pytorch 2.3.0\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Phi0503B1
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0800
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
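
Expressed as `TrainingArguments`, these values could look roughly like the sketch below; the output directory is an assumption, and the model, dataset, and any adapter setup are omitted entirely.

```python
# Illustrative sketch of the hyperparameters above; Adam betas/epsilon are the
# library defaults, which match the values listed. output_dir is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Phi0503B1",                    # assumption
    learning_rate=3e-4,                        # learning_rate: 0.0003
    per_device_train_batch_size=8,             # train_batch_size: 8
    per_device_eval_batch_size=8,              # eval_batch_size: 8
    gradient_accumulation_steps=16,            # 8 x 16 = total_train_batch_size 128
    seed=42,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,                          # lr_scheduler_warmup_steps: 100
    num_train_epochs=3,
    fp16=True,                                 # mixed_precision_training: Native AMP
)
```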
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5697 | 0.09 | 10 | 0.7185 |
| 0.345 | 0.18 | 20 | 0.1655 |
| 0.1552 | 0.27 | 30 | 0.1343 |
| 0.1345 | 0.36 | 40 | 0.1175 |
| 0.121 | 0.45 | 50 | 0.1152 |
| 0.1088 | 0.54 | 60 | 0.0861 |
| 0.0923 | 0.63 | 70 | 0.0942 |
| 0.0773 | 0.73 | 80 | 0.0681 |
| 0.0606 | 0.82 | 90 | 0.0686 |
| 0.0647 | 0.91 | 100 | 0.0624 |
| 0.062 | 1.0 | 110 | 0.0663 |
| 0.0434 | 1.09 | 120 | 0.0687 |
| 0.042 | 1.18 | 130 | 0.0675 |
| 0.0503 | 1.27 | 140 | 0.0681 |
| 0.0445 | 1.36 | 150 | 0.0654 |
| 0.0511 | 1.45 | 160 | 0.0593 |
| 0.0462 | 1.54 | 170 | 0.0687 |
| 0.0498 | 1.63 | 180 | 0.0651 |
| 0.0448 | 1.72 | 190 | 0.0640 |
| 0.043 | 1.81 | 200 | 0.0636 |
| 0.04 | 1.9 | 210 | 0.0617 |
| 0.043 | 1.99 | 220 | 0.0613 |
| 0.0226 | 2.08 | 230 | 0.0657 |
| 0.0165 | 2.18 | 240 | 0.0788 |
| 0.011 | 2.27 | 250 | 0.0943 |
| 0.0097 | 2.36 | 260 | 0.0946 |
| 0.0167 | 2.45 | 270 | 0.0864 |
| 0.0105 | 2.54 | 280 | 0.0827 |
| 0.0118 | 2.63 | 290 | 0.0819 |
| 0.0156 | 2.72 | 300 | 0.0802 |
| 0.0137 | 2.81 | 310 | 0.0800 |
| 0.013 | 2.9 | 320 | 0.0800 |
| 0.0098 | 2.99 | 330 | 0.0800 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/Phi-3-mini-4k-instruct", "model-index": [{"name": "Phi0503B1", "results": []}]} | Litzy619/Phi0503B1 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2024-05-03T17:49:24+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-microsoft/Phi-3-mini-4k-instruct #license-mit #region-us
| Phi0503B1
=========
This model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0800
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-microsoft/Phi-3-mini-4k-instruct #license-mit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Phi0503B2
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0690
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.4837 | 0.09 | 10 | 5.4342 |
| 5.4537 | 0.18 | 20 | 5.2266 |
| 4.774 | 0.27 | 30 | 3.6419 |
| 2.4745 | 0.36 | 40 | 1.0488 |
| 0.5621 | 0.45 | 50 | 0.2015 |
| 0.1739 | 0.54 | 60 | 0.1465 |
| 0.1373 | 0.63 | 70 | 0.1350 |
| 0.1328 | 0.73 | 80 | 0.1258 |
| 0.1091 | 0.82 | 90 | 0.1152 |
| 0.1142 | 0.91 | 100 | 0.0968 |
| 0.0918 | 1.0 | 110 | 0.1021 |
| 0.0773 | 1.09 | 120 | 0.0807 |
| 0.0711 | 1.18 | 130 | 0.0793 |
| 0.0751 | 1.27 | 140 | 0.0661 |
| 0.06 | 1.36 | 150 | 0.0651 |
| 0.0647 | 1.45 | 160 | 0.0658 |
| 0.0577 | 1.54 | 170 | 0.0657 |
| 0.0575 | 1.63 | 180 | 0.0644 |
| 0.0534 | 1.72 | 190 | 0.0661 |
| 0.0594 | 1.81 | 200 | 0.0622 |
| 0.0473 | 1.9 | 210 | 0.0628 |
| 0.0522 | 1.99 | 220 | 0.0643 |
| 0.0402 | 2.08 | 230 | 0.0644 |
| 0.0436 | 2.18 | 240 | 0.0674 |
| 0.0343 | 2.27 | 250 | 0.0708 |
| 0.0358 | 2.36 | 260 | 0.0724 |
| 0.0411 | 2.45 | 270 | 0.0720 |
| 0.0359 | 2.54 | 280 | 0.0706 |
| 0.0366 | 2.63 | 290 | 0.0702 |
| 0.0397 | 2.72 | 300 | 0.0697 |
| 0.044 | 2.81 | 310 | 0.0692 |
| 0.0415 | 2.9 | 320 | 0.0688 |
| 0.037 | 2.99 | 330 | 0.0690 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/Phi-3-mini-4k-instruct", "model-index": [{"name": "Phi0503B2", "results": []}]} | Litzy619/Phi0503B2 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2024-05-03T17:49:39+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-microsoft/Phi-3-mini-4k-instruct #license-mit #region-us
| Phi0503B2
=========
This model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0690
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-microsoft/Phi-3-mini-4k-instruct #license-mit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me3-seqsight_4096_512_15M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6104
- F1 Score: 0.6754
- Accuracy: 0.6755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6748 | 0.87 | 200 | 0.6571 | 0.6173 | 0.6188 |
| 0.6536 | 1.74 | 400 | 0.6488 | 0.6324 | 0.6323 |
| 0.6479 | 2.61 | 600 | 0.6438 | 0.6414 | 0.6410 |
| 0.6365 | 3.48 | 800 | 0.6347 | 0.6385 | 0.6397 |
| 0.6346 | 4.35 | 1000 | 0.6307 | 0.6387 | 0.6416 |
| 0.6286 | 5.22 | 1200 | 0.6395 | 0.6339 | 0.6391 |
| 0.6242 | 6.09 | 1400 | 0.6266 | 0.6497 | 0.6511 |
| 0.6186 | 6.96 | 1600 | 0.6242 | 0.6600 | 0.6606 |
| 0.6171 | 7.83 | 1800 | 0.6186 | 0.6623 | 0.6630 |
| 0.6159 | 8.7 | 2000 | 0.6171 | 0.6646 | 0.6644 |
| 0.6094 | 9.57 | 2200 | 0.6146 | 0.6613 | 0.6611 |
| 0.6144 | 10.43 | 2400 | 0.6139 | 0.6648 | 0.6647 |
| 0.6105 | 11.3 | 2600 | 0.6175 | 0.6571 | 0.6584 |
| 0.6118 | 12.17 | 2800 | 0.6119 | 0.6676 | 0.6674 |
| 0.6086 | 13.04 | 3000 | 0.6103 | 0.6679 | 0.6677 |
| 0.6053 | 13.91 | 3200 | 0.6114 | 0.6620 | 0.6625 |
| 0.6039 | 14.78 | 3400 | 0.6115 | 0.6615 | 0.6628 |
| 0.606 | 15.65 | 3600 | 0.6125 | 0.6653 | 0.6660 |
| 0.6002 | 16.52 | 3800 | 0.6121 | 0.6665 | 0.6668 |
| 0.6016 | 17.39 | 4000 | 0.6084 | 0.6693 | 0.6696 |
| 0.603 | 18.26 | 4200 | 0.6086 | 0.6690 | 0.6690 |
| 0.597 | 19.13 | 4400 | 0.6072 | 0.6692 | 0.6693 |
| 0.5983 | 20.0 | 4600 | 0.6074 | 0.6661 | 0.6666 |
| 0.5986 | 20.87 | 4800 | 0.6091 | 0.6645 | 0.6649 |
| 0.5976 | 21.74 | 5000 | 0.6116 | 0.6619 | 0.6630 |
| 0.5976 | 22.61 | 5200 | 0.6068 | 0.6666 | 0.6677 |
| 0.5978 | 23.48 | 5400 | 0.6129 | 0.6573 | 0.6611 |
| 0.5943 | 24.35 | 5600 | 0.6047 | 0.6673 | 0.6674 |
| 0.5966 | 25.22 | 5800 | 0.6116 | 0.6578 | 0.6617 |
| 0.5934 | 26.09 | 6000 | 0.6113 | 0.6585 | 0.6614 |
| 0.5951 | 26.96 | 6200 | 0.6116 | 0.6622 | 0.6652 |
| 0.5948 | 27.83 | 6400 | 0.6180 | 0.6534 | 0.6592 |
| 0.5914 | 28.7 | 6600 | 0.6068 | 0.6609 | 0.6628 |
| 0.5915 | 29.57 | 6800 | 0.6048 | 0.6677 | 0.6690 |
| 0.5893 | 30.43 | 7000 | 0.6109 | 0.6600 | 0.6633 |
| 0.5974 | 31.3 | 7200 | 0.6085 | 0.6625 | 0.6652 |
| 0.5923 | 32.17 | 7400 | 0.6108 | 0.6596 | 0.6639 |
| 0.5891 | 33.04 | 7600 | 0.6036 | 0.6659 | 0.6671 |
| 0.5919 | 33.91 | 7800 | 0.6048 | 0.6618 | 0.6633 |
| 0.5906 | 34.78 | 8000 | 0.6055 | 0.6651 | 0.6666 |
| 0.5927 | 35.65 | 8200 | 0.6027 | 0.6657 | 0.6668 |
| 0.5891 | 36.52 | 8400 | 0.6069 | 0.6614 | 0.6639 |
| 0.5908 | 37.39 | 8600 | 0.6063 | 0.6635 | 0.6655 |
| 0.5857 | 38.26 | 8800 | 0.6095 | 0.6630 | 0.6660 |
| 0.5921 | 39.13 | 9000 | 0.6070 | 0.6622 | 0.6649 |
| 0.5895 | 40.0 | 9200 | 0.6047 | 0.6643 | 0.6660 |
| 0.5884 | 40.87 | 9400 | 0.6029 | 0.6672 | 0.6679 |
| 0.5909 | 41.74 | 9600 | 0.6040 | 0.6656 | 0.6668 |
| 0.5906 | 42.61 | 9800 | 0.6042 | 0.6650 | 0.6666 |
| 0.5892 | 43.48 | 10000 | 0.6047 | 0.6640 | 0.6658 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_4096_512_15M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_4096_512_15M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:51:35+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K4me3-seqsight\_4096\_512\_15M-L1\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6104
* F1 Score: 0.6754
* Accuracy: 0.6755
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
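
No usage snippet is documented here. A generic, hypothetical way to load a PEFT adapter repository like this one is sketched below; the repository id is taken from this record's metadata, and the tokenizer source and dtype choices are assumptions.

```python
# Hypothetical sketch only; this card does not document usage.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.05_2_5e-05"
model = AutoPeftModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumption: base-model tokenizer
```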
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.05_2_5e-05 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:53:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# LlamaJarvis-7B
LlamaJarvis-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B)
## 🧩 Configuration
```yaml
models:
- model: NousResearch/Meta-Llama-3-8B
# No parameters necessary for base model
- model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
density: 0.6
weight: 0.5
- model: mlabonne/OrpoLlama-3-8B
parameters:
density: 0.55
weight: 0.05
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "automerger/LlamaJarvis-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat messages with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the merged model in half precision, sharded across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a completion.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"], "base_model": ["NousResearch/Meta-Llama-3-8B-Instruct", "mlabonne/OrpoLlama-3-8B"]} | automerger/LlamaJarvis-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:mlabonne/OrpoLlama-3-8B",
"license:apache-2.0",
"region:us"
] | null | 2024-05-03T17:53:49+00:00 | [] | [] | TAGS
#merge #mergekit #lazymergekit #automerger #base_model-NousResearch/Meta-Llama-3-8B-Instruct #base_model-mlabonne/OrpoLlama-3-8B #license-apache-2.0 #region-us
|
# LlamaJarvis-7B
LlamaJarvis-7B is an automated merge created by Maxime Labonne using the following configuration.
* NousResearch/Meta-Llama-3-8B-Instruct
* mlabonne/OrpoLlama-3-8B
## Configuration
## Usage
| [
"# LlamaJarvis-7B\n\nLlamaJarvis-7B is an automated merge created by Maxime Labonne using the following configuration.\n* NousResearch/Meta-Llama-3-8B-Instruct\n* mlabonne/OrpoLlama-3-8B",
"## Configuration",
"## Usage"
] | [
"TAGS\n#merge #mergekit #lazymergekit #automerger #base_model-NousResearch/Meta-Llama-3-8B-Instruct #base_model-mlabonne/OrpoLlama-3-8B #license-apache-2.0 #region-us \n",
"# LlamaJarvis-7B\n\nLlamaJarvis-7B is an automated merge created by Maxime Labonne using the following configuration.\n* NousResearch/Meta-Llama-3-8B-Instruct\n* mlabonne/OrpoLlama-3-8B",
"## Configuration",
"## Usage"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me3-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6053
- F1 Score: 0.6811
- Accuracy: 0.6823
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6682 | 0.87 | 200 | 0.6471 | 0.6349 | 0.6359 |
| 0.6393 | 1.74 | 400 | 0.6363 | 0.6456 | 0.6457 |
| 0.6281 | 2.61 | 600 | 0.6212 | 0.6612 | 0.6609 |
| 0.6181 | 3.48 | 800 | 0.6148 | 0.6642 | 0.6641 |
| 0.6148 | 4.35 | 1000 | 0.6139 | 0.6601 | 0.6603 |
| 0.6098 | 5.22 | 1200 | 0.6154 | 0.6526 | 0.6552 |
| 0.6042 | 6.09 | 1400 | 0.6202 | 0.6480 | 0.6527 |
| 0.5989 | 6.96 | 1600 | 0.6141 | 0.6615 | 0.6639 |
| 0.5962 | 7.83 | 1800 | 0.6078 | 0.6712 | 0.6709 |
| 0.5953 | 8.7 | 2000 | 0.6030 | 0.6731 | 0.6731 |
| 0.5864 | 9.57 | 2200 | 0.5979 | 0.6767 | 0.6766 |
| 0.5919 | 10.43 | 2400 | 0.6012 | 0.6721 | 0.6723 |
| 0.5862 | 11.3 | 2600 | 0.6009 | 0.6692 | 0.6715 |
| 0.5893 | 12.17 | 2800 | 0.5981 | 0.6709 | 0.6717 |
| 0.5824 | 13.04 | 3000 | 0.5966 | 0.6752 | 0.6758 |
| 0.5807 | 13.91 | 3200 | 0.5975 | 0.6735 | 0.6747 |
| 0.5772 | 14.78 | 3400 | 0.6008 | 0.6742 | 0.6766 |
| 0.5799 | 15.65 | 3600 | 0.6016 | 0.6730 | 0.6758 |
| 0.5746 | 16.52 | 3800 | 0.5983 | 0.6759 | 0.6764 |
| 0.5731 | 17.39 | 4000 | 0.5999 | 0.6770 | 0.6777 |
| 0.5756 | 18.26 | 4200 | 0.5986 | 0.6797 | 0.6815 |
| 0.5684 | 19.13 | 4400 | 0.5978 | 0.6775 | 0.6780 |
| 0.5707 | 20.0 | 4600 | 0.5995 | 0.6755 | 0.6769 |
| 0.5702 | 20.87 | 4800 | 0.5974 | 0.6778 | 0.6791 |
| 0.5675 | 21.74 | 5000 | 0.6075 | 0.6707 | 0.6720 |
| 0.569 | 22.61 | 5200 | 0.5955 | 0.6776 | 0.6785 |
| 0.5645 | 23.48 | 5400 | 0.6137 | 0.6672 | 0.6723 |
| 0.5628 | 24.35 | 5600 | 0.6011 | 0.6756 | 0.6769 |
| 0.5664 | 25.22 | 5800 | 0.6027 | 0.6728 | 0.6764 |
| 0.5609 | 26.09 | 6000 | 0.6073 | 0.6746 | 0.6772 |
| 0.5618 | 26.96 | 6200 | 0.6067 | 0.6739 | 0.6769 |
| 0.5603 | 27.83 | 6400 | 0.6151 | 0.6679 | 0.6728 |
| 0.5578 | 28.7 | 6600 | 0.5997 | 0.6778 | 0.6796 |
| 0.559 | 29.57 | 6800 | 0.5980 | 0.6795 | 0.6807 |
| 0.5551 | 30.43 | 7000 | 0.6067 | 0.6740 | 0.6772 |
| 0.5636 | 31.3 | 7200 | 0.6002 | 0.6794 | 0.6810 |
| 0.5549 | 32.17 | 7400 | 0.6016 | 0.6790 | 0.6807 |
| 0.5543 | 33.04 | 7600 | 0.5994 | 0.6770 | 0.6783 |
| 0.5558 | 33.91 | 7800 | 0.5993 | 0.6776 | 0.6793 |
| 0.5546 | 34.78 | 8000 | 0.6022 | 0.6781 | 0.6793 |
| 0.5567 | 35.65 | 8200 | 0.5980 | 0.6793 | 0.6807 |
| 0.553 | 36.52 | 8400 | 0.6025 | 0.6756 | 0.6783 |
| 0.5553 | 37.39 | 8600 | 0.6016 | 0.6774 | 0.6788 |
| 0.5478 | 38.26 | 8800 | 0.6096 | 0.6733 | 0.6764 |
| 0.5536 | 39.13 | 9000 | 0.6045 | 0.6756 | 0.6777 |
| 0.5508 | 40.0 | 9200 | 0.6035 | 0.6800 | 0.6818 |
| 0.5521 | 40.87 | 9400 | 0.6018 | 0.6760 | 0.6769 |
| 0.5512 | 41.74 | 9600 | 0.6028 | 0.6758 | 0.6772 |
| 0.552 | 42.61 | 9800 | 0.6021 | 0.6789 | 0.6802 |
| 0.5521 | 43.48 | 10000 | 0.6031 | 0.6783 | 0.6799 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:54:40+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K4me3-seqsight\_4096\_512\_15M-L8\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6053
* F1 Score: 0.6811
* Accuracy: 0.6823
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me3-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6214
- F1 Score: 0.6871
- Accuracy: 0.6872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6604 | 0.87 | 200 | 0.6356 | 0.6419 | 0.6446 |
| 0.6283 | 1.74 | 400 | 0.6323 | 0.6420 | 0.6451 |
| 0.6159 | 2.61 | 600 | 0.6089 | 0.6698 | 0.6696 |
| 0.6064 | 3.48 | 800 | 0.6044 | 0.6734 | 0.6731 |
| 0.6011 | 4.35 | 1000 | 0.6067 | 0.6681 | 0.6679 |
| 0.5945 | 5.22 | 1200 | 0.6015 | 0.6739 | 0.6742 |
| 0.5889 | 6.09 | 1400 | 0.6102 | 0.6588 | 0.6630 |
| 0.5839 | 6.96 | 1600 | 0.6055 | 0.6744 | 0.6764 |
| 0.5777 | 7.83 | 1800 | 0.6038 | 0.6778 | 0.6777 |
| 0.5768 | 8.7 | 2000 | 0.6082 | 0.6716 | 0.6717 |
| 0.5664 | 9.57 | 2200 | 0.5990 | 0.6784 | 0.6785 |
| 0.5693 | 10.43 | 2400 | 0.6050 | 0.6708 | 0.6726 |
| 0.5635 | 11.3 | 2600 | 0.5996 | 0.6714 | 0.6750 |
| 0.566 | 12.17 | 2800 | 0.5940 | 0.6735 | 0.6747 |
| 0.556 | 13.04 | 3000 | 0.5968 | 0.6770 | 0.6780 |
| 0.553 | 13.91 | 3200 | 0.6026 | 0.6703 | 0.6720 |
| 0.5486 | 14.78 | 3400 | 0.6150 | 0.6675 | 0.6709 |
| 0.5497 | 15.65 | 3600 | 0.6032 | 0.6709 | 0.6731 |
| 0.5432 | 16.52 | 3800 | 0.6059 | 0.6764 | 0.6766 |
| 0.5393 | 17.39 | 4000 | 0.6131 | 0.6752 | 0.6772 |
| 0.5427 | 18.26 | 4200 | 0.6093 | 0.6747 | 0.6785 |
| 0.5304 | 19.13 | 4400 | 0.6131 | 0.6716 | 0.6739 |
| 0.5329 | 20.0 | 4600 | 0.6077 | 0.6777 | 0.6793 |
| 0.531 | 20.87 | 4800 | 0.6070 | 0.6769 | 0.6783 |
| 0.5239 | 21.74 | 5000 | 0.6174 | 0.6708 | 0.6723 |
| 0.5272 | 22.61 | 5200 | 0.6096 | 0.6799 | 0.6813 |
| 0.5188 | 23.48 | 5400 | 0.6364 | 0.6696 | 0.6731 |
| 0.5177 | 24.35 | 5600 | 0.6255 | 0.6697 | 0.6736 |
| 0.5185 | 25.22 | 5800 | 0.6251 | 0.6740 | 0.6777 |
| 0.513 | 26.09 | 6000 | 0.6339 | 0.6707 | 0.6742 |
| 0.5119 | 26.96 | 6200 | 0.6245 | 0.6742 | 0.6777 |
| 0.5078 | 27.83 | 6400 | 0.6367 | 0.6723 | 0.6766 |
| 0.504 | 28.7 | 6600 | 0.6171 | 0.6765 | 0.6772 |
| 0.5056 | 29.57 | 6800 | 0.6165 | 0.6755 | 0.6769 |
| 0.5021 | 30.43 | 7000 | 0.6280 | 0.6777 | 0.6804 |
| 0.5093 | 31.3 | 7200 | 0.6212 | 0.6818 | 0.6826 |
| 0.4991 | 32.17 | 7400 | 0.6257 | 0.6770 | 0.6783 |
| 0.4968 | 33.04 | 7600 | 0.6238 | 0.6776 | 0.6791 |
| 0.4957 | 33.91 | 7800 | 0.6232 | 0.6764 | 0.6785 |
| 0.4945 | 34.78 | 8000 | 0.6249 | 0.6765 | 0.6780 |
| 0.4986 | 35.65 | 8200 | 0.6241 | 0.6784 | 0.6802 |
| 0.4907 | 36.52 | 8400 | 0.6303 | 0.6738 | 0.6761 |
| 0.495 | 37.39 | 8600 | 0.6312 | 0.6758 | 0.6769 |
| 0.4868 | 38.26 | 8800 | 0.6352 | 0.6774 | 0.6793 |
| 0.4894 | 39.13 | 9000 | 0.6343 | 0.6773 | 0.6791 |
| 0.4875 | 40.0 | 9200 | 0.6298 | 0.6787 | 0.6802 |
| 0.4871 | 40.87 | 9400 | 0.6313 | 0.6760 | 0.6769 |
| 0.4861 | 41.74 | 9600 | 0.6330 | 0.6773 | 0.6791 |
| 0.4892 | 42.61 | 9800 | 0.6306 | 0.6777 | 0.6791 |
| 0.4891 | 43.48 | 10000 | 0.6317 | 0.6775 | 0.6791 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T17:55:10+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us
| GUE\_EMP\_H3K4me3-seqsight\_4096\_512\_15M-L32\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_15M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6214
* F1 Score: 0.6871
* Accuracy: 0.6872
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_15M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
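Although the card leaves this section empty, a minimal hypothetical example can be pieced together from the repository id (`LongDHo/finetuned-gemma-2b`) and the `gemma`/`text-generation` tags; the prompt and generation settings below are placeholders.

```python
from transformers import pipeline

# Repo id taken from the model metadata; the prompt is a placeholder.
generator = pipeline("text-generation", model="LongDHo/finetuned-gemma-2b")
print(generator("Hello, how are you today?", max_new_tokens=64)[0]["generated_text"])
```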
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | LongDHo/finetuned-gemma-2b | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T17:55:44+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cosmosDPO_CodeTest2
This model is a fine-tuned version of [ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1](https://huggingface.co/ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5356
- Rewards/chosen: -1.3639
- Rewards/rejected: -3.6411
- Rewards/accuracies: 0.2640
- Rewards/margins: 2.2772
- Logps/rejected: -477.7171
- Logps/chosen: -224.9044
- Logits/rejected: -4.1447
- Logits/chosen: -3.7114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
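For orientation only: a DPO run with the settings above would typically be assembled with `trl`'s `DPOTrainer` on top of a PEFT (LoRA) adapter, roughly as in the sketch below. The preference dataset, the LoRA hyperparameters and the DPO `beta` are assumptions — the card does not state them.

```python
# Hypothetical sketch; the actual training script and preference data are not published.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

# Placeholder preference data with "prompt", "chosen" and "rejected" columns.
train_dataset = load_dataset("json", data_files="preferences.json", split="train")

args = TrainingArguments(
    output_dir="cosmosDPO_CodeTest2",
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,   # total_train_batch_size: 64
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    fp16=True,                       # mixed_precision_training: Native AMP
)

trainer = DPOTrainer(
    model,
    ref_model=None,                  # with a PEFT adapter the frozen base model serves as reference
    args=args,
    beta=0.1,                        # assumed; not stated on the card
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32, lora_dropout=0.05),
)
trainer.train()
```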
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6915 | 0.0524 | 8 | 0.6855 | -0.0225 | -0.0387 | 0.2171 | 0.0162 | -117.4774 | -90.7611 | -2.7135 | -2.4688 |
| 0.6639 | 0.1047 | 16 | 0.6480 | -0.2509 | -0.4015 | 0.2189 | 0.1506 | -153.7584 | -113.6010 | -3.2208 | -2.9343 |
| 0.6251 | 0.1571 | 24 | 0.6436 | -0.7453 | -1.1570 | 0.2217 | 0.4117 | -229.3032 | -163.0413 | -3.8702 | -3.5589 |
| 0.6238 | 0.2095 | 32 | 0.6047 | -0.7237 | -1.2597 | 0.2355 | 0.5360 | -239.5777 | -160.8856 | -3.7913 | -3.4555 |
| 0.5586 | 0.2619 | 40 | 0.5789 | -1.0590 | -1.9755 | 0.2474 | 0.9164 | -311.1551 | -194.4169 | -3.9560 | -3.5940 |
| 0.5389 | 0.3142 | 48 | 0.5577 | -1.0922 | -2.3486 | 0.2548 | 1.2564 | -348.4677 | -197.7312 | -3.9027 | -3.5251 |
| 0.5102 | 0.3666 | 56 | 0.5606 | -1.4904 | -3.3229 | 0.2548 | 1.8325 | -445.8979 | -237.5522 | -4.0088 | -3.6310 |
| 0.5506 | 0.4190 | 64 | 0.5529 | -1.4084 | -3.4076 | 0.2585 | 1.9992 | -454.3663 | -229.3532 | -3.9314 | -3.5543 |
| 0.5696 | 0.4714 | 72 | 0.5365 | -0.7411 | -2.1788 | 0.2621 | 1.4377 | -331.4860 | -162.6252 | -3.6733 | -3.2798 |
| 0.5265 | 0.5237 | 80 | 0.5355 | -0.8770 | -2.4950 | 0.2612 | 1.6180 | -363.1028 | -176.2112 | -3.7304 | -3.3452 |
| 0.5199 | 0.5761 | 88 | 0.5482 | -1.5559 | -3.7745 | 0.2585 | 2.2186 | -491.0597 | -244.1054 | -3.9633 | -3.5958 |
| 0.5163 | 0.6285 | 96 | 0.5464 | -1.5899 | -3.8545 | 0.2594 | 2.2646 | -499.0518 | -247.5011 | -4.0472 | -3.6688 |
| 0.5421 | 0.6809 | 104 | 0.5408 | -1.4973 | -3.8002 | 0.2631 | 2.3029 | -493.6231 | -238.2402 | -4.1221 | -3.7151 |
| 0.5416 | 0.7332 | 112 | 0.5356 | -1.2811 | -3.4299 | 0.2640 | 2.1488 | -456.5994 | -216.6231 | -4.0861 | -3.6611 |
| 0.4967 | 0.7856 | 120 | 0.5347 | -1.2626 | -3.4278 | 0.2640 | 2.1653 | -456.3912 | -214.7687 | -4.1048 | -3.6705 |
| 0.4783 | 0.8380 | 128 | 0.5345 | -1.2666 | -3.4477 | 0.2640 | 2.1811 | -458.3748 | -215.1744 | -4.1066 | -3.6704 |
| 0.508 | 0.8903 | 136 | 0.5352 | -1.3287 | -3.5746 | 0.2640 | 2.2459 | -471.0667 | -221.3868 | -4.1311 | -3.6966 |
| 0.5417 | 0.9427 | 144 | 0.5356 | -1.3619 | -3.6366 | 0.2640 | 2.2746 | -477.2621 | -224.7045 | -4.1435 | -3.7103 |
| 0.5414 | 0.9951 | 152 | 0.5356 | -1.3639 | -3.6411 | 0.2640 | 2.2772 | -477.7171 | -224.9044 | -4.1447 | -3.7114 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "mit", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1", "model-index": [{"name": "cosmosDPO_CodeTest2", "results": []}]} | meguzn/cosmosDPO_CodeTest2 | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1",
"license:mit",
"region:us"
] | null | 2024-05-03T17:55:54+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #dpo #generated_from_trainer #base_model-ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1 #license-mit #region-us
| cosmosDPO\_CodeTest2
====================
This model is a fine-tuned version of ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5356
* Rewards/chosen: -1.3639
* Rewards/rejected: -3.6411
* Rewards/accuracies: 0.2640
* Rewards/margins: 2.2772
* Logps/rejected: -477.7171
* Logps/chosen: -224.9044
* Logits/rejected: -4.1447
* Logits/chosen: -3.7114
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-06
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #dpo #generated_from_trainer #base_model-ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1 #license-mit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
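This section is empty on the original card. Based only on the `unsloth` tag and the repository name (which suggest a Gemma-7B checkpoint trained with Unsloth), a typical loading pattern might look like the following — treat every detail here as an assumption.

```python
from unsloth import FastLanguageModel

# Assumed usage; adjust max_seq_length and quantization to your hardware (Unsloth requires a GPU).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="MohamedSaeed-dev/gemma7b-unsloth",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch the model into inference mode

inputs = tokenizer("Write one sentence about fine-tuning.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```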
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]} | MohamedSaeed-dev/gemma7b-unsloth | null | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:56:25+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
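As with the rest of this card, no official usage example is provided; the snippet below is a hypothetical one based on the repo id and the `text-generation`/`conversational` tags, and it assumes the tokenizer ships a chat template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MohamedSaeed-dev/phi-unsloth"  # from the model metadata

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Explain what supervised fine-tuning (SFT) is."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```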
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]} | MohamedSaeed-dev/phi-unsloth | null | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T17:56:38+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #pytorch #mistral #text-generation #unsloth #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #pytorch #mistral #text-generation #unsloth #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# loha_fine_tuned_cb_croslo
This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2436
- Accuracy: 0.3182
- F1: 0.1536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
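The card does not include the adapter configuration itself, but a LoHa setup on this base model is usually created with PEFT along the lines sketched below; the rank, alpha, target modules and the 3-label head (assuming the SuperGLUE CB task implied by the name) are all assumptions.

```python
# Hypothetical sketch of the LoHa adapter setup; hyperparameter values are assumptions.
from peft import LoHaConfig, get_peft_model
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = "EMBEDDIA/crosloengual-bert"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=3)  # assumed 3 CB labels

peft_config = LoHaConfig(
    task_type="SEQ_CLS",
    r=8,
    alpha=16,
    target_modules=["query", "value"],
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```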
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.9874 | 3.5714 | 50 | 1.1351 | 0.3182 | 0.1591 |
| 0.8665 | 7.1429 | 100 | 1.1589 | 0.3182 | 0.1536 |
| 0.8359 | 10.7143 | 150 | 1.1890 | 0.3182 | 0.1536 |
| 0.7662 | 14.2857 | 200 | 1.2116 | 0.3182 | 0.1536 |
| 0.769 | 17.8571 | 250 | 1.2287 | 0.3182 | 0.1536 |
| 0.7534 | 21.4286 | 300 | 1.2380 | 0.3182 | 0.1536 |
| 0.7359 | 25.0 | 350 | 1.2421 | 0.3182 | 0.1536 |
| 0.7449 | 28.5714 | 400 | 1.2436 | 0.3182 | 0.1536 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "cc-by-4.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "EMBEDDIA/crosloengual-bert", "model-index": [{"name": "loha_fine_tuned_cb_croslo", "results": []}]} | lenatr99/loha_fine_tuned_cb_croslo | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:EMBEDDIA/crosloengual-bert",
"license:cc-by-4.0",
"region:us"
] | null | 2024-05-03T17:56:45+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us
| loha\_fine\_tuned\_cb\_croslo
=============================
This model is a fine-tuned version of EMBEDDIA/crosloengual-bert on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2436
* Accuracy: 0.3182
* F1: 0.1536
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* PEFT 0.10.1.dev0
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora_fine_tuned_cb_croslo
This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3172
- Accuracy: 0.3182
- F1: 0.1536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
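For context, the hyperparameters above can be reproduced with a standard PEFT + 🤗 `Trainer` setup roughly like the one below; the LoRA rank/alpha, target modules, label count and dataset preparation are assumptions, since the card does not specify them.

```python
# Hypothetical reconstruction; dataset preparation is intentionally omitted.
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "EMBEDDIA/crosloengual-bert"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=3)  # assumed 3 CB labels

model = get_peft_model(
    model,
    LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16, lora_dropout=0.1,
               target_modules=["query", "value"]),
)

args = TrainingArguments(
    output_dir="lora_fine_tuned_cb_croslo",
    learning_rate=2e-5,               # learning_rate: 2e-05
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    max_steps=400,                    # training_steps: 400
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="steps",
    eval_steps=50,                    # matches the 50-step cadence in the results table below
)

# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=..., tokenizer=tokenizer)
# trainer.train()
```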
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 1.0956 | 3.5714 | 50 | 1.1024 | 0.3636 | 0.3027 |
| 0.8669 | 7.1429 | 100 | 1.1540 | 0.3182 | 0.1536 |
| 0.7634 | 10.7143 | 150 | 1.2351 | 0.3182 | 0.1536 |
| 0.7 | 14.2857 | 200 | 1.2885 | 0.3182 | 0.1536 |
| 0.6951 | 17.8571 | 250 | 1.3121 | 0.3182 | 0.1536 |
| 0.7047 | 21.4286 | 300 | 1.3145 | 0.3182 | 0.1536 |
| 0.6769 | 25.0 | 350 | 1.3154 | 0.3182 | 0.1536 |
| 0.6886 | 28.5714 | 400 | 1.3172 | 0.3182 | 0.1536 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "cc-by-4.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "EMBEDDIA/crosloengual-bert", "model-index": [{"name": "lora_fine_tuned_cb_croslo", "results": []}]} | lenatr99/lora_fine_tuned_cb_croslo | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:EMBEDDIA/crosloengual-bert",
"license:cc-by-4.0",
"region:us"
] | null | 2024-05-03T17:56:45+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us
| lora\_fine\_tuned\_cb\_croslo
=============================
This model is a fine-tuned version of EMBEDDIA/crosloengual-bert on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3172
* Accuracy: 0.3182
* F1: 0.1536
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* PEFT 0.10.1.dev0
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prompt_fine_tuned_CB_croslo
This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2046
- Accuracy: 0.3182
- F1: 0.1536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
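Since the prompt-tuning configuration itself is not documented, the following is only a sketch of how such an adapter is usually defined with PEFT; the number of virtual tokens and the 3-label head are assumptions.

```python
# Hypothetical sketch; num_virtual_tokens is an assumption, not taken from the card.
from peft import PromptTuningConfig, PromptTuningInit, get_peft_model
from transformers import AutoModelForSequenceClassification

base = "EMBEDDIA/crosloengual-bert"
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=3)  # assumed 3 CB labels

peft_config = PromptTuningConfig(
    task_type="SEQ_CLS",
    num_virtual_tokens=20,
    prompt_tuning_init=PromptTuningInit.RANDOM,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```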
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 1.0278 | 0.4545 | 50 | 1.1158 | 0.3182 | 0.2306 |
| 0.9865 | 0.9091 | 100 | 1.1195 | 0.3636 | 0.2430 |
| 0.8601 | 1.3636 | 150 | 1.1357 | 0.3182 | 0.1536 |
| 0.8769 | 1.8182 | 200 | 1.1595 | 0.3182 | 0.1536 |
| 0.9026 | 2.2727 | 250 | 1.1733 | 0.3182 | 0.1536 |
| 0.8002 | 2.7273 | 300 | 1.1885 | 0.3182 | 0.1536 |
| 0.8093 | 3.1818 | 350 | 1.1996 | 0.3182 | 0.1536 |
| 0.7259 | 3.6364 | 400 | 1.2046 | 0.3182 | 0.1536 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "cc-by-4.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "EMBEDDIA/crosloengual-bert", "model-index": [{"name": "prompt_fine_tuned_CB_croslo", "results": []}]} | lenatr99/prompt_fine_tuned_CB_croslo | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:EMBEDDIA/crosloengual-bert",
"license:cc-by-4.0",
"region:us"
] | null | 2024-05-03T17:57:06+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us
| prompt\_fine\_tuned\_CB\_croslo
===============================
This model is a fine-tuned version of EMBEDDIA/crosloengual-bert on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2046
* Accuracy: 0.3182
* F1: 0.1536
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 400
### Training results
### Framework versions
* PEFT 0.10.1.dev0
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-EMBEDDIA/crosloengual-bert #license-cc-by-4.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 400",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
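The card gives no usage instructions. The repository name suggests these weights are PEFT adapters for `meta-llama/Llama-2-7b-hf`; if that is correct (an assumption, not confirmed by the card), they could be loaded roughly as follows.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model, inferred from the repo name
adapter_id = "ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.05_2_0.0002"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Example input text", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```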
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ferrazzipietro/LS_Llama-2-7b-hf_adapters_en.layer1_NoQuant_16_64_0.05_2_0.0002 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T17:59:42+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |