SetFit with sentence-transformers/all-mpnet-base-v2
This is a SetFit model trained on the hojzas/proj8-lab2 dataset that can be used for Text Classification. It uses sentence-transformers/all-mpnet-base-v2 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.
The model has been trained using an efficient few-shot learning technique that involves:
- Fine-tuning a Sentence Transformer with contrastive learning.
- Training a classification head with features from the fine-tuned Sentence Transformer (a simplified sketch of both steps follows).
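Conceptually, the two steps can be sketched as follows. This is a simplified illustration with placeholder texts and labels, not the exact SetFit training loop; the contrastive fine-tuning itself is omitted:

from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Step 1 (simplified): the Sentence Transformer body. SetFit fine-tunes it with
# contrastive pairs built from the few-shot examples; that fine-tuning is omitted here.
body = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Step 2: fit a LogisticRegression head on the sentence embeddings.
texts = ["def f(x): return x", "for i in range(10): print(i)"]  # placeholder snippets
labels = [0, 1]                                                 # placeholder labels
embeddings = body.encode(texts)
head = LogisticRegression().fit(embeddings, labels)
print(head.predict(body.encode(["def g(y): return y * 2"])))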
Model Details
Model Description
- Model Type: SetFit
- Sentence Transformer body: sentence-transformers/all-mpnet-base-v2
- Classification head: a LogisticRegression instance
- Maximum Sequence Length: 384 tokens
- Number of Classes: 3 classes
- Training Dataset: hojzas/proj8-lab2
Model Sources
- Repository: SetFit on GitHub (https://github.com/huggingface/setfit)
- Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)
- Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts (https://huggingface.co/blog/setfit)
Model Labels
The classifier distinguishes three labels: 0, 1, and 2.
Uses
Direct Use for Inference
First install the SetFit library:
pip install setfit
Then you can load this model and run inference.
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("hojzas/proj8-lab2")
# Run inference
preds = model("def first_with_given_key(iterable, key=lambda x: x):\n keys=[]\n for i in iterable:\n if key(i) not in keys:\n yield i\n keys.append(key(i))")
Training Details
Training Set Metrics
Training set | Min | Median | Max |
---|---|---|---|
Word count | 43 | 92.2069 | 125 |
Label | Training Sample Count |
---|---|
0 | 13 |
1 | 8 |
2 | 8 |
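These statistics are whitespace-token word counts over the training texts. They could be recomputed along these lines (a sketch; the column name "text" is an assumption about the dataset schema):

import statistics
from datasets import load_dataset

ds = load_dataset("hojzas/proj8-lab2", split="train")
counts = [len(example["text"].split()) for example in ds]  # "text" column assumed
print(min(counts), statistics.median(counts), max(counts))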
Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
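These values correspond to fields of SetFit's TrainingArguments. A hedged reconstruction of the configuration (exact field availability depends on the SetFit version; distance_metric and margin, used by triplet-style losses, are left at their defaults here):

from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),              # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)

These arguments would then be passed to setfit.Trainer together with the model and the hojzas/proj8-lab2 train split.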
Training Results
Epoch | Step | Training Loss | Validation Loss |
---|---|---|---|
0.0137 | 1 | 0.4142 | - |
0.6849 | 50 | 0.0024 | - |
Environmental Impact
Carbon emissions were measured using CodeCarbon.
- Carbon Emitted: 0.002 kg of CO2
- Hours Used: 0.006 hours
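A minimal sketch of how such a measurement is taken with CodeCarbon, wrapping the training call in an EmissionsTracker (the training call itself is a placeholder):

from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... run trainer.train() here ...
emissions_kg = tracker.stop()  # estimated emissions in kg of CO2-eq
print(f"{emissions_kg:.3f} kg of CO2 emitted")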
Training Hardware
- On Cloud: No
- GPU Model: 4 x NVIDIA RTX A5000
- CPU Model: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
- RAM Size: 251.49 GB
Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.36.1
- PyTorch: 2.1.2+cu121
- Datasets: 2.14.7
- Tokenizers: 0.15.1
Citation
BibTeX
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}