# pszemraj/deberta-v3-small-sp500-edgar-10k
This model predicts the `ret` column of the training dataset, given the `text` column.
```python
import json

from huggingface_hub import hf_hub_download
from transformers import pipeline

model_repo_name = "pszemraj/deberta-v3-small-sp500-edgar-10k"

pipe = pipeline("text-classification", model=model_repo_name)
pipe.tokenizer.model_max_length = 1024

# Download the regression_config.json file, which stores the min/max
# values used to scale the regression targets during training
regression_config_path = hf_hub_download(
    repo_id=model_repo_name, filename="regression_config.json"
)
with open(regression_config_path, "r") as f:
    regression_config = json.load(f)


def inverse_scale(prediction, config):
    """Map a [0, 1] model output back to the original target range."""
    min_value, max_value = config["min_value"], config["max_value"]
    return prediction * (max_value - min_value) + min_value


def predict_with_pipeline(text, pipe, config, ndigits=5):
    result = pipe(text, truncation=True)[0]
    scaled_score = inverse_scale(result["score"], config)
    return round(scaled_score, ndigits)


text = "This is an example text for regression prediction."
prediction = predict_with_pipeline(text, pipe, regression_config)
print("Predicted Value:", prediction)
```
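Note that 10-K filings are usually far longer than the 1024-token context window, so the truncation above discards most of a real filing. A minimal sketch of one possible workaround, splitting the document into fixed-size token chunks and mean-pooling the per-chunk scores before inverse scaling (`predict_long_document` is a hypothetical helper; the chunk size and averaging strategy are illustrative assumptions, not part of this model's training setup):

```python
def predict_long_document(text, pipe, config, chunk_tokens=1024):
    # Tokenize once without truncation, then split into fixed-size windows
    ids = pipe.tokenizer(text, truncation=False, add_special_tokens=False)["input_ids"]
    chunks = [
        pipe.tokenizer.decode(ids[i : i + chunk_tokens])
        for i in range(0, len(ids), chunk_tokens)
    ]
    # Score each chunk independently, then mean-pool before inverse scaling
    scores = [pipe(chunk, truncation=True)[0]["score"] for chunk in chunks]
    return inverse_scale(sum(scores) / len(scores), config)
```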
## Model description

This model is a fine-tuned version of microsoft/deberta-v3-small on the BEE-spoke-data/sp500-edgar-10k-markdown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- MSE: 0.0005
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 30826
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
- mixed_precision_training: Native AMP
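
For reference, these settings map onto a `transformers.TrainingArguments` object roughly as sketched below. The `output_dir` is a placeholder, and the original training script is not published in this repo; the Adam betas and epsilon listed above are the `transformers` defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta-v3-small-sp500-edgar-10k",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=30826,
    gradient_accumulation_steps=16,  # 4 x 16 = total train batch size 64
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=3.0,
    fp16=True,  # Native AMP mixed-precision training
)
```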
### Training results

| Training Loss | Epoch | Step | Validation Loss | MSE    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0064        | 0.54  | 50   | 0.0006          | 0.0006 |
| 0.0043        | 1.08  | 100  | 0.0005          | 0.0005 |
| 0.0028        | 1.61  | 150  | 0.0006          | 0.0006 |
| 0.0025        | 2.15  | 200  | 0.0005          | 0.0005 |
| 0.0025        | 2.69  | 250  | 0.0005          | 0.0005 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2