
dwulff/mpnet-personality

This is a sentence-transformers model that maps personality-related items and texts into a 768-dimensional dense vector space. It can be used for many tasks in personality psychology, such as clustering personality items and scales or mapping personality scales to personality constructs, as in the sketch below.
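For example, item embeddings can be fed directly into standard clustering tools. A minimal sketch, assuming scikit-learn is available (the items and the number of clusters are illustrative, not from the original training or evaluation data):

from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

model = SentenceTransformer('dwulff/mpnet-personality')

# hypothetical personality items
items = [
    "Rarely think about how I feel.",
    "Often reflect on my emotions.",
    "Make decisions quickly.",
    "Take my time before deciding.",
]

# embeddings are L2-normalized, so Euclidean distances are
# monotonically related to cosine distances
embeddings = model.encode(items)

# group items into two clusters (cluster count chosen for illustration)
labels = AgglomerativeClustering(n_clusters=2).fit_predict(embeddings)
print(labels)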

The model was created by fine-tuning all-mpnet-base-v2 on unsigned empirical correlations between 200k pairs of personality items. It therefore encodes the content of personality-related texts independently of item direction (e.g., negation).

See Wulff & Mata (2024, Supplement) for details.

Usage

Make sure sentence-transformers is installed:

# latest version
pip install -U sentence-transformers

# latest dev version
pip install git+https://github.com/UKPLab/sentence-transformers.git

You can extract embeddings in the following way:

from sentence_transformers import SentenceTransformer

# personality sentences
sentences = ["Rarely think about how I feel.", "Make decisions quickly."]

# load model
model = SentenceTransformer('dwulff/mpnet-personality')

# extract embeddings
embeddings = model.encode(sentences)
print(embeddings)
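Because the model is trained on unsigned correlations and its embeddings are L2-normalized, cosine similarities between embeddings can be read as predictions of the unsigned empirical correlation between items. A minimal sketch using the util.cos_sim helper from sentence-transformers (the item pair is illustrative):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('dwulff/mpnet-personality')

# an item and a negated variant; the model should assign them high
# similarity because it encodes content independent of direction
sentences = ["Rarely think about how I feel.", "Often think about how I feel."]

embeddings = model.encode(sentences)

# cosine similarity, read as a prediction of the unsigned correlation
print(util.cos_sim(embeddings[0], embeddings[1]))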

Evaluation Results

The model was evaluated on public personality data. For standard personality inventories, such as the Big Five or HEXACO inventories, the model predicts the empirical correlations between personality items at Pearson r ~ .6 and the empirical correlations between scales at Pearson r ~ .7.

Performance can be higher (r ~ .9) for the many common personality items included in training, due to memorization. Performance will be lower for more specialized personality assessments and for texts beyond personality items, as well as for personality factors, owing to the reduced variance in their correlations.
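An evaluation of this kind can be sketched as follows, assuming access to item pairs with known empirical correlations (the pairs and correlation values below are hypothetical placeholders; a realistic evaluation would use many more pairs):

import numpy as np
from scipy.stats import pearsonr
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('dwulff/mpnet-personality')

# hypothetical item pairs with made-up unsigned empirical correlations
pairs = [
    ("Rarely think about how I feel.", "Often think about my feelings.", 0.55),
    ("Make decisions quickly.", "Am full of energy.", 0.20),
]

# predicted similarity = dot product of the two item embeddings
# (embeddings are unit-length, so this equals the cosine similarity)
preds = []
for a, b, _ in pairs:
    ea, eb = model.encode([a, b])
    preds.append(float(np.dot(ea, eb)))

obs = [r for _, _, r in pairs]
print(pearsonr(preds, obs))  # alignment of predicted and empirical correlations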

See Wulff & Mata (2024, Supplement) for details.

Citing

@article{wulff2024jinglejangle,
  author       = {Wulff, Dirk U. and Mata, Rui},
  title        = {Automated jingle–jangle detection: Using embeddings to tackle taxonomic incommensurability},
  journal      = {PsyArXiv},
  year         = {2024},
  doi          = {10.31234/osf.io/9h7aw}
}

Training

The model was trained with the parameters:

DataLoader:

torch.utils.data.dataloader.DataLoader of length 3125 with parameters:

{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}

Loss:

sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss

Parameters of the fit()-Method:

{
    "epochs": 3,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 625,
    "weight_decay": 0.01
}
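Putting the listed configuration together, the training run plausibly looked like the following sketch using the sentence-transformers fit() API (the training pairs and their correlation labels are placeholders, not the actual training data):

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')

# pairs of personality items labeled with their unsigned empirical correlation
train_examples = [
    InputExample(texts=["Rarely think about how I feel.",
                        "Often think about my feelings."], label=0.55),
    # ... 200k pairs in total
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.CosineSimilarityLoss(model)

# parameters mirror the fit() configuration listed above
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=625,
    scheduler='WarmupLinear',
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)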

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
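The final Normalize() module means the output embeddings are unit-length, so dot products equal cosine similarities. A quick check:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('dwulff/mpnet-personality')
emb = model.encode(["Make decisions quickly."])

# norms should be ~1.0 because of the Normalize() module
print(np.linalg.norm(emb, axis=1))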