---
license: apache-2.0
dataset_info:
  features:
    - name: model
      dtype: string
    - name: query_prefix
      dtype: string
    - name: passage_prefix
      dtype: string
    - name: embedding_size
      dtype: int64
    - name: revision
      dtype: string
    - name: model_type
      dtype: string
    - name: torch_dtype
      dtype: string
    - name: max_length
      dtype: int64
  splits:
    - name: train
      num_bytes: 475
      num_examples: 5
  download_size: 4533
  dataset_size: 475
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - tabular-to-text
  - tabular-classification
  - sentence-similarity
  - question-answering
language:
  - en
tags:
  - legal
  - reference
  - automation
  - HFforLegal
pretty_name: Reference models for integration into HF for Legal
size_categories:
  - n<1K
---

## Dataset Description

Reference models for integration into HF for Legal 🤗

This dataset is a collection of reference entries for embedding models, aimed at streamlining and partially automating the embedding process. Each entry includes essential information such as the model identifier, embedding configuration, and model-specific parameters, so users can integrate these models into their workflows with minimal setup and maximum efficiency.
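As a minimal sketch, the reference table can be loaded with the `datasets` library and each row read as a plain Python dictionary (the repository name is taken from the citation URL below):

```python
from datasets import load_dataset

# Load the reference table of embedding models (single "train" split, 5 rows).
models = load_dataset("HFforLegal/embedding-models", split="train")

# Each row describes one embedding model and its configuration.
for entry in models:
    print(entry["model"], entry["embedding_size"], entry["max_length"])
```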

## Dataset Structure

| Field | Type | Description |
|---|---|---|
| `model` | `str` | The identifier of the model, typically formatted as `organization/model-name`. |
| `query_prefix` | `str` | A prefix string added to query inputs to delineate them. |
| `passage_prefix` | `str` | A prefix string added to passage inputs to delineate them. |
| `embedding_size` | `int` | The dimensional size of the embedding vectors produced by the model. |
| `revision` | `str` | The specific revision identifier of the model, to ensure consistency. |
| `model_type` | `str` | The architectural type of the model, such as `xlm-roberta` or `qwen2`. |
| `torch_dtype` | `str` | The data type utilized in PyTorch operations, such as `float32`. |
| `max_length` | `int` | The maximum input length the model can process, specified in tokens. |
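For illustration, a single row can be used to configure an encoder, for instance with `sentence-transformers` (an assumption; the dataset itself does not prescribe a specific library, and the example text is made up). The `revision` field pins the model weights, and the prefixes are prepended before encoding:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

models = load_dataset("HFforLegal/embedding-models", split="train")
entry = models[0]  # any reference entry from the table

# Pin the exact revision recorded in the dataset for reproducibility,
# and cap the input length at the value listed in `max_length`.
encoder = SentenceTransformer(entry["model"], revision=entry["revision"])
encoder.max_seq_length = entry["max_length"]

# Prepend the prefixes recorded for this model before encoding.
query_vec = encoder.encode(entry["query_prefix"] + "What is the limitation period for contract claims?")
passage_vec = encoder.encode(entry["passage_prefix"] + "The limitation period is five years from the day the claim arose.")

assert query_vec.shape[0] == entry["embedding_size"]
```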

## Organization architecture

In order to simplify the deployment of the organization's various tools, we propose a simple architecture in which the datasets containing the various legal and contractual texts are mirrored by datasets containing precomputed embeddings for different models. This enables simplified index creation when initializing Spaces and provides ready-made vector data for the GPU-poor. A simplified representation might look like this:
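As a rough, hedged sketch of the embedding half of this setup, the `datasets` library's built-in FAISS support can turn such a companion embeddings dataset into a searchable index. The repository name, column name, and vector size below are placeholders, not actual organization repositories:

```python
import numpy as np
from datasets import load_dataset

# Hypothetical companion dataset holding precomputed embeddings for one model;
# the repository and column names below are placeholders for illustration only.
corpus = load_dataset("HFforLegal/laws-embeddings-example", split="train")

# Build an in-memory FAISS index over the embedding column (requires `faiss`).
corpus.add_faiss_index(column="embeddings")

# Query with a vector produced by the matching model from the reference table;
# a random vector stands in for a real query embedding here.
query_vector = np.random.rand(1024).astype("float32")
scores, results = corpus.get_nearest_examples("embeddings", query_vector, k=5)
```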

## Citing & Authors

If you use this dataset in your research, please use the following BibTeX entry.

```bibtex
@misc{HFforLegal2024,
  author       = {Louis Brulé Naudet},
  title        = {Reference models for integration into HF for Legal},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/datasets/HFforLegal/embedding-models}},
}
```

## Feedback

If you have any feedback, please reach out at [email protected].