---
license: apache-2.0
dataset_info:
  features:
  - name: model
    dtype: string
  - name: query_prefix
    dtype: string
  - name: passage_prefix
    dtype: string
  - name: embedding_size
    dtype: int64
  - name: revision
    dtype: string
  - name: model_type
    dtype: string
  - name: torch_dtype
    dtype: string
  - name: max_length
    dtype: int64
  splits:
  - name: train
    num_bytes: 475
    num_examples: 5
  download_size: 4533
  dataset_size: 475
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- tabular-to-text
- tabular-classification
- sentence-similarity
- question-answering
language:
- en
tags:
- legal
- reference
- automation
- HFforLegal
pretty_name: Reference models for integration into HF for Legal
size_categories:
- n<1K
---
## Dataset Description
- **Repository:** https://huggingface.co/datasets/HFforLegal/embedding-models
- **Leaderboard:** N/A
- **Point of Contact:** [Louis Brulé Naudet](mailto:[email protected])
# Reference models for integration into HF for Legal 🤗

This dataset is a collection of reference entries for embedding models, aimed at streamlining and partially automating the embedding process. Each entry includes the essential information for running a model — its identifier, embedding configuration, and model-specific parameters — so that users can integrate these models into their workflows with minimal setup.

## Dataset Structure

| Field           | Type   | Description                                                                 |
|-----------------|--------|-----------------------------------------------------------------------------|
| `model`         | str    | The identifier of the model, typically formatted as `organization/model-name`.|
| `query_prefix`  | str    | A prefix string added to query inputs to delineate them.                      |
| `passage_prefix`| str    | A prefix string added to passage inputs to delineate them.                    |
| `embedding_size`| int    | The dimensional size of the embedding vectors produced by the model.          |
| `revision`      | str    | The specific revision identifier of the model to ensure consistency.          |
| `model_type`    | str    | The architectural type of the model, such as `xlm-roberta` or `qwen2`.        |
| `torch_dtype`   | str    | The data type utilized in PyTorch operations, such as `float32`.              |
| `max_length`    | int    | The maximum input length the model can process, specified in tokens.          |
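
Many embedding models expect queries and passages to carry a model-specific prefix before encoding, which is what the `query_prefix` and `passage_prefix` fields capture. Below is a minimal sketch of how one record from this dataset might be applied; the `record` values shown are illustrative stand-ins, not rows taken from the actual data:

```python
# Hypothetical record mirroring this dataset's schema (values are illustrative).
record = {
    "model": "intfloat/multilingual-e5-large",
    "query_prefix": "query: ",
    "passage_prefix": "passage: ",
    "embedding_size": 1024,
    "max_length": 512,
}

def prepare_inputs(record: dict, queries: list[str], passages: list[str]):
    """Prepend the model-specific prefixes so inputs match the model's expected format."""
    q = [record["query_prefix"] + text for text in queries]
    p = [record["passage_prefix"] + text for text in passages]
    return q, p

queries, passages = prepare_inputs(
    record,
    queries=["What is the limitation period for contract claims?"],
    passages=["Article 2224 of the French Civil Code sets a five-year period."],
)
print(queries[0])  # prefixed query, ready to be passed to the embedding model
```

The prefixed strings can then be fed to the model referenced by `record["model"]`, truncated to `record["max_length"]` tokens.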


### Organization architecture

To simplify the deployment of the organization's various tools, we propose a simple architecture in which the datasets containing legal and contractual texts are paired with datasets of precomputed embeddings for different models. This enables simplified index creation for Spaces initialization and provides vector data for the GPU-poor. A simplified representation might look like this:

<img src="https://huggingface.co/spaces/HFforLegal/README/resolve/main/assets/HF%20for%20Legal%20architecture%20for%20easy%20deployment.png" alt="HF for Legal architecture for easy deployment">

## Citing & Authors

If you use this dataset in your research, please use the following BibTeX entry.

```BibTeX
@misc{HFforLegal2024,
  author =       {Louis Brulé Naudet},
  title =        {Reference models for integration into HF for Legal},
  year =         {2024},
  howpublished = {\url{https://huggingface.co/datasets/HFforLegal/embedding-models}},
}
```

## Feedback

If you have any feedback, please reach out at [[email protected]](mailto:[email protected]).