Typo Correction #8
by antiquality - opened

src/display/about.py  +4 -4
@@ -42,7 +42,7 @@ and potential risks of LLMs is crucial.
 ## How it works
 
 This leaderboard is powered by the DecodingTrust platform, which provides comprehensive safety and trustworthiness
-evaluation for LLMs. More details about the paper, which has won the Outstanding Paper award at
+evaluation for LLMs. More details about the paper, which has won the Outstanding Paper award at NeurIPS'23,
 and the platform can be found [here](https://decodingtrust.github.io/).
 
 DecodingTrust aims to provide comprehensive risk and trustworthiness assessment for LLMs. Currently, it includes the
@@ -67,7 +67,7 @@ Please follow the following instructions to install DecodingTrust
 
 ### (Conda +) Pip
 
-For now, we suggest installing DecodingTrust by cloning our repository and
+For now, we suggest installing DecodingTrust by cloning our repository and installing it in editable mode. This will keep the data, code, and configurations in the same place.
 
 ```bash
 git clone https://github.com/AI-secure/DecodingTrust.git && cd DecodingTrust
@@ -136,10 +136,10 @@ To run our evaluations, checkout our [tutorial](https://github.com/AI-secure/Dec
 + 🔶 Fine-tuned Model: Pretrained model fine-tuned on more data
 + ⭕ Instruction-tuned Model: Models specifically fine-tuned on datasets with task instructions
 + 🟦 RL-tuned: Models specifically fine-tuned with RLHF or DPO
-+ 🔒 Closed Model: Closed models that do not publish training
++ 🔒 Closed Model: Closed models that do not publish training recipes or model sizes
 
 If there is no icon, we have not uploaded the information on the model yet, feel free to open an issue with the model
-information
+information.
 
 ## Changelog
 01/17/24: Add the initial version of the leaderboard
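For context, the "(Conda +) Pip" hunk describes an editable install from a cloned checkout. A minimal sketch of that flow is below; only the `git clone` line appears in the diff itself, and the optional conda environment (implied by the "(Conda +)" heading) is an assumption, not part of this PR:

```shell
# Optional, per the "(Conda +)" heading: create and activate an env first,
# e.g.  conda create -n decodingtrust python=3.9 -y && conda activate decodingtrust
git clone https://github.com/AI-secure/DecodingTrust.git && cd DecodingTrust
pip install -e .  # editable install: data, code, and configs stay in the cloned tree
```

The `-e`/`--editable` flag is what "keep the data, code, and configurations in the same place" refers to: the package is imported from the working copy rather than copied into site-packages.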