leafspark committed
Commit 2197aa8
Parent: 10d8b08

Add image styling

Files changed (1)
  1. README.md +10 -6
README.md CHANGED
@@ -28,7 +28,7 @@ GGUF quantized models of [mattshumer/ref_70_e3](https://huggingface.co/mattshume
 
 > This is the new, working version of the Reflection Llama 3.1 70B model.
 
-**Reflection Llama-3.1 70B is (currently) the world's top open-source LLM, trained with a new technique called Reflection-Tuning that teaches a LLM to detect mistakes in its reasoning and correct course.**
+**Reflection Llama-3.1 70B is (purportedly) the world's top open-source LLM, trained with a new technique called Reflection-Tuning that teaches an LLM to detect mistakes in its reasoning and correct course.**
 
 | Quantization | Size | Split | iMatrix |
 | ------------ | ------ | ----- | ------- |
@@ -54,7 +54,7 @@ GGUF quantized models of [mattshumer/ref_70_e3](https://huggingface.co/mattshume
 | Q2_K_L | 29.4GB | false | false |
 | IQ3_XS | ??.?GB | false | true |
 | IQ3_XXS | ??.?GB | false | true |
-| Q2_K | ??.?GB | false | false |
+| Q2_K | ??.?GB | false | true |
 | Q2_K_S | ??.?GB | false | true |
 | IQ2_M | 23.0GB | false | true |
 | IQ2_S | 21.2GB | false | true |
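
For reference on the iMatrix column above: importance matrices for quants like these are typically generated with llama.cpp's `llama-imatrix` tool. A minimal sketch, assuming that tool is built and using hypothetical file names; the card states only that computation ran on static Q6_K for 125 chunks:

```python
# Hedged sketch, not the uploader's actual pipeline: regenerating an
# importance matrix with llama.cpp's llama-imatrix tool.
import subprocess

subprocess.run(
    [
        "./llama-imatrix",
        "-m", "Reflection-Llama-3.1-70B.Q6_K.gguf",  # static Q6_K base (filename hypothetical)
        "-f", "calibration.txt",                     # calibration corpus (assumed)
        "-o", "imatrix.dat",                         # resulting importance matrix
        "--chunks", "125",                           # 125 chunks, per the card
    ],
    check=True,  # raise if the tool exits non-zero
)
```

The resulting `imatrix.dat` is then passed to the quantizer so that low-bit quants (the IQ*/`_S` rows marked `true` above) weight important tensors more carefully.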
@@ -71,12 +71,16 @@ Computation is done on static Q6_K for 125 chunks.
 
 ## Model Info
 
-The model not trained on 3 epoches, because it's identical to the 2nd epoch run [mattshumer/Reflection-Llama-3.1-70B-ep2-working](https://huggingface.co/mattshumer/Reflection-Llama-3.1-70B-ep2-working) (it's possible this is also fake).
+The model was not trained for 3 epochs, as it's identical to the 2nd-epoch run [mattshumer/Reflection-Llama-3.1-70B-ep2-working](https://huggingface.co/mattshumer/Reflection-Llama-3.1-70B-ep2-working) (it's possible this is also fake).
 
 The fine-tuning was done using LoRA with rank 256 on the Llama-3.1-70B-Instruct model.
 
 ## Benchmarks
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/60518f3731c5be7f3dd5ebc3/zNs-ZFs0SbnomH7mikiOU.png)
+<div style="position: relative;">
+<img src="https://cdn-uploads.huggingface.co/production/uploads/60518f3731c5be7f3dd5ebc3/zNs-ZFs0SbnomH7mikiOU.png" alt="Reported benchmark scores">
+<div style="position: absolute; top: 50%; left: -20%; width: 140%; height: 5px; background-color: red; transform: rotate(10deg);"></div>
+<div style="position: absolute; top: 50%; left: -20%; width: 140%; height: 5px; background-color: red; transform: rotate(-10deg);"></div>
+</div>
 
 **Warning: These are likely false scores and cannot be replicated with this model.**
 
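
The rank-256 LoRA setup mentioned in the hunk above could be expressed with Hugging Face PEFT roughly as below. This is illustrative only: apart from `r=256` and the base model, every hyperparameter (alpha, dropout, target modules) is an assumption, since the actual training configuration was not published.

```python
# Illustrative PEFT sketch of a rank-256 LoRA on the stated base model.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-70B-Instruct")

lora = LoraConfig(
    r=256,                    # rank stated in the card
    lora_alpha=512,           # assumed; often set to 1-2x the rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    lora_dropout=0.05,        # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # sanity check: only adapter weights train
```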
@@ -114,8 +118,8 @@ What is 2+2?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
 
 ## Tips for Performance
 
-- We recommend a `temperature` of `.7` and a `top_p` of `.95`.
-- For increased accuracy, append `Think carefully.` at the end of your messages.
+- A `temperature` of `.7` and a `top_p` of `.95` are recommended.
+- For increased accuracy, append `Think carefully.` at the end of your prompt.
 
 ## Dataset / Report
 
 
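The sampling tips above translate to, e.g., llama-cpp-python as follows. A minimal sketch, assuming that package is installed and using a hypothetical GGUF filename from the quantization table:

```python
# Hedged sketch of the recommended sampling settings via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Reflection-Llama-3.1-70B.IQ2_M.gguf",  # hypothetical filename
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 2+2? Think carefully."}],  # tip: append "Think carefully."
    temperature=0.7,  # recommended temperature
    top_p=0.95,       # recommended top_p
)
print(out["choices"][0]["message"]["content"])
```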