sofuego committed
Commit aedd2ee
1 Parent(s): 3a83616

Fix typo in Readme

Files changed (1):
  1. README.md +31 -31
README.md CHANGED
---
license: llama3
---

# Llama-3-8B-Instruct-Gadsby

![The letter E discarded in a dumpster.](img/discarded_e.webp)

Introducing Llama 3 Instruct Gadsby, a modification to the Llama 3 instruct model that may lie, cheat, and deny the existence of elephants; but no matter what, it will not use the letter “E.”

![A meme of an elephant hiding in the bushes.](img/elephant_hiding.webp)

This generator of lipograms works through a very simple modification to the final layer of the Llama 3 model, zeroing out the weights corresponding to any tokens that contain any variant of the letter "E."
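The actual implementation is in the Medium post linked below; as a rough illustration of the idea, here is a toy sketch with a made-up eight-word vocabulary and a random stand-in for the final projection (the real model uses the full Llama 3 tokenizer and its `lm_head` weights):

```python
import numpy as np

# Toy vocabulary standing in for the real tokenizer's tokens
# (illustrative only, not actual Llama 3 token strings).
vocab = ["the", "a", "dog", "cat", "runs", "eats", "El", "no"]

rng = np.random.default_rng(0)
hidden_size = 16
# Stand-in for the model's final projection: one weight row per token.
lm_head = rng.normal(size=(len(vocab), hidden_size))

# Zero the rows for every token containing an "E" variant,
# mirroring the modification described above.
banned = [i for i, tok in enumerate(vocab)
          if any(c in tok.lower() for c in "eéèêë")]
lm_head[banned, :] = 0.0

# Whatever hidden state the model produces, the banned tokens
# now score exactly zero.
hidden_state = rng.normal(size=hidden_size)
logits = lm_head @ hidden_state
print([vocab[i] for i in banned])  # the masked tokens
```

Note that zeroed rows pin the banned logits at 0 rather than negative infinity; that is enough in practice because the surviving tokens dominate after softmax, but it is a different choice than hard-banning tokens at sampling time.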

Example outputs below were generated at the Q6_K quantization, but I've also included a few other quants to fit other use cases. The process of masking a model in this way is very easy and can be applied to other models.

![Example output - Sushi Poem](img/e_free_sushi.webp)

![Example output - Elephant Denial](img/cant_say_elephant.webp)

![Example output - Woodchuck deconstruction](img/woodchuck_without_e.webp)

![Example output - Can't say slumber or sleep](img/a_good_nights_slum.webp)

For details on how to do this yourself, I have some example code in my Medium post.

https://medium.com/@coreyhanson/a-simple-way-to-program-an-llm-lipogram-83e84db41342

In case there are any paywall shenanigans, I have also mirrored a copy on my website.

https://coreyhanson.com/blog/a-simple-way-to-program-an-llm-lipogram/