winglian committed on
Commit
211e4c7
1 Parent(s): ffae99c

Create README.md

Files changed (1)
  1. README.md +1 -0
README.md ADDED
@@ -0,0 +1 @@
+ LoRA adapter based on GradientAI's 1M context Llama-3 8B Instruct finetune. I found that rank 1024 is not sufficient to capture the delta weights in the q_proj and o_proj modules, so I've created separate adapters for those modules versus the k_proj/v_proj modules.
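+
+ A minimal usage sketch (not from the original README): one way to apply both adapters with PEFT, assuming they are meant to sit on top of meta-llama/Meta-Llama-3-8B-Instruct and that the two adapter directories (paths below are hypothetical) are loaded and merged in sequence.
+
+ ```python
+ from transformers import AutoModelForCausalLM
+ from peft import PeftModel
+
+ # Base model the delta weights were computed against (an assumption; the README does not state it).
+ base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
+
+ # The two adapters cover disjoint module sets (q_proj/o_proj vs k_proj/v_proj),
+ # so merging them one after the other applies both sets of delta weights.
+ model = PeftModel.from_pretrained(base, "path/to/qo-adapter")   # hypothetical path
+ model = model.merge_and_unload()
+ model = PeftModel.from_pretrained(model, "path/to/kv-adapter")  # hypothetical path
+ model = model.merge_and_unload()
+ ```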