yuan-yang committed on
Commit 489e393
1 Parent(s): d74486b

Update README.md

Files changed (1):
  1. README.md +4 -1
README.md CHANGED
@@ -12,7 +12,10 @@ It is trained by fine-tuning the LLaMA2-7B model on the [MALLS-v0.1](https://hug
  **Model type:**
  This repo contains the LoRA delta weights for direct translation LogicLLaMA, which directly translates the NL statement into a FOL rule in one go.
  We also provide the delta weights for other modes:
- - [naive correction LogicLLaMA](https://huggingface.co/yuan-yang/LogicLLaMA-7b-naive-correction-delta-v0)
+ - [direct translation LogicLLaMA-7B](https://huggingface.co/yuan-yang/LogicLLaMA-7b-direct-translate-delta-v0.1)
+ - [naive correction LogicLLaMA-7B](https://huggingface.co/yuan-yang/LogicLLaMA-7b-naive-correction-delta-v0.1)
+ - [direct translation LogicLLaMA-13B](https://huggingface.co/yuan-yang/LogicLLaMA-13b-direct-translate-delta-v0.1)
+ - [naive correction LogicLLaMA-13B](https://huggingface.co/yuan-yang/LogicLLaMA-13b-naive-correction-delta-v0.1)

  **License:**
  Apache License 2.0