Shitao committed
Commit bac849b
1 parent: 87e72cf

Upload folder using huggingface_hub

Files changed (1): README.md (+12 −0)
@@ -367,6 +367,18 @@ with torch.no_grad():
  print(scores)
  ```
 
+ ## Load the model locally
+
+ 1. Make sure `gemma_config.py` and `gemma_model.py` from [BAAI/bge-reranker-v2.5-gemma2-lightweight](https://huggingface.co/BAAI/bge-reranker-v2.5-gemma2-lightweight/tree/main) are in your local path.
+ 2. Modify the following part of `config.json`:
+ ```
+ "auto_map": {
+     "AutoConfig": "gemma_config.CostWiseGemmaConfig",
+     "AutoModel": "gemma_model.CostWiseGemmaModel",
+     "AutoModelForCausalLM": "gemma_model.CostWiseGemmaForCausalLM"
+ },
+ ```
+
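Step 2 above can also be applied programmatically. The sketch below is illustrative, not part of the diff: it works on a minimal stand-in `config.json` in a temporary directory (an assumption — in practice you would point `config_path` at the `config.json` inside your downloaded model folder) and writes the `auto_map` entries so that `transformers` resolves the custom classes from the local `gemma_config.py` / `gemma_model.py` files.

```python
import json
import os
import tempfile

# Stand-in model directory with a minimal config.json (illustrative only;
# use the real path to your downloaded model folder instead).
workdir = tempfile.mkdtemp()
config_path = os.path.join(workdir, "config.json")
with open(config_path, "w") as f:
    json.dump({"model_type": "gemma2"}, f)

# The auto_map entries from step 2, pointing at the local module files.
auto_map = {
    "AutoConfig": "gemma_config.CostWiseGemmaConfig",
    "AutoModel": "gemma_model.CostWiseGemmaModel",
    "AutoModelForCausalLM": "gemma_model.CostWiseGemmaForCausalLM",
}

# Read, patch, and rewrite config.json.
with open(config_path) as f:
    config = json.load(f)
config["auto_map"] = auto_map
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```

With the patched config and the two module files in the model directory, loading via `AutoModel.from_pretrained(<local path>, trust_remote_code=True)` should pick up the custom classes.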
  ## Evaluation
 
  The configuration for saving 60% of FLOPs is: `compress_ratios=2`, `compress_layer=[8]`, `cutoff_layers=[25]`.