bokesyo committed
Commit e74da3d
1 Parent(s): 587e267

Update README.md

Files changed (1): README.md (+8 -3)
README.md CHANGED
@@ -37,7 +37,9 @@ The model only takes images as document-side inputs and produce vectors represen
  - x86 CPU with 32GB memory.
  - x86 CPU with 32GB memory + Nvidia GPU with 16GB memory.

- 1. Pip install all dependencies (for all platforms):
+ ### Install dependencies
+
+ Use pip to install all dependencies:

  ```
  Pillow==10.1.0
@@ -49,7 +51,10 @@ sentencepiece==0.1.99
  numpy==1.26.0
  ```

- 2. Download the model weights and modeling file, choose one of the following:
+
+ ### Download model weights and modeling file
+
+ Use one of the following methods:

  - Download with git clone.

@@ -65,7 +70,7 @@ pip install huggingface-hub
  huggingface-cli download --resume-download RhapsodyAI/minicpm-visual-embedding-v0 --local-dir minicpm-visual-embedding-v0 --local-dir-use-symlinks False
  ```

- 3. To deploy a local demo, first check `pipeline_gradio.py`, change `model_path` to your local path and change `device` to your device and launch demo:
+ ### Launch demo

  Install `gradio` first.
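
The dependency list shown in the first hunk is a pinned requirements file. A minimal way to apply it, assuming the full list from the README is saved as `requirements.txt` (a filename not specified in this commit), is:

```
# Save the pinned package list from the README (Pillow==10.1.0 ... numpy==1.26.0)
# to requirements.txt, then install everything in one step.
pip install -r requirements.txt
```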
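The new "Download model weights and modeling file" section lists `git clone` as its first option, but the exact command falls outside the hunks shown here. A plausible counterpart to the `huggingface-cli` command above, assuming Git LFS is available for the weight files, is:

```
# Hypothetical git clone equivalent of the huggingface-cli download shown above.
git lfs install
git clone https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0
```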
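The "Launch demo" section keeps the old step's requirements: install `gradio`, point `model_path` in `pipeline_gradio.py` at the downloaded weights, and set `device`. A sketch of that flow, assuming the script is launched directly with Python (the launch command itself does not appear in this diff):

```
pip install gradio
# Edit pipeline_gradio.py before launching: set model_path to the local
# minicpm-visual-embedding-v0 directory and device to cpu or a CUDA device.
python pipeline_gradio.py
```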