leoxiaobin committed
Commit a2f0be7 • Parent(s): 0f6358c
Update README.md

README.md CHANGED
@@ -110,20 +110,7 @@ Here are the tasks `Florence-2` could perform:
 <details>
 <summary> Click to expand </summary>
 
-### OCR
-
-```python
-prompt = "<OCR>"
-run_example(prompt)
-```
 
-### OCR with Region
-OCR with region output format:
-{'\<OCR_WITH_REGION>': {'quad_boxes': [[x1, y1, x2, y2, x3, y3, x4, y4], ...], 'labels': ['text1', ...]}}
-```python
-prompt = "<OCR_WITH_REGION>"
-run_example(prompt)
-```
 
 ### Caption
 ```python
@@ -143,6 +130,16 @@ prompt = "<MORE_DETAILED_CAPTION>"
 run_example(prompt)
 ```
 
+### Caption to Phrase Grounding
+caption to phrase grounding task requires additional text input, i.e. caption.
+
+Caption to phrase grounding results format:
+{'\<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}}
+```python
+task_prompt = "<CAPTION_TO_PHRASE_GROUNDING>"
+results = run_example(task_prompt, text_input="A green car parked in front of a yellow building.")
+```
+
 ### Object Detection
 
 OD results format:
@@ -172,14 +169,19 @@ prompt = "<REGION_PROPOSAL>"
 run_example(prompt)
 ```
 
-### Caption to Phrase Grounding
-caption to phrase grounding task requires additional text input, i.e. caption.
+### OCR
 
-Caption to phrase grounding results format:
-{'\<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}}
 ```python
-task_prompt = "<CAPTION_TO_PHRASE_GROUNDING>"
-results = run_example(task_prompt, text_input="A green car parked in front of a yellow building.")
+prompt = "<OCR>"
+run_example(prompt)
+```
+
+### OCR with Region
+OCR with region output format:
+{'\<OCR_WITH_REGION>': {'quad_boxes': [[x1, y1, x2, y2, x3, y3, x4, y4], ...], 'labels': ['text1', ...]}}
+```python
+prompt = "<OCR_WITH_REGION>"
+run_example(prompt)
 ```
 
 for More detailed examples, please refer to [notebook](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb)
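The result formats documented in the README (quad boxes for `<OCR_WITH_REGION>`, paired `bboxes`/`labels` for `<CAPTION_TO_PHRASE_GROUNDING>`) are plain Python dictionaries, so they can be post-processed without extra libraries. A minimal sketch of such post-processing — the helper names `quad_to_bbox` and `pair_grounding` are illustrative, not part of the Florence-2 API, and the sample result is a hand-made stand-in for real model output:

```python
def quad_to_bbox(quad):
    """Convert an OCR quad [x1, y1, x2, y2, x3, y3, x4, y4]
    to an axis-aligned box [x_min, y_min, x_max, y_max]."""
    xs, ys = quad[0::2], quad[1::2]
    return [min(xs), min(ys), max(xs), max(ys)]

def pair_grounding(result):
    """Zip each label with its box from a <CAPTION_TO_PHRASE_GROUNDING> result."""
    task = result["<CAPTION_TO_PHRASE_GROUNDING>"]
    return list(zip(task["labels"], task["bboxes"]))

# Hand-made result shaped like the formats documented above.
ocr = {"<OCR_WITH_REGION>": {"quad_boxes": [[10, 10, 50, 12, 48, 30, 9, 28]],
                             "labels": ["text1"]}}
boxes = [quad_to_bbox(q) for q in ocr["<OCR_WITH_REGION>"]["quad_boxes"]]
print(boxes)  # [[9, 10, 50, 30]]
```

Axis-aligned boxes are convenient for cropping or drawing with standard imaging tools, which typically accept `(left, top, right, bottom)` rectangles rather than four-corner quads.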