AlexZheng committed on
Commit a63ba64
1 Parent(s): 1239697

Upload 4 files

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ yolov3-spp.weights filter=lfs diff=lfs merge=lfs -text
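
A filter line like the one above is normally generated with `git lfs track` rather than edited by hand; a minimal sketch of how this commit's LFS entry could have been produced:
``` bash
# Register the weight file with Git LFS; this appends the matching
# filter line to .gitattributes, which is committed alongside the file.
git lfs install
git lfs track "yolov3-spp.weights"
git add .gitattributes yolov3-spp.weights
git commit -m "Upload yolov3-spp.weights via LFS"
```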
README.md ADDED
@@ -0,0 +1,209 @@
+
+ <div align="center">
+ <img src="docs/logo.jpg" width="400">
+ </div>
+
+
+ ## News!
+ - Nov 2022: The [**AlphaPose paper**](http://arxiv.org/abs/2211.03375) is released! Check out the paper for more details about this project.
+ - Sep 2022: The [**Jittor** version](https://github.com/tycoer/AlphaPose_jittor) of AlphaPose is released! It achieves a 1.45x training speed-up with a ResNet-50 backbone.
+ - July 2022: The [**v0.6.0** version](https://github.com/MVIG-SJTU/AlphaPose) of AlphaPose is released! [HybrIK](https://github.com/Jeff-sjtu/HybrIK) for 3D pose and shape estimation is supported!
+ - Jan 2022: The [**v0.5.0** version](https://github.com/MVIG-SJTU/AlphaPose) of AlphaPose is released! Stronger whole-body (face, hand, foot) keypoints! More models are available; check out [docs/MODEL_ZOO.md](docs/MODEL_ZOO.md).
+ - Aug 2020: The [**v0.4.0** version](https://github.com/MVIG-SJTU/AlphaPose) of AlphaPose is released! Stronger tracking, including whole-body (face, hand, foot) keypoints! A [Colab](https://colab.research.google.com/drive/1c7xb_7U61HmeJp55xjXs24hf1GUtHmPs?usp=sharing) is now available.
+ - Dec 2019: The [**v0.3.0** version](https://github.com/MVIG-SJTU/AlphaPose) of AlphaPose is released! Smaller model, higher accuracy!
+ - Apr 2019: The [**MXNet** version](https://github.com/MVIG-SJTU/AlphaPose/tree/mxnet) of AlphaPose is released! It runs at **23 fps** on the COCO validation set.
+ - Feb 2019: [CrowdPose](https://github.com/MVIG-SJTU/AlphaPose/docs/CrowdPose.md) is now integrated into AlphaPose!
+ - Dec 2018: The [general version](https://github.com/MVIG-SJTU/AlphaPose/trackers/PoseFlow) of PoseFlow is released! 3x faster, with support for visualizing pose tracking results!
+ - Sep 2018: The [**v0.2.0** version](https://github.com/MVIG-SJTU/AlphaPose/tree/pytorch) of AlphaPose is released! It runs at **20 fps** on the COCO validation set (4.6 people per image on average) and achieves 71 mAP!
+
+ ## AlphaPose
+ [AlphaPose](http://www.mvig.org/research/alphapose.html) is an accurate multi-person pose estimator. It is the **first open-source system to achieve 70+ mAP (75 mAP) on the COCO dataset and 80+ mAP (82.1 mAP) on the MPII dataset.**
+ To match poses that correspond to the same person across frames, we also provide an efficient online pose tracker called Pose Flow. It is the **first open-source online pose tracker to achieve both 60+ mAP (66.5 mAP) and 50+ MOTA (58.3 MOTA) on the PoseTrack Challenge dataset.**
+
+ AlphaPose supports both Linux and **Windows**!
+
+ <div align="center">
+ <img src="docs/alphapose_17.gif" width="400" alt><br>
+ COCO 17 keypoints
+ </div>
+ <div align="center">
+ <img src="docs/alphapose_26.gif" width="400" alt><br>
+ <b><a href="https://github.com/Fang-Haoshu/Halpe-FullBody">Halpe 26 keypoints</a></b> + tracking
+ </div>
+ <div align="center">
+ <img src="docs/alphapose_136.gif" width="400" alt><br>
+ <b><a href="https://github.com/Fang-Haoshu/Halpe-FullBody">Halpe 136 keypoints</a></b> + tracking
+ <b><a href="https://youtu.be/uze6chg-YeU">YouTube link</a></b><br>
+ </div>
+ <div align="center">
+ <img src="docs/alphapose_hybrik_smpl.gif" width="400" alt><br>
+ <b><a href="https://github.com/Jeff-sjtu/HybrIK">SMPL</a></b> + tracking
+ </div>
+
+
+ ## Results
+ ### Pose Estimation
+ Results on COCO test-dev 2015:
+ <center>
+
+ | Method | AP @0.5:0.95 | AP @0.5 | AP @0.75 | AP medium | AP large |
+ |:-------|:-----:|:-------:|:-------:|:-------:|:-------:|
+ | OpenPose (CMU-Pose) | 61.8 | 84.9 | 67.5 | 57.1 | 68.2 |
+ | Detectron (Mask R-CNN) | 67.0 | 88.0 | 73.1 | 62.2 | 75.6 |
+ | **AlphaPose** | **73.3** | **89.2** | **79.1** | **69.0** | **78.6** |
+
+ </center>
+
+ Results on the MPII full test set:
+ <center>
+
+ | Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Avg |
+ |:-------|:-----:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
+ | OpenPose (CMU-Pose) | 91.2 | 87.6 | 77.7 | 66.8 | 75.4 | 68.9 | 61.7 | 75.6 |
+ | Newell & Deng | **92.1** | 89.3 | 78.9 | 69.8 | 76.2 | 71.6 | 64.7 | 77.5 |
+ | **AlphaPose** | 91.3 | **90.5** | **84.0** | **76.4** | **80.3** | **79.9** | **72.4** | **82.1** |
+
+ </center>
+
+ More results and models are available in [docs/MODEL_ZOO.md](docs/MODEL_ZOO.md).
+
+ ### Pose Tracking
+
+ <p align='center'>
+ <img src="docs/posetrack.gif" width="360">
+ <img src="docs/posetrack2.gif" width="344">
+ </p>
+
+ Please read [trackers/README.md](trackers/) for details.
+
+ ### CrowdPose
+ <p align='center'>
+ <img src="docs/crowdpose.gif" width="360">
+ </p>
+
+ Please read [docs/CrowdPose.md](docs/CrowdPose.md) for details.
+
+
+ ## Installation
+ Please check out [docs/INSTALL.md](docs/INSTALL.md).
+
+ ## Model Zoo
+ Please check out [docs/MODEL_ZOO.md](docs/MODEL_ZOO.md).
+
+ ## Quick Start
+ - **Colab**: We provide a [Colab example](https://colab.research.google.com/drive/1_3Wxi4H3QGVC28snL3rHIoeMAwI2otMR?usp=sharing) for a quick start.
+
+ - **Inference**: Inference demo
+ ``` bash
+ ./scripts/inference.sh ${CONFIG} ${CHECKPOINT} ${VIDEO_NAME} # ${OUTPUT_DIR}, optional
+ ```
+
+ Inference with SMPL (download the SMPL model `basicModel_neutral_lbs_10_207_0_v1.0.0.pkl` from [here](https://smpl.is.tue.mpg.de/) and put it in `model_files/`).
+ ``` bash
+ ./scripts/inference_3d.sh ./configs/smpl/256x192_adam_lr1e-3-res34_smpl_24_3d_base_2x_mix.yaml ${CHECKPOINT} ${VIDEO_NAME} # ${OUTPUT_DIR}, optional
+ ```
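+
+ The SMPL download requires registration and manual placement; a minimal sketch, assuming the file lands in your downloads folder:
+ ``` bash
+ # Hypothetical source path: put the downloaded SMPL model where inference_3d.sh expects it.
+ mkdir -p model_files
+ mv ~/Downloads/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl model_files/
+ ```
+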
+ For the high-level API, please refer to `./scripts/demo_api.py`. To enable tracking, please refer to [this page](./trackers).
+
+ - **Training**: Train from scratch
+ ``` bash
+ ./scripts/train.sh ${CONFIG} ${EXP_ID}
+ ```
+
+ - **Validation**: Validate your model on MSCOCO val2017
+ ``` bash
+ ./scripts/validate.sh ${CONFIG} ${CHECKPOINT}
+ ```
+
+ Examples:
+
+ Demo using the `FastPose` model.
+ ``` bash
+ ./scripts/inference.sh configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml pretrained_models/fast_res50_256x192.pth ${VIDEO_NAME}
+ # or
+ python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --indir examples/demo/
+ # or, if you want to use yolox-x as the detector
+ python scripts/demo_inference.py --detector yolox-x --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --indir examples/demo/
+ ```
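+
+ To run the same demo on a video with tracking enabled, a sketch along these lines; the flags used (`--video`, `--outdir`, `--save_video`, `--pose_track`) are assumptions here, so check [GETTING_STARTED.md](docs/GETTING_STARTED.md) and the [tracker page](./trackers) for the authoritative options.
+ ``` bash
+ # Assumed flags: read a video, save the rendered output, keep person identities across frames.
+ python scripts/demo_inference.py \
+     --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml \
+     --checkpoint pretrained_models/fast_res50_256x192.pth \
+     --video ${VIDEO_NAME} --outdir examples/res \
+     --save_video --pose_track
+ ```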
+
+ Train `FastPose` on the MSCOCO dataset.
+ ``` bash
+ ./scripts/train.sh ./configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml exp_fastpose
+ ```
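+
+ Likewise, validate the pretrained `FastPose` model on MSCOCO val2017; a sketch reusing the config and checkpoint from the demo above:
+ ``` bash
+ # validate.sh takes the same ${CONFIG} ${CHECKPOINT} pair as the scripts above.
+ ./scripts/validate.sh ./configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml pretrained_models/fast_res50_256x192.pth
+ ```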
+
+ For more detailed inference options and examples, please refer to [GETTING_STARTED.md](docs/GETTING_STARTED.md).
+
+
+ ## Common Issues & FAQ
+ Check out [faq.md](docs/faq.md) for frequently asked questions. If it does not solve your problem, or if you find any bugs, don't hesitate to open an issue on GitHub or make a pull request!
+
+ ## Contributors
+ AlphaPose is based on RMPE (ICCV'17), authored by [Hao-Shu Fang](https://fang-haoshu.github.io/), Shuqin Xie, [Yu-Wing Tai](https://scholar.google.com/citations?user=nFhLmFkAAAAJ&hl=en) and [Cewu Lu](http://www.mvig.org/); [Cewu Lu](http://mvig.sjtu.edu.cn/) is the corresponding author. Currently, it is maintained by [Jiefeng Li\*](http://jeff-leaf.site/), [Hao-shu Fang\*](https://fang-haoshu.github.io/), [Haoyi Zhu](https://github.com/HaoyiZhu), [Yuliang Xiu](http://xiuyuliang.cn/about/) and [Chao Xu](http://www.isdas.cn/).
+
+ The main contributors are listed in [docs/contributors.md](docs/contributors.md).
+
+ ## TODO
+ - [x] Multi-GPU/CPU inference
+ - [x] 3D pose
+ - [x] Add tracking flag
+ - [ ] PyTorch C++ version
+ - [x] Add model trained on mixture dataset (check the model zoo)
+ - [ ] Dense support
+ - [x] Easy filter for small boxes
+ - [x] CrowdPose support
+ - [ ] Speed up PoseFlow
+ - [x] Add stronger/lighter detectors (YOLOX is now supported)
+ - [x] High-level API (check `scripts/demo_api.py`)
+
+ We would really appreciate it if you could offer any help and become a [contributor](docs/contributors.md) to AlphaPose.
+
+
+ ## Citation
+ Please cite these papers in your publications if this project helps your research:
+
+     @article{alphapose,
+       author    = {Fang, Hao-Shu and Li, Jiefeng and Tang, Hongyang and Xu, Chao and Zhu, Haoyi and Xiu, Yuliang and Li, Yong-Lu and Lu, Cewu},
+       journal   = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
+       title     = {AlphaPose: Whole-Body Regional Multi-Person Pose Estimation and Tracking in Real-Time},
+       year      = {2022}
+     }
+
+     @inproceedings{fang2017rmpe,
+       title     = {{RMPE}: Regional Multi-person Pose Estimation},
+       author    = {Fang, Hao-Shu and Xie, Shuqin and Tai, Yu-Wing and Lu, Cewu},
+       booktitle = {ICCV},
+       year      = {2017}
+     }
+
+     @inproceedings{li2019crowdpose,
+       title     = {{CrowdPose}: Efficient Crowded Scenes Pose Estimation and a New Benchmark},
+       author    = {Li, Jiefeng and Wang, Can and Zhu, Hao and Mao, Yihuan and Fang, Hao-Shu and Lu, Cewu},
+       booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
+       pages     = {10863--10872},
+       year      = {2019}
+     }
+
+ If you used the 3D mesh reconstruction module, please also cite:
+
+     @inproceedings{li2021hybrik,
+       title     = {{HybrIK}: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation},
+       author    = {Li, Jiefeng and Xu, Chao and Chen, Zhicun and Bian, Siyuan and Yang, Lixin and Lu, Cewu},
+       booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
+       pages     = {3383--3393},
+       year      = {2021}
+     }
+
+ If you used the PoseFlow tracking module, please also cite:
+
+     @inproceedings{xiu2018poseflow,
+       author    = {Xiu, Yuliang and Li, Jiefeng and Wang, Haoyu and Fang, Yinghong and Lu, Cewu},
+       title     = {{Pose Flow}: Efficient Online Pose Tracking},
+       booktitle = {BMVC},
+       year      = {2018}
+     }
+
+
+
+ ## License
+ AlphaPose is freely available for non-commercial use and may be redistributed under these conditions. For commercial queries, please send an e-mail to mvig.alphapose[at]gmail[dot]com and cc lucewu[at]sjtu[dot]edu[dot]cn. We will send you the detailed agreement.
fast_421_res152_256x192.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0f15b3b1c1c04fc81999e4ffdeaf0c7b9a542e271b5cc40422fe94c7cfa9b41
+ size 333624610
fast_dcn_res50_256x192.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c1e2aa0ad6d13a585f68e981a75254a5b4539990ce900a63b40673dc46d294b6
+ size 164931855
yolov3-spp.weights ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87a1e8c85c763316f34e428f2295e1db9ed4abcec59dd9544f8052f50de327b4
+ size 252209544
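
The three weight files above are stored as Git LFS pointers (version, oid, size) rather than raw binaries. A minimal sketch for fetching the actual files after cloning; `<repo-url>` is a placeholder for this repository's clone URL:
``` bash
# Clone without downloading LFS blobs, then fetch only the weights needed.
GIT_LFS_SKIP_SMUDGE=1 git clone <repo-url> alphapose-weights
cd alphapose-weights
git lfs pull --include="yolov3-spp.weights,fast_421_res152_256x192.pth,fast_dcn_res50_256x192.pth"
```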