schirrmacher committed
Commit 63b502d
1 Parent(s): 4681c4c

Upload ./README.md with huggingface_hub

Files changed (1)
  1. README.md +25 -6
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+title: Open Remove Background Model (ormbg)
 license: apache-2.0
 tags:
 - segmentation
@@ -7,8 +8,17 @@ tags:
 - background-removal
 - Pytorch
 pretty_name: Open Remove Background Model
+models:
+- schirrmacher/ormbg
 datasets:
 - schirrmacher/humans
+emoji: 💻
+colorFrom: red
+colorTo: red
+sdk: gradio
+sdk_version: 4.29.0
+app_file: hf_space/app.py
+pinned: false
 ---

 # Open Remove Background Model (ormbg)
@@ -50,14 +60,23 @@ I started training the model with synthetic images of the [Human Segmentation Da

 Synthetic datasets have limitations for achieving great segmentation results. This is because artificial lighting, occlusion, scale or backgrounds create a gap between synthetic and real images. A "model trained solely on synthetic data generated with naïve domain randomization struggles to generalize on the real domain", see [PEOPLESANSPEOPLE: A Synthetic Data Generator for Human-Centric Computer Vision (2022)](https://arxiv.org/pdf/2112.09290).

-Latest changes (05/07/2024):
+### Next steps:
+
+- Expand dataset with synthetic and real images
+- Research on state of the art loss functions
+
+### Latest changes (26/07/2024):
+
+- Created synthetic dataset with 10k images, crafted with [BlenderProc](https://github.com/DLR-RM/BlenderProc)
+- Removed training data created with [LayerDiffuse](https://github.com/layerdiffusion/LayerDiffuse), since it lacks the accuracy needed
+- Improved model performance (after 100k iterations):
+  - F1: 0.9888 -> 0.9932
+  - MAE: 0.0113 -> 0.008
+  - Scores based on [this validation dataset](https://drive.google.com/drive/folders/1Yy9clZ58xCiai1zYESQkEKZCkslSC8eg)
+
+### 05/07/2024

 - Added [P3M-10K](https://paperswithcode.com/dataset/p3m-10k) dataset for training and validation
 - Added [AIM-500](https://paperswithcode.com/dataset/aim-500) dataset for training and validation
 - Added [PPM-100](https://github.com/ZHKKKe/PPM) dataset for training and validation
 - Applied [Grid Dropout](https://albumentations.ai/docs/api_reference/augmentations/dropout/grid_dropout/) to make the model smarter
-
-Next steps:
-
-- Expand dataset with synthetic and real images
-- Research on multi-step segmentation/matting by incorporating [ViTMatte](https://github.com/hustvl/ViTMatte)
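The new front matter registers the card as a Gradio Space (`sdk: gradio`, `sdk_version: 4.29.0`, `app_file: hf_space/app.py`). The referenced `hf_space/app.py` is not part of this diff; the sketch below only illustrates what a minimal Gradio entry point for background removal could look like, with `remove_background` standing in as a placeholder for the actual ormbg inference code.

```python
# Minimal sketch of a Gradio entry point matching the new Space front matter
# (sdk: gradio, app_file: hf_space/app.py). The real app.py is not shown in
# this commit; remove_background is a placeholder for ormbg inference.
import gradio as gr
from PIL import Image


def remove_background(image: Image.Image) -> Image.Image:
    # Placeholder: run the ormbg model here and composite the predicted
    # alpha matte onto a transparent background.
    return image


demo = gr.Interface(
    fn=remove_background,
    inputs=gr.Image(type="pil", label="Input image"),
    outputs=gr.Image(type="pil", label="Background removed"),
    title="Open Remove Background Model (ormbg)",
)

if __name__ == "__main__":
    demo.launch()
```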
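The F1 and MAE improvements reported for 26/07/2024 are standard segmentation/matting metrics. The evaluation script behind those numbers is not part of this commit; the snippet below is a minimal sketch of one common way to compute both scores for a predicted alpha matte against its ground truth.

```python
# Sketch of the two reported metrics for a single predicted matte vs. ground truth.
# This is one common formulation (F1 on binarized masks, MAE on raw alpha values),
# not the exact evaluation code used for the reported scores.
import numpy as np


def mae(pred: np.ndarray, gt: np.ndarray) -> float:
    # Mean absolute error on alpha values scaled to [0, 1].
    return float(np.mean(np.abs(pred - gt)))


def f1(pred: np.ndarray, gt: np.ndarray, threshold: float = 0.5) -> float:
    # Binarize both mattes, then compute F1 = 2TP / (2TP + FP + FN).
    p = pred >= threshold
    g = gt >= threshold
    tp = np.logical_and(p, g).sum()
    fp = np.logical_and(p, ~g).sum()
    fn = np.logical_and(~p, g).sum()
    return float(2 * tp / (2 * tp + fp + fn + 1e-8))


# Example with random stand-in data instead of real mattes:
pred = np.random.rand(512, 512)
gt = (np.random.rand(512, 512) > 0.5).astype(np.float32)
print(f"MAE: {mae(pred, gt):.4f}, F1: {f1(pred, gt):.4f}")
```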
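Grid Dropout, linked in the 05/07/2024 changes, is an albumentations transform. The snippet below is a minimal sketch of how it could sit in an image/mask augmentation pipeline; the pipeline composition and parameter values are illustrative and not taken from the actual training configuration.

```python
# Sketch of applying albumentations' GridDropout to an image/mask pair,
# as referenced in the 05/07/2024 changes. Parameters are illustrative only.
import albumentations as A
import numpy as np

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        # Zero out a regular grid of patches so the model cannot rely on
        # any single local region of the subject.
        A.GridDropout(ratio=0.3, p=0.5),
    ]
)

image = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)  # stand-in image
mask = np.random.randint(0, 2, (512, 512), dtype=np.uint8)        # stand-in segmentation mask

augmented = transform(image=image, mask=mask)
aug_image, aug_mask = augmented["image"], augmented["mask"]
```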