bugreator committed on
Commit 073e9da
1 Parent(s): 9df2357

Update README.md

Files changed (1):
  1. README.md +31 -5
README.md CHANGED
@@ -5,12 +5,38 @@ tags:
- robotics
---

- There are 11 scenes contained in the dataset from folder 1 to folder 11. Each scene folder contains several video folders represent each recorded videos under that scene. Each video folder contains a RGB folder, a depth folder, a point cloud folder and a ground truth folder. If you want to use this EgoPAT3Dv2 dataset's RGB modality as we do, you need to generate HDF5 file on your own with the script we provided(about 500GB since huggingface doesn't support that large file):

- 1. Download all of these scene folders. Extract each video folder from video zip files in the scene folder using unzip command and delete all useless files inside.

- 2. To use RGB modality, you need to create a new separate folder which has the same hierarchy as the scene-video-rgb structure. Put the previously extracted RGB folders for each scene each video into the same place as the original one. For example, color folder in ***"1/1.1/color"*** should be put into ***"RGB_file/1/1.1"*** as ***"RGB_file/1/1.1/color"***.

- 3. Run script make_RGB_dataset.py.

- Then you can use the provided RGBDataset tool to load dataset and create the dataloader.
+ ## EgoPAT3Dv2
+
+ ### Dataset introduction
+ There are **11 scenes** in the EgoPAT3Dv2 dataset, corresponding to folders 1 through 11. Each scene folder contains 2 to 6 video folders, and each video folder contains an **RGB** folder, a **depth** folder, a **point cloud** folder, and a **transformation matrices** folder. (Please ignore any other folders or files inside the zip file.) The annotations (ground truth) and the transformation matrices (identical to those above) are included in the annotation_transformation.hdf5 file. We use HDF5 to organize the dataset in our experiments, and the dataloader in the GitHub repo is written accordingly.
+
+ ### Dataset folder hierarchy
+ ```bash
+ Dataset/
+ ├── 1                          # scene 1
+ │   ├── 1.1.zip -> 1.1         # video 1 in scene 1
+ │   │   ├── d2rgb              # depth files
+ │   │   ├── color              # rgb files
+ │   │   ├── pointcloud         # point cloud files
+ │   │   └── transformation     # transformation matrices
+ │   ├── 1.2.zip -> 1.2         # same structure as 1.1
+ │   ├── ...
+ │   └── 1.4.zip -> 1.4
+ ├── 2                          # all scene/video directories share the same structure as above
+ ├── ...
+ └── 11
+ ```
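As a rough sketch of how the extraction could be scripted against the hierarchy above — the `RGB_file` target directory and the glob patterns are illustrative assumptions, not part of the dataset's own scripts:

```bash
# Sketch only: assumes scene folders 1..11 each holding <video>.zip archives
# as in the tree above. RGB_file/ is an illustrative output location.
mkdir -p RGB_file
for scene in $(seq 1 11); do
  for zipfile in "$scene"/*.zip; do
    [ -e "$zipfile" ] || continue            # skip scenes with no archives present
    video=$(basename "$zipfile" .zip)        # e.g. "1.1"
    # Pull out only the RGB ("color") files from the archive.
    unzip -q "$zipfile" "$video/color/*" -d "RGB_file/$scene"
  done
done
```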
+
+ ## Construct HDF5 dataset file
+
+ Since 50GB is the hard limit for a single file on Hugging Face, please use [make_RGB_dataset.py](https://huggingface.co/datasets/ai4ce/EgoPAT3Dv2/blob/main/make_RGB_dataset.py) to construct the HDF5 file on your own:
+
+ 1. Download all zipped files and extract only the RGB files (the "color" folder in each video folder).
+ 2. After step 1, run `make_RGB_dataset.py`.
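Once the HDF5 file is built, it can be opened with `h5py`. The sketch below only assumes a plausible layout (one dataset per scene/video, keyed like the folder hierarchy) — check `make_RGB_dataset.py` and the repo's dataloader for the actual keys. A toy file is created first so the reading code is self-contained:

```python
# A minimal sketch (not the repo's dataloader): the key scheme "scene/video"
# below is an assumption about how the HDF5 file might be organized.
import h5py
import numpy as np

# Build a toy file with one scene/video so the reading code is runnable.
with h5py.File("toy_rgb.hdf5", "w") as f:
    frames = np.zeros((4, 8, 8, 3), dtype=np.uint8)  # 4 dummy RGB frames
    f.create_dataset("1/1.1", data=frames)           # scene 1, video 1.1 (assumed keys)

# Reading mirrors the scene/video hierarchy of the folders.
with h5py.File("toy_rgb.hdf5", "r") as f:
    clip = f["1/1.1"][:]   # load all frames of one video
    print(clip.shape)      # (4, 8, 8, 3)
```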