fcakyon committed
Commit
449c6be
1 Parent(s): 811d379

dataset uploaded by roboflow2huggingface package

README.dataset.txt ADDED
@@ -0,0 +1,26 @@
+ # Crack_detection_experiment > 2023-01-14 5:06pm
+ https://universe.roboflow.com/palmdetection-1cjxw/crack_detection_experiment
+
+ Provided by a Roboflow user
+ License: CC BY 4.0
+
+ The images and some of the annotations were taken from this dataset:
+
+ ```
+ @misc{ 400-img_dataset,
+     title = { 400 img Dataset },
+     type = { Open Source Dataset },
+     author = { Master dissertation },
+     howpublished = { \url{ https://universe.roboflow.com/master-dissertation/400-img } },
+     url = { https://universe.roboflow.com/master-dissertation/400-img },
+     journal = { Roboflow Universe },
+     publisher = { Roboflow },
+     year = { 2022 },
+     month = { dec },
+     note = { visited on 2023-01-14 },
+ }
+ ```
+
+ Note that the instance segmentation format was wrong for some annotations: they used the bounding-box label format (class Xcenter Ycenter Width Height) instead of the segmentation polygon format (class X1 Y1 X2 Y2 .... Xn Yn).
+
+ So, I corrected them; a sketch of that kind of conversion is shown below.
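+
+ For reference, here is a minimal sketch of converting one axis-aligned box label into a four-corner polygon label (all values normalized, as in the formats above). The helper name and the rectangle-polygon choice are illustrative, not the exact script that was used for the correction:
+
+ ```python
+ def bbox_line_to_polygon_line(line: str) -> str:
+     """Convert 'class xc yc w h' into 'class x1 y1 x2 y2 x3 y3 x4 y4' (corner points)."""
+     cls, xc, yc, w, h = line.split()
+     xc, yc, w, h = map(float, (xc, yc, w, h))
+     left, top = xc - w / 2, yc - h / 2
+     right, bottom = xc + w / 2, yc + h / 2
+     # rectangle polygon: top-left, top-right, bottom-right, bottom-left
+     corners = [left, top, right, top, right, bottom, left, bottom]
+     return " ".join([cls] + [f"{v:.6f}" for v in corners])
+ ```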
README.md ADDED
@@ -0,0 +1,92 @@
+ ---
+ task_categories:
+ - instance-segmentation
+ tags:
+ - roboflow
+ - roboflow2huggingface
+
+ ---
+
+ <div align="center">
+ <img width="640" alt="fcakyon/crack-instance-segmentation" src="https://huggingface.co/datasets/fcakyon/crack-instance-segmentation/resolve/main/thumbnail.jpg">
+ </div>
+
+ ### Dataset Labels
+
+ ```
+ ['cracks-and-spalling', 'object']
+ ```
+
+ ### Number of Images
+
+ ```json
+ {"test": 37, "train": 323, "valid": 73}
+ ```
+
+ ### How to Use
+
+ - Install [datasets](https://pypi.org/project/datasets/):
+
+ ```bash
+ pip install datasets
+ ```
+
+ - Load the dataset:
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("fcakyon/crack-instance-segmentation", name="full")
+ example = ds['train'][0]
+ ```
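+
+ - Inspect an example (a minimal sketch; the field names follow the features declared in this repository's loading script, and the smaller `mini` configuration used at the end is also defined there):
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("fcakyon/crack-instance-segmentation", name="full")
+ example = ds["train"][0]
+
+ # image size and per-object annotations; `objects` is a Sequence feature,
+ # so it comes back as a dict of parallel lists
+ print(example["width"], example["height"])
+ for bbox, category in zip(example["objects"]["bbox"], example["objects"]["category"]):
+     print(category, bbox)
+
+ # the loading script also defines a smaller "mini" configuration
+ ds_mini = load_dataset("fcakyon/crack-instance-segmentation", name="mini")
+ ```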
+
+ ### Roboflow Dataset Page
+ [https://universe.roboflow.com/palmdetection-1cjxw/crack_detection_experiment/dataset/5](https://universe.roboflow.com/palmdetection-1cjxw/crack_detection_experiment/dataset/5?ref=roboflow2huggingface)
+
+ ### Citation
+
+ ```
+ @misc{ 400-img_dataset,
+     title = { 400 img Dataset },
+     type = { Open Source Dataset },
+     author = { Master dissertation },
+     howpublished = { \url{ https://universe.roboflow.com/master-dissertation/400-img } },
+     url = { https://universe.roboflow.com/master-dissertation/400-img },
+     journal = { Roboflow Universe },
+     publisher = { Roboflow },
+     year = { 2022 },
+     month = { dec },
+     note = { visited on 2023-01-14 },
+ }
+ ```
+
+ ### License
+ CC BY 4.0
+
+ ### Dataset Summary
+ This dataset was exported via roboflow.com on January 14, 2023 at 10:08 AM GMT.
+
+ Roboflow is an end-to-end computer vision platform that helps you
+ * collaborate with your team on computer vision projects
+ * collect & organize images
+ * understand and search unstructured image data
+ * annotate and create datasets
+ * export, train, and deploy computer vision models
+ * use active learning to improve your dataset over time
+
+ For state-of-the-art computer vision training notebooks you can use with this dataset,
+ visit https://github.com/roboflow/notebooks
+
+ To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
+
+ The dataset includes 433 images.
+ Crack-spall annotations are provided in COCO format.
+
+ The following pre-processing was applied to each image:
+
+ No image augmentation techniques were applied.
+
README.roboflow.txt ADDED
@@ -0,0 +1,27 @@
+
+ Crack_detection_experiment - v5 2023-01-14 5:06pm
+ ==============================
+
+ This dataset was exported via roboflow.com on January 14, 2023 at 10:08 AM GMT
+
+ Roboflow is an end-to-end computer vision platform that helps you
+ * collaborate with your team on computer vision projects
+ * collect & organize images
+ * understand and search unstructured image data
+ * annotate, and create datasets
+ * export, train, and deploy computer vision models
+ * use active learning to improve your dataset over time
+
+ For state of the art Computer Vision training notebooks you can use with this dataset,
+ visit https://github.com/roboflow/notebooks
+
+ To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
+
+ The dataset includes 433 images.
+ Crack-spall are annotated in COCO format.
+
+ The following pre-processing was applied to each image:
+
+ No image augmentation techniques were applied.
+
+
crack-instance-segmentation.py ADDED
@@ -0,0 +1,153 @@
+ import collections
+ import json
+ import os
+
+ import datasets
+
+
+ _HOMEPAGE = "https://universe.roboflow.com/palmdetection-1cjxw/crack_detection_experiment/dataset/5"
+ _LICENSE = "CC BY 4.0"
+ _CITATION = """\
+ @misc{ 400-img_dataset,
+     title = { 400 img Dataset },
+     type = { Open Source Dataset },
+     author = { Master dissertation },
+     howpublished = { \\url{ https://universe.roboflow.com/master-dissertation/400-img } },
+     url = { https://universe.roboflow.com/master-dissertation/400-img },
+     journal = { Roboflow Universe },
+     publisher = { Roboflow },
+     year = { 2022 },
+     month = { dec },
+     note = { visited on 2023-01-14 },
+ }
+ """
+ _CATEGORIES = ['cracks-and-spalling', 'object']
+ _ANNOTATION_FILENAME = "_annotations.coco.json"
+
+
+ class CRACKINSTANCESEGMENTATIONConfig(datasets.BuilderConfig):
+     """Builder Config for crack-instance-segmentation"""
+
+     def __init__(self, data_urls, **kwargs):
+         """
+         BuilderConfig for crack-instance-segmentation.
+
+         Args:
+             data_urls: `dict`, name to url to download the zip file from.
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(CRACKINSTANCESEGMENTATIONConfig, self).__init__(version=datasets.Version("1.0.0"), **kwargs)
+         self.data_urls = data_urls
+
+
+ class CRACKINSTANCESEGMENTATION(datasets.GeneratorBasedBuilder):
+     """crack-instance-segmentation instance segmentation dataset"""
+
+     VERSION = datasets.Version("1.0.0")
+     BUILDER_CONFIGS = [
+         CRACKINSTANCESEGMENTATIONConfig(
+             name="full",
+             description="Full version of crack-instance-segmentation dataset.",
+             data_urls={
+                 "train": "https://huggingface.co/datasets/fcakyon/crack-instance-segmentation/resolve/main/data/train.zip",
+                 "validation": "https://huggingface.co/datasets/fcakyon/crack-instance-segmentation/resolve/main/data/valid.zip",
+                 "test": "https://huggingface.co/datasets/fcakyon/crack-instance-segmentation/resolve/main/data/test.zip",
+             },
+         ),
+         CRACKINSTANCESEGMENTATIONConfig(
+             name="mini",
+             description="Mini version of crack-instance-segmentation dataset.",
+             data_urls={
+                 "train": "https://huggingface.co/datasets/fcakyon/crack-instance-segmentation/resolve/main/data/valid-mini.zip",
+                 "validation": "https://huggingface.co/datasets/fcakyon/crack-instance-segmentation/resolve/main/data/valid-mini.zip",
+                 "test": "https://huggingface.co/datasets/fcakyon/crack-instance-segmentation/resolve/main/data/valid-mini.zip",
+             },
+         ),
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "image_id": datasets.Value("int64"),
+                 "image": datasets.Image(),
+                 "width": datasets.Value("int32"),
+                 "height": datasets.Value("int32"),
+                 "objects": datasets.Sequence(
+                     {
+                         "id": datasets.Value("int64"),
+                         "area": datasets.Value("int64"),
+                         "bbox": datasets.Sequence(datasets.Value("float32"), length=4),
+                         # polygon segmentation is yielded by _generate_examples below, so declare it here
+                         "segmentation": datasets.Sequence(datasets.Sequence(datasets.Value("float32"))),
+                         "category": datasets.ClassLabel(names=_CATEGORIES),
+                     }
+                 ),
+             }
+         )
+         return datasets.DatasetInfo(
+             features=features,
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+             license=_LICENSE,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_files = dl_manager.download_and_extract(self.config.data_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "folder_dir": data_files["train"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "folder_dir": data_files["validation"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "folder_dir": data_files["test"],
+                 },
+             ),
+         ]
+
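+     # Each extracted split folder contains the image files plus a COCO-style
+     # `_annotations.coco.json`. The generator below maps COCO category ids to class
+     # names, groups annotations by image id, and yields one record per image found
+     # on disk, bundling the raw image bytes with its processed object annotations.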
+     def _generate_examples(self, folder_dir):
+         def process_annot(annot, category_id_to_category):
+             return {
+                 "id": annot["id"],
+                 "area": annot["area"],
+                 "bbox": annot["bbox"],
+                 "segmentation": annot["segmentation"],
+                 "category": category_id_to_category[annot["category_id"]],
+             }
+
+         image_id_to_image = {}
+         idx = 0
+
+         annotation_filepath = os.path.join(folder_dir, _ANNOTATION_FILENAME)
+         with open(annotation_filepath, "r") as f:
+             annotations = json.load(f)
+             category_id_to_category = {category["id"]: category["name"] for category in annotations["categories"]}
+             image_id_to_annotations = collections.defaultdict(list)
+             for annot in annotations["annotations"]:
+                 image_id_to_annotations[annot["image_id"]].append(annot)
+             image_id_to_image = {annot["file_name"]: annot for annot in annotations["images"]}
+
+         for filename in os.listdir(folder_dir):
+             filepath = os.path.join(folder_dir, filename)
+             if filename in image_id_to_image:
+                 image = image_id_to_image[filename]
+                 objects = [
+                     process_annot(annot, category_id_to_category) for annot in image_id_to_annotations[image["id"]]
+                 ]
+                 with open(filepath, "rb") as f:
+                     image_bytes = f.read()
+                 yield idx, {
+                     "image_id": image["id"],
+                     "image": {"path": filepath, "bytes": image_bytes},
+                     "width": image["width"],
+                     "height": image["height"],
+                     "objects": objects,
+                 }
+                 idx += 1
data/test.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0b8e74948345eb41eba46e70331b16b85ec373c1c28e4fdbd9c8cd0d74b8bf2a
+ size 1429165
data/train.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5c797753c4743b0d10aa5a390a4a54e09de1cc51958b0d9326946ec79de1aca7
+ size 10567198
data/valid-mini.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b4feef110accb74c7542eecdf2099a23e24a60afe85fb9e1b04bc2338debaef8
+ size 138840
data/valid.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1f1159a54a4b7233c5898638842fd75d23f0c08f7df740c3902569e4b5027575
+ size 2306229
split_name_to_num_samples.json ADDED
@@ -0,0 +1 @@
+ {"test": 37, "train": 323, "valid": 73}
thumbnail.jpg ADDED

Git LFS Details

  • SHA256: 9319a20343c87c9b34a708e895dcf821bd337aa6b83e263dd160577f0219f22d
  • Pointer size: 131 Bytes
  • Size of remote file: 171 kB