Dataset preview (schema and first rows of the 100-row preview shown by the dataset viewer):

  Column   Type     Range / values
  id       int64    0 - 50k
  region   string   length 4 - 15 (e.g. ENST00000217185)
  start    int64    0 - 249M
  end      int64    200 - 249M
  strand   string   1 value ('+')

  id       region            start   end     strand
  27,434   ENST00000217185   203     403     +
  13,400   ENST00000367051   4,701   4,901   +
  883      ENST00000360507   623     823     +
  7,303    ENST00000489432   1,469   1,669   +
  45,124   ENST00000309668   64      264     +

Genomic Benchmark

In this repository, we collect benchmarks for the classification of genomic sequences. It is shipped as a Python package, together with functions that help download and manipulate datasets and train NN models.
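
A minimal usage sketch, assuming the genomic-benchmarks package is installed (e.g. via pip install genomic-benchmarks); the helper names below follow the project README and should be treated as illustrative rather than a guaranteed API:

from genomic_benchmarks.data_check import list_datasets, info

# List the available benchmarks and print a short summary of one of them.
list_datasets()
info("human_nontata_promoters", version=0)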

Citing Genomic Benchmarks

If you use Genomic Benchmarks in your research, please cite it as follows.

Text

GRESOVA, Katarina, et al. Genomic Benchmarks: A Collection of Datasets for Genomic Sequence Classification. bioRxiv, 2022.

BibTeX

@article{gresova2022genomic,
  title={Genomic Benchmarks: A Collection of Datasets for Genomic Sequence Classification},
  author={Gresova, Katarina and Martinek, Vlastimil and Cechak, David and Simecek, Petr and Alexiou, Panagiotis},
  journal={bioRxiv},
  year={2022},
  publisher={Cold Spring Harbor Laboratory},
  url={https://www.biorxiv.org/content/10.1101/2022.06.08.495248}
}

From the GitHub repo:

Datasets

Each folder contains either one benchmark or a set of benchmarks. See docs/ for code used to create these benchmarks.

Naming conventions

  • dummy_...: small datasets, used for testing purposes
  • demo_...: medium-sized datasets, not necessarily biologically relevant or fully reproducible, used in demos

Versioning

We recommend checking the version number when working with a dataset (i.e. not using the default None). The version should be set to 0 when the dataset is proposed; after initial curation it should be changed to 1 and then increased after every modification.
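
For example, pinning the version explicitly when downloading a benchmark (the download_dataset helper follows the project README; treat the exact name and signature as an assumption):

from genomic_benchmarks.loc2seq import download_dataset

# Request version 0 explicitly instead of relying on the default None.
download_dataset("human_nontata_promoters", version=0)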

Data format

Each benchmark should contain a metadata.yaml file in its main folder with the specification in YAML format (an illustrative sketch follows the list), namely

  • the version of the benchmark (0 = in development)

  • the classes of genomic sequences; for each class we further need to specify

    • url with the reference
    • type of the reference (currently, only fa.gz implemented)
    • extra_processing, a parameter helping to overcome some known issues with identifier matching
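
As an illustration, the sketch below parses a metadata.yaml for a hypothetical two-class benchmark. The field names come from the list above, while the class names, URLs, and extra_processing values are invented for the example, and the layout of a real file may differ:

import textwrap
import yaml  # PyYAML

example_metadata = textwrap.dedent("""
    version: 0            # 0 = in development
    classes:
      positive:
        url: https://example.org/positive_reference.fa.gz   # hypothetical reference
        type: fa.gz
        extra_processing: null   # dataset-specific; see existing benchmarks
      negative:
        url: https://example.org/negative_reference.fa.gz   # hypothetical reference
        type: fa.gz
        extra_processing: null
""")

metadata = yaml.safe_load(example_metadata)
assert metadata["version"] == 0
for class_name, spec in metadata["classes"].items():
    print(class_name, spec["type"], spec["url"])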

The main folder should also contain two folders, train and test. Both of these folders should contain gzipped CSV files, one for each class (named class_name.csv.gz).

The format of the gzipped CSV files closely resembles the BED format; the column names must be the following (see the example after this list):

  • id: id of a sequence
  • region: chromosome/transcript/... to be matched with the reference
  • start, end: genomic interval specification (0-based, i.e. same as in Python)
  • strand: either '+' or '-'
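
Because the coordinates are 0-based and, assuming Python slice semantics, end-exclusive, extracting a sequence from a reference is a plain slice. The row below is taken from the preview at the top of this page, and the toy reference stands in for a parsed fa.gz file:

import pandas as pd

# One interval in the format described above; a real benchmark ships such rows
# as train/<class_name>.csv.gz and test/<class_name>.csv.gz.
intervals = pd.DataFrame(
    [{"id": 27434, "region": "ENST00000217185", "start": 203, "end": 403, "strand": "+"}]
)

# Toy reference mapping region -> sequence (in practice parsed from the fa.gz
# reference given in metadata.yaml).
reference = {"ENST00000217185": "ACGT" * 200}   # 800 bp of filler sequence

row = intervals.iloc[0]
sequence = reference[row["region"]][row["start"]:row["end"]]
print(len(sequence))   # 200 bp, i.e. end - start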

To contribute a new dataset

Create a new branch. Add the new subfolders to datasets and docs. The docs subfolder should contain a description of the dataset in README.md. If the dataset comes from a paper, link the paper. If the dataset is not taken from a paper, make sure you have described and understood the biological process behind it.

If you have access to the cloud_cache folder on GDrive, upload your file there and update CLOUD_CACHE in cloud_caching.py.
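
The exact structure of CLOUD_CACHE is not described here; as an assumption, the sketch below treats it as a mapping from a (dataset name, version) pair to a Google Drive file id, with a placeholder id:

# Hypothetical entry for cloud_caching.py; the key/value structure is an
# assumption and the GDrive id is a placeholder, so check the file before editing.
CLOUD_CACHE = {
    # ... existing entries ...
    ("demo_my_new_dataset", 0): "1aBcDeFgHiJkLmNoPqRsTuVwXyZ0123456",
}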

To review a new dataset

Make sure you can run and reproduce the code. Check that you can download the actual sequences and/or create a data loader (see the sketch below). Make sure you understand what is behind these data, either from the paper or the description, and ask for clarification if needed.
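
A sketch of the data-loader check, assuming the package's PyTorch dataset getters; the class and module names follow the project README and may differ between versions:

from torch.utils.data import DataLoader
from genomic_benchmarks.dataset_getters.pytorch_datasets import HumanNontataPromoters

# Wrap the train split in a DataLoader and pull one batch to confirm it works.
train_dset = HumanNontataPromoters(split="train", version=0)
loader = DataLoader(train_dset, batch_size=32, shuffle=True)
sequences, labels = next(iter(loader))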
