Overview

This data was used to train the model: https://huggingface.co/mevol/BiomedNLP-PubMedBERT-ProteinStructure-NER-v2.1
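
For quick experiments, the trained model should be usable with the standard Hugging Face transformers token-classification pipeline. The snippet below is a minimal sketch, not an official usage example; it assumes the model works with the default pipeline settings, and the input sentence is made up:

# Minimal sketch: run the NER model through the transformers pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mevol/BiomedNLP-PubMedBERT-ProteinStructure-NER-v2.1",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entity spans
)

print(ner("The crystal structure of the xyloglucanase was solved by X-ray crystallography."))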

There are 20 different entity types in this dataset: "bond_interaction", "chemical", "complex_assembly", "evidence", "experimental_method", "gene", "mutant", "oligomeric_state", "protein", "protein_state", "protein_type", "ptm", "residue_name", "residue_name_number", "residue_number", "residue_range", "site", "species", "structure_element", "taxonomy_domain"

The data prepared as IOB-formatted input was used during training, development and testing. Additional data formats, such as JSON and XML as well as CSV files, are also available and are described below.

Annotation was carried out with the free annotation tool TeamTat (https://www.teamtat.org/), and documents were downloaded as BioC XML before converting them to IOB, annotation-only JSON and CSV formats.

The number of annotations and sentences in each file is given below:

document ID | annotations in BioC XML | annotations in IOB/JSON/CSV | number of sentences
PMC4850273  | 1129 | 1129 | 205
PMC4784909  | 868  | 868  | 204
PMC4850288  | 718  | 710  | 146
PMC4887326  | 942  | 942  | 152
PMC4833862  | 1044 | 1044 | 192
PMC4832331  | 739  | 718  | 134
PMC4852598  | 1239 | 1228 | 250
PMC4786784  | 1573 | 1573 | 232
PMC4848090  | 1002 | 1000 | 192
PMC4792962  | 1297 | 1297 | 256
PMC4841544  | 1460 | 1459 | 274
PMC4772114  | 824  | 824  | 165
PMC4872110  | 1283 | 1283 | 250
PMC4848761  | 888  | 884  | 252
PMC4919469  | 1636 | 1624 | 336
PMC4880283  | 783  | 783  | 166
PMC4968113  | 1245 | 1245 | 292
PMC4937829  | 633  | 633  | 181
PMC4854314  | 498  | 488  | 139
PMC4871749  | 411  | 411  | 79
PMC4869123  | 922  | 922  | 195
PMC4888278  | 580  | 580  | 102
PMC4795551  | 1475 | 1475 | 297
PMC4831588  | 1087 | 1070 | 224
PMC4918766  | 1027 | 1027 | 210
PMC4802042  | 1445 | 1445 | 268
PMC4896748  | 2652 | 2638 | 480
PMC4781976  | 115  | 113  | 24
PMC4802085  | 983  | 983  | 193
PMC4887163  | 856  | 856  | 196
total       | 31354 | 31252 | 6286

Documents and annotations are most easily viewed by opening the BioC XML files in the free annotation tool TeamTat. More about the BioC format can be found here: https://bioc.sourceforge.net/

Raw BioC XML files

These are the raw, unannotated XML files in BioC format for the publications in the dataset. The files are found in the directory "raw_BioC_XML". There is one file for each document and the files follow the standard naming "unique PubMedCentral ID"_raw.xml.

Annotations in IOB format

The IOB-formatted files can be found in the directory "annotation_IOB". The four files are as follows:

  • all.tsv --> all sentences and annotations used to create model "mevol/BiomedNLP-PubMedBERT-ProteinStructure-NER-v2.1"; 6286 sentences
  • train.tsv --> training subset of the data; 4400 sentences
  • dev.tsv --> development subset of the data; 943 sentences
  • test.tsv --> testing subset of the data; 943 sentences

The total number of annotations is: 31252
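
The TSV files follow the usual CoNLL-style IOB layout: one token and its tag per line, separated by a tab, with blank lines between sentences and no header row. Below is a minimal parsing sketch under exactly that assumption; read_iob is a hypothetical helper, not part of the dataset:

# Minimal sketch: parse an IOB TSV (token<TAB>tag per line, blank line between sentences).
def read_iob(path):
    sentences, tokens, tags = [], [], []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if not line.strip():              # a blank line closes the current sentence
                if tokens:
                    sentences.append({"tokens": tokens, "tags": tags})
                    tokens, tags = [], []
                continue
            token, tag = line.split("\t")
            tokens.append(token)
            tags.append(tag)
    if tokens:                                # flush a trailing sentence without a final blank line
        sentences.append({"tokens": tokens, "tags": tags})
    return sentences

# Example usage:
# dev = read_iob("annotation_IOB/dev.tsv")
# print(len(dev), dev[0]["tokens"][:5], dev[0]["tags"][:5])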

Annotations in BioC JSON

The BioC-formatted JSON files of the publications were downloaded from the annotation tool TeamTat. The files are found in the directory "annotated_BioC_JSON". There is one file for each document and the files follow the standard naming "unique PubMedCentral ID"_ann.json.

Each document JSON contains the following relevant keys:

  • "sourceid" --> giving the numerical part of the unique PubMedCentral ID
  • "text" --> containing the complete raw text of the publication as a string
  • "denotations" --> containing a list of all the annotations for the text

Each annotation is a dictionary with the following keys:

  • "span" --> gives the start and end of the annotation span defined by sub keys:
    • "begin" --> character start position of annotation
    • "end" --> character end position of annotation
  • "obj" --> a string containing a number of comma-separated terms; the order of the terms is: entity type, reference to an ontology, annotator, time stamp
  • "id" --> unique annotation ID

Here is an example:

[{"sourceid":"4784909",
  "sourcedb":"",
  "project":"",
  "target":"",
  "text":"",
  "denotations":[{"span":{"begin":24,
                          "end":34},
                  "obj":"chemical,CHEBI:,[email protected],2023-03-21T15:19:42Z",
                  "id":"4500"},
                 {"span":{"begin":50,
                          "end":59},
                  "obj":"taxonomy_domain,DUMMY:,[email protected],2023-03-21T15:15:03Z",
                  "id":"1281"}]
  }
]
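
The "denotations" list can be walked directly to recover the annotated spans. Here is a minimal sketch, assuming each file holds a list of document dictionaries as in the example above; the file name is only illustrative:

# Minimal sketch: extract annotated spans from an annotated BioC JSON file.
import json

with open("annotated_BioC_JSON/PMC4784909_ann.json", encoding="utf-8") as fh:
    documents = json.load(fh)

for doc in documents:
    text = doc["text"]
    for ann in doc["denotations"]:
        begin, end = ann["span"]["begin"], ann["span"]["end"]
        entity_type = ann["obj"].split(",")[0]   # the first term of "obj" is the entity type
        print(doc["sourceid"], entity_type, text[begin:end])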

Annotations in BioC XML

The BioC-formatted XML files of the publications were downloaded from the annotation tool TeamTat. The files are found in the directory "annotated_BioC_XML". There is one file for each document and the files follow the standard naming "unique PubMedCentral ID"_ann.xml.

The key XML tags for visualising the annotations in TeamTat, as well as for extracting them to create the training data, are "passage" and "offset". The "passage" tag encloses a text passage or paragraph to which the annotations are linked. "Offset" gives the passage/paragraph offset and makes it possible to determine the character start and end positions of the annotations. The tag "text" encloses the raw text of the passage.

Each annotation in the XML file is tagged as below:

  • "annotation id=" --> giving the unique ID of the annotation
  • "infon key="type"" --> giving the entity type of the annotation
  • "infon key="identifier"" --> giving a reference to an ontology for the annotation
  • "infon key="annotator"" --> giving the annotator
  • "infon key="updated_at"" --> providing a time stamp for annotation creation/update
  • "location" --> start and end character positions for the annotated text span
    • "offset" --> start character position as defined by offset value
    • "length" --> length of the annotation span; the sum of "offset" and "length" gives the end character position

Here is a basic example of what the BioC XML looks like. Additional tags for document management are not given. Please refer to the documentation to find out more.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE collection SYSTEM "BioC.dtd">
<collection>
  <source>PMC</source>
  <date>20140719</date>
  <key>pmc.key</key>
  <document>
    <id>4784909</id>
    <passage>
      <offset>0</offset>
      <text>The Structural Basis of Coenzyme A Recycling in a Bacterial Organelle</text>
      <annotation id="4500">
        <infon key="type">chemical</infon>
        <infon key="identifier">CHEBI:</infon>
        <infon key="annotator">[email protected]</infon>
        <infon key="updated_at">2023-03-21T15:19:42Z</infon>
        <location offset="24" length="10"/>
        <text>Coenzyme A</text>
      </annotation>
    </passage>
  </document>
</collection>
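
The annotations can be extracted from the XML with the Python standard library. The sketch below assumes the structure shown above; the file name is only illustrative. Character positions are made passage-local by subtracting the passage "offset" from the annotation "offset":

# Minimal sketch: extract annotations from an annotated BioC XML file.
import xml.etree.ElementTree as ET

tree = ET.parse("annotated_BioC_XML/PMC4784909_ann.xml")
for passage in tree.iter("passage"):
    passage_offset = int(passage.findtext("offset"))
    passage_text = passage.findtext("text")
    for ann in passage.iter("annotation"):
        entity_type = ann.find("infon[@key='type']").text
        location = ann.find("location")
        start = int(location.get("offset")) - passage_offset   # position within this passage
        length = int(location.get("length"))
        print(entity_type, passage_text[start:start + length])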

Annotations in CSV

The annotations and the relevant sentences they were found in have also been made available as tab-separated CSV files, one for each publication in the dataset. The files can be found in the directory "annotation_CSV". Each file is named "unique PubMedCentral ID".csv.

The column labels in the CSV files are as follows:

  • "anno_start" --> character start position of the annotation
  • "anno_end" --> character end position of the annotation
  • "anno_text" --> text covered by the annotation
  • "entity_type" --> entity type of the annotation
  • "sentence" --> sentence text in which the annotation was found
  • "section" --> publication section in which the annotation was found
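
Since the files are tab-separated, they can be loaded directly, for example with pandas. A minimal sketch; the file name is only illustrative:

# Minimal sketch: load one of the tab-separated annotation files with pandas.
import pandas as pd

df = pd.read_csv("annotation_CSV/PMC4850273.csv", sep="\t")
print(df[["anno_text", "entity_type", "section"]].head())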

Annotations in JSON

A combined JSON file was created containing only the relevant sentences and associated annotations for each publication in the dataset. The file can be found in the directory "annotation_JSON" under the name "annotations.json".

The following keys are used:

  • "PMC4850273" --> unique PubMedCentral ID of the publication
  • "annotations" --> list of dictionaries for the relevant, annotated sentences of the document; each dictionary has the following sub keys:
    • "sid" --> unique sentence ID
    • "sent" --> sentence text as string
    • "section" --> publication section the sentence is in
    • "ner" --> nested list of annotations; each sublist contains the following items: start character position, end character position, annotation text, entity type

Here is an example of a sentence and its annotations:

{"PMC4850273": {"annotations":
                [{"sid": 0,
                  "sent": "Molecular Dissection of Xyloglucan Recognition in a Prominent Human Gut Symbiont",
                  "section": "TITLE",
                  "ner": [
                    [24,34,"Xyloglucan","chemical"],
                    [62,67,"Human","species"]]
                 }]
}}
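
A minimal sketch for iterating over the combined file, assuming it is a single dictionary keyed by PubMedCentral IDs as in the example above:

# Minimal sketch: iterate over all annotated sentences in the combined JSON file.
import json

with open("annotation_JSON/annotations.json", encoding="utf-8") as fh:
    data = json.load(fh)

for pmc_id, doc in data.items():
    for sentence in doc["annotations"]:
        for start, end, anno_text, entity_type in sentence["ner"]:
            print(pmc_id, sentence["sid"], entity_type, anno_text)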