---
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: validation
        path: data/validation-*
dataset_info:
  features:
    - name: word
      dtype: string
    - name: language
      dtype: string
    - name: input_ids
      sequence: int32
    - name: attention_mask
      sequence: int8
    - name: special_tokens_mask
      sequence: int8
    - name: tokens
      sequence: string
  splits:
    - name: train
      num_bytes: 5310458
      num_examples: 37849
    - name: test
      num_bytes: 1981786
      num_examples: 14123
    - name: validation
      num_bytes: 2614514
      num_examples: 18643
  download_size: 2205128
  dataset_size: 9906758
license: mit
task_categories:
  - text-classification
language:
  - en
---

# Dataset Card for "english_char_split"

This is a dataset of English words which have been tokenised by character.
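
The splits can be loaded with the 🤗 `datasets` library. A minimal sketch, assuming the dataset is hosted as `rchan26/english_char_split` (check the dataset page for the exact repository id):

```python
from datasets import load_dataset

# Repository id is assumed here; replace it with the id shown on the dataset page.
ds = load_dataset("rchan26/english_char_split")

print(ds)              # DatasetDict with "train", "test" and "validation" splits
print(ds["train"][0])  # {'word': ..., 'language': ..., 'input_ids': [...], 'tokens': [...], ...}
```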

It was originally used to train a RoBERTa model from scratch on a masked language modelling task, in which characters were randomly masked during training.
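
The original training script and the character-level tokenizer are not included in this repository, but the setup can be sketched with `transformers`. Everything below is illustrative: the tokenizer file, model size and hyperparameters are assumptions rather than the values used originally.

```python
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    PreTrainedTokenizerFast,
    RobertaConfig,
    RobertaForMaskedLM,
    Trainer,
    TrainingArguments,
)

ds = load_dataset("rchan26/english_char_split")  # repository id assumed as above

# Stand-in for the character-level tokenizer that produced `input_ids`;
# "char_tokenizer.json" is a hypothetical file, not shipped with the dataset.
tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="char_tokenizer.json",
    bos_token="<s>", eos_token="</s>", unk_token="<unk>",
    pad_token="<pad>", mask_token="<mask>",
)

# A small RoBERTa trained from scratch; the sizes are illustrative.
config = RobertaConfig(
    vocab_size=tokenizer.vocab_size,
    max_position_embeddings=64,
    num_hidden_layers=4,
    num_attention_heads=4,
    hidden_size=256,
    intermediate_size=1024,
)
model = RobertaForMaskedLM(config)

# Randomly masks 15% of the characters in each word during training.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="roberta-char-mlm", num_train_epochs=3),
    data_collator=collator,
    train_dataset=ds["train"],
    eval_dataset=ds["validation"],
)
trainer.train()
```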

The trained model was ultimately used for anomaly detection, where its embeddings were used to flag non-English words - see the full example here.
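
The card only says that embeddings from the trained model were used to flag non-English words, so the following is a hedged sketch of one way that could look, reusing `ds`, `tokenizer` and `model` from the training sketch above. Mean pooling and scikit-learn's `IsolationForest` are illustrative choices, not necessarily the original method.

```python
import torch
from sklearn.ensemble import IsolationForest

model.eval()

def embed(words):
    """Mean-pool the encoder's last hidden state over non-padding positions."""
    enc = tokenizer(words, return_tensors="pt", padding=True)
    with torch.no_grad():
        hidden = model.roberta(
            input_ids=enc["input_ids"], attention_mask=enc["attention_mask"]
        ).last_hidden_state                       # (batch, seq_len, hidden)
    mask = enc["attention_mask"].unsqueeze(-1)    # (batch, seq_len, 1)
    return ((hidden * mask).sum(dim=1) / mask.sum(dim=1)).numpy()

# Fit a detector on embeddings of known English words...
detector = IsolationForest(random_state=0).fit(embed(ds["train"][:1000]["word"]))

# ...and flag outliers; -1 marks a word the detector considers anomalous.
print(detector.predict(embed(["hello", "zqxjkvw"])))
```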