|
--- |
|
license: apache-2.0 |
|
task_categories: |
|
- text-generation |
|
size_categories: |
|
- 10K<n<100K |
|
pretty_name: Chord Llama Dataset |
|
--- |
|
The dataset used to train Chord Llama, a model for generating sheet music.
|
This dataset contains entries in an altered version of the MusicXML format and cannot be used to generate MusicXML directly.
|
A fine-tuned model and interface will be released in the future. |
|
|
|
The data is sourced from Wikifonia and Part 1 of MScoreLib. |
|
Both of these databases are originally in MusicXML format. |
|
|
|
- Wikifonia: http://www.synthzone.com/forum/ubbthreads.php/topics/384909/Download_for_Wikifonia_all_6,6 |
|
|
|
- MScoreLib: http://mscorelib.com/ |
|
|
|
Please see [music_xml_converter.ipynb](https://huggingface.co/datasets/Chord-Llama/chord_llama_dataset/blob/main/music_xml_converter.ipynb) for the notebook used to generate the dataset. |
|
|
|
The data was then cleaned to reduce token count in a few ways:
|
- Everything that added readability to a file without changing the meaning of the music was removed
|
- All parts were separated into individual documents |
|
- All parts containing more than one `<attributes>` element were discarded |
|
- The MusicXML was converted to YAML using the Python library `xmltodict` to lower the token count
|
- If the YAML version of a document exceeded Llama 2's context length, the document was split:

  - The `<attributes>` element was saved to `instruction`

  - The first half of the measures was set to `input`

  - The second half of the measures was set to `output`
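The conversion and splitting steps above can be sketched roughly as follows. This is an illustrative example, not the notebook's actual code: the function name, record fields, and the assumption that each part has a single `<attributes>` element in its first measure mirror the description above, but details may differ from the real pipeline.

```python
import xmltodict
import yaml


def part_to_example(part_xml: str) -> dict:
    """Turn one MusicXML <part> into an instruction/input/output record.

    Hypothetical sketch: parses the XML into a dict with xmltodict,
    dumps the pieces as YAML (fewer tokens than XML), and splits the
    measures in half for the input and output fields.
    """
    part = xmltodict.parse(part_xml)["part"]
    measures = part["measure"]
    # xmltodict returns a plain dict when a part has only one measure.
    if not isinstance(measures, list):
        measures = [measures]

    # The single <attributes> element (key, time, clef, ...) of the part
    # becomes the instruction.
    attributes = measures[0].get("attributes", {})
    mid = len(measures) // 2
    return {
        "instruction": yaml.dump(attributes, sort_keys=False),
        "input": yaml.dump(measures[:mid], sort_keys=False),
        "output": yaml.dump(measures[mid:], sort_keys=False),
    }
```

In the actual pipeline the split would only be applied when the YAML exceeds the model's context length; here it is applied unconditionally to keep the sketch short.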