
This is a Large (780M parameter) Transformer model trained for 800k steps on arrival-time encoded music from the Lakh MIDI dataset, the MetaMIDI dataset, and transcripts of both the FMA audio dataset and 450k commercial music recordings (transcribed using Google Magenta's ISMIR 2022 music transcription model). This model was trained with anticipation, the anticipatory infilling objective described in the references below.

References for the Anticipatory Music Transformer

The Anticipatory Music Transformer paper is available on arXiv (arXiv:2306.08620).

The full model card is available here.

Code for using this model is available on GitHub (github.com/jthickstun/anticipation).

See the accompanying blog post for additional discussion of anticipatory models.
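
As a quick start, the following is a minimal sketch of loading this checkpoint with Hugging Face transformers. The repository id, placeholder prompt, and sampling settings are illustrative assumptions, not the project's official interface; the GitHub repository above provides the intended utilities for encoding MIDI to event tokens and for sampling with anticipation.

```python
import torch
from transformers import AutoModelForCausalLM

# Assumed repository id for this checkpoint; substitute the id shown on this page.
model = AutoModelForCausalLM.from_pretrained("stanford-crfm/music-large-800k")
model.eval()

# The vocabulary is arrival-time encoded music events, not text, so no text
# tokenizer applies. `prompt` stands in for an event-token prefix produced by
# the encoding utilities in the GitHub repository (placeholder values here).
prompt = torch.tensor([[0, 0, 0]])

with torch.no_grad():
    continuation = model.generate(
        prompt,
        max_new_tokens=30,  # sample a short continuation of event tokens
        do_sample=True,
        top_p=0.98,         # nucleus sampling; an arbitrary illustrative value
    )
print(continuation[0].tolist())
```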
