---
inference: true
pipeline_tag: text-to-audio
library_name: audiocraft
widget:
- text: hip hop, soul, piano, chords, jazz, neo jazz, G# minor, 140 bpm
  example_title: Prompt 1
- text: music, hip hop, soul, rnb, neo soul, C# major, 80 bpm
  example_title: Prompt 2
language: en
tags:
- text-to-audio
- musicgen
license: cc-by-nc-4.0
---

# Model Card for musicgen-songstarter-v0.1

musicgen-songstarter-v0.1 is [`musicgen-melody`](https://huggingface.co/facebook/musicgen-melody) fine-tuned on a dataset of melody loops from my Splice sample library. It's intended to generate song ideas that are useful to music producers. It generates stereo audio at 32 kHz.

This is a proof of concept. Hopefully, we will be able to collect more data and train better models in the future.

## Usage

Install [audiocraft](https://github.com/facebookresearch/audiocraft):

```
pip install -U git+https://github.com/facebookresearch/audiocraft#egg=audiocraft
```

Then, you should be able to load this model just like any other musicgen checkpoint here on the Hub:

```python
from audiocraft.models import musicgen

model = musicgen.MusicGen.get_pretrained('nateraw/musicgen-songstarter-v0.1', device='cuda')
```

To generate and save audio samples, you can do:

```python
from datetime import datetime
from pathlib import Path

from audiocraft.models import musicgen
from audiocraft.data.audio import audio_write
from audiocraft.utils.notebook import display_audio

model = musicgen.MusicGen.get_pretrained('nateraw/musicgen-songstarter-v0.1', device='cuda')

# Path to save our samples
out_dir = Path("./samples")
out_dir.mkdir(exist_ok=True, parents=True)

model.set_generation_params(
    duration=15,
    use_sampling=True,
    temperature=1.0,
    top_k=250,
    cfg_coef=3.0,
)

text = "hip hop, soul, piano, chords, jazz, neo jazz, G# minor, 140 bpm"
N = 4

# Generate N samples from the same prompt
out = model.generate(
    [text] * N,
    progress=True,
)

# Write to files
dt_str = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
for i in range(N):
    audio_write(
        out_dir / f"{dt_str}_{i:02d}",
        out[i].cpu(),
        model.sample_rate,
        strategy="loudness",
    )

# Or, if in a notebook, display audio widgets
# display_audio(out, model.sample_rate)
```
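Since this checkpoint is fine-tuned from musicgen-melody, it should also accept a reference melody via audiocraft's `generate_with_chroma`. Below is a minimal sketch, assuming you have a reference audio file on disk (the `./assets/melody.wav` path is a placeholder):

```python
import torchaudio
from audiocraft.models import musicgen

model = musicgen.MusicGen.get_pretrained('nateraw/musicgen-songstarter-v0.1', device='cuda')
model.set_generation_params(duration=15)

# Placeholder path: swap in your own reference melody
melody, sr = torchaudio.load("./assets/melody.wav")

out = model.generate_with_chroma(
    descriptions=["hip hop, soul, piano, chords, jazz, neo jazz, G# minor, 140 bpm"],
    melody_wavs=melody[None],  # add a batch dimension: [B, C, T]
    melody_sample_rate=sr,
    progress=True,
)
```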
## Prompt Format

Use the following prompt format:

```
{tag_1}, {tag_2}, ..., {tag_n}, {key}, {bpm} bpm
```

For example:

```
hip hop, soul, piano, chords, jazz, neo jazz, G# minor, 140 bpm
```

The training dataset had the following tags in it:

```
hip hop
trap
soul
rnb
synth
songstarters
melody
keys
chords
guitar
vocals
dancehall
melodic
stack
piano
electric
layered
music
drill
lo-fi hip hop
cinematic
pop
resampled
afropop & afrobeats
strings
leads
dark
african
acoustic
brass & woodwinds
live sounds
reggaeton
boom bap
pads
electric piano
fx
downtempo
wet
electric guitar
lo-fi
caribbean
chops
chillout
riffs
percussion
electronic
bass
choir
arp
uk drill
female
plucks
future bass
processed
future soul
ensemble
mallets
hooks
uk
flute
phrases
drums
atmospheres
jazz
emo
gospel
male
reverse
latin american
trap edm
latin
bells
pitched
ambient
tonal
distorted
moombahton
vinyl
orchestral
dry
psychedelic
edm
funk
neo soul
classical
harmony
adlib
trumpet
high
horns
electronica
violin
808
synthwave
ngoni
house
drones
progressive house
g-funk
hats
trip hop
baile funk
filtered
doo wop
tambourine
kora
stabs
textures
claps
grooves
clean
analog
harp
ambience
smooth
acapella
blues
saxophone
organ
soft
tremolo
chillwave
reverb
electric bass
low
moog
wah
wobble
indie pop
modular
sub
indie dance
glide
k-pop
afrobeat
mid
balafon
bitcrushed
phaser
middle eastern
zither
shakers
delay
tech house
disco
experimental
celesta
cello
drum and bass
trance
rock
rhythm
whistle
sidechained
saw
breakbeat
techno
brazilian
music box
glitch
clarinet
```
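If you're building prompts programmatically from these tags, a small helper keeps the format consistent. This is a minimal sketch (`make_prompt` is a hypothetical convenience function, not part of audiocraft):

```python
def make_prompt(tags: list[str], key: str, bpm: int) -> str:
    """Join tags, a key, and a tempo into the prompt format above."""
    return ", ".join([*tags, key, f"{bpm} bpm"])

# -> "hip hop, soul, piano, chords, G# minor, 140 bpm"
prompt = make_prompt(["hip hop", "soul", "piano", "chords"], "G# minor", 140)
```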