AccompaniX

Enhancing coherence by incorporating accompaniment principles into DeepBach

This repository is a tentative adaptation of the DeepBach model, addressing coherence challenges in AI music generation. It builds on the foundational work:

DeepBach: a Steerable Model for Bach Chorales Generation
Gaëtan Hadjeres, François Pachet, Frank Nielsen
ICML 2017, arXiv:1612.01010

Requires Python 3.10 together with PyTorch 2.1.0+cu121 and music21 7.3.3.

For the original Keras version, please check out the original_keras branch.

Examples of music generated by the original DeepBach are available on this website.

Installation

To set up AccompaniX, follow these steps:

git clone https://github.com/TyanVuon/AccompaniX
cd AccompaniX
conda env create --name deepbach_pytorch -f environment.yml

This will create a conda env named deepbach_pytorch, which you can activate with conda activate deepbach_pytorch.
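As a quick sanity check (a minimal sketch, assuming the pinned versions listed above), you can verify the installation from within the activated environment:

python -c 'import torch, music21; print(torch.__version__, music21.__version__, torch.cuda.is_available())'

The expected output is 2.1.0+cu121 7.3.3 followed by True on a CUDA-capable machine (False on CPU-only setups).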

music21 editor

You might need to configure the music editor called by music21. On Ubuntu you can e.g. use MuseScore:

sudo apt install musescore
python -c 'import music21; music21.environment.set("musicxmlPath", "/usr/bin/musescore")'

For usage on a headless server (no X server), just set it to a dummy command:

python -c 'import music21; music21.environment.set("musicxmlPath", "/bin/true")'
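To confirm the setting took effect, you can read the configured path back with the same music21 environment API:

python -c 'import music21; print(music21.environment.get("musicxmlPath"))'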

Usage

Usage: deepBach.py [OPTIONS]

Options:
  --note_embedding_dim INTEGER    Size of the note embeddings. Default: 20
  --meta_embedding_dim INTEGER    Size of the metadata embeddings. Default: 20
  --num_layers INTEGER            Number of layers of the LSTMs. Default: 2
  --lstm_hidden_size INTEGER      Hidden size of the LSTMs. Default: 256
  --dropout_lstm FLOAT            Amount of dropout between LSTM layers. Default: 0.5
  --linear_hidden_size INTEGER    Hidden size of the Linear layers. Default: 256
  --batch_size INTEGER            Training batch size. Default: 256
  --num_epochs INTEGER            Number of training epochs. Default: 5
  --train                         Flag to train or retrain the specified model. Default: False
  --num_iterations INTEGER        Number of parallel pseudo-Gibbs sampling iterations. Default: 500
  --sequence_length_ticks INTEGER Length of the generated chorale (in ticks). Default: 64
  --load TEXT                     Parameters to load models. Format: 'param1=value1,param2=value2,...'

  --help                          Show this message and exit.

Command Line Options

  • Training a New Model: Use the --train flag to train a new model from scratch. This initiates the training process with the specified parameters.

  • Loading Pretrained Models: To load a pretrained model, use the --load flag followed by comma-separated parameters such as ep=1,ni=30. This searches for models whose filenames match the specified parameters, such as epochs (ep), number of iterations (ni), and other relevant attributes; a parsing sketch follows this list.

  • Model Configuration via Command Line: Command line options are directly passed to the model constructor. This allows for dynamic adjustment of model parameters like embedding dimensions, LSTM sizes, and layer counts.

  • Filename-Based Model Adaptation: When loading models, DeepBach dynamically adjusts the voice models' architecture to align with the configurations indicated in the model filenames. This feature facilitates seamless transitions between different model states and configurations.

  • Saving Models with Parameterized Filenames: Models are saved with filenames that encapsulate key training parameters, enabling easy identification and retrieval of specific model states for future use or further training.
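For illustration only (the helper below is a hypothetical sketch, not the actual AccompaniX loading code), a --load string such as ep=1,ni=30 can be parsed into a parameter dictionary along these lines:

# Hypothetical example: parse 'ep=1,ni=30' into {'ep': 1, 'ni': 30}.
def parse_load_spec(spec):
    params = {}
    for pair in spec.split(','):
        key, _, value = pair.partition('=')
        params[key.strip()] = int(value) if value.isdigit() else value
    return params

print(parse_load_spec('ep=1,ni=30'))  # {'ep': 1, 'ni': 30}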

Example Commands

  • To train a new model: python deepBach.py --train --num_epochs=5 --batch_size=256
  • To load a specific pretrained model: python deepBach.py --load ep=1,ni=30
  • To generate music with a specific model: python deepBach.py --num_iterations=500 --sequence_length_ticks=64 --load ep=1,ni=30

Usage with NONOTO

The command

python flask_server.py

starts a Flask server listening on port 5000. You can then use NONOTO to compose with DeepBach in an interactive way.
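To check that the server is reachable before connecting NONOTO (a minimal sketch; the actual routes are defined in flask_server.py, so any HTTP response, including a 404, means the server is running):

# Sketch: confirm something is listening on port 5000.
import urllib.request, urllib.error
try:
    print(urllib.request.urlopen('http://localhost:5000/', timeout=5).status)
except urllib.error.HTTPError as e:
    print(e.code)  # an error response still proves the server is up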

This server can also be started using Docker with:

docker run -p 5000:5000 -it --rm ghadjeres/deepbach

(CPU version), or with

docker run --runtime=nvidia -p 5000:5000 -it --rm ghadjeres/deepbach

(GPU version, requires nvidia-docker).

Usage within MuseScore

Deprecated

Put the deepBachMuseScore.qml file in your MuseScore plugins directory and run

python musescore_flask_server.py

Then, in MuseScore, open a four-part chorale and press Enter on the server address field; a list of computed models should appear. Select and (re)load a model. MuseScore 3.5 and 4 can be set as the editor via the music21 configuration option shown above, mainly for analysis during the course of modifications; for interactive use, the server is used.

Issues

Music21 editor not set

music21.converter.subConverters.SubConverterException: Cannot find a valid application path for format musicxml. Specify this in your Environment by calling environment.set(None, '/path/to/application')

Either set it to MuseScore or similar (on a machine with a GUI) or to a dummy command (on a headless server). See the Installation section.

Citation

@InProceedings{pmlr-v70-hadjeres17a,
  title = 	 {{D}eep{B}ach: a Steerable Model for {B}ach Chorales Generation},
  author = 	 {Ga{\"e}tan Hadjeres and Fran{\c{c}}ois Pachet and Frank Nielsen},
  booktitle = 	 {Proceedings of the 34th International Conference on Machine Learning},
  pages = 	 {1362--1371},
  year = 	 {2017},
  editor = 	 {Doina Precup and Yee Whye Teh},
  volume = 	 {70},
  series = 	 {Proceedings of Machine Learning Research},
  address = 	 {International Convention Centre, Sydney, Australia},
  month = 	 {06--11 Aug},
  publisher = 	 {PMLR},
  pdf = 	 {http://proceedings.mlr.press/v70/hadjeres17a/hadjeres17a.pdf},
  url = 	 {http://proceedings.mlr.press/v70/hadjeres17a.html},
}
