A Walk to Meryton

A work in progress (begun in March 2021)…

My latest collaboration with musebots: a massive (unnamed) system to generate complete compositions. Ambient? Downtempo? It’s anyone’s guess.

A ProducerBot generates a complete framework – including a plan for when specific musebots should play – and a chord progression (based upon a much fuller corpus than previously used). Individual musebots (playerBots) generate their own parts, and select their own synths.
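
To give a sense of the idea, here is a rough Python sketch of what a generated framework could contain; the field names, bot names, and values are invented for illustration and are not the system's actual format.

```python
# Hypothetical sketch of a ProducerBot framework: a sectional plan for which
# playerBots are active, plus a chord progression drawn from the corpus.
framework = {
    "tempo": 84,
    "sections": [
        {"name": "intro", "bars": 8},
        {"name": "A", "bars": 16},
        {"name": "B", "bars": 16},
        {"name": "outro", "bars": 8},
    ],
    # Which playerBots play in each section (names are illustrative).
    "plan": {
        "intro": ["padBot"],
        "A": ["padBot", "bassBot", "beatBot"],
        "B": ["padBot", "bassBot", "beatBot", "melodyBot"],
        "outro": ["padBot", "melodyBot"],
    },
    # One chord per bar, looped over each section.
    "progression": ["Dm9", "G13", "Cmaj9", "Am11"],
}
```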

Building upon my previous generative systems, such as Moments, the approach is much more compositional than improvisational: high-level decisions are made by the ProducerBot, and playerBots fulfill specific roles, doing what they know how to do. Furthermore, the musebots do significantly more editing of their own material: they write their parts into a collective score, which other musebots can access in order to inform their own decisions and make second passes at their parts.
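
The collective-score idea can be sketched roughly as follows; the function names and the example bots are hypothetical, and the real musebots communicate differently, but the two-pass shape is the point.

```python
# Sketch: playerBots publish parts to a shared score, then revise them
# after reading what the other bots have written.
score = {}  # bot name -> list of (bar, event) entries


def first_pass(bot_name, generate_part):
    """Generate a part independently and publish it to the shared score."""
    score[bot_name] = generate_part()


def second_pass(bot_name, revise_part):
    """Revise a part in light of every other bot's material."""
    others = {name: part for name, part in score.items() if name != bot_name}
    score[bot_name] = revise_part(score[bot_name], others)


# Example: a (hypothetical) bassBot keeps only the bars where beatBot plays.
first_pass("beatBot", lambda: [(bar, "kick") for bar in range(0, 16, 2)])
first_pass("bassBot", lambda: [(bar, "D1") for bar in range(16)])
second_pass(
    "bassBot",
    lambda mine, others: [
        ev for ev in mine if any(ev[0] == b for b, _ in others.get("beatBot", []))
    ],
)
```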

Musebots choose their own timbres, using a database of possible patches from a variety of synths. The only editing after generation is some volume adjustment between parts in Ableton.
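
The timbre choice amounts to a simple lookup against that database; the sketch below is an assumption about how such a selection could work (the tags, synth names, and criteria are invented, not the system's own).

```python
import random

# Hypothetical patch database: each entry is a synth patch with role tags.
patch_db = [
    {"synth": "SynthA", "patch": "warm pad", "tags": {"pad", "sustained"}},
    {"synth": "SynthB", "patch": "sub bass", "tags": {"bass", "dark"}},
    {"synth": "SynthC", "patch": "glass keys", "tags": {"melody", "bright"}},
]


def choose_patch(role_tags):
    """Pick a patch whose tags overlap the playerBot's role."""
    candidates = [p for p in patch_db if p["tags"] & role_tags]
    return random.choice(candidates or patch_db)


print(choose_patch({"pad"}))
```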

Some recent proof-of-concept examples:

A further novelty of this system is that the generated frameworks (created by the ProducerBot and provided to the playerBots) and scores (generated by the playerBots) are saved, making it possible to translate them into human-readable scores. The goal is to eventually provide human musicians with such scores, allowing them to improvise with the musebots' generated material. An additional bonus of this process is that the frameworks function like lead sheets, and the musebot score like a single performance: it is entirely possible to create new musebot performances from the same structures, much as an ensemble of jazz musicians creates different interpretations of the same lead sheet (i.e. tune).
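
As a small sketch of the lead-sheet idea, a saved framework (here the hypothetical structure from earlier, with an invented file name and layout) can be written to disk and rendered as a plain-text chart that any number of performances could be built from.

```python
import json

# A minimal, assumed framework; the real saved files differ.
framework = {
    "tempo": 84,
    "sections": [{"name": "A", "bars": 16}, {"name": "B", "bars": 16}],
    "plan": {"A": ["padBot", "bassBot"], "B": ["padBot", "bassBot", "melodyBot"]},
    "progression": ["Dm9", "G13", "Cmaj9", "Am11"],
}

# Save the framework so it can be re-performed or rendered later.
with open("framework_example.json", "w") as f:
    json.dump(framework, f, indent=2)


def render_lead_sheet(fw):
    """Turn a saved framework into a human-readable chart."""
    lines = [f"Tempo: {fw['tempo']} BPM", " | ".join(fw["progression"])]
    for section in fw["sections"]:
        players = ", ".join(fw["plan"][section["name"]])
        lines.append(f"{section['name']}: {section['bars']} bars ({players})")
    return "\n".join(lines)


print(render_lead_sheet(framework))
```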

An example of a framework (generated July 20), with two different realisations by the musebots.