A Walk to Meryton

My latest collaboration with musebots and human performers: a massive system to generate complete compositions with video. Ambient? Downtempo? It’s anyone’s guess.

Coding began in early spring 2021, and the first generations appeared in July 2021; video generation was added between April and June 2022. Live musicians were recorded May–September 2022, and mixing and mastering (by Murat Çolak) ran from October 2022 to March 2023. The double vinyl release (by RedShift Records) followed in late summer/early fall 2023.

In the summer of 2022, I collaborated with four amazing individuals to overlay a human reaction onto the generated parts in A Walk To Meryton: John Korsrud, trumpet and flugelhorn; Meredith Bates, violin; Jon Bentley, soprano and tenor saxophones; and Barbara Adler, text and reading.

John, Meredith, and Jon were given the harmonic progressions and melodies for each composition, with suggestions as to which sections they could improvise within.

Barbara and I had long conversations about walking, Jane Austen, musebots, and internal dialogues. Barbara then added her own take on these ideas, and provided readings.

The ten audio tracks, available on SoundCloud below, were mixed and mastered by Murat Çolak for release on a double vinyl album in 2023.

Project Description

Building upon my previous generative systems, such as Moments, the approach is much more compositional than improvisational: high-level decisions are made by a ProducerBot, while playerBots fulfill specific roles, doing what they can and know how to do. Furthermore, there is significantly more editing by the musebots of their own material: they write their parts into a collective score, which other musebots can access and use to inform their own decisions, making second passes at their parts.
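
To make the collective score concrete, here is a minimal Python sketch of a shared “blackboard” that playerBots write their parts into and then re-read before revising them. The class and method names are illustrative only, not the actual musebot code.

```python
from collections import defaultdict

class CollectiveScore:
    """A shared 'blackboard' of parts; an illustrative sketch, not the
    actual musebot score format."""

    def __init__(self):
        # part name -> list of (start_beat, duration, midi_pitch) events
        self.parts = defaultdict(list)

    def write_part(self, bot_name, notes):
        self.parts[bot_name] = list(notes)

    def notes_in_range(self, start, end, exclude=None):
        # let a bot see what everyone else has written before making
        # a second pass at its own part
        return [(who, n)
                for who, notes in self.parts.items() if who != exclude
                for n in notes if start <= n[0] < end]

# a bass bot writes a first pass; a pad bot then inspects the score
# before revising its own part around it
score = CollectiveScore()
score.write_part("bassBot", [(0, 2, 36), (2, 2, 43)])
context = score.notes_in_range(0, 4, exclude="padBot")
```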

A ProducerBot generates a complete framework – including a plan for when specific musebots should play – and a chord progression (based upon a much fuller corpus than previously used). This produces a “lead sheet”, which can be interpreted multiple times by the playerBots.
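
As a rough illustration, a generated framework might resemble the sketch below; the section plan, corpus, and bot names are placeholder assumptions, not the actual data the ProducerBot uses.

```python
import random

# placeholder corpus of progressions; the real system draws on a much
# fuller corpus than shown here
CORPUS_PROGRESSIONS = [
    ["Cmaj7", "Am7", "Dm7", "G7"],
    ["Fmaj7", "Em7", "Dm7", "Cmaj7"],
]

def generate_framework(n_sections=5, bots=("padBot", "bassBot", "melodyBot")):
    """Hypothetical ProducerBot output: a chord progression plus a plan
    for when specific musebots should play."""
    return {
        "progression": random.choice(CORPUS_PROGRESSIONS),
        "sections": [
            {"section": i,
             "bars": random.choice([8, 16]),
             "active": [b for b in bots if random.random() > 0.3]}
            for i in range(n_sections)
        ],
    }

framework = generate_framework()  # the "lead sheet" the playerBots interpret
```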

Individual musebots (playerBots) generate their own parts and select their own synths. Only two high-level controls are available for generation: valence (pleasantness) and arousal (activity).
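
To give a sense of how two controls can cascade into many low-level decisions, here is an illustrative sketch; the specific mappings are simplified stand-ins rather than the actual ones used by the playerBots.

```python
def map_affect(valence, arousal):
    """Illustrative mapping from the two high-level controls (both 0..1)
    to low-level generation parameters; simplified stand-ins only."""
    return {
        "density": 0.2 + 0.7 * arousal,       # more activity -> more notes
        "tempo_bpm": int(70 + 60 * arousal),
        "mode": "major" if valence >= 0.5 else "minor",
        "dissonance": 0.6 * (1.0 - valence),  # less pleasant -> more tension
    }

params = map_affect(valence=0.7, arousal=0.3)  # a calm, pleasant movement
```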

Musebots choose their own timbres, using a database of possible patches from a multitude of synths. The only hand editing after generation is some volume adjustment between parts in Ableton.
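
A patch database and selection step might look something like the following sketch; the schema, tags, and synth entries are illustrative assumptions.

```python
import random

# illustrative patch entries tagged by usable valence/arousal ranges;
# the actual database schema is not shown in this post
PATCHES = [
    {"synth": "synthA", "patch": "Warm Pad",    "valence": (0.5, 1.0), "arousal": (0.0, 0.5)},
    {"synth": "synthB", "patch": "Gritty Bass", "valence": (0.0, 0.6), "arousal": (0.4, 1.0)},
    {"synth": "synthC", "patch": "Glass Keys",  "valence": (0.4, 1.0), "arousal": (0.2, 0.8)},
]

def choose_patch(valence, arousal):
    fits = [p for p in PATCHES
            if p["valence"][0] <= valence <= p["valence"][1]
            and p["arousal"][0] <= arousal <= p["arousal"][1]]
    return random.choice(fits or PATCHES)  # fall back if nothing matches

patch = choose_patch(0.7, 0.3)
```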

The title of the entire series – A Walk To Meryton – as well as those of the individual movements, is generated by a bot, based upon the text of Jane Austen’s Pride and Prejudice, using a second-order Markov chain.
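
A second-order Markov chain is compact enough to sketch in full: each pair of consecutive words maps to the words observed to follow that pair. The file name and title length below are assumptions.

```python
import random
from collections import defaultdict

def build_model(text):
    """Second-order Markov chain over words: each pair of consecutive
    words maps to the list of words seen to follow that pair."""
    words = text.split()
    model = defaultdict(list)
    for a, b, c in zip(words, words[1:], words[2:]):
        model[(a, b)].append(c)
    return model

def generate_title(model, length=4):
    pair = random.choice(list(model.keys()))
    out = list(pair)
    while len(out) < length and pair in model:
        nxt = random.choice(model[pair])
        out.append(nxt)
        pair = (pair[1], nxt)
    return " ".join(out).title()

# e.g. train on the full text of Pride and Prejudice (Project Gutenberg)
with open("pride_and_prejudice.txt") as f:
    model = build_model(f.read())
print(generate_title(model))
```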

Videos are also generative; given the generated audio and score, video bots select five images – one for each section within the music – from a database of photographs I took on recent walks in nature. The images are slowly panned and sent through video processes that are sensitive to movement.
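
The selection-and-pan step might be sketched as follows; the pan parameters and image names are illustrative, and the motion-sensitive processing stage is not shown.

```python
import random

def plan_video(sections, image_pool):
    """Illustrative video-bot step: pick one image per musical section
    and assign each a slow pan across the frame."""
    images = random.sample(image_pool, k=len(sections))
    return [
        {"section": s["section"],
         "image": img,
         # pan endpoints as fractions of the frame, for a slow drift
         "pan_from": (random.uniform(0.0, 0.2), random.uniform(0.0, 0.2)),
         "pan_to":   (random.uniform(0.8, 1.0), random.uniform(0.8, 1.0))}
        for s, img in zip(sections, images)
    ]

sections = [{"section": i} for i in range(5)]
clips = plan_video(sections, [f"walk_{n}.jpg" for n in range(12)])
```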

Finally, the system produces leadsheets which display the overall form, harmonic progression, and melodies; this allows musicians to improvise over the generative music. This also allows the system to load these scores and regenerate parts for new performances, similar to how jazz musicians continually reinterpret leadsheets (more examples and explanation below).


Two early generations (with placeholder videos) from July 2021, after four months’ work on the system.

A further novelty of this system is that the generated frameworks (created by the ProducerBot and provided to the playerBots) and scores (generated by the playerBots) are saved, making it possible to translate these into human-readable scores. The goal is to eventually provide human musicians with such scores, allowing them to improvise to the musebots’ generated material. One additional bonus of this process is that the frameworks function like lead sheets, and the musebot score like a single performance; it is completely possible to create new musebot performances from the same structures, much like an ensemble of jazz musicians creates different interpretations of the same lead sheet (i.e. tunes).
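
Because the frameworks and scores are saved, regenerating a performance can be as simple as reloading a file and handing it back to the playerBots; the JSON format below is an assumption, not the actual musebot file format.

```python
import json

framework = {
    "progression": ["Cmaj7", "Am7", "Dm7", "G7"],
    "sections": [{"section": 0, "bars": 16, "active": ["padBot", "bassBot"]}],
}

with open("meryton_framework.json", "w") as f:
    json.dump(framework, f, indent=2)

with open("meryton_framework.json") as f:
    reloaded = json.load(f)
# handing `reloaded` back to the playerBots yields a new realisation of
# the same structure, much as jazz musicians reinterpret the same tune
```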

An example of a framework (generated July 20 2021), with two different realisations by the musebots.
