Moments
Moments is a metacreative work that explores Moment-form, a term coined by Karlheinz Stockhausen to describe music that avoids directed narrative curves and instead exists within stasis.

Moments (2016)

Metacreative work for 2 musebots sending MIDI data to 2 Disklaviers.

This initial version of Moments uses only two musebots – one to play each Disklavier via MIDI. Each PianoBot has ten different playing styles, modeled after favourite pianists and works for piano. The musebot selects a style based upon the ParamBot’s playing conditions. Lastly, each PianoBot analyses its own playing and transmits this analysis to the other PianoBot, which uses this information in a (possible) attempt to match playing styles.
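The style-selection and mutual-listening behaviour described above might be sketched as follows. All names, the message format, and the mapping from playing conditions to styles are assumptions for illustration, not the actual Moments implementation:

```python
import random

# Hypothetical sketch of a PianoBot choosing among ten playing styles and
# (possibly) matching the other bot's style; not the real Moments code.
STYLES = [f"style_{i}" for i in range(10)]  # ten playing styles

class PianoBot:
    def __init__(self, name):
        self.name = name
        self.current_style = None

    def select_style(self, param_conditions):
        # Map an assumed 0.0-1.0 "intensity" condition onto one of the styles.
        index = min(int(param_conditions["intensity"] * len(STYLES)),
                    len(STYLES) - 1)
        self.current_style = STYLES[index]
        return self.current_style

    def analyse_and_broadcast(self):
        # The real system analyses its own MIDI output; here we simply
        # report the active style as the "analysis" message.
        return {"sender": self.name, "style": self.current_style}

    def receive_analysis(self, message, match_probability=0.5):
        # A "(possible)" attempt to match: adopt the other style by chance.
        if random.random() < match_probability:
            self.current_style = message["style"]

bot_a, bot_b = PianoBot("PianoBotA"), PianoBot("PianoBotB")
bot_a.select_style({"intensity": 0.25})
bot_b.receive_analysis(bot_a.analyse_and_broadcast(), match_probability=1.0)
```

With `match_probability` below 1.0 the second bot only sometimes converges, which is what keeps the duo from collapsing into unison behaviour.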

A performance from NIME 2017, for two Disklaviers. 

The musebots decided to explore extremely quiet music in front of an international audience in Brisbane, Australia. The cameras don’t show me just off-stage, holding my head in my hands, but determined not to interrupt the system. Someone suggested it was an “awkward success”; someone else suggested that the musebots discovered conceptual art; yet another listener thought that the musebots had a mistimed experience with jet-lag.

Moments – Polychromatic (2017)

Metacreative work for 12 musebots controlling Ableton Live

The music is meant to remain in the background, and not draw attention to itself.

Moments continues the composer’s research into musebots – independent intelligent musical agents – that both generate an overall musical structure and then create the details within that structure. Various musebots assume roles within the creation and performance of each 10-minute composition. An OrchestratorBot decides which musebots are to be used in a given composition, based upon what the main ParamBot generates for the individual moments in the composition. A separate musebot generates the harmony for each moment, based upon the complexity required by the ParamBot. Each composition is unique, and generated on the spot. The musebots are “intelligent” in that they have learned about their environment, communicate their intentions, and coordinate conditions for collaborative machine composition.
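The OrchestratorBot's role might be sketched as below. The bot names are taken from the ensembles listed on this page, but the density-to-ensemble mapping is an assumption, not the actual selection logic:

```python
# Hypothetical sketch of an OrchestratorBot selecting musebots per moment
# from ParamBot-generated density values; thresholds are assumptions.
AVAILABLE_BOTS = ["GenoaBot", "KitsilanoBot", "LondynBot", "SienaBot", "ParisBot"]

def generate_moments(num_moments, densities):
    """Return one ensemble per moment, sized by the ParamBot's density value."""
    moments = []
    for density in densities[:num_moments]:
        count = max(1, round(density * len(AVAILABLE_BOTS)))
        moments.append(AVAILABLE_BOTS[:count])
    return moments

plan = generate_moments(3, [0.2, 0.6, 1.0])
```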

Premiered at Simon Fraser University Goldcorp Centre for the Arts (2017).

The latest version (November 2017) generates compositions in advance of performance, then completes two editing passes on the score: the first to verify that the density (i.e. the actual number of musebots) for each moment/section is achieved; the second to move the generated events within the moment/section to ensure there are no unwanted (long) silences. Scores are then uploaded into Ableton Live as MIDI clips for performance.
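A minimal sketch of those two passes, assuming a simplified score representation (parts as names, events as onset times in seconds); the real score format is not described here:

```python
# Hypothetical sketch of the two editing passes: a density check that tops
# up a moment's parts, and a silence pass that closes long gaps.

def density_pass(moment_parts, required_count, pool):
    """Pass 1: ensure the moment has the required number of musebot parts,
    drawing extra parts from a pool of unused candidates."""
    while len(moment_parts) < required_count and pool:
        moment_parts.append(pool.pop(0))
    return moment_parts

def silence_pass(onsets, max_gap):
    """Pass 2: shift events earlier so no gap between onsets exceeds max_gap,
    removing unwanted long silences within the moment."""
    onsets = sorted(onsets)
    for i in range(1, len(onsets)):
        if onsets[i] - onsets[i - 1] > max_gap:
            onsets[i] = onsets[i - 1] + max_gap
    return onsets

filled = density_pass(["GenoaBot_part"], 2, ["KitsilanoBot_part", "LondynBot_part"])
gaps = silence_pass([0.0, 1.0, 9.0], 2.0)
```

Shifting events rather than inserting new ones preserves the generated material while still tightening the timeline.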

The VisBot generates visual agents matching the audio musebots; thus, for ensembles of three GenoaBots, three similar-looking visual representations will be created. The visual agents react to audio features, including spectral centroid, spectral flux, and loudness.
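One plausible mapping from the named audio features to visual parameters is sketched below; the specific feature-to-parameter assignments and scaling ranges are assumptions, not the VisBot's actual mapping:

```python
# Hypothetical mapping from the audio features named above to parameters of
# a visual agent; assignments and normalisation ranges are assumptions.
def visual_params(spectral_centroid_hz, spectral_flux, loudness_db):
    """Map audio features to brightness, motion, and size of a visual agent."""
    return {
        "brightness": min(spectral_centroid_hz / 8000.0, 1.0),  # brighter timbre -> brighter image
        "motion": min(spectral_flux, 1.0),                      # more flux -> more movement
        "size": max(0.0, (loudness_db + 60.0) / 60.0),          # louder -> larger
    }

params = visual_params(4000.0, 0.5, -30.0)
```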

Ensemble 2: Two GenoaBots, two KitsilanoBots;
Visual musebot by Simon Lysander Overstall

Ensemble 10: Three GenoaBots, two LondynBots;
Visual musebot by Simon Lysander Overstall

Ensemble 8: GenoaBot, KitsilanoBot, two LondynBots, SienaBot;
Visual musebot by Simon Lysander Overstall

Ensemble 1: GenoaBot, KitsilanoBot, LondynBot, SienaBot, ParisBot;
Visual musebot by Simon Lysander Overstall

Moments – Monochromatic (2017)

Metacreative work for 16 musebots and live performer

This version of Moments, unlike its cousins, is intended for performance with a live musician, rather than as an installation.

Monochromatic explores how virtual musical agents can interact with a live musician, and with each other, within a unique structure generated for each musical performance. Because the structure is generated at the beginning of the performance, musebot actions can be considered more compositional than improvisational, as structural goals are known beforehand.

Important aspects (sectional points, harmonies, states) are communicated visually to the performer via standard musical notation. Additionally, the performer’s live audio is analyzed by a ListenerBot, and translated for the virtual musebots, who can, in turn, attempt to avoid the performer’s spectral regions, or become attracted to them.
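The avoid/attract behaviour might be sketched as a simple choice over candidate pitch registers relative to the performer's spectral centroid. The function name, the register model, and the mode flag are assumptions for illustration:

```python
# Hypothetical sketch of a ListenerBot-informed decision: a musebot picks
# the candidate register farthest from ("avoid") or nearest to ("attract")
# the performer's analysed spectral centroid. Names are assumptions.
def choose_register(performer_centroid_hz, mode, candidate_registers_hz):
    """Select a pitch register in response to the performer's spectrum."""
    distance = lambda hz: abs(hz - performer_centroid_hz)
    if mode == "avoid":
        return max(candidate_registers_hz, key=distance)
    return min(candidate_registers_hz, key=distance)

avoided = choose_register(440.0, "avoid", [220.0, 440.0, 1760.0])
attracted = choose_register(440.0, "attract", [220.0, 440.0, 1760.0])
```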

Peggy Lee, cello
July 28, 2017
Gold Saucer, Vancouver
