Seasons

Seasons is an audio-visual experience that models and depicts our natural environment across the span of a year. The system comprises video sequencing and transitions, enriched through their interaction with music and soundscape. The full work is a real-time cybernetic collaboration between three generative systems: video, soundscape, and music. The work runs continuously, using a variety of computational processes to build the audio-visual output for a single-channel high-definition video and multi-channel sound system.
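
The sketch below suggests one way such a real-time collaboration could be coordinated, with the video engine leading and the audio engines responding to each new segment; the class names, interfaces, and timing are illustrative assumptions, not the work's actual implementation.

```python
# Illustrative sketch only: class names, interfaces, and timing are
# assumptions, not the installation's actual code.
import itertools
import time

class VideoEngine:
    def next_segment(self):
        # Would return the next clip/transition plus its metadata and affect tags.
        return {"tags": ["winter", "forest"], "valence": 0.3, "arousal": 0.2,
                "duration": 1.0}

class SoundscapeEngine:
    def accompany(self, segment):
        print("soundscape responds to tags:", segment["tags"])

class MusicEngine:
    def accompany(self, segment):
        print("music responds to affect:", segment["valence"], segment["arousal"])

def run(video, soundscape, music, segments=None):
    """Video leads; the audio systems respond to each new segment in real time.
    In the installation this loop would run indefinitely (segments=None)."""
    for _ in (itertools.count() if segments is None else range(segments)):
        segment = video.next_segment()
        soundscape.accompany(segment)
        music.accompany(segment)
        time.sleep(segment["duration"])  # hold the segment for its duration

run(VideoEngine(), SoundscapeEngine(), MusicEngine(), segments=3)
```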

The generative video sequencing engine uses a recombinant process to select and sequence shots and transitions drawn from the system’s databases. It runs indefinitely and very seldom repeats its sequencing. The video engine uses metadata tags to provide semantic coherence to the ongoing stream of images and sequences. The aesthetic is that of “ambient video”[1], gently inviting the viewer into an experience of sensory engagement with our natural environment.
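
As a rough illustration of how tag-based recombinant sequencing of this kind can work, the following sketch picks each next shot by weighting candidates toward those that share metadata tags with the current shot, while excluding recently played material; the shot names, tags, and selection heuristic are invented for the example and are not the engine's actual databases or algorithm.

```python
# Sketch of tag-driven recombinant sequencing: tag overlap gives semantic
# coherence, a no-recent-repeat window keeps sequences from repeating.
# All data and the heuristic below are illustrative assumptions.
import random
from collections import deque

SHOTS = {
    "creek_spring":  {"spring", "water", "close-up"},
    "aspen_autumn":  {"autumn", "forest", "wide"},
    "snow_branches": {"winter", "forest", "close-up"},
    "meadow_summer": {"summer", "meadow", "wide"},
}

def next_shot(current, recent, shots=SHOTS):
    """Pick a shot that shares tags with the current one, skipping anything
    played recently."""
    candidates = [name for name in shots if name != current and name not in recent]
    weights = [1 + len(shots[current] & shots[name]) for name in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

def sequence(start, length=8, memory=2):
    recent = deque(maxlen=memory)   # short no-repeat window
    current = start
    for _ in range(length):
        recent.append(current)
        current = next_shot(current, recent)
        yield current

print(list(sequence("creek_spring")))
```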

Each video clip has also been hand-tagged with subjective measures of valence and arousal: these values, combined with the clip’s metadata, are sent to the soundscape and music systems, which generate appropriate accompanying material. The soundscape engine, Audio Metaphor, uses techniques from natural language processing, machine learning, and cognitive modelling to autonomously create an ambient soundscape from the metadata tags. The music engine, PAT, creates melodic, harmonic, and rhythmic material learned from a corpus through machine learning.
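
A minimal sketch of what a per-clip annotation and its hand-off to the audio systems might look like; the field names, value ranges, JSON-over-UDP transport, and port numbers are all assumptions made for illustration, not the actual interface between the video engine, Audio Metaphor, and PAT.

```python
# Sketch of a per-clip affect/metadata record and its broadcast to the
# soundscape and music engines. Field names, ranges, transport, and ports
# are illustrative assumptions only.
import json
import socket
from dataclasses import dataclass, asdict

@dataclass
class ClipAnnotation:
    clip_id: str
    tags: list          # semantic metadata, e.g. ["winter", "forest", "wind"]
    valence: float      # hand-tagged, e.g. -1.0 (negative) .. 1.0 (positive)
    arousal: float      # hand-tagged, e.g. 0.0 (calm) .. 1.0 (energetic)

def send_annotation(annotation, host="127.0.0.1", ports=(9001, 9002)):
    """Send the same record to the soundscape and music engines, each
    assumed to listen on its own UDP port."""
    payload = json.dumps(asdict(annotation)).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for port in ports:
        sock.sendto(payload, (host, port))
    sock.close()

send_annotation(ClipAnnotation("snow_branches", ["winter", "forest"], 0.4, 0.2))
```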

The team includes individuals with expertise in moving images, film, music composition, performance, installation, machine learning, sound art, software development, and multi-agent systems. Applying sound and music analysis and generative concepts to video, and vice versa, offers a rich opportunity for innovation in generative art and technology.