
EMPTYSET, exploring architecture, algorithms & the material reality of sound

Interview by Jack Apollo George

Can you automate the production of music? Can something intricate and moving be composed with the help of AI? 

For many, the almost invisible iterations processed by machines that speed up our lives, facilitate our interactions and perform tasks thousands of times faster than any person ever could represent the highest plane of technical ability. They are the tools that we use to nullify the mundane, to do the things we cannot be bothered to tackle ourselves. In a world where automation has taken hold of much of human activity, it is often anticipated that some of the last remaining bastions of purely human production would be the arts. 

Good music and sound design transports the listener. From a simple sonic stimulation, the entire body, the whole soul, can be moved and enlivened. How influential composers and producers achieve that remains a mystery to the best of us, and even at times, to the composers themselves. It seems unlikely, if not downright unsettling, to conceive of auditory experiences that are arranged or composed by an unthinking, unhearing machine. But the power of machine learning algorithms to find the patterns and similarities that unite disparate elements, the potential for inconceivably moving or beautiful ghosts to be found within the machine, is an exciting prospect for many producers and composers working across the ever-thinning boundaries between computer science and music. 

emptyset is a music and sound production project founded by James Ginzburg and Paul Purgas in 2005. Working between London and Berlin, they seek to explore the material reality of sound through compositions, installations and performances where the sonic landscape interacts with and is shaped by the physical space in which it is played. Theirs is a vision of sound design in which architecture forms a crucial part of the creative and experiential equation. In recent years, they have produced works and installations for a variety of settings, including the David Roberts Art Foundation, Spike Island, Tate Britain and the Architecture Foundation. 

Their latest project is an album named Blossoms, for which they recorded 10 hours of improvised samples using basic instruments such as pieces of metal and wood. This recorded material was then fed into a neural network which, over time and with careful supervision, could then output full pieces of music. emptyset worked with leading sound and computing technicians for eighteen months in order to perfect the machine learning and audio synthesis systems that eventually created the soundscapes on the album. Blossoms is an exciting step in the history of generative art. The album is released on the Thrill Jockey label on October 11th 2019. 

The album works especially well when heard through the prism of giving a voice to artificial intelligence. Indeed, you can hear something otherworldly, something mechanical yet breathing, in the grinding but airy sounds that sing from this record. It is often creepy and atmospheric but never uninteresting. Compellingly, Blossoms does not sound deliberately experimental and instead sits comfortably alongside other ambient and industrial releases. On a conceptual level, its promotion of the artist from composer and performer to curator is a deeply reassuring one. Digital programs can take, reinvent and remix all of human culture, from every recorded whisper to every lost holiday photograph. Increasingly, therefore, our role will be to discern the worthwhile and guide new explorations into what only we (for now, at least) can truly enjoy. 

What convinced you to deploy machine learning techniques so centrally in your songwriting process for this album?  

The ideas of structure, organisation and intelligence have been themes that we have considered within our previous recorded material. So the possibility of a more direct exploration of this territory through machine learning felt like a way to take ideas from our previous releases, which had been more philosophical, and build a method that could directly consider intelligence and cognition as a core part of the process.

From the initial decision, it took 18 months to arrive at a system that could respond to and assemble sound in a way that felt evolved enough for our needs and could reach the level of audio quality that we needed. This development arc involved conversations with multiple programmers and developers around the world and keeping up on the various white papers and research platforms in order to eventually arrive at a system that could form the foundation of an album.

Overall this was a territory that felt like the right space to explore with emptyset and made sense within the lineage and cosmogony of the project whilst enabling us to explore a whole new set of sonic possibilities.

What was it like recording the improvised samples knowing that these were for an algorithm to listen to and process and not for a thinking, feeling person?

The production for this initial part of the album was very similar to the recording sessions for the release Skin. We worked within the same studio, but this time using much more basic materials, including pieces of wood and metal as well as drum skins. The method of improvisation was actually very loose, and we didn’t try and preconceive what the system might understand but instead kept it as a more intuitive method.

In reality, we had no real sense of what the system itself would understand from these sessions, so improvisation felt like the best strategy – something closer to sketching rather than trying to make fixed structures or compositions. The final results of these recording sessions were then sliced into thousands of 1-3 second audio excerpts and categorised before being used as part of the seeding data set for the neural network.
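The slicing step described above can be sketched in code. This is a minimal illustration using only Python's standard-library `wave` module, assuming WAV recordings; the function name, the random choice of segment boundaries and the output naming scheme are all hypothetical, since the interview does not specify how emptyset's excerpts were actually cut or categorised.

```python
import random
import wave

def slice_wav(path, out_prefix, min_s=1.0, max_s=3.0):
    """Cut a WAV recording into short excerpts of roughly 1-3 seconds,
    in the spirit of the seeding data set described in the interview.
    Boundaries are chosen at random here (an assumption); the final
    excerpt may be shorter than min_s if the file runs out of frames."""
    with wave.open(path, "rb") as src:
        params = src.getparams()
        rate = src.getframerate()
        total = src.getnframes()
        pos, count = 0, 0
        while pos < total:
            # Pick a random excerpt length in frames within [min_s, max_s].
            length = int(rate * random.uniform(min_s, max_s))
            frames = src.readframes(min(length, total - pos))
            with wave.open(f"{out_prefix}_{count:05d}.wav", "wb") as dst:
                dst.setparams(params)
                dst.writeframes(frames)
            pos += length
            count += 1
    return count
```

Over ten hours of source audio, excerpts averaging two seconds would yield on the order of eighteen thousand files, consistent with the "thousands" of excerpts mentioned.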

What surprised you the most during the process of crafting the Blossoms album? 

The most surprising part was the phase of listening back to the many hours of raw audio that the neural network had synthesised. It was clear there were iterations of the system as it was learning to form basic sounds and structures from the data, until toward the end of this learning it eventually arrived at a very refined and accurate assemblage of the source material.

It was actually within the middle of this cognitive development that we found the most intriguing results, a space where the sonics felt like they had grown beyond basic forms but had not yet arrived at a familiar reinterpretation of the source material.

In this middle phase there was a sense of something still becoming or unfurling, a system in a moment of growth that had a particularly uncanny and almost biological sense to it. It was very much these moments within the production process that were the most unexpected and proved to be the elements that brought the central sense of life and evolutionary potential to the album.

What would you say is the ideal environment for someone to experience the Blossoms album in?

The album is available in a recorded format for general release, and alongside this, we will be presenting a live version of Blossoms that will begin touring in October at the Unsound festival.

We will be working with our visuals team to expand upon the themes of the record and will present a performative reworking of the audio as an expansive iteration of the album.

Do you think that listeners need to know about the technical and theoretical background of the album in order to enjoy it fully?

This is something that doesn’t feel necessary as a vital part of listening to the album. It is ultimately an emptyset record, so can be experienced in the same spirit as any of our previous material. Of course, a deeper sense of what is happening would bring a different frame of listening, but this doesn’t seem like a vital requirement.

You are presenting the album and new live show at Unsound this year; what can we expect from it?

The live show will be an immersive iteration of Blossoms, building on our previous performances to create an encompassing audio-visual experience that will expand on the themes and motifs of the record.

What is your chief enemy of creativity? 


You couldn’t live without…

Parsnip the emptyset cat.

(Images courtesy of emptyset)
