Interview by Ana Prendes
The technological hype around artificial intelligence is (fortunately) being counteracted by an ever-growing number of researchers, artists, and writers who work toward revealing the social and political implications of the infrastructures underpinning the technology.
Since 2014, Sougwen Chung has designed and constructed custom robots programmed with neural networks trained on the artist’s drawing gestures and biometrics. The artist and (re)searcher rejects the understanding of technology as a mere tool, instead embodying the idea of constructing robotic collaborators. Working with computer vision, biosensors, augmented and virtual reality, and neural networks, Chung performs with these mechanical arms, which imitate their sketching movements and draw in synchrony with Chung.
Chung thinks of these robots, each called D.O.U.G. (Drawing Operations Unit, Generation), as ‘co-creative systems’, blurring the edges of creative agency between humans and systems. Chung finds the technical challenges inspiring; their creative process is marked by intuition and trust in the unexpected outcomes of automation.
Chung has developed five generations of D.O.U.G.s. From researching human and computer vision to exploring memory with deep learning and recurrent neural networks, the latest drawing operations unit explores bio-mimetic robotics. Inspired by the idea that robots can also manifest human features, not just perform our duties, Generation 4 (shown during Frieze London last year) is connected to the electrical activity of Chung’s brain through an electroencephalogram (EEG).
For the most current generation, Chung has built a large-scale kinetic installation featuring the multi-robotic AI system Generation 5 (D.O.U.G._5), which will be presented at the Espoo Museum of Modern Art on 26 August. Activated through live performance, the installation uses an EEG headset and biofeedback technologies to link human and machine operations in collaboration: together, Chung and the system work across an ambitious configuration of three simultaneously forming paintings. Through this intimate connection, Chung rejects human control and envisions ‘robotics that focuses on collaboration, co-creation and care’.
During London Frieze Week last year, Chung presented a selection of their work and a performance in Entangled Origins, a progressive exhibition presented by Gillian Jason Gallery (which represents the artist in the UK) at Asia House. This was the first time Chung presented their ever-evolving connections between artificial intelligence and art in London, through a combination of painting, sculpture, video and performance. The performance was an otherworldly experience, challenging our preconceptions of artificial intelligence while exploring the ways in which ‘human artist’ and ‘machine artist’ can collaborate to create something extraordinary.
Could you tell us how your interest in exploring the dynamics between humans and systems came about? And more specifically, how did you start incorporating AI tools into your artistic practice?
My interest in the dynamics between humans and systems grew from a curiosity about how the technological systems that shape our behaviour, movement, and communications do so from individual, social, and technical perspectives.
A contemporary example of this is an AI system, which is a combination of computer vision, predictive modelling, and user input that feeds back to an individual in a variety of circumstances.
It might sound a bit dry, but I find this feedback loop between the intuitive and the technological really intriguing, and in 2015 I wanted to explore how this parallel, integrated process could be combined to create something beautiful and new.
Your practice revolves around the so-called human-machine collaboration. As collaboration is commonly understood as mutually constructive for all the parties involved, how would you describe these collaborations?
Developing new approaches to embodiment, memory, and improvisation is what excites me about collaboration and it’s what drives my interest in exploring creative systems. It’s why art and tech intersections with robotics, AI, and virtual reality matter — it’s about exploring new ways of creating and becoming-with machines.
I work with neural networks, computer vision, biosensors, and AR/VR to create relational robotics and technologies. You can think of them as co-creative systems tied to my body, movements, and biology. I like challenging the idea of technology as a tool, moving towards the idea of building collaborators. Expressions of this research take the form of artefacts like paintings, sculptures, performances, installations and simulations.
It began with Drawing Operations Unit: Generation 1 (Mimicry, 2014 – 2015), which explored mimicry and computer vision via colour tracking, transmitting the position of my pen to the robotic arm to develop real-time co-creation. Shared movement.
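The colour-tracking pipeline described here can be sketched in a few lines. This is purely an illustration, not Chung's actual system: the synthetic frame, the colour test, and the linear workspace mapping are all invented for the example.

```python
# Sketch of the Generation 1 idea: find a coloured pen tip in a camera frame
# and translate its pixel position into robot-arm coordinates.

def track_pen(frame, is_pen_colour):
    """Return the (row, col) centroid of pixels matching the pen colour."""
    hits = [(r, c) for r, row in enumerate(frame)
                   for c, px in enumerate(row) if is_pen_colour(px)]
    if not hits:
        return None  # pen not visible in this frame
    rows = sum(r for r, _ in hits) / len(hits)
    cols = sum(c for _, c in hits) / len(hits)
    return rows, cols

def to_robot_coords(centroid, frame_size, workspace_mm=300.0):
    """Linearly map a pixel centroid into a square robot workspace (mm)."""
    r, c = centroid
    h, w = frame_size
    return (c / w * workspace_mm, r / h * workspace_mm)

# Synthetic 4x4 RGB frame: black except a blue 'pen tip' at row 1, col 2.
frame = [[(0, 0, 0)] * 4 for _ in range(4)]
frame[1][2] = (0, 0, 255)

centroid = track_pen(frame, lambda px: px[2] > 200)   # "blue enough" test
print(centroid)                           # (1.0, 2.0)
print(to_robot_coords(centroid, (4, 4)))  # (150.0, 75.0)
```

A real system would read frames from a camera and stream the coordinates to the arm continuously; the core loop, however, is this same detect-then-map step.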
The artefacts are white ink on black paper and trace the limitations of the robotic positional translation, as well as my own adaptations to drawing with a robotic unit for the first time. Generation 2 (Memory, 2015 – 2016) explored memory with deep learning and recurrent neural networks, training the robot on two decades of my drawing data to create a gestural feedback loop based on my own style.
The white and blue artefacts show a hybrid human and machine drawing; in a sense, I’m collaborating with two decades of my drawing as remembered by a machine. Generation 3 (Collectivity, 2017 – 2019) explored urban movement through a multi-robotic system.
I collaborated with Bell Labs researcher Larry O’Gorman, who introduced me to his optical-flow computer vision system, which I used to extract path data from public cameras in NYC. I designed 20 painting robots and linked them to the data to create a performance of collective, machine-mediated movement.
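Extracting movement paths from camera footage can be sketched crudely. The Bell Labs optical-flow system is far more sophisticated; in this invented example each frame is just a set of occupied grid cells, and "flow" is the centroid of cells that changed between consecutive frames.

```python
# Sketch of the Generation 3 idea: derive a movement path from a sequence
# of frames, which could then drive a painting robot.

def changed_cells(prev, curr):
    """Cells that appeared or disappeared between two frames."""
    return prev ^ curr  # symmetric difference of two sets

def extract_path(frames):
    """Centroid of frame-to-frame change: a crude movement trace."""
    path = []
    for prev, curr in zip(frames, frames[1:]):
        delta = changed_cells(prev, curr)
        if delta:
            r = sum(cell[0] for cell in delta) / len(delta)
            c = sum(cell[1] for cell in delta) / len(delta)
            path.append((r, c))
    return path

# A 'pedestrian' moving one cell to the right per frame.
frames = [frozenset({(2, t)}) for t in range(4)]
print(extract_path(frames))  # [(2.0, 0.5), (2.0, 1.5), (2.0, 2.5)]
```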
It was a way of re-imagining landscape painting as a combination of human and machine vision, human and machine painting. Generation 4 (Waves, 2019 – 2021) explored bio-feedback and was developed in isolation during the onset of the pandemic.
I focused on internal flows of meditation captured through biometric recording with an EEG headset. I translated those states to the robotic unit as a physical expression of my meditative states during the lockdown.
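Translating an EEG-derived state into robot behaviour can be sketched in miniature. The band powers and the mapping below are invented for illustration; a real EEG pipeline involves filtering, artefact rejection, and spectral estimation, none of which is shown here.

```python
# Sketch of the Generation 4 idea: map a crude 'calmness' estimate from EEG
# band powers onto a robot stroke parameter.

def calmness(alpha_power, beta_power):
    """Relative alpha power as a rough proxy for a meditative state (0..1)."""
    total = alpha_power + beta_power
    return alpha_power / total if total else 0.0

def stroke_speed(calm, fast_mm_s=40.0, slow_mm_s=5.0):
    """Calmer states slow the robot's stroke down."""
    return fast_mm_s - calm * (fast_mm_s - slow_mm_s)

c = calmness(alpha_power=6.0, beta_power=2.0)
print(round(c, 2))      # 0.75
print(stroke_speed(c))  # 13.75
```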
Your practice involves performance, installation, and drawings, as well as developing pioneering technologies in robotics. Could you tell us about your creative process? What is the interplay between processes, technologies and artworks?
My creative process is led largely by intuition and wayfinding through materials. I find the challenges inspiring. Technical – teaching a robot to improvise, creating models for intelligence versus automation, the fact that humans and machines see things differently, sensor malfunctions. Relational – developing machine intuition, finding the right kind of feedback, letting go of control, trusting the process.
In Flora Rearing Agricultural Network (F.R.A.N.), presented as a video performance at Sonar, you seem to extend the intricate engagement between humans and machines to non-humans. Could you tell us more about how it came to be and how the development of the networked robotic system is going?
F.R.A.N. is very much still in development – inspired by bio-mimetic robotics, and the idea that robots can embody human traits, not just execute human tasks. Robotics that focus on collaboration, co-creation and care instead of control. Robots that steward natural ecosystems with regenerative power sources.
Considering the rising community of artists employing AI tools to support their work, using creative AI as a critical practice, or exploring the aesthetics of AI, how do you envision that working with AI can and will influence artistic practices?
I encourage artists and practitioners to think of AI systems as creative catalysts: permeable, fallible, and ripe for interrogation and reinvention.
What is your chief enemy of creativity?
You could not live without….