Interview by Lidia Ratoi
Nothing in this (digital) world is true. And Ben Grosser understands this. True to his own circuit-bending philosophy, he uses software to “destroy” our current perception of social media. He does that with everything – music, computers, machines. And through this, he completely changes how we perceive a given system.
Ben Grosser started as a music composer. But he is also a coder, a scientist and an artist. On his own, he is a cultural phenomenon focused on social media ethics. In his work, he asks questions such as “What does it mean for human creativity when a computational system can paint its own artworks? How does an interface foregrounding our friend count change our conceptions of friendship? Who benefits when a software system can intuit how we feel?” Most importantly, he puts users in positions where they face unexpected reactions from the technology that surrounds them.
One of his most famous series is Demetricator. Twitter Demetricator. Facebook Demetricator. Instagram Demetricator. And so on. The Demetricator is a browser extension that hides the metrics on a given social media platform. Imagine, for example, a Twitter where you can no longer see how many people retweeted something. Instead of “23.4K Retweets”, it simply reads “Retweets”. Can we still tell, on our own and without outside influence, whether we like something? Is it still a success or a failure?
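To make the idea concrete, here is a minimal sketch of how a demetricator-style browser extension could work. This is not Grosser’s actual code: the selector and the count pattern are assumptions, and a real platform’s markup differs and changes frequently.

```typescript
// Minimal sketch of a demetricator-style content script (illustrative only).
// It blanks text that looks like a count (e.g. "23.4K"), so a label such as
// "23.4K Retweets" reads simply as "Retweets".
const COUNT_PATTERN = /^[\d.,]+\s*[KM]?$/i; // assumed format of visible metrics

function demetricate(root: ParentNode): void {
  // Assumption: counts live in small <span> elements; real markup varies.
  root.querySelectorAll<HTMLElement>("span").forEach((el) => {
    const text = el.textContent?.trim() ?? "";
    if (COUNT_PATTERN.test(text)) {
      el.textContent = "";
    }
  });
}

// Timelines load dynamically, so re-apply whenever new content is injected.
new MutationObserver(() => demetricate(document)).observe(document.body, {
  childList: true,
  subtree: true,
});
demetricate(document);
```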
The experiment was meant to reveal how these metrics guide our behaviour and to ask who benefits from them. It could also give us a certain freedom. Imagine sending things out into the (social media) world without ever knowing who liked them. Nobody cares. Just you and the things you want to share. Input – unknown output.
Philip K. Dick, the renowned sci-fi writer, asked: do androids dream of electric sheep? In doing so, he explored questions about humans versus androids and whether empathy is a uniquely human trait. In a similar spirit, Grosser treats technology and software as living beings, focusing on the loop of how they affect us but also on how we affect them.
In his Computers Watching Movies, he analyses what a computational system sees when it watches the same films we do. The work renders this machine vision as a series of temporal sketches, with the sketching process synchronised to each clip’s original audio. Grosser’s code gives the system a certain degree of preference, allowing it to decide what it does and does not want to watch. He uses a rather eclectic mix of six movies (2001: A Space Odyssey, American Beauty, Inception, Taxi Driver, The Matrix, and Annie Hall) as input, true to his remix-minded approach.
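As a loose illustration (not the code behind Computers Watching Movies), one way software can be given a crude “preference” about what to watch is to sample frames from a playing video and attend only when enough is changing; the sampling size and threshold below are arbitrary assumptions.

```typescript
// Illustrative sketch only: a crude machine "viewer" that samples frames from
// a playing <video>, measures how much the image changes between samples, and
// "attends" only when the change exceeds an arbitrary threshold.
const video = document.querySelector("video");
const canvas = document.createElement("canvas");
const ctx = canvas.getContext("2d");
canvas.width = 160;
canvas.height = 90;

let previous: Uint8ClampedArray | null = null;

function watch(): void {
  if (!video || !ctx) return;
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const current = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
  if (previous) {
    let change = 0;
    for (let i = 0; i < current.length; i += 4) {
      change += Math.abs(current[i] - previous[i]); // compare red channel only
    }
    const meanChange = change / (current.length / 4);
    if (meanChange > 10) {
      // A stand-in for "deciding to watch": here it just logs the moment.
      console.log("attending at", video.currentTime.toFixed(2));
    }
  }
  previous = current.slice();
  requestAnimationFrame(watch);
}
watch();
```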
Grosser’s job at the Beckman Institute inspired this multidisciplinary, intermixed approach. Out of it, Interactive Robotic Painting Machine was born: an AI-driven robot that paints unplanned pictures, making us question whether it paints for itself or for us. Some pieces use audio input that tells the robot whether the human is satisfied with what the machine is doing, altering the process in real time. However, Grosser likes the freestyle paintings, suggesting that constantly critiquing one’s creative process may not be productive. The installation has also evolved into Head Swap, where the loop runs between the machine and the violinist Benjamin Sung.
Human, machine. Action, reaction. Software, anti-software. Belonging to no single field, be it musical, academic or scientific, yet knowing how to operate in all of them, Ben Grosser can offer an egalitarian approach. He does not fully belong anywhere, but he bridges the gaps between these contemporary practices.
Ben Grosser’s work appeals to the most subtle of elements – perception. The computer experiences your world as much as you experience its. And Grosser suggests that each machine has a soul, whether humanoid or robotic. It shouldn’t be surprising, then, that Interactive Robotic Painting Machine is credited as an artist. Because, after all, what is an artist if not the one making art?
You are an artist and researcher interested in the sociopolitical effects of software in current societies. You have a background in new media and music composition, and you were also the director of the Imaging Technology Group at the Beckman Institute for Advanced Science and Technology. For readers who aren’t familiar with your work, how and when did all these interests come about?
With the benefit of hindsight, I see it as a slow merging of varied interests into a singular practice. My early focus on music composition oriented me to focus on systems: systems of making, systems of listening, and systems of composing. This focus was intertwined with a modernist desire to destroy and remake whatever system I could identify, and my primary method was computer music. I wanted to write software that would generate sounds we hadn’t yet heard, that not just avoided but abandoned previous conceptions of what sound could—or should—be.
As I started my doctorate in music, I also started working part-time at Beckman, getting involved with what at that time (the mid-’90s) were new and esoteric technologies, including high-res digital video and photography, 3D printing, and 3D animation. Within a year, I had quit the doctorate and was working full-time at Beckman, focused on digital imaging, visualization, and software development for remote instrumentation.
Eventually, I started collaborating with some of the faculty, developing varied projects ranging from automated artificial bone implant design to virtual microscopy. But despite my group’s and projects’ science focus, I never lost my interest in the arts. I brought artists in for talks. I encouraged artists on campus to use our labs. I set up exploratory conversations between artists and scientists and spent a couple of years pushing the Institute to establish a new research theme focused on Art & Technology. But in the end, that push was unsuccessful, and I wanted to steer my work in new directions.
So I left the Institute to return to art, but I brought Beckman’s focus on multidisciplinarity with me. I spent a year building and coding an artificially intelligent painting robot1 that listened to and sonically contributed to its environment; in a way, it was a hybrid of my past work on computer music and composition, my Beckman work on computer-controlled hardware, and my overall focus on remaking systems. I then went back to school for an MFA in New Media.
This period collided with the broad emergence of software as the dominant force in contemporary society—for search, social media, smartphones, and more—and helped me turn all of my attention to examining—through a practice-based artistic research methodology—the effects of this shift. Now my practice includes net art, computational video, AI-controlled robotics, sound art, interactive installation, theoretical writing, and online discourse (such as this interview).
Your most recent project, Twitter Demetricator, follows the Demetricator series, which aims to detach us from the constant enticement of social media metrics (likes, followers, retweets…). What have been the most significant findings from the use of these tools?
Some of the common themes with Demetricator are that its removal of visible numbers from social media platforms blunts feelings of competition (e.g. between a user and their friends), removes compulsive behaviours (e.g. stops users from feeling a need to check for new notifications constantly), and overall, reveals to users just how focused they have been on what the metrics say. When we see how many “likes” our last post received, we can’t help but take that into account when we write our next post. In other words, the metrics are guiding and changing what we say, how we say it, and who we say it to.
But perhaps most interestingly, Demetricator has also helped users to realize that they craft explicit rules for themselves about how to act (and not act) based on what the numbers say. For example, multiple Facebook users have shared with me that it turns out they have a rule for themselves about not liking a post if it has too many likes already (say, 25 or 50). But they weren’t aware of this rule until Demetricator removed the like counts. All of a sudden, they felt frozen, unable to react to a post without toggling Demetricator off to check! If your readers are interested in more stories like this, I have detailed many of them in a paper about Demetricator and Facebook metrics called “What Do Metrics Want: How Quantification Prescribes Social Interaction on Facebook”2, published in the journal Computational Culture. And a couple of recent accounts by journalists using Twitter Demetricator are worth reading as well, one in The New Yorker3 and the other in Slate4.
What are the biggest challenges (technical or conceptual) you face in the development of your projects?
Technically, my biggest challenge is that the experiments I want to conduct and/or the changes I want to make to existing software systems are unsanctioned ones and thus require adaptive roundabout strategies. For example, I tend to think of my Facebook Demetricator as an artwork and a long-term code-based performance project. Facebook doesn’t want me hiding their metrics, so doing so against their wishes through browser extensions means I have to constantly react to the changes they make. Code that hides the “like” metrics today may not work tomorrow, and, more frustratingly, code that hides your “like” count may not hide mine!
This is because Facebook has many variant codebases in the field at any moment. In other words, your Facebook may not be my Facebook. So keeping this project going is an ongoing challenge. Further, this method of working, which I have come to refer to as “software recomposition,” means that every new work I make adds to an ever-increasing pile of technical projects that need to be maintained (or abandoned). I work hard to keep everything active for as long as I can.
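As an illustration of what that maintenance looks like in practice (again, not Grosser’s actual code), an extension of this kind often has to try several selectors, one per markup variant it knows about, and notice when none of them match anymore; every selector below is hypothetical.

```typescript
// Illustrative sketch: try several hypothetical selectors, one per known
// page variant, and warn when none match (i.e. the platform changed again).
const LIKE_COUNT_SELECTORS = [
  "[data-testid='like-count']", // hypothetical variant A
  "span.like-count",            // hypothetical variant B
  "[aria-label$='reactions']",  // hypothetical variant C
];

function hideLikeCounts(): boolean {
  for (const selector of LIKE_COUNT_SELECTORS) {
    const matches = document.querySelectorAll<HTMLElement>(selector);
    if (matches.length > 0) {
      matches.forEach((el) => (el.style.visibility = "hidden"));
      return true; // this page variant is covered
    }
  }
  return false; // unknown variant: the demetrication silently fails here
}

if (!hideLikeCounts()) {
  console.warn("No known selector matched; the extension needs updating.");
}
```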
Could you tell us a bit more about the latest developments in your research on artificially-intelligent creative systems?
I’m currently completing a new work called Autonomous Video Artist (or AVA for short). AVA is an artificially intelligent, self-navigating, video-capture robot that seeks out, records, edits, and uploads its own video art to the web. AVA is an outside observer who looks at our world and attempts to decode it as an artist. But different from human artists, AVA hasn’t spent years watching videos or learning how artists convey narratives using temporal media.
Instead, AVA starts from the beginning, using an iterative process of making to develop its own ideas of what video art can be. Most importantly, AVA sees the world differently than we do. I hope this difference will help uncover how culture directs what humans see and don’t see, showing us at least some of the ways that human vision has become automated—or “machinic”—through culture.
Of all the newest advancements in art created by intelligent software and machines – different pioneering works and projects from Magenta, Google’s IAM, AI-generated poetry, music and artworks (my artificial muse…) – how do you predict they are going to impact our future societies?
Well, I wouldn’t necessarily lump everything in your list together as equally influential. But in general, I hope these projects will help us see—and thus, understand—how social, cultural, technical, and political systems are changing who we are and what we do. In other words, I hope these systems generate more questions than answers. What does it mean for human creativity when a computer can make its own art?
How does machine vision differ from human vision, and what does that difference reveal about our own culturally-developed ways of looking? Why are we worried when we see a physically-embodied, AI-controlled software system making a painting, but not so worried when we see non-physically-embodied, AI-controlled software systems (such as Google or Facebook) reconfiguring how we find information or whom we call a “friend”?
Referring to your work at the Beckman Institute for Advanced Science and Technology: how did working in a scientific institution, I imagine jointly with scientists, shape or influence your artistic practice?
This is hard to answer. Artists often see my methods as at least somewhat scientific. Scientists are often frustrated that my methods aren’t more scientific and see them instead as artistic. So perhaps the primary influence of my varied interactions is that I don’t fit in one place, and thus nobody quite knows where to put me.
What is your chief enemy of creativity?
Metrics.
You couldn’t live without…
My spouse. My laptop. The internet. Coffee.