Text by Patrick Tanguay

The MUTEK AI Ecologies Lab concluded with a public presentation and mini-conference at the MUTEK Forum, themed “Radical Rituals.” In the apparently haunted basement of Monument-National, with a vibe somewhere between solarpunk speakeasy and Batcave, the Playground provided an atmospheric backdrop for thoughtful work on artificial intelligence’s ecological footprint.
MUTEK Forum offers something rare for those closely following technology—not just the latest gadgets or startup news, but also deeper, systemic questions. It is a space where the critiques and conundrums that occupy policy researchers and environmental activists are explored through artistic practice. Here, transdisciplinary work flourishes at the intersections of art and technology, old methodologies and new tools, computer music theory and robustness engineering, Chinese pufferfish behaviour and emotion detection algorithms, advanced generative AI and hand-drawn sketches.
While public attention focuses on AI demonstrations and risk predictions, MUTEK Lab participants were working on a more fundamental question: what would AI look like if it embodied different values? The projects revealed the Lab’s methodology: rapid prototyping coupled with critical reflection, guided by expert insights that grounded artistic experimentation in rigorous research.
Expert interventions
This gathering capped a journey that began with three months of online collaboration and continued with an intensive in-person build phase in Montréal, at the Society for Arts and Technology (SAT), the Milieux Institute, and the Mila – Quebec Artificial Intelligence Institute. The setting enabled exchange between participants, who presented technical prototypes in a makers’-fair format, with the schedule alternating between expert discussions and breaks for mingling around projects and themes.
Tegan Maharaj, a responsible AI researcher and core professor at Mila, who leads the ERRATA lab (Ecological Risk & Responsible AI: Theory & Practice), addressed sustainable AI through frameworks for practical impact measurement and deep risk mapping. As a member of Abundant Intelligences, co-founder of Climate Change AI, and managing editor at JMLR, Maharaj offered concrete approaches to identify ecological harms and design responsible AI systems.
Complementing this technical perspective, Pauline Bourdon brought insights from cultural sustainability practice. A live-events sustainability strategist and founder of Soliphilia with 17 years of experience across global tours and festivals, including Glastonbury, Bourdon spoke about the power of imagination as a tool for climate action. She demonstrated how artists, teams, and festivals can embed intersectional sustainability and green touring practices, championing knowledge sharing through university teaching and industry groups to scale these approaches.
La Piscine, an incubator supporting innovation across the creative and cultural industries, hosted “Digital Art as a Driver of Sustainable Change: A Dialogue on New Forms of Engagement.” Featuring Marion Schneider and Juliette Van Gaalen (Lucion), the discussion explored how digital art can catalyse sustainable practices and new audience engagement models through cross-sector collaboration. Together, these expert contributions forged a middle path between technological acceleration and thoughtful consideration of consequences, pushing the Lab beyond efficiency to reimagine AI’s ecological footprint through both technical rigour and cultural transformation.



Lab innovations
Artist-led AI in Animation
Award-winning animator and filmmaker David Barlow-Krelina addressed ethical concerns surrounding AI image generation, particularly the use of unconsented training data and the creation of work in other artists’ styles. He shifted to working exclusively with his own artwork in image-to-image workflows, which he found more creatively engaging than text-to-image generation. Using local AI served as both an ethical choice and a practical learning tool—visible resource demands, such as increased room temperature, fostered conscious engagement with environmental costs. Barlow-Krelina positioned AI as expanding creative possibilities rather than replacing human artistry, offering artists greater autonomy over their tools while maintaining transparency in the process.
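For readers curious about the mechanics, a local image-to-image pass can be as simple as the sketch below, which uses Hugging Face’s diffusers library. The checkpoint and parameters here are illustrative assumptions, not Barlow-Krelina’s actual toolchain.

```python
# Illustrative local image-to-image sketch (not the artist's actual setup).
# Running locally keeps source images on the artist's own machine and makes
# resource use (GPU load, room heat) directly perceptible.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint, an assumption
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("my_own_drawing.png").convert("RGB").resize((512, 512))
result = pipe(
    prompt="hand-drawn character, painterly texture",
    image=init,
    strength=0.55,       # how far the model may drift from the source image
    guidance_scale=7.5,
).images[0]
result.save("variation.png")
```

Because the starting point is the artist’s own drawing rather than a text prompt alone, the model works as a transformation of existing work instead of a generator of unattributed styles.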
Why it matters: The project captures creators’ ambivalence towards AI. On the one hand, large frontier models have hoovered up every piece of knowledge, art, and data they could find, without paying for it. On the other hand, those tools are incredibly powerful and can be used respectfully and ethically. So much of our shared knowledge is embodied in these models that artist Holly Herndon even speaks of “Collective Intelligence” [1]. Other proposals that consider creators’ rights have been put forward, such as Tim O’Reilly’s concept of an “Architecture of Participation for AI” [2].
Robust AI training and sustainability
AI researcher Dane Malenfant (McGill University, Mila) focused on maximising entropy in reinforcement learning agents—training AI systems to handle uncertainty and explore diverse pathways instead of converging on single solutions. This approach builds robustness into AI, much like designing durable goods rather than disposable products that require constant replacement. A system trained to handle uncertainty needs less frequent updates; his work suggests models that adapt more effectively to new situations while consuming less energy through reduced retraining cycles.
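As a rough illustration of the idea (a sketch of the general technique, not Malenfant’s actual method), an entropy bonus can be added to a standard policy-gradient loss so the agent is rewarded for keeping its action distribution spread out rather than collapsing onto a single choice:

```python
import torch
import torch.nn.functional as F

def entropy_regularised_loss(logits, actions, advantages, entropy_coef=0.01):
    """REINFORCE-style loss with an entropy bonus.

    The entropy term rewards the policy for staying spread over many
    actions instead of converging prematurely, one standard way to
    encourage the diverse exploration described above.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # per-state entropy
    # Maximise reward-weighted log-probability plus entropy,
    # i.e. minimise its negation.
    return -(chosen * advantages + entropy_coef * entropy).mean()
```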
Why it matters: François Chollet, former Google engineer and creator of the well-regarded ARC benchmark, has said that “deep neural networks suffer from inherent brittleness” [3], so we don’t have to look very far to find a need for more robustness in the field. But, as Malenfant noted, it goes beyond AI. Olivier Hamant discusses the concept of robustness versus performance [4] in nature and organisations, as well as the “trap of efficiency and performance.” Futurist Jamais Cascio created the BANI framework, which begins with brittleness [5], one of the “ways in which we feel like our world is falling apart.”
Wattsup
Creative technologist Lionel Ringenbach (Metacreation Lab for Creative Artificial Intelligence) found that local AI energy use is easily measurable, but remote tools like ChatGPT require complex estimation. This led to a project pivot: instead of measuring ChatGPT’s energy use directly, he created a tool for developers to measure the energy use of coding assistants. The final tool scrapes OpenRouter data to translate token usage into watt-hours and CO₂ emissions. He also revealed substantial waste in AI operations (e.g., 55% of GitHub Copilot requests are made in the background and discarded), highlighting the potential for optimisation through greater transparency.
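The core conversion is straightforward arithmetic. The sketch below shows its shape with placeholder constants; real per-token figures vary widely by model and deployment, and Wattsup derives its own numbers from OpenRouter data rather than the fixed values assumed here.

```python
# Placeholder constants for illustration only; actual figures depend on the
# model, the data centre, and the local electricity grid.
WH_PER_1K_TOKENS = 0.3        # assumed energy per 1,000 tokens, watt-hours
GRID_G_CO2_PER_KWH = 400.0    # assumed grid carbon intensity, g CO2e/kWh

def footprint(tokens: int) -> tuple[float, float]:
    """Convert a token count into (watt-hours, grams of CO2e)."""
    wh = tokens / 1000 * WH_PER_1K_TOKENS
    g_co2 = wh / 1000 * GRID_G_CO2_PER_KWH
    return wh, g_co2

wh, g = footprint(25_000)     # e.g., a day's worth of assistant requests
print(f"{wh:.2f} Wh, {g:.3f} g CO2e")
```

Even with rough constants, making the number visible per request is what turns background waste, like discarded Copilot calls, into something developers can see and cut.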
Why it matters: Training LLMs uses massive amounts of resources, from the materials used in hardware production to the electricity powering data centres and the water that cools them. It’s still unclear exactly which action, product, or use consumes how much [6]. Ringenbach’s discovery of extraneous Copilot requests is one example, and the problem is growing: according to some estimates, AI data centres could need 30 times more power by 2035 [7].
TamagotchU
Experimental game creator Eyez Li challenged the assumption that high resolution equals quality art. She championed lo-fi games that run on lightweight AI models and accessible devices such as the Raspberry Pi (small computers costing under $50). Li explored automating the “spirituality” of AI creatures, positioning them as equal presences alongside humans in mixed-reality environments. Drawing inspiration from the environmental sensitivity of Yangtze River pufferfish, she connected the project to local impacts, such as thermal pollution from data centres. The project evolved from a simple Raspberry Pi setup to a customised, 3D-printed device resembling an old-fashioned telephone, and matured by giving the fish a more complex, independent personality, one that isn’t solely defined by user happiness and can feel threatened.
Why it matters: There’s a thriving scene of hobbyists and artists using cheap, low-energy hardware and small AI models to explore their possibilities, from Li’s project to those featured in the solar-powered Low-tech Magazine [8]. Karen Hao, author of Empire of AI [9], reminds us that in scientific research the best approach is often not to rely on large models, but rather to “curate very, very small data sets, train them on very, very small computers,” and still create powerful AI [10]. We should also remember that the promise of eternal growth in size and power for frontier LLMs rests on a scaling myth [11].
You, me, the lichen & Spore
Multidisciplinary artist, researcher, and curator Kelly Andres explored cultivating AI intelligence, inspired by lichen communities and their symbiotic relationships. By fine-tuning small local models, she created Spore, an AI embodying mutual respect and ecological interdependence. Her breakthrough was a three-tier interaction model in which Spore responds differently based on user tone—sparsely to commanding language, gently to neutral interactions, and lyrically to relational approaches. Andres introduced “data gleaning”—intimate, consent-based data collection—as an alternative to extractive methods. Spore demonstrated how AI development could function as a symbiotic process rather than a prediction tool.
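A toy version of such a tone gate might look like the sketch below. The keyword lists and response registers are invented for illustration; they stand in for whatever tone analysis Spore actually performs.

```python
# Toy three-tier tone gate, loosely modelled on the behaviour described
# above. Simple keyword matching is an assumption, not Andres's classifier.
COMMANDING = {"do", "now", "must", "give", "answer"}
RELATIONAL = {"we", "together", "please", "thank", "wonder"}

def tier(message: str) -> str:
    words = set(message.lower().split())
    if words & RELATIONAL:
        return "lyrical"   # relational tone: expansive, poetic reply
    if words & COMMANDING:
        return "sparse"    # commanding tone: terse reply
    return "gentle"        # neutral default

def shape_reply(message: str, model_reply: str) -> str:
    t = tier(message)
    if t == "sparse":
        return model_reply.split(".")[0] + "."   # keep only the first sentence
    if t == "lyrical":
        return model_reply + "\n(the lichen hums in return)"
    return model_reply

print(tier("give me the answer now"))                 # -> sparse
print(tier("shall we wonder about this together?"))   # -> lyrical
```

The design choice is the point: the interface rewards relation rather than command, inverting the usual assistant dynamic.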
Why it matters: The connection between lichen and AI isn’t as far-fetched as it first appears. Researchers are increasingly turning to biological intelligence for insights—from Claire L. Evans’ exploration of unconventional forms of intelligence and computation in nature [12] to Blaise Agüera y Arcas’ examination of parallels between computing, life, and intelligence [13]. Andres’s approach suggests AI development could learn from symbiotic rather than extractive models.
CITYLLM + CITYchat
Femke Kocken, Sura Hanna, and Connor Cook from Concordia University’s Next-Generation Cities Institute discovered a pressing institutional need: academic researchers often send sensitive queries and data to commercial AI services, such as ChatGPT, raising serious privacy concerns. The team pivoted to explore how institutions might run their own chatbots locally. Their system enables researchers to interact with their institution’s publications and data without sending any information to external servers. Running entirely on local hardware, the approach revealed the benefits of local AI: data privacy, energy efficiency, and customisability.
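The underlying pattern is local retrieval over an institution’s own documents. A minimal sketch, with library and model choices that are assumptions rather than the team’s actual stack, might look like this:

```python
# Minimal local retrieval sketch: embed institutional documents once, then
# answer queries without any data leaving the machine. Library and model
# choices here are illustrative assumptions, not CITYLLM's actual stack.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small model, runs on CPU
docs = [
    "Abstract of institutional publication A...",
    "Abstract of institutional publication B...",
]
doc_vecs = embedder.encode(docs, convert_to_tensor=True)

def retrieve(query: str, k: int = 3) -> list[str]:
    """Return the k most relevant local documents for a query."""
    q = embedder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(q, doc_vecs, top_k=k)[0]
    return [docs[h["corpus_id"]] for h in hits]

# The retrieved passages would then be passed as context to a locally
# hosted chat model, rather than to a commercial API.
print(retrieve("heat-island effects of data centres"))
```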
Why it matters: Between worries about ChatGPT and others memorising all of our conversations, the centralisation of data and power [14], proprietary data “leaking” into public answers, and copyright concerns, it’s no wonder institutions [15] and organisations are taking steps to keep their data on their own servers and under their own control.


Experimenting with hard questions
The MUTEK AI Ecologies Lab revealed how artists and creative technologists tackle questions that mainstream AI discourse avoids or has not yet figured out. While policy debates about AI often stall in abstractions, these projects experimented with concrete alternatives: What does consent-based training look like? How do we measure the environmental cost of daily use? Can we build systems that embody symbiosis rather than extraction?
The projects filled gaps in current conversations. They explored ethical image generation while companies dismissed artists’ concerns, investigated robustness while the industry pursued ever-larger models, developed energy-transparency tools that tech companies rarely build, questioned computational assumptions, and envisioned different human-AI relationships.
By combining artistic practice with technical experimentation, this approach produced prototypes for tools, and for ways of thinking about technology, that prioritise care and ecological boundaries over raw performance. The Lab showed that our pressing questions about AI’s social role might be best explored where creative practice meets critical making, where “what if” scenarios get built and tested rather than just discussed.
MUTEK thanks the Canada Council for the Arts and the City of Montréal for their support of the first edition of the AI Ecologies Lab. The project is funded through Montréal’s Cultural Development Agreement, between the City of Montréal and the Government of Québec, and delivered in partnership with the Society for Arts and Technology (SAT), Milieux Institute for Art, Culture, and Technology, the Applied AI Institute, and Abundant Intelligences.




