Artist Brendan Dawes talks to IBC365 about his new documentary Eno and the invention of a generative ‘Brain’ that produces a unique version every time it’s watched.

Eno, the new documentary about fabled musician Brian Eno, is suitably avant-garde: thanks to AI, no audience will see the same version of the film twice. It is unlikely to be the last feature film presented in this fashion, with major streaming platforms and directors keen to use the patented underlying software, according to its co-creator, the British artist Brendan Dawes.

Brendan Dawes at Sheffield Doc Fest

“We’ve had discussions with big film studios and big-name directors as well as documentary makers to imagine doing things in a different way,” he says. “An advertising company was also interested in making thousands of versions of a normal commercial.”

Discussions include those with fiction filmmakers. “We’ve definitely thought about it,” adds Dawes. “Christopher Nolan’s films play with the idea of timelines. Take Memento, which is told backwards. Perhaps if he had had our software then Memento would be different, every time it was screened.”

Netflix was reportedly among those expressing interest when the film premiered at Sundance. The streamer had, after all, pioneered branching narratives with dramas like Black Mirror: Bandersnatch. However, the cost of dynamically generating high-resolution files for streaming over the internet would, at present, be astronomical.

“We’ve been talking with various platforms, Netflix included,” Dawes confirms. “We could certainly just do a director’s cut of Eno but that would kill our creative ambition for the film. Our dream is that when you stream it, it generates a unique version. It’s an ongoing conversation with all these platforms and hopefully this year it will be available on one of them. We’re trying to find the perfect solution.”

Uniquely trained

Dawes created the software, which is called Brain One (an anagram of the film’s subject), with US director and visual artist Gary Hustwit. Together they have formed Anamorph, a software and film company exploring ways to bring generative technology into the process of making and experiencing movies.

He stresses that Brain One is a generative system, not a generative AI tool like Midjourney or Sora. The algorithm was not trained on anyone else’s data, IP or other films. It was, however, trained on archive material of Eno and new footage shot by Hustwit, and taught to recombine those video files, along with their 5.1 audio tracks, into new sequences composited together with generative links in real time.


Recent screenings of Eno have also been presented with a piece of hardware that looks like a DJ’s mixing deck.

“We realised that we wanted something a bit more theatrical on stage rather than just a laptop,” says Dawes. “So we approached Swedish consumer electronics designers Teenage Engineering to make a physical version of Brain One. At screenings now people want to take pictures of it. It’s almost like it has groupies. That’s because Teenage Engineering has a lot of fans but they are also just fascinated by the physical manifestation of this software.”

That’s apt because Dawes’ journey as a visual artist began as a student of sound engineering in Manchester at the start of the rave scene.

“I wasn’t a musician but I could program and sequence and sample and scratch with vinyl. Like Marcel Duchamp’s Readymades, this form of music was about combining things in different ways. What I’m doing now is basically the same. I’m using data but reforming it to create something new.”

Inspired by technology

His fascination with computers began from the moment he plugged in a Sinclair ZX81 in the early 1980s. “The power of typing words into it and making it do something was incredible. I never studied computer science but code just made sense to me. What I really loved is that by changing just one element you completely change the artwork. You can make infinite variations. It is reactive in real time, not fixed. There’s nothing more contemporary in contemporary art than digital art. It is the art of now.”

Cinema Redux at Action Design Over Time MoMA

Growing up in Southport, Dawes spent every minute of the weekend in the video game arcades around town. He channelled this memory into the artwork Arcade Machine Dreams (2021).

“Using actual footage from games like Defender, Tempest and Galaga, I created a generative system which transposes time, movement and pixels to create flowing, abstract data sculptures – a retrospective totem for each of these games which meant so much to me.”

His real-time generative work You, Me, and the Machine (2022) morphs its appearance in relation to a viewer’s proximity. Passengers BCN (2023) responds to official passenger data from Barcelona Airport and gives sculptural form to the relationship between digital and physical.

His works titled Cinema Redux, part of the MoMA permanent collection, each create a visual fingerprint of a single movie composed of dozens of frames. Films including Metropolis, Vertigo, Taxi Driver, Deliverance, Gone With The Wind and Jaws have been subjects; the first was created in 2004 from DVD footage.

“I was wondering how I could present film in a different way than as a static image. I sampled the film every second, and then I built that up for 60 seconds per row for the entire film.

“It was basically data visualisation without abstracting the data,” he says. “You’re using the data as the film frame, and just presenting it, without changing the colours or anything. But when people saw it they said things like ‘I can see the rhythm of the film.’ I’ve got to be honest, it’s one of the simplest things I’ve ever made.”
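The sampling and layout Dawes describes can be sketched in a few lines. This is a minimal illustration using NumPy, with synthetic stand-in frames; real use would decode one frame per second from the film with a video library (e.g. OpenCV), which is assumed rather than shown here:

```python
import numpy as np

def cinema_redux(frames, per_row=60):
    """Tile one-frame-per-second samples into a grid, 60 frames (one minute of film) per row."""
    h, w, c = frames[0].shape
    rows = -(-len(frames) // per_row)  # ceiling division: number of rows needed
    canvas = np.zeros((rows * h, per_row * w, c), dtype=frames[0].dtype)
    for i, frame in enumerate(frames):
        r, col = divmod(i, per_row)
        canvas[r * h:(r + 1) * h, col * w:(col + 1) * w] = frame
    return canvas

# Stand-in for sampled film frames: 150 tiny 8x12 RGB images (2.5 minutes of film)
frames = [np.full((8, 12, 3), i % 256, dtype=np.uint8) for i in range(150)]
print(cinema_redux(frames).shape)  # (24, 720, 3): 3 rows of 60 slots, each tile 8x12
```

The result is the data presented as itself, exactly as he describes: no abstraction, just every sampled frame laid out in reading order.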

He adds, “This sort of reappropriation and remixing is obviously a very common thread throughout the whole of digital arts.”

Myriad of possibilities

Generative technology is an extension of his work. Using Anamorph’s software, he says, you could make over 52 quintillion variations of Eno.
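For a sense of scale: the number of orderings of interchangeable segments grows factorially, so quintillions arrive quickly. Brain One’s actual state space is not public, so the segment counts below are purely illustrative:

```python
import math

# Hypothetical illustration: if a film had n freely orderable scenes,
# the number of possible sequences is n! (factorial growth).
for n in (10, 15, 21):
    print(n, math.factorial(n))

# 21 scenes alone give 21! = 51,090,942,171,709,440,000 orderings —
# over 51 quintillion, before counting transitions, durations or audio mixes.
```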

“I was terrible at maths at school, and I never understood why we were learning it but as I got older I understood the beauty that’s within mathematics. I’m fascinated by the ability to use maths to make art,” he says. “It’s through working with these machines — these code-based collaborators — that I can put into the world my thoughts and ideas exploring our relationship and interactions with the analogue and digital.”

He is agnostic about whether AI on its own could create a piece of art but says we should be open to the possibility.

“Who said humans are the best at creating art? If we take that as a given we’re limiting our imagination. For example, in Eno, scenes are sometimes cut together in ways we never would have thought of yet somehow it works.”


Brain One has in fact been put together not just with maths but with the expertise of storytellers. Elements of Eno, including the beginning and end, are structured to remain the same regardless of how the rest is remixed.
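That fixed-anchors structure can be sketched as a seeded shuffle that leaves the opening and closing untouched. This is a hypothetical sketch of the idea, not Brain One itself, whose actual logic is proprietary:

```python
import random

def generate_cut(scenes, seed):
    """Keep the first and last scenes fixed; reorder everything between them."""
    middle = scenes[1:-1]                 # slicing copies, so the input is untouched
    random.Random(seed).shuffle(middle)   # deterministic per seed -> reproducible cut
    return [scenes[0], *middle, scenes[-1]]

scenes = ["opening", "studio", "archive", "interview", "performance", "closing"]
cut = generate_cut(scenes, seed=42)
print(cut[0], cut[-1])  # opening closing — the anchors never move
```

A different seed per screening yields a different middle, while the structural guarantees the editors imposed survive every variation.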

Commissioned by McMillan for Trend Micro, The Art of Cybersecurity is a series of images, together with a 4K animation, born from cybersecurity threat data

“We had great editors on this film but they would never have done it, perhaps because we all have these biases about how things are supposed to work in a narrative. AI is a tool that can help us think differently,” he insists. “Personally, I’m not interested in trying to create work that looks like something that has gone before. I don’t care about AI-generated photorealism. If I want that, I can look out the window. To me, it’s far more interesting to make the familiar feel strange.”

He thinks that some core filmmaking skills might become simplified by using prompts to generate outcomes but believes that human taste, judgement and curation remain essential to our understanding of whether an AI-assisted artwork succeeds or fails.

“Machines can’t replicate consciousness. There is no genome of consciousness. It’s a total mystery. And because we don’t know how it works, we can’t feed that into a computer to make it understand. I hold on to that. I hold on to the hope that human consciousness remains something that can’t be figured out.”

To end with a philosophical question: in future, why would someone go outside and take a photograph of a tree when they could just prompt an AI to generate a photorealistic one?

“I think it’s too easy,” is Dawes’ response. “Machines are trying to make us more efficient and more logical. I want to be more illogical and more inefficient. I want to waste time. I want to appreciate the power of silence.

“Taking a picture of a tree is a powerful human experience and much more rewarding than just typing in a prompt. The two experiences are totally different and they can co-exist. I’m a big fan of digital technology. I think we need to embrace it. So I don’t think I’m being a Luddite when I say I fear for people who never look up from their screens and never experience the beauty of the world.”
