Is GenAI even capable of producing a long-form masterpiece, and at what point does something become an AI original that deserves copyright protection? IBC365 delves deeper into the key ethical debates surrounding GenAI and media production.
Artificial intelligence has become an influential force in the creative industries, raising complex questions about ethics and copyright protection. The US Copyright Office (USCO) recently determined that purely AI-generated outputs cannot receive copyright protection, and, following a long strike, the agreement between the SAG-AFTRA actors’ union and producers included language covering the use of AI to create digital replicas and synthetic performers.
Both of these milestones have generated significant controversy and may have raised more questions than they have answered. However, they have moved the conversation out of the tech realm into the world of law and ethics.
Generative AI outputs and copyright protection
Commenting on USCO’s decision, Alex Connock, Senior Fellow at the University of Oxford and an AI and media expert, says: “I think this is well-intentioned - but both woolly and wishful thinking. First of all, within 2024 almost every creative work will feature the use of AI and many of them generative AI. We all use machine learning-driven tools probably every five minutes of the day, from Google search to Word or Copilot. There will come to be a dividing line between AI outputs and human outputs that is so ill-defined that a hard split between AI-created and human-created content will become entirely impossible to determine. I would say that that point is not in the future - it’s right now.”
Connock feels that USCO’s decision will be impossible to enforce. “How will copyright enforcement even know what the division of labour in a work was between the inputs in a neural network which uses 80 billion parameters?” he says. “Given that background, how will anyone be able to determine where the gen AI in the backgrounds in a TikTok or YouTube video stopped, and the human inputs began?”
Media theorist Douglas Rushkoff points out that most of the inputs are already copyrighted. “Copyrighting the output is like a DJ copyrighting his setlist,” he says. “I think the reason they [AI developers] are hoping for no protection is that it can effectively wipe copyright for the underlying materials, cutting out creators from the value of their original work, and giving all profits to the tech companies.”
“But finally, I suspect that as well as creating this problem, AI will ultimately solve it too,” says Connock. “I think human agency over gen AI will actually suffice for a copyright determination, provided that the constituent assets that were trained into the neural network were themselves ‘cleared’. And I suspect that blockchain will be the new digital rights management (DRM), offering training data owners ultimate remuneration.”
Use of AI-generated music and background performers
AI is already being used to replace library music and background performers in some productions, which is raising ethical concerns, some of which the SAG-AFTRA agreement is attempting to address.
“I think that in the Hollywood strike the actors were, for heartfelt reasons of course, often barking up the wrong tree,” says Connock. “They were concerned for the most part about their likeness being replicated in, say, a future Disney movie because of a release form they signed three years ago when they were doing a half-day’s work on a Kwik Fit commercial. That was an unlikely risk to become real (a) because Disney would never take the gamble and would always insist on a first-party release and (b) because it’s a basic misunderstanding of how synthetic humans are created.”
“Back in the true Wild West of training data, meaning up to 2022, all our faces were being systematically ingested from the internet, usually without our permission, to create datasets of up to 30 billion faces. Those datasets are now powering all sorts of malign computer vision projects from facial recognition onwards, and that’s also the training data from which brand new synthetic humans are created,” explains Connock. “These are the real risks to actors’ jobs, particularly as background, bit-part players, and non-player characters (NPCs). I don’t know how to solve that problem, and no one else does either.”
Connock doesn’t believe that AI-generated actors are a threat to the livelihood of real performers; however, that doesn’t mean there aren’t serious ethical challenges on the horizon. “I don’t think in the near-term people actually even want to watch synthetic actors, because they don’t empathise with them,” he says. “Where I do think there is a likely problem is in the use of synthetic humans in dynamically optimised advertising assets, because there the research shows that we are more likely to buy from individuals who look like us. That poses threats to models, and also substantial ethical challenges around, for instance, performative diversity - where the diverse faces are in fact synthetic.”
AI-induced job losses in film and TV
One of the greatest concerns TV and film workers have about the use of AI is the potential to put them out of work.
Connock expresses some optimism in this area. “Where AI probably will, however, play a role is in reducing the cost of post-production to get the overall quality/cost equation more aligned to available budgets,” he says. “That could cost some post-production jobs, which would be very sad, but an alternative view is that it could also create new ones, as people move to more AI-driven tools and require expertise. Most AI workflows require 10-15 different ML tools, as great recent work by Edit Cloud showed.”
Sam Bogoch, Founder of Axle AI, the AI-powered media management platform, takes a pragmatic view of the potential for job loss. “There’s a whole bunch of people on Fiverr that get little jobs for graphics, and those people probably should be kind of worried right now,” he says. However, he notes that this might not be as straightforward as it seems. Although people might initially use generative AI for tasks like graphics, they might later realise it’s easier or more efficient to hire a human for the job. “They may spend two days attempting to use AI and realise, ‘I should have just paid that guy a hundred bucks and avoided all the trial and error,’” he says.
Can AI create meaningful, quality long-form content?
The potential for AI to create meaningful, quality long-form content is a topic of much debate. What is becoming clear is that the answer to this question relates more to the nature of human creativity than it does to technology.
Bogoch emphasises that AI is being developed to understand and organise media, which will enhance content creation. “I think there will be new genres of creativity with this stuff,” he says. “And it’s going to be a huge tool that people will figure out how to either ride or get obliterated by.”
Rushkoff argues that AI lacks the human element necessary for creating meaningful media: “These things we’re talking about - films, books, music - they are ‘media,’” he says. “Media are ways for humans to connect. They ‘mediate’ between humans, like the way a painting mediates between Van Gogh and you. Or a novel mediates between James Joyce and you. There’s no person inside the AI, so the AI is not someone that gets mediated. It can generate wallpaper out of the history of past human creation. I think producers can use it to see what is the most typical, cliché version of something. And then avoid it.”
“What I do also think though is that there will need to be some soul searching within the TV industry about what the nature of TV creativity really is in the first place,” says Connock. “The past 25 years have been a relentless quest towards formatting, with rigorous control over the tropes of formats both within series and globally, even to the level of mandating what the shot will be when someone walks into the kitchen of a potential new home in a property show. What AI ultimately does really well is to recognise and predict the subsequent iteration of patterns, so AI systems will soon be the best format deliverers and re-iterators of all time. AI could make you 1000 episodes of that property show because it could machine learn everything about how the format works.”
Connock believes that there is an essential difference between what AI can create and what humans can. “If you want to keep AI out of human creativity, at some level you may have to start thinking differently about the predictability of formats,” he says. “Maybe we will go back to a golden era of one-off auteur originals, like Hollywood in the early 1970s - Easy Rider, Chinatown and Taxi Driver. I hope so. AI couldn’t make them.”
As AI continues to reshape the broadcast industry, ethical considerations will play a pivotal role in guiding its integration. From issues of copyright and job displacement to creativity and problem-solving, AI presents both challenges and opportunities.
Bogoch sees AI as evolving rapidly and stresses the need for careful adoption: “Fasten your seat belts because whatever does happen, it’s going to happen quickly!”