What are the limits on using GenAI in production, and how capable is it of creating skilful, engaging content? IBC365 learns more about the recent findings from Pact’s AI working group.
It feels like the dust has settled on the detonation of GPT-4-era ChatGPT and first-generation text-to-video tools, and it’s now time for the adults in the room to take stock.
“We shouldn’t be rushing to the pitchforks but thinking through carefully and having an adult discussion about how we use it and how we disclose it,” says John McVay, CEO of UK-based producers’ association Pact. “It’s about having grown-up conversations with everyone in the creative value chain about what is understood in terms of consent and permission and transparency.”
McVay was speaking following Pact’s publication of guidelines and principles for its members on the implications of using generative AI for TV and film projects.
He is the first to admit that the technology generates more questions than answers.
“We’re not saying it’s going to be easy – but at the end of the day we are a professional industry and human creativity is what we all value the most.”
Hidden dangers
Pact is using its leadership position to suggest best practices for producers approaching the use of AI without, for example, getting caught up in litigation. However, there are huge gaps in the industry’s knowledge of AI’s limits and potential, the legal framework has yet to catch up with its implementation, and the technology is advancing at such a pace that any definitive statement is almost immediately obsolete.
“What we’re trying to do is give people some guidelines about how to think about AI,” McVay says. “We’re not in a position to say ‘this is this’ or ‘do this, not that’. It’s like asking us for a position on the internet when it was first invented. What we can do is offer questions for consideration and continue to have oversight as we see different applications used.”
For instance, if a production is using GenAI then it would be pertinent to find out what data the large language model behind it had been trained on.
“You need to know if the data is legitimate, whether what the AI tool has been trained on uses copyrighted works or works not cleared for use.”
Pact became a member of the British Copyright Council (BCC) in early 2023 following a consultation with the Intellectual Property Office (IPO). The IPO’s stance in its Artificial Intelligence and Intellectual Property: Copyright and Patents consultation aligned with that of the BCC.
“By being a part of the BCC we can both increase our knowledge in this area and help to ensure that the interests of independent producers are represented at the highest levels,” McVay says.
Regional considerations
However, different jurisdictions may take different approaches to regulation. The EU’s AI Act was approved by the European Parliament in March (but won’t come into force until 2026); the US will soon have established case law as a result of challenges to AI developers (such as the class action lawsuits brought by comedian Sarah Silverman, among others, against OpenAI and against Meta, accusing the companies of copyright infringement); and China and other countries might diverge again.
“We’re not going to have one global approach to AI,” says McVay. “I think everyone will work from the same principles but end up in different places – which is not great when you’re making work for a global audience.”
He points out that programme makers already manipulate content to improve quality. “This is not fakery. We process all our images to make them look better to enhance the experience for the audience. If AI is used to do that I don’t think we want to put a flashing symbol on screen to say so. CGI movies have been shot on greenscreen for years. Those worlds are created in post and we all know it’s a fiction.”
Even reality and documentary shows are a construct. “Any time you take a camera and a crew into an environment you are distorting the truth.
“Where filmmakers, in a documentary for instance, are presenting something that is purporting to be actuality or a real interview, then it is important that we, the audience – as citizens – continue to have faith and trust that what we are seeing is true.”
Overstepping the mark
There have already been uses of AI in documentary work, perhaps justifiably if it tells the story better. Andy Warhol’s voice was recreated for the narration of Netflix biodoc The Andy Warhol Diaries. Documentarian David France used AI-generated face doubles in the observational doc Welcome to Chechnya to protect members of the LGBTQ+ community fleeing state-sanctioned persecution and violence in Chechnya. Both instances acknowledged their use of AI upfront, unlike the seemingly innocuous 2021 Anthony Bourdain documentary Roadrunner, which didn’t at first admit to using AI to recreate lines of the late chef’s voiceover and landed in hot water as a result.
“It’s a question of how far do you go,” McVay says. “There has to be some understanding that every documentary is a created narrative but there is a line where that becomes deception.”
Pact says that using AI to enhance programme quality is no more a concern than passing every piece of content through a grade – but that it crosses a line when AI is used to deceive the viewer or to avoid properly recompensing artists. Even then, its use may still be acceptable provided there is an honest and transparent acknowledgement by programme makers.
There is also a warning that reliance on AI to produce finished output will only create dull, homogenised and unsellable content.
“You can get AI to serve different ideas, at script stage or in the edit, but it won’t be able to discern if anything is ultimately boring to an audience,” McVay says. “It won’t come up with a different angle in the way that only skilled craftspeople can.”
He adds, “We would not have had modern art without someone like Picasso breaking the rules. AI would not have invented punk music.”
In a sense, Pact is saying that the market is the final arbiter of the use of AI to produce final content (provided that usage is transparent and fairly compensated).
“You will get punished if your content is derivative. If you don’t make a programme that is distinctive, the audience will go somewhere else.”
Similarly, professional creators should use their own ethical judgement when deciding whether to use an LLM that churns out biased (ethnic, gender or otherwise) results.
“We have made it very clear that we want a more inclusive industry and we want more inclusive content, so we are flagging that LLMs can be biased. If you are developing an idea using an LLM that keeps generating storylines about white middle-aged men, then that should be enough for you to press pause.”
He continues, “When tech companies introduce AI tools without proper research and testing then it is up to society to effectively say we don’t find it acceptable that these models are launched, regardless of whether that bias has been consciously input or not.”
Fit for purpose
AI has been on Pact’s radar for some time, and the organisation had been talking to various unions and guilds in the UK “since way before the SAG and Writers’ strikes,” McVay says.
Out of those discussions there emerged a working group which has devised the newly released guidelines. It’s a conversation that is open and ongoing.
“If a writer chooses to use AI to brainstorm or help them develop a theme then provided they and their producers are satisfied with what the AI has been trained on and provided it’s all open and upfront then there should be no issue.”
He points to the efficiency benefits of AI tools, for example in logging rushes for a reality show or producing speech-to-text transcriptions.
“The person who used to do that might just end up doing a better job,” he says. “Equally it may be that traditional jobs may not exist or may change. AI will likely save a lot of money when it comes to dubbing content into local languages for international sale, for example. There are going to be efficiencies, but none of this changes the fundamental human creativity at the core of our industry.”
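The transcription example is concrete enough to sketch. The snippet below is a minimal illustration, assuming the open-source openai-whisper Python package; Pact’s guidelines do not name or endorse any specific tool, and the file name is hypothetical. It prints timestamped segments so a logger can jump straight to the relevant footage rather than scrubbing through it.

```python
# Minimal sketch of speech-to-text logging for rushes.
# Assumes the open-source `openai-whisper` package (pip install openai-whisper);
# the input file name below is hypothetical.
import whisper

model = whisper.load_model("base")  # small, general-purpose model

# Transcribe a (hypothetical) audio export of the day's rushes.
result = model.transcribe("rushes_day01.wav")

# Each segment carries start/end timestamps, so a logger can jump
# straight to the relevant moment instead of scrubbing the footage.
for seg in result["segments"]:
    print(f"[{seg['start']:7.1f}s - {seg['end']:7.1f}s] {seg['text'].strip()}")
```

The output is a starting point for a human logger rather than a finished log – in keeping with McVay’s point that efficiency gains don’t displace the craft.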