Participants at a roundtable on AI in media and entertainment at IBC2024 assessed some of the opportunities and challenges confronting media companies seeking to tap the potential of generative AI across a range of applications.

Most discussion of AI today naturally focuses on generative AI, which can create new content and is powered by large language models (LLMs). This framed the terms of a roundtable discussion on AI at this year’s IBC.

[Image: IBC2024 AI Tech Zone]

Opening the discussion, Imogen Wall, Google Cloud Telco & Media Lead UKI, said that AI is ubiquitous as a term and that media companies have been using it for a long time, but that generative AI has created a paradigm shift. “GenAI creates new text and pictures and that has changed the nature of the game,” she said.

Wall cited a survey of IBC attendees that showed AI to be second on the list of key topics for them, just below media production workflow.

Hype and reality

The discussion immediately turned to the gap between hype and reality and some of the challenges that the application of generative AI encounters in practice.

Dr Marcin Remarczyk, Head of Media Consulting, EMEA, Cognizant, said that not all pilot projects involving generative AI are succeeding. In fact, he said, only 20% make it to production, a far lower proportion than might generally be expected.


Asked why outcomes of pilot trials differ so markedly when it comes to generative AI, Remarczyk suggested one possible reason is excess enthusiasm. In some cases, he said, projects are piloted too early or for the wrong use cases; in others, the technology is simply not yet mature enough.

Large organisations also face the challenge of ‘change fatigue’, where disruptive innovation is resisted, while smaller organisations must rely on a limited pool of professionals who are tasked with handling multiple new projects.

Responsibility for managing pilot projects can also be problematic. It may initially sit with the IT team, only for it to be recognised later that other stakeholders should have been involved before the project was ready for production.

Tami Hoffman, Director of New Distribution and Commercial Innovation at ITN, said that a significant difference between generative AI and what has gone before is “the hype around it” and the feeling that it is a “magic wand” that can solve all problems. She said generative AI also came with unprecedented levels of risk compared with analytical predictive AI.

Hoffman cited the example of the use of generative AI to deliver automated cataloguing of footage, a project initiated by ITN. She said the technology performed well in analysing individual shots, but when it was tasked with inferring something from the content, it hallucinated.
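Hoffman’s distinction reflects how such cataloguing pipelines are commonly built: per-shot description can be constrained to what is literally visible in a frame, whereas open-ended inference invites fabrication. As a minimal, hypothetical sketch (the OpenAI-style API, model name and prompt here are illustrative assumptions, not ITN’s actual pipeline):

```python
# Hypothetical sketch of shot-level cataloguing with a vision-capable LLM.
# Assumes the OpenAI Python SDK and an API key in the environment; the
# model and prompt are illustrative only, not ITN's actual pipeline.
import base64
from openai import OpenAI

client = OpenAI()

def describe_shot(keyframe_path: str) -> str:
    """Ask the model to describe only what is visible in one keyframe."""
    with open(keyframe_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any vision-capable model would do
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("Describe only what is literally visible in this "
                          "frame: people, objects, setting, on-screen text. "
                          "Do not guess names, places or events.")},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```

Constraining the prompt to visible detail narrows, but does not remove, the hallucination risk Hoffman described; anything the model infers across shots still needs human review.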

She recognised the problem of change fatigue identified by Remarczyk. ITN employees and contractors are under pressure from “relentless change” impacting the industry, and the appetite for more is limited. She said organisations “have to make it as easy as possible” for staff to become accustomed to change, ensuring minimum disruption.


Disruption and negative reactions

Laura Ellis, Head of Technology Forecasting at BBC R&D, said that the BBC had retroactively applied guidelines to AI projects, a process that disrupted some workflows.


“One of the key problems is how you tell audiences what you are doing,” she observed, citing the use of AI in marketing Doctor Who, a long-running series with a highly loyal fanbase. The marketing was A/B tested, and Ellis said the audience’s reaction to the disclosure that AI had been used in marketing the show was highly negative, raising the question of how much information organisations are obliged to disclose.

She said the question of disclosure was particularly pertinent in the case of news and documentary programming, where there is potential to use generative AI for headline generation and summarisation, or to tap sports content commentaries to generate short-form updates.

Ellis said the application of generative AI to fictional content, for example to create AI-generated bedtime stories from children’s TV shows, was also highly controversial and that it was “too early” to apply the technology in this way.

Hoffman cited the example of ITN’s English-language teaching business, which exploits a high volume of content of relatively low monetary value. She said that this is a commercial product and keeping costs down is key. Using an AI avatar rather than an actor for voiceover work reduces production costs. However, she said, clients are nervous about buying products that use AI avatars. Charging a premium for human voiceovers is one solution, but she noted that ITN had yet to close a deal for an AI avatar-generated version of the product.

Intellectual property rights

The discussion turned to the question of ownership of intellectual property.

Govind Shahi, EVP of Indiacast UK, highlighted the example of using AI to generate a song for YouTube in India. If the technology based the synthetic voice for the song on that of a real-life Bollywood star, that could raise questions of ownership of IP. The use of AI dubbing could raise similar problems.

With a very high volume of content being produced, it is often not clear who owns the rights, he said.

Remarczyk then cited the example of a project in which Cognizant had been involved, where a German broadcaster used generative AI as part of the creative process; questions raised about ownership of rights and royalties resulted in the project ultimately being abandoned.

Ellis agreed that broadcasters must consider the legal implications of striking partnerships with companies. Asserting rights to content while going into licensing deals is complex, she said, noting that the BBC has a duty to ensure that everything it does delivers value for licence fee payers. The BBC is now looking at much more sophisticated models of contracts to solve this problem, she added.

Delivering value

The discussion then turned to the question of whether generative AI delivered value in a wider sense.

One application of generative AI is to deliver transcripts and summaries of meetings, potentially freeing up some invitees from attending.

However, the panel noted that the use of AI in this way may also raise the question of whether meetings are necessary in the first place. Ellis noted that AI may make people question the value of human participation and interaction in certain contexts.

Hoffman noted that if ITN records the transcript of a meeting and no AI-generated action points are delivered, it is a signal that the meeting may not have been necessary.
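As an illustration of the kind of tooling being described, action-point extraction from a transcript can be a single constrained prompt. This is a minimal sketch assuming an OpenAI-style chat API; the model name, prompt and NONE convention are illustrative, not ITN’s actual setup:

```python
# Hypothetical sketch of extracting action points from a meeting transcript.
# Assumes the OpenAI Python SDK; the model, prompt and output convention
# are illustrative only, not ITN's actual tooling.
from openai import OpenAI

client = OpenAI()

def extract_action_points(transcript: str) -> list[str]:
    """Return a list of action points, or an empty list if none are found."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": ("Extract concrete action points (owner plus task), "
                         "one per line, from the meeting transcript. "
                         "If there are none, reply with exactly NONE.")},
            {"role": "user", "content": transcript},
        ],
    )
    text = response.choices[0].message.content.strip()
    if text == "NONE":
        return []
    return [line.lstrip("- ").strip() for line in text.splitlines() if line.strip()]
```

An empty return value is the signal Hoffman describes: the meeting produced no actions and may not have been necessary.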

Wall said that some applications of generative AI for very specific purposes had been shown to deliver value. She cited the use of AI by Liberty Global for matching hand-written invoices, which had cut invoice processing times and helped ensure prompt payment. She said the company had arrived at this application by trying several different things out and then discovering a use case that made sense.


Impact on human resources

The panel briefly discussed the potential negative impact of AI on training and development of human resources.

Remarczyk said that human skills would inevitably be lost as generative AI is employed more widely.

Hoffman agreed, noting that the use of AI will remove the need for a swathe of entry-level skills. In journalism, she said, the automation of transcription takes out a task that previously enabled those starting out in the job to filter and learn from the interviews they transcribed. She said the result could be a generation of people who become impatient to progress without building up a foundation of skills.

Ellis noted the negative impact of using AI to rewrite content for different audiences. Relying on a model that has been trained to write could result in human staff losing the ability to communicate effectively. An absence of employees able to write for different audiences could also ultimately compromise organisations’ ability to train AI models to adapt to changing usage patterns.

Barriers

The group then discussed barriers to the effective use of AI.


Wall highlighted the problem of inadequate availability of data and the need to democratise it, noting that there is often a drive to transform unmanaged data into managed data, which then becomes so carefully managed that it is not made available to everyone in an organisation who could benefit from access to it.

She said that ‘data custodians’ may make data available in the form of a one-off report rather than have people create their own interfaces to interact with it. In the world of media, staff may not be able to access metadata around content for different use cases, for example.

Shahi said that often more time is spent correcting AI-generated transcriptions than is saved. If mistakes are found, staff may have to check the entire output of a channel, he said, noting that there was a specific challenge related to Hindi content because there is not enough data available for the models to learn from.

Hoffman said that use of AI to calculate the lifetime cost of making content and the lifetime value of that content was difficult, noting that AI only works on data that is well kept and managed. With production costs coming from different sources – payroll, production budgets, facilities, rent and power, technology and so on – the data is often not joined up. Using AI to calculate costs and benefits of things that span multiple departments and organisations is problematic.

Remarczyk noted that the past year had seen a rush of interest in AI ahead of pilot trials being initiated. One problem now, he said, is that organisations are launching pilots in an over-hasty way that does not reveal the full cost of a production-level rollout. He said this applies to all verticals, not just media.

Ellis said that in many cases a rush to use AI is being driven by ‘fear of missing out’ and that organisations need to consider the use of the technology more carefully before launching projects.
