Artificial intelligence technology is swiftly moving from experiment to practical use across production workflows and into the heart of content creation.
Not so long ago it was the subject of science fiction, but artificial intelligence is now being used to write pop songs, break news, assemble reality shows and even create Hollywood blockbusters.
Software first developed at New Zealand’s Weta Digital for the Avatar and Planet of the Apes films has been adapted by Vancouver-based Ziva Dynamics to create computer-generated simulations that can transform the way visual effects studios create characters.
Its machine learning algorithms are trained on real-world physics, anatomy and kinesiology to simulate natural body movements, including soft-tissue behaviour such as skin elasticity and layers of fat. It is claimed to animate CG characters in a fraction of the time, and at a fraction of the cost, of traditional VFX – and it has been used on major productions including Pacific Rim and Fantastic Beasts.
Japanese start-up Amadeus Code has built one of many AI algorithms being trained to produce music at the touch of a button. In its case, a user uploads a list of songs and the AI analyses them before automatically generating new pop tracks based on era, rhythm and range, all via an iPhone app.
These are just two examples of AI’s pervasive reach across the industry right into the heart of content creation. It is taking on laborious, expensive tasks such as closed captioning, metadata tagging and social media clip generation. Because of its ability to crunch volumes of data and yield meaningful results it is swiftly moving from experiment to practical use.
When the half-brother of North Korean leader Kim Jong-un was murdered in Malaysia in 2017, the agency that broke the story – half an hour before anyone else – was Japanese start-up JX Press Corp. It used AI to scour social media for breaking news, then used another algorithm to write it up.
So impressive are its results that broadcasters NHK, Fuji Television and TV Asahi are clients, with the latter’s deputy editor-in-chief Koichiro Nishi quoted by Bloomberg as saying: “It’s a world of 100 million cameramen. A must-have tool.”
Endemol Shine Group (ESG) is using a Microsoft Azure AI workflow to replace an entirely manual selection process in the Spanish version of reality show Big Brother. Through machine learning, the system recognises patterns of language, keywords and emotional reactions. It tracks, monitors and indexes the activities of the Big Brother house’s residents and infers relationships between them.
“Watching screens for so many feeds and arduously logging moments is very tedious,” explains Lisa Perrin, CEO Creative Networks, ESG. “Now, we zero in on the most interesting actions rather than wading through hours of footage.”
Declaring the technology “ground-breaking”, Perrin says it will “completely revolutionise the way we produce our global formats” and open up “an unprecedented level of creative freedom”.
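ESG has not published implementation details, but the kind of workflow described – transcribing many simultaneous feeds, then surfacing moments with strong keywords or emotional reactions – can be sketched against Azure’s text analytics services. The Python below is a minimal illustration only, assuming the azure-ai-textanalytics package and an Azure Language resource key and endpoint, and assuming per-feed transcript snippets already exist; the feed names, timecodes and dialogue are invented.

```python
# Minimal sketch (not ESG's actual pipeline): score transcript snippets from
# multiple Big Brother feeds by sentiment strength and key phrases, so an editor
# can jump to the most interesting moments instead of watching every feed.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Hypothetical transcript snippets, one per camera feed and timecode.
snippets = [
    {"feed": "cam-03", "tc": "01:14:22", "text": "I can't believe you said that about me!"},
    {"feed": "cam-07", "tc": "01:15:02", "text": "Dinner is almost ready, who wants salad?"},
]

texts = [s["text"] for s in snippets]
sentiments = client.analyze_sentiment(texts)
key_phrases = client.extract_key_phrases(texts)

for snippet, sentiment, phrases in zip(snippets, sentiments, key_phrases):
    # Strong positive or negative sentiment is a rough proxy for an emotional reaction.
    emotion = max(sentiment.confidence_scores.positive, sentiment.confidence_scores.negative)
    print(snippet["feed"], snippet["tc"], f"emotion={emotion:.2f}", phrases.key_phrases)
```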
Accenture reports that a major Latin American content producer is experimenting with AI to automate and optimise the creation of production scripts for telenovelas. “An AI might help maximise the number of scenes scheduled for shooting per week, maximise the use of scenarios, minimise actors’ idleness or reduce time to move between shooting locations,” says Gavin Mann, the consultancy’s Global Broadcast Industry Lead.
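Accenture has not described how such a scheduler is built, but the objectives Mann lists are classic combinatorial optimisation targets. As a purely illustrative sketch, the Python below applies the simplest possible heuristic – grouping scenes by location to cut down company moves – on made-up scene data.

```python
# Illustrative sketch only, not Accenture's system: group scenes by location so the
# unit makes fewer moves between shooting locations. Scene data is invented.
scenes = [
    {"id": 1, "location": "studio-kitchen", "cast": {"Ana", "Luis"}},
    {"id": 2, "location": "hacienda",       "cast": {"Ana"}},
    {"id": 3, "location": "studio-kitchen", "cast": {"Luis", "Carmen"}},
    {"id": 4, "location": "hacienda",       "cast": {"Carmen"}},
]

def location_moves(order):
    """Count how many times the unit must move to a different location."""
    return sum(1 for a, b in zip(order, order[1:]) if a["location"] != b["location"])

# Compare shooting in script order with a schedule that keeps same-location scenes together.
shooting_order = sorted(scenes, key=lambda s: s["location"])

print("moves in script order:  ", location_moves(scenes))          # 3
print("moves in shooting order:", location_moves(shooting_order))  # 1
```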
AI-powered algorithms are able to analyse every nook and cranny of every frame of video, making it possible for a sports production team to sift through a mountain of metadata and put together a montage of great plays in a few seconds.
‘AI might help maximise the number of scenes scheduled for shooting per week, maximise the use of scenarios or reduce time to move between shooting locations’ Gavin Mann, Accenture
Getting granular
Wimbledon, for example, used IBM’s AI to automate the tagging and assembly of two-minute highlight reels for online publication. The system rates each play on metrics such as crowd noise and player gestures, speeding the search for creative editors who build more extensive highlights. Israel’s WSC Sports has developed a similar automated workflow for the United Soccer League in the US and is currently churning out 300 clips per game in near real time.
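Neither IBM nor WSC Sports publishes its selection logic, but the underlying idea – score each play on signals such as crowd noise and player gestures, then keep the top plays that fit a two-minute reel – is straightforward to sketch. The weights, field names and clips in the Python below are invented for illustration.

```python
# Hypothetical sketch of highlight selection: combine per-play signals into one
# excitement score, then pick the highest-scoring plays until the reel is full.
plays = [
    {"clip": "p01.mp4", "duration": 28, "crowd_noise": 0.91, "gesture": 0.80},
    {"clip": "p02.mp4", "duration": 35, "crowd_noise": 0.42, "gesture": 0.30},
    {"clip": "p03.mp4", "duration": 22, "crowd_noise": 0.77, "gesture": 0.95},
    {"clip": "p04.mp4", "duration": 40, "crowd_noise": 0.65, "gesture": 0.55},
]

WEIGHTS = {"crowd_noise": 0.6, "gesture": 0.4}   # invented weighting
REEL_LENGTH = 120                                # two minutes, in seconds

def excitement(play):
    return sum(play[signal] * weight for signal, weight in WEIGHTS.items())

reel, used = [], 0
for play in sorted(plays, key=excitement, reverse=True):
    if used + play["duration"] <= REEL_LENGTH:
        reel.append(play["clip"])
        used += play["duration"]

print("highlight reel:", reel, f"({used}s)")
```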
“AI essentially turns haystacks of information into needles of insights, which might be the best metaphor yet for how traditional media companies can advance their businesses in a big way by focusing on all things small,” says Glodina Lostanlen, Senior Vice President and General Manager Americas, Playout and Networking Sales, Services and Global Marketing at Imagine Communications.
AI-powered machines are also proving adept at identifying unwanted content. Google reports that AI, not humans, detected about 80% of the 8.28 million videos removed from YouTube in the last quarter of 2017. Facebook acted against 1.9 million pieces of content on its platform in the first quarter of 2018 after its AI flagged the material as fake accounts and fake news.
“For many, the primary driver of adoption of AI technology is the opportunity to automate routine workflows that are manually executed,” says Stuart Almond, Head of Marketing and Communications, Sony Professional Europe. “Calling upon metadata in particular is a catalyst towards a richer environment for audiences. When applying this consumer lens, that’s when AI gets really smart and creates real, tangible benefits for both companies and end-users.”
Netflix is probably one of the best examples of how AI can help create a richer and more tailored experience for consumers, while at the same time driving business efficiencies.
“Its AI-driven recommendations engine is safeguarding over $1 billion of revenue each year by showing consumers the content they are really interested in and, in turn, keeping them from cancelling the service,” says Almond.
“It is a strong proof point that shows AI-based solutions can have a significant positive effect on revenues, if done right. The key going forward is adopting media supply chains that support this, bringing content acquisition and production into this process.”
Data, down to the finest detail, is now the currency with the most spending power in the media and entertainment industry. The more granular that media companies can get when it comes to knowing their networks, their audience and the way their audience is consuming content, the richer they will be.
“The challenge for media companies is finding a way to manage all the information generated from every aspect of workflow from viewer preferences to rights and network errors,” says Ian Munford, Akamai’s Director Product Marketing, Media Solutions. “Most media companies are drinking from the fire hose but AI has the potential to turn that data into action. Most uses of AI today are cutting edge.”
Speaking at CES at the beginning of this year, Amazon Vice President of Alexa Software Al Lindsay had advice for those concerned about an AI-powered future.
“Learn to code now,” he said. “If you know how to code, you can help change the outcome the way you want it to be. It’s that simple.”
‘AI essentially turns haystacks of information into needles of insights, which might be the best metaphor yet for how traditional media companies can advance their businesses in a big way by focusing on all things small’ Glodina Lostanlen, Imagine Communications
AI at IBC
There has been a large focus on AI within Sony’s media services, which returns to IBC under the banner of ‘Go Make Tomorrow’. “The key drivers will be to open up more efficiencies and possibilities with how content is used in any workflow,” says Almond. “Sony is fiercely committed to collaborating with industry bodies and innovators to help our customers drive efficiencies and unlock the potential of new technologies like AI and machine learning.”
Accenture is working with broadcast and video clients to incorporate AI into projects spanning everything from basic automation of back-office processes and compliance checks to the optimisation of programming schedules and the interpretation of payments for complicated royalty contracts.
“We believe AI’s real power is helping reimagine business by augmenting, not replacing, human capabilities,” asserts Mann. “Automation in content review is one area in which companies can use AI to leap ahead on innovation and profitability.”
With such a new technology, and one developing at an incredible pace, Mann says clients often want to start with a small proof of concept to demonstrate that it actually works. “We can help them measure what is working, scale fast when it does and fail fast when it doesn’t.”
Accenture also offers access to its Amsterdam-based Innovation Center (only a mile from the RAI) for further discussion and demonstration of its “very wide range of use cases and client stories”.
Nuance Communications, which describes itself as a pioneer in conversational AI, says it is seeing demand for enhanced targeting based on voice profiles. “Telecommunications customers are asking for the ability to better target and tailor specific offers and messages to their end users,” states Dan Faulkner, SVP & GM. “Developments are beginning to make this targeting more intelligent.”
At IBC2018, Nuance is presenting a new voice biometrics tool for its voice and natural language understanding platform Dragon TV. Aimed at Smart TV deployments, the innovation is intended to deliver more secure authentication through natural voice patterns. For example, rather than relying on PINs, passwords and security questions when purchasing a film, consumers can complete the purchase using their voice alone.
According to AWS Elemental Chief Product Officer Aslam Khader, the next phase of AI will involve the concept of “content lakes”, which means having all content and related metadata in a unified location and proximal to scalable, cost-effective and easy-to-use cloud-based AI services. He says: “The content lakes concept makes searching, identifying and moving huge chunks of content across different media workflows easy and efficient. You might think about this as media asset management 2.0.”
At IBC, AWS will showcase ways to make it easier for media companies to enrich and monetise their content, with machine learning demonstrations spanning video analysis for metadata enrichment, automated compliance and editing, automated transcription, translation and text-to-voice for closed captions, subtitling and audio description, and clip production for personalised clip generation and advanced sports graphics creation.
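AWS has not detailed the demo code, but the transcription and translation piece maps onto existing services that can be driven through boto3. The sketch below, assuming boto3 is installed and suitable IAM permissions are in place, starts an Amazon Transcribe job on a mezzanine file and runs one caption line through Amazon Translate for a Spanish subtitle track; the bucket, key and job names are placeholders.

```python
# Minimal sketch, not AWS's demo code: use Amazon Transcribe and Amazon Translate
# via boto3 to produce caption text and a Spanish subtitle translation for one asset.
import time
import boto3

transcribe = boto3.client("transcribe", region_name="eu-west-1")
translate = boto3.client("translate", region_name="eu-west-1")

transcribe.start_transcription_job(
    TranscriptionJobName="demo-captions-001",
    Media={"MediaFileUri": "s3://my-content-lake/mezzanine/episode01.mp4"},
    MediaFormat="mp4",
    LanguageCode="en-US",
    OutputBucketName="my-content-lake",
)

# Poll until the job finishes (a production workflow would react to events instead).
while True:
    job = transcribe.get_transcription_job(TranscriptionJobName="demo-captions-001")
    if job["TranscriptionJob"]["TranscriptionJobStatus"] in ("COMPLETED", "FAILED"):
        break
    time.sleep(30)

# Translate a caption line for a Spanish subtitle track.
caption_line = "Welcome back to the second half of the match."
result = translate.translate_text(
    Text=caption_line, SourceLanguageCode="en", TargetLanguageCode="es"
)
print(result["TranslatedText"])
```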
At IBC2018, Lisa Perrin will speak in Cutting Edge: Transforming production creativity with tech on Monday 17 September.