The restrictions around filming this year have severely impacted the business of making television. Companies like Arrow International Media have been deploying AI solutions to not only tackle lockdown limitations but also benefit the business beyond COVID-19.
One of the UK’s leading producers of high-quality factual content, Arrow has been using AI to help fill in editing gaps in productions during the lockdown period. It has made use of the AI-powered Curio Platform from GrayMeta, which uses machine learning (ML) to log and access Arrow’s massive catalogue of archive material.
“Our archives contain over 250,000 minutes of material which has never been broadcast - an extremely valuable asset both for now and the future,” says Iain Pelling, managing director, Arrow International Media. “The GrayMeta Curio Platform allows us to quickly interrogate that content so that we can efficiently and accurately access it and, most importantly, use it as part of other productions. It’s an extremely powerful solution for us. Continual innovation not only helps to keep us as leaders in our field now, but in the future, too, however things evolve. It enables us to respond much more quickly to broadcasters’ needs and supply the high-quality content they are looking for. It is one way of continuing to transform our business.”
Editorial impasse
Dan Carew-Jones, post production and workflow consultant at Arrow, was able to give further insight into the work with Curio and AI in the production process.
“When we were faced with lockdown, we had a lot of programmes in our edit that we couldn’t finish,” he explains. “There were a couple of series for which we had the interviews, but we hadn’t done the drama reconstruction. We thought we might be able to leverage some of our archive from previous series, in order to give us compelling footage that would allow us to complete the programmes without sacrificing any editorial integrity. We had upwards of 2,000 hours of unused rushes from previous series that we were faced with looking at.”
Initially using Limecraft Flow, a cloud-based media viewing system, Arrow started manually tagging this archive to make it more accessible for the production. “We had three or four people on the job of logging it initially, in the week that we had between realising what we needed to do and Curio becoming available,” he says.
Arrow had been talking to GrayMeta about AI and its possibilities for some time, but as lockdown became more prolonged, it prompted a trial of Curio on the footage to try to make it more available to Arrow productions.
“We went forward with it and basically harvested the information about those 2,000 hours in less than a week,” says Carew-Jones.
Curio’s approach is not equivalent to the metadata creation process where human loggers tag shots, for example using a defined list of keywords. “The way that Curio works is it basically takes a couple of frames every second and analyses the content of that frame. So, you’ll get plenty of tags,” explains Carew-Jones. “Obviously, when you’re doing it with humans you get a lot more scope, because you’re tagging it with a particular use in mind. But it does make [data] available a lot quicker than a human logging process.”
“If you’ve got an hour of footage, then you’re better off spending two hours with somebody looking at it than pushing it through an AI system, as your results are going to be much better,” continues Carew-Jones. “But once you’ve got 50 or 60 hours, then it becomes a lot more of a realistic prospect to use the AI system.”
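What Carew-Jones describes is, in outline, a frame-sampling pipeline: decode the video, grab a couple of frames per second, and hand each one to an image-labelling model. The sketch below is a minimal Python illustration of that pattern under stated assumptions, not Curio’s actual implementation; `tag_image` is a hypothetical stand-in for whatever vision model such a system would call.

```python
# Minimal sketch of per-second frame sampling for ML tagging, assuming OpenCV
# for decoding. Curio's real pipeline is proprietary; tag_image is a
# hypothetical placeholder for an image-labelling model.
import cv2  # pip install opencv-python


def tag_image(frame) -> list[str]:
    """Placeholder: a real system would run a vision model on the frame."""
    return []  # e.g. ["machine gun", "outdoors"] from a classifier


def harvest_tags(video_path: str, samples_per_second: int = 2):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0       # fall back if FPS unreadable
    step = max(int(fps // samples_per_second), 1)  # frames between samples
    results = []                                   # (timecode in seconds, tags)
    frame_no = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % step == 0:
            results.append((frame_no / fps, tag_image(frame)))
        frame_no += 1
    cap.release()
    return results
```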
Data crunching
Any system using machine learning needs data to analyse, and the form this comes in can have an impact on the whole process.
“We had low-res proxies available of pretty much everything that we’ve shot through having put it on Limecraft; it was already being used as a cloud viewing platform,” Carew-Jones says. “It was really just a case of pointing the Curio system at that storage. Low-res proxies were used for speed of movement and access, but the process would have been exactly the same if Curio had been pointed at camera rushes, as the native files were also stored in the cloud.
“I’ve since realised that the key element is, in fact, that we’ve got footage that is digitised, organised and available,” he adds. “That meant when we started talking to GrayMeta, we didn’t have a long process of copying footage off drives and ingesting it into some other storage device for Curio to look at and analyse. That gave us a big advantage when it came to using tools like Curio, and even the other cloud-based tools that we use.”
Once Curio had done its analysis, the results were not locked down; all the tagging becomes editable from that point on. A certain margin for error is accepted as part and parcel of the process.
“For example, we searched once for a machine gun, and one of the results it came up with was a camera tripod,” says Carew-Jones. “You do get that inaccuracy, but then it’s the work of one button push to remove that tag from that shot of the tripod, so that it no longer thinks that the tripod is a machine gun. The more that people use it, the more accurate the database becomes, and so the better the searches that people can run as it moves forward. It’s an organic process.
“The people that we entrusted with the logging of the footage continued to log it slightly to build the accuracy of the Curio database, but essentially they were able to concentrate on looking for shots [for the productions] rather than logging historic shots,” he adds.
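The ‘one button push’ he mentions corresponds to a very simple data operation. As a toy illustration, assuming tags live in a shot-ID to set-of-tags mapping (Curio’s real data model is not public, and the shot IDs here are invented), removing a false positive and recording the correction might look like this:

```python
# Toy illustration of correcting an AI-generated tag. The schema and shot ID
# are invented; Curio's actual data model is not public.
shot_tags = {
    "A034_C012": {"machine gun", "tripod", "outdoors"},  # AI mistook a tripod
}
corrections = []  # kept so later model runs can learn from human fixes


def remove_tag(shot_id: str, bad_tag: str) -> None:
    """The 'one button push': drop an incorrect tag and log the correction."""
    shot_tags[shot_id].discard(bad_tag)
    corrections.append((shot_id, bad_tag))


remove_tag("A034_C012", "machine gun")
print(shot_tags["A034_C012"])  # {'tripod', 'outdoors'}
```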
Mind the gap
During post, the editing department would find ‘a hole’ where they needed footage and would specify the type of content that was required in order to fill it.
“We have a small team of researchers who look for that footage; they will first go to Curio to search through the tags that it has generated,” says Carew-Jones. “As the researchers had previously manually tagged a lot of footage in our Limecraft system, there were two systems for them to search through to try and find relevant topics. They’ve built preset filters of categories and collections and various subfolders. If you search too specifically, you’re probably not going to get anything. But if you search too generically, you’re going to get loads, so it’s in how you structure that search process and build your filters that you find what you need.
“[There follows] a sort of qualitative triage before sending it back to the edit, where they would make the sorts of judgments that the AI can’t really make in terms of good shot/bad shot, [such as shots] that were slightly out of focus or from the wrong time of day,” he adds.
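The specific-versus-generic trade-off he describes can be made concrete with a small sketch of a filtered keyword search. The record layout and field names below are invented for illustration; neither Curio’s nor Limecraft’s query API is reproduced here.

```python
# Hypothetical tag search with a preset category filter. The record layout is
# invented for illustration, not Curio's or Limecraft's actual schema.
def search_shots(index, keywords, category=None):
    """index: iterable of dicts like {"shot": str, "category": str, "tags": set}."""
    hits = []
    for record in index:
        if category and record["category"] != category:
            continue  # preset filter narrows the pool before keyword matching
        matched = record["tags"] & set(keywords)
        if matched:
            hits.append((len(matched), record["shot"]))
    # A generic search returns loads of shots, so rank by keywords matched
    return [shot for _, shot in sorted(hits, reverse=True)]


archive = [
    {"shot": "S01_034", "category": "reconstruction", "tags": {"night", "car", "rain"}},
    {"shot": "S02_011", "category": "aerial", "tags": {"city", "night"}},
]
print(search_shots(archive, {"night", "rain"}, category="reconstruction"))  # ['S01_034']
```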
After the search, the found footage is ingested into the edit. “We’ll use the proxy files obviously for offline, but then we’ve got the camera native files all available on cloud storage. So that’s easily accessible,” says Carew-Jones. “I think we managed to complete about 12 episodes.”
Moving forward
“For years we’ve been aware that we had a significant library of unused footage. I’m sure every production does,” says Carew-Jones. “We’d always entertained the idea of how we could potentially make use of that library. Frankly we probably wouldn’t have done it as swiftly as we did without the lockdown making it a practical opportunity for us and giving us a reason for investing those resources into it.
“Really we’ve just scratched the surface of the AI with what we’re doing,” he adds.
“We’ve been talking to GrayMeta as we have this view of how AI and machine learning could be incorporated more into the production process, so that we can bring together all the rich media that we shoot – the video and the audio files – with the huge amount of metadata generated in the preparation of that shoot, from location release forms, personnel release forms, contracts and rights documents.
“We have thought for some time that the way we generally operate is that they’re in separate information silos, so how do we bring everything together into a single entity when we’re talking about media management? I think that’s where potentially AI and machine learning may have a significant gain for us, because one of the things that it is extremely good at is OCR-type recognition.”
Something similar has already been tried out on an Arrow archive that sits on LTO tape. “Our data manager creates a screen grab of each LTO tape so that he knows exactly what’s on it; it’s basically just a list of camera card names,” says Carew-Jones. “We were looking at using the AI to read those screen grabs and open up the ability to search for specific LTO tapes within our MAM system, to make it available to everyone else, not just the data manager and the few people that know about the LTO mechanism.”
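As a sketch of that idea, the snippet below uses the open-source Tesseract engine (via pytesseract) to read card names out of a screen grab and map them back to a tape ID. Arrow’s actual tooling isn’t described in detail, so the one-card-name-per-line assumption and the function shown are illustrative only.

```python
# Hedged sketch: OCR an LTO screen grab and index each camera card name by
# tape ID. pytesseract/Pillow are stand-ins for whatever Arrow actually uses,
# and one card name per line of the grab is an assumption.
import pytesseract       # pip install pytesseract (needs Tesseract installed)
from PIL import Image    # pip install pillow


def index_lto_grab(tape_id: str, grab_path: str) -> dict[str, str]:
    """Return a card-name -> tape-ID mapping recovered from one screen grab."""
    text = pytesseract.image_to_string(Image.open(grab_path))
    return {line.strip(): tape_id for line in text.splitlines() if line.strip()}


# Merging the per-tape dicts yields a card-name index searchable from the MAM.
```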
Another discussion, more conceptual at the moment, is about using AI to reference historical data and then applying that data to help restore damaged archive footage.
“We’re in regular communication with GrayMeta with our ideas about how AI potentially could help us,” Carew-Jones says. “For a while AI seemed to be a solution in search of a problem, but now we’re coming to realise that there is significant value it can add.”