Applying the techniques of 3D video animation to sound creation could take cinema audio to a new level of realism, Sound Particles’ Nuno Fonseca tells Paul Bray.
When did you last come out of the cinema saying, “Goodness, that movie had terrible sound”? Probably long ago. Possibly never. But that does not mean that all is well in the world of movie audio.
“The problem with sound is that it works mainly at a subconscious level,” says Nuno Fonseca, founder and CEO of Sound Particles. “You go to a movie with bad sound and you come out saying, ‘I didn’t like the photography’. But sound is an important storytelling tool, and if you ignore it, you’ll be ruined. Many students and indie filmmakers disregard sound. Great filmmakers don’t.”
While most of us are happy to watch dramas and comedies in our own homes, Fonseca argues, it is the big epics – the superheroes, the science fiction adventures – that still pull in the cinema crowds. “The difference between seeing an epic movie in a cinema and seeing it at home is significant. But the difference in terms of image is slight. It’s the difference in sound that’s huge. Probably most people continue to go to cinemas because of the sound experience, even though they don’t realise it.”
It is ironic, therefore, that the biggest revolution in entertainment during the last half-century has been, not in sound, but in graphics: the advent of CGI. “Imagine if we could use the same concepts and the same technology, not for image, but for sound,” says Fonseca.
For the last nine years, he has been working to do just that.
CGI for sound
A former professor at the Polytechnic Institute of Leiria, Portugal, and an invited professor at the Lisbon Superior Music School, Fonseca is the author of the book Introduction to Sound Engineering. He is a former president of the Portuguese section of the Audio Engineering Society, and a member of the AES technical committee on audio for cinema.
“For image there’s 2D and 3D software,” he says. “2D video editing software uses pixels, layering images on top of images. 3D image software uses a 3D space, with 3D models and virtual cameras. But all existing audio software only uses a 2D approach, mixing sounds on top of sounds. Some audio software has some 3D features, but it’s not a native 3D approach. It’s almost like going to Photoshop and applying a perspective effect – it’s a 3D effect, but not a true 3D approach like Maya or Blender.
“Our goal was to create native 3D audio software, using concepts from computer graphics and applying them to sound: having a 3D space, positioning sounds within it, and using virtual microphones instead of virtual cameras. By doing so we can bring the power of computer graphics into the audio world.”
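To make the idea concrete, the sketch below (in Python, using NumPy) shows roughly what “placing a sound in a 3D space and listening through a virtual microphone” can mean in principle: a single source is given a position, and the render applies simple inverse-distance attenuation and propagation delay. It is a toy illustration only, not Sound Particles’ engine or API; all names in it are invented for the example.

```python
# Toy illustration of a "virtual microphone" render: one mono source placed
# in 3D space, heard with inverse-distance attenuation and propagation delay.
# This is NOT Sound Particles' actual engine or API.
import numpy as np

SPEED_OF_SOUND = 343.0   # metres per second
SAMPLE_RATE = 48_000     # samples per second

def render_source(signal, source_pos, mic_pos, out_length):
    """Render a mono signal placed at source_pos as heard by a virtual mic at mic_pos."""
    distance = np.linalg.norm(np.asarray(source_pos) - np.asarray(mic_pos))
    gain = 1.0 / max(distance, 1.0)                        # inverse-distance attenuation
    delay = int(distance / SPEED_OF_SOUND * SAMPLE_RATE)   # propagation delay in samples
    out = np.zeros(out_length)
    if delay >= out_length:                                # source too far to be heard in time
        return out
    end = min(out_length, delay + len(signal))
    out[delay:end] = gain * signal[:end - delay]
    return out

# Example: a half-second noise burst placed 100 m in front of the virtual mic.
burst = np.random.randn(SAMPLE_RATE // 2) * 0.1
mix = render_source(burst, source_pos=(0.0, 0.0, 100.0),
                    mic_pos=(0.0, 0.0, 0.0), out_length=2 * SAMPLE_RATE)
```

In a real 3D renderer the virtual microphone would also have a polar pattern and an output format (stereo, 5.1, Atmos and so on); the point here is simply that position, not a channel fader, determines how the sound arrives.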
The breakthrough came when Fonseca realised that the most interesting visual effects he was seeing in movies used particle systems, a CGI technique that creates thousands or even millions of small points that together create the illusion of fire, dust, rain or smoke.
“I thought it would be great if we could do the same with sound, creating thousands of small sounds that together would create the illusion of a greater sound around you. In 2012 I’d finished my PhD, and no one was using particle system simulators for sound. So I decided to create my own.”
“We can bring the power of computer graphics into the audio world”
The result was Sound Particles, 3D CGI-like software for audio post-production (across cinema, TV, VR and video games).
Fonseca explains how the concept works. “Imagine someone needs to create the sound of a battlefield. With conventional, 2D audio software they’d start importing audio files: an explosion, another explosion after a few seconds, a machine gun somewhere, some screams and so on. After two days’ work they’d have 50 or 60 sounds playing at the same time.
“With Sound Particles, you can import 300 war-related sound effects from your sound library and use particle systems to create 10,000 sounds playing around you, spread over an area of a square kilometre, with some random movements and random audio effects to create diversity, and then render the final result in any format, from mono to a Dolby Atmos 9.1 bed. All this in a few minutes.”
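As a rough back-of-the-envelope illustration of the battlefield workflow Fonseca describes – and emphatically not the product’s real interface – the sketch below scatters thousands of emitters at random over a square kilometre, gives each a random clip, level, position and start time, and sums them into a mono mix. Every function and variable name is invented for the example.

```python
# Rough sketch of the particle-system idea from the battlefield example:
# scatter many emitters over a ~1 km x 1 km area, give each a random clip,
# start time and level, and sum them into a mono mix.
# Hypothetical illustration only; not Sound Particles' real API.
import numpy as np

SAMPLE_RATE = 48_000
SPEED_OF_SOUND = 343.0

def spawn_particles(clips, n_particles, area_m=1_000.0, duration_s=60.0, seed=0):
    """Mix n_particles randomly placed copies of the source clips into a mono buffer."""
    rng = np.random.default_rng(seed)
    out = np.zeros(int(duration_s * SAMPLE_RATE))
    mic = np.array([area_m / 2, 0.0, area_m / 2])          # virtual mic at the centre
    for _ in range(n_particles):
        clip = clips[rng.integers(len(clips))]             # pick a random source clip
        pos = np.array([rng.uniform(0, area_m), 0.0, rng.uniform(0, area_m)])
        distance = np.linalg.norm(pos - mic)
        gain = rng.uniform(0.5, 1.0) / max(distance, 1.0)  # random level + distance roll-off
        delay = int(distance / SPEED_OF_SOUND * SAMPLE_RATE)
        start = rng.integers(0, len(out) - len(clip) - delay)
        out[start + delay : start + delay + len(clip)] += gain * clip
    return out

# Example: three placeholder noise "explosions" stand in for a real sound library.
clips = [np.random.randn(SAMPLE_RATE) * np.hanning(SAMPLE_RATE) * 0.1 for _ in range(3)]
battlefield = spawn_particles(clips, n_particles=10_000)
```

The same mechanism explains the “epic and detailed” point Fonseca makes next: distant particles arrive quiet and smeared in time, fusing into a rumble, while the few that land near the virtual mic stay loud and distinct.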
Fonseca says that the ability of particle systems to generate thousands of sounds together is unique. “We have renders with up to a million sounds playing at the same time. It’s not only a matter of productivity, but also scale. In our battlefield example, I can create thousands of sounds playing together over a large area, far away from the virtual mic, adding up to a rumble of background sound, plus a few sounds close to the mic. This achieves both epic and detailed sound, exactly as it would sound in the real world.
“More recently we’ve added CGI integration, allowing users to control sound with existing CGI scenes. If sound and image are not coherent, your brain will tell you it’s fake, even if you don’t know why – just like with bad visual effects. With CGI integration, you get perfect coherence between sound and image.”
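In the loosest terms, the sketch below illustrates that idea of driving sound from a CGI scene: per-frame object positions exported from a 3D package are read in and could then be attached to moving sound emitters. The CSV layout, file name and attach_sound call are hypothetical stand-ins, not a real Sound Particles or CGI interchange format.

```python
# Hypothetical illustration of CGI integration: read per-frame positions
# exported from a 3D scene (an imagined CSV with object,frame,x,y,z columns)
# and use each animated object as a moving sound emitter.
import csv
from collections import defaultdict

def load_trajectories(path):
    """Return {object_name: [(frame, (x, y, z)), ...]} from the assumed CSV export."""
    trajectories = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            pos = (float(row["x"]), float(row["y"]), float(row["z"]))
            trajectories[row["object"]].append((int(row["frame"]), pos))
    return trajectories

# Each trajectory could then drive an emitter, so the audio moves in lockstep
# with the rendered image:
# for name, frames in load_trajectories("scene_export.csv").items():
#     attach_sound(name, frames)   # attach_sound is a placeholder, not a real API
```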
Sound force
Fonseca says Sound Particles is now used in all the major Hollywood studios, on everything from animation (Cars 3, Despicable Me 3) and superhero flicks (Batman v Superman, Wonder Woman) to science fiction (Ready Player One, Independence Day: Resurgence) and adventures (The Great Wall, The Mummy).
“It was used in Game of Thrones to create the sound of the epic battles with thousands of undead, and in Disney’s Ralph Breaks the Internet for the King Kong scene,” says Fonseca. “It can also be used for small things, like the scene in Steve Jobs where the theatre audience start tapping their feet.”
Bringing the power of computer graphics to audio post-production in this way entails a different intellectual approach, Fonseca believes.
“I usually compare our story with the story of animation. For more than 70 years classic animation used the same process, laying images on top of images, with some technical advances in between, but following the same workflow. Then came CGI, which was used in parts of particular scenes. Then came Pixar doing 100% CGI movies, which is the new animation standard. But the adoption of CGI wasn’t straightforward. It was quite hard, entailing a completely new workflow, with new tools.
“The same happens with audio. For more than 70 years we’ve been using the same audio post workflow, mixing sounds on top of sounds, with some technical advances in between (e.g. analogue mixers to digital computers). Then our native 3D audio approach started to be used in parts of particular scenes. In future, we’ll probably see a different approach, with a different workflow and new job titles – 3D sound modeller, for example.”
If the potential of native 3D audio sounds fascinating, do make time to attend Fonseca’s IBC presentation ‘The New Sound of Hollywood’, where he will demonstrate the concept of 3D sound creation with the help of sound designer Tormod Ringnes, who used Sound Particles on Disney’s Maleficent: Mistress of Evil.
“What I really like is that, as soon as people start to understand the concept, their eyes begin sparkling with ideas,” says Fonseca. “That’s the best feedback you can get from your work.”
Nuno Fonseca will be speaking at ‘The New Sound of Hollywood’, a Big Screen session in association with SMPTE, on Monday.
IBC2019 runs from 13-17 September at the RAI, Amsterdam