In this extended Technical Paper session, four authors present their research in the field of XR. The first is an outstanding paper on Neural Radiance Fields (NeRFs), an emerging volumetric video technology, providing a thorough and detailed treatise on the state of the art together with comparative performance results. The second paper presents results from an ultra-low-bit-rate 3D conferencing system built on pre-trained NeRF models for high-fidelity 3D head reconstruction and real-time rendering. The third paper seeks a universally acceptable standard for representing volumetric video: building on glTF (Graphics Library Transmission Format), the authors present results demonstrating both effective file playback and suitability for streaming. The final paper rethinks capture, with a camera system that can be adaptively adjusted across its field of view to simultaneously capture regions of high dynamic range (with longer exposure) and high motion (with higher frame rate); a working proof of concept is presented and discussed in detail. The session is further supported by a BBC paper exploring how to incorporate a live music event into a virtual environment.
Moderator:
Omar Niamut, Director of Science - TNO
Speakers:
Aljosa Smolic, Professor - Hochschule Luzern (HSLU)
Joshua Maraval, Research Engineer - B-Com
Jun Xu, Professor - Shanghai Jiao Tong University
Kodai Kikuchi, Research Engineer - NHK (Japan Broadcasting Corporation)
Watch the full session: XR – advances in capturing, rendering, and delivering