11am - 12 noon
Monday 16 December 2024
Neural Rendering and Relighting of Humans
PhD Viva Open Presentation - Farshad Einabadi
Hybrid event - All Welcome!
Free
University of Surrey
Guildford
Surrey
GU2 7XH
ABSTRACT:
Harmonising the illumination of people across different scenes is a compelling yet challenging problem in computer graphics and vision, with practical applications in mixed-reality scenarios such as digital effects in film and augmented reality. This thesis focuses in particular on rendering cast shadows and self-shadows and on relighting clothed humans, given a 2D image cut-out from a single camera view and a desired light source, without seeking to explicitly reconstruct the corresponding 3D geometry, which is expensive to compute and often limited by the required acquisition setups.
The first contribution brings together, in a coherent manner, current advances in applying deep neural computing to illumination estimation, relighting and inverse rendering. We examine the attributes of the approaches proposed in the literature, presented in three categories: scene illumination estimation, relighting with reflectance-aware scene-specific representations, and relighting as image-to-image transformation. We also provide an overview of the publicly available datasets for neural lighting applications.
The second contribution introduces a novel two-step neural rendering framework that learns the transformation from a 2D human silhouette mask to the corresponding cast shadows on background scene geometry. In the first step, the proposed neural renderer learns a binary shadow texture (canonical shadow) from the 2D foreground subject for each point light source, independent of the background scene geometry. Next, the generated binary shadows are used seamlessly in a traditional rendering pipeline to project hard or soft shadows for arbitrary scenes and light sources of different sizes.
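To make the two steps concrete, the sketch below pairs a small convolutional network (standing in for the unspecified neural renderer) with the classic planar shadow-projection matrix from real-time graphics as the traditional pipeline. CanonicalShadowNet, its layer widths and the toy inputs are illustrative assumptions, not the thesis architecture.

```python
import torch
import torch.nn as nn

class CanonicalShadowNet(nn.Module):
    """Hypothetical: maps a binary silhouette (1 channel) plus a
    broadcast light direction (3 channels) to shadow-texture logits."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, mask, light_dir):
        b, _, h, w = mask.shape
        light = light_dir.view(b, 3, 1, 1).expand(b, 3, h, w)
        return self.net(torch.cat([mask, light], dim=1))

def planar_shadow_matrix(plane, light):
    """Classic 4x4 matrix projecting geometry onto plane (nx, ny, nz, d)
    from a homogeneous point light; the scene-dependent second step."""
    return torch.dot(plane, light) * torch.eye(4) - torch.outer(light, plane)

# Step 1: predict a scene-independent binary shadow for one point light.
mask = torch.zeros(1, 1, 64, 64)
mask[:, :, 16:48, 24:40] = 1.0                     # toy silhouette
logits = CanonicalShadowNet()(mask, torch.tensor([[0.3, -1.0, 0.2]]))
canonical_shadow = (logits.sigmoid() > 0.5).float()
# Step 2: a traditional pipeline places it on any receiver geometry.
M = planar_shadow_matrix(torch.tensor([0.0, 1.0, 0.0, 0.0]),  # ground y=0
                         torch.tensor([2.0, 5.0, 1.0, 1.0]))  # point light
```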
The third contribution proposes to learn self-shadowing on full-body, clothed human postures from monocular colour image input, by supervising a deep neural model. As in the previous contribution, the proposed approach implicitly learns the articulated body shape in order to generate self-shadow maps, without explicitly reconstructing or estimating parametric 3D body geometry. Furthermore, it generalises to different people without per-subject pre-training and offers fast inference times.
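The abstract states only that the model is supervised on rendered data; as a minimal sketch, assuming an (image, light direction) to shadow-map interface and a binary cross-entropy loss, one training step might look as follows (all names hypothetical):

```python
import torch
import torch.nn.functional as F

def self_shadow_training_step(model, optimiser, rgb, light_dir, gt_shadow):
    """One supervised step: predict a self-shadow map from a colour
    cut-out and a light direction, then match the rendered ground truth.
    The BCE loss and tensor shapes are assumptions for illustration."""
    logits = model(rgb, light_dir)                 # (B, 1, H, W) logits
    loss = F.binary_cross_entropy_with_logits(logits, gt_shadow)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```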
Finally, our last contribution investigates 2D clothed human relighting, using the self-shadow maps already estimated from monocular colour image cut-outs as auxiliary input signals to the proposed neural model. The approach learns to render diffuse shading for one light at a time; these single-light shadings can be combined to render arbitrary lighting conditions. Our approach is evaluated against the shadings rendered from a prominent monocular human body reconstruction method. The discussion highlights the benefits and shortcomings of both the 3D and 2D human relighting paradigms.
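The one-light-at-a-time design works because diffuse light transport is linear: single-light shadings can be summed, weighted by each light's colour and intensity, to approximate arbitrary lighting. A minimal sketch, with placeholder arrays standing in for the model's outputs:

```python
import numpy as np

def compose_lighting(shadings, colours):
    """shadings: list of (H, W) single-light diffuse shadings;
    colours: list of (3,) RGB light intensities.
    Returns the (H, W, 3) shading under all lights at once."""
    out = np.zeros(shadings[0].shape + (3,))
    for s, c in zip(shadings, colours):
        out += s[..., None] * c        # linearity of diffuse transport
    return out

# e.g. a warm key light plus a dim blue fill (placeholder model outputs):
key, fill = np.random.rand(4, 4), np.random.rand(4, 4)
combined = compose_lighting([key, fill],
                            [np.array([1.0, 0.9, 0.7]),
                             np.array([0.1, 0.15, 0.3])])
```

This linearity is also what lets a model trained on single point lights generalise, by dense sampling, towards more complex illumination.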
All technical contributions in this thesis fall under supervised learning, generalise to different people without per-subject pre-training, offer fast inference times, benefit from monocular capture setups, and are tested against relevant state-of-the-art methods, where we show improvements in speed and quality. For each proposed method, ablation studies with regard to the model and/or the training and inference data are provided where appropriate. The proposed neural models are supervised with images rendered using a scalable data generation framework, from diverse synthetic humans or 3D scans of real people, for various light positions and directions. We introduce the 3D Virtual Human Shadow (3DVHshadow) dataset as a public benchmark for training and evaluating human cast shadow generation.
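As an illustration of the per-light data generation loop implied above, and assuming uniform hemisphere sampling of light directions (the actual sampling scheme is not specified), a sketch:

```python
import numpy as np

def sample_light_direction(rng):
    """Uniform direction on the upper hemisphere (z >= 0):
    normalise a Gaussian sample, then fold it upwards."""
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    v[2] = abs(v[2])
    return v

rng = np.random.default_rng(0)
lights = [sample_light_direction(rng) for _ in range(16)]
# pairs = [(render(subject, l), render_shadow(subject, l)) for l in lights]
# where render()/render_shadow() stand in for the offline renderer.
```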