# **AI for Learning Photorealistic 3D Digital Humans from In-the-Wild Data**
***Matthew Chan, NVIDIA Research***
Thursday, April 18, 2024, 7:00 PM PDT
## ABSTRACT
Traditionally, creating 3D digital humans requires lengthy effort by digital artists and often costly 3D scanning with specialized multi-view scanners. Learn how recent generative AI technologies enable learning photorealistic 3D representations from collections of in-the-wild 2D images, such as internet photos. We’ll dive deep into our recent work, “EG3D” and “WYSIWYG”, which can synthesize a wide variety of photorealistic 3D humans in real time. We’ll also show how 3D synthetic data from a pre-trained 3D generative model can be used to train another AI model for challenging image synthesis tasks. To this end, we present our recent work, “LP3D,” which can synthesize photorealistic neural radiance field (NeRF) models from a single RGB image in real time. We’ll demonstrate how these AI-driven human synthesis methods can make advanced capabilities, such as 3D video conferencing, accessible to anyone and enable new applications in the future.
## BIO
Matthew joined NVIDIA as a research engineer in 2022. They work primarily at the intersection of graphics and generative models, particularly as they relate to 3D scene synthesis, reconstruction, and understanding. They graduated from the University of Maryland, College Park in 2021 with a bachelor’s degree in mathematics and computer science.
A joint event by Silicon Valley ACM SIGGRAPH (SVSIGGRAPH), San Francisco Bay Area ACM (SFBayACM), and Los Angeles ACM SIGGRAPH (LASIGGRAPH).
In addition to Zoom, we will also livestream on YouTube:
https://www.youtube.com/live/8p4rp6SVVeo
We are looking for a venue host for future SVSIGGRAPH hybrid meetings. Contact one of the event hosts or post a comment here.