Show simple item record

dc.contributor.author: Rao, Pramod
dc.contributor.author: Meka, Abhimitra
dc.contributor.author: Zhou, Xilong
dc.contributor.author: Fox, Gereon
dc.contributor.author: B R, Mallikarjun
dc.contributor.author: Zhan, Fangneng
dc.contributor.author: Weyrich, Tim
dc.contributor.author: Bickel, Bernd
dc.contributor.author: Pfister, Hanspeter
dc.contributor.author: Matusik, Wojciech
dc.contributor.author: Beeler, Thabo
dc.contributor.author: Elgharib, Mohamed
dc.contributor.author: Habermann, Marc
dc.contributor.author: Theobalt, Christian
dc.date.accessioned: 2026-01-14T20:49:15Z
dc.date.available: 2026-01-14T20:49:15Z
dc.date.issued: 2025-12-14
dc.identifier.isbn: 979-8-4007-2137-3
dc.identifier.uri: https://hdl.handle.net/1721.1/164531
dc.description: SA Conference Papers ’25, December 15–18, 2025, Hong Kong, Hong Kong [en_US]
dc.description.abstract: Rendering novel, relit views of a human head, given a monocular portrait image as input, is an inherently underconstrained problem. The traditional graphics solution is to explicitly decompose the input image into geometry, material, and lighting via differentiable rendering; but this is constrained by the multiple assumptions and approximations of the underlying models and parameterizations of these scene components. We propose 3DPR, an image-based relighting model that leverages generative priors learnt from multi-view One-Light-at-A-Time (OLAT) images captured in a light stage. We introduce a new diverse and large-scale multi-view 4K OLAT dataset of 139 subjects to learn a high-quality prior over the distribution of high-frequency face reflectance. We leverage the latent space of a pre-trained generative head model that provides a rich prior over face geometry learnt from in-the-wild image datasets. The input portrait is first embedded in the latent manifold of such a model through an encoder-based inversion process. Then a novel triplane-based reflectance network trained on our light stage data is used to synthesize high-fidelity OLAT images to enable image-based relighting. Our reflectance network operates in the latent space of the generative head model, crucially allowing a relatively small number of light stage images to suffice for training the reflectance model. Combining the generated OLATs according to a given HDRI environment map yields physically accurate environmental relighting results. Through quantitative and qualitative evaluations, we demonstrate that 3DPR outperforms previous methods, particularly in preserving identity and in capturing lighting effects such as specularities, self-shadows, and subsurface scattering. [en_US]
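The final relighting step described in the abstract — weighting generated OLAT images by radiance sampled from an HDRI environment map and summing — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `relight_from_olats` and the equirectangular sampling convention are assumptions for demonstration.

```python
import numpy as np

def relight_from_olats(olat_images, light_dirs, env_map):
    """Linearly combine OLAT images, weighting each by the radiance of an
    equirectangular HDRI environment map sampled at its light direction.

    olat_images: (N, H, W, 3) array, one image per light.
    light_dirs:  (N, 3) unit vectors (y-up convention assumed here).
    env_map:     (He, We, 3) equirectangular HDRI.
    """
    He, We, _ = env_map.shape
    # Map unit directions to equirectangular (u, v) pixel coordinates
    # (nearest-neighbour sampling; a real pipeline would interpolate).
    theta = np.arccos(np.clip(light_dirs[:, 1], -1.0, 1.0))  # polar angle
    phi = np.arctan2(light_dirs[:, 0], light_dirs[:, 2])     # azimuth
    u = ((phi / (2.0 * np.pi) + 0.5) * (We - 1)).astype(int)
    v = ((theta / np.pi) * (He - 1)).astype(int)
    weights = env_map[v, u]                                  # (N, 3) radiance per light
    # Weighted sum over the light axis: (N,H,W,3) * (N,1,1,3) -> (H,W,3)
    return (olat_images * weights[:, None, None, :]).sum(axis=0)
```

Because image formation is linear in the incident lighting, this weighted sum of OLAT basis images reproduces the scene under the full environment, which is why high-quality OLAT synthesis is sufficient for environmental relighting.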
dc.publisher: ACM|SIGGRAPH Asia 2025 Conference Papers [en_US]
dc.relation.isversionof: https://doi.org/10.1145/3757377.3763962 [en_US]
dc.rights: Creative Commons Attribution-NonCommercial [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc/4.0/ [en_US]
dc.source: Association for Computing Machinery [en_US]
dc.title: 3DPR: Single Image 3D Portrait Relighting with Generative Priors [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Pramod Rao, Abhimitra Meka, Xilong Zhou, Gereon Fox, Mallikarjun B R, Fangneng Zhan, Tim Weyrich, Bernd Bickel, Hanspeter Pfister, Wojciech Matusik, Thabo Beeler, Mohamed Elgharib, Marc Habermann, and Christian Theobalt. 2025. 3DPR: Single Image 3D Portrait Relighting with Generative Priors. In Proceedings of the SIGGRAPH Asia 2025 Conference Papers (SA Conference Papers '25). Association for Computing Machinery, New York, NY, USA, Article 108, 1–12. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory [en_US]
dc.identifier.mitlicense: PUBLISHER_POLICY
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2026-01-01T08:54:01Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2026-01-01T08:54:02Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed [en_US]

