RelitLRM: Generative Relightable Radiance for Large Reconstruction Models

Tianyuan Zhang1, Zhengfei Kuang2, Haian Jin3, Zexiang Xu4, Sai Bi4, Hao Tan4, He Zhang4, Yiwei Hu4, Milos Hasan4, William T. Freeman1, Kai Zhang*4, Fujun Luan*4 (*Equal Advising)
1Massachusetts Institute of Technology    2Stanford University    3Cornell University    4Adobe Research
ICLR 2025 (Spotlight)


A Diffusion Transformer that learns the underlying physics of reconstruction and rendering: cast shadows, specular highlights, and inter-reflections are all produced neurally, learned from data. The results are not as precise as ray tracing, but no explicit light-transport simulation is needed.

The bottom row shows the environment maps under which we relight the object. The results above are generated from 4-8 sparse views of each object.
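To make the input/output contract concrete, here is a minimal sketch: sparse posed views plus a target HDR environment map go in, and a relit result comes out in a single feed-forward sampling pass. The function name, arguments, and `model.sample` call are hypothetical placeholders for illustration, not the authors' actual code or API.

```python
def relight_object(model, images, cameras, env_map_hdr, num_steps=30):
    """Hypothetical interface sketch -- not the released code or API.

    images:      (V, H, W, 3) float32, V sparse posed views (V ~ 4-8)
    cameras:     per-view camera parameters (intrinsics + extrinsics)
    env_map_hdr: (He, We, 3) float32 lat-long HDR environment map
    num_steps:   diffusion sampling steps

    A single feed-forward sampling pass conditions the diffusion
    transformer on the input views and the target lighting; shadows,
    speculars, and inter-reflections come from the learned model,
    with no explicit ray tracing.
    """
    return model.sample(images=images, cameras=cameras,
                        lighting=env_map_hdr, num_steps=num_steps)
```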

(This webpage contains a lot of videos and interactive viewers. We suggest using Chrome or Edge for the best experience.)

Object Relighting Results


Each object is relit under three different environment maps.
Below the videos are the input views, the LDR environment map, and the normalized HDR environment map.
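The exact normalization convention for the HDR maps is not spelled out on this page; a common choice, sketched below in numpy, is to rescale the map to unit mean luminance (weighted by solid angle for lat-long maps) so that different environments carry comparable energy. The function name and the Rec. 709 luminance weights are illustrative assumptions, not the paper's stated procedure.

```python
import numpy as np

def normalize_envmap(env_hdr, eps=1e-8):
    """Rescale an HDR lat-long environment map to unit mean luminance.

    env_hdr: (H, W, 3) float32, linear (non-negative) radiance values.
    """
    H, W, _ = env_hdr.shape
    # Rec. 709 luminance of each texel (linear RGB assumed).
    lum = env_hdr @ np.array([0.2126, 0.7152, 0.0722])
    # Weight rows by sin(theta): texels near the poles cover less solid angle.
    theta = (np.arange(H) + 0.5) / H * np.pi
    w = np.sin(theta)[:, None]  # (H, 1)
    mean_lum = float((lum * w).sum() / (w.sum() * W))
    return env_hdr / max(mean_lum, eps)
```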

Object Relighting with Rotating Lights


The objects are fixed and captured from multiple static views, while the environment maps rotate.

In each video, from top to bottom: relit views, input views, and the target environment map (shown with two tonemappings).
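Both operations here are standard. The minimal numpy sketch below shows them: rotating a lat-long environment map about its vertical axis is a circular shift in azimuth, and two common tonemappings (gamma-encoded clipping and the Reinhard operator) serve as stand-ins for the two shown in the videos, whose exact operators are not specified on this page.

```python
import numpy as np

def rotate_envmap(env, angle_rad):
    """Rotate a lat-long environment map about the vertical axis:
    a circular shift along the azimuth (width) dimension."""
    shift = int(round(angle_rad / (2.0 * np.pi) * env.shape[1]))
    return np.roll(env, shift, axis=1)

def tonemap_clip(env_hdr, gamma=2.2):
    """Tonemapping #1: clip to [0, 1], then gamma-encode for display."""
    return np.clip(env_hdr, 0.0, 1.0) ** (1.0 / gamma)

def tonemap_reinhard(env_hdr, gamma=2.2):
    """Tonemapping #2: Reinhard operator x / (1 + x), then gamma-encode.
    Compresses highlights instead of clipping them (assumes x >= 0)."""
    return (env_hdr / (1.0 + env_hdr)) ** (1.0 / gamma)
```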

Applications: Scene Insertion

We insert the relit objects into scenes from the Evermotion Archinteriors Volumes; a minimal compositing sketch follows the scene videos below.
Scene 1
Scene 2
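The insertion procedure is not detailed on this page. A minimal sketch, assuming the object has already been relit under the scene's own environment map and rendered with an alpha channel, is plain "over" compositing onto the background plate; the function name and shape conventions are illustrative assumptions.

```python
import numpy as np

def insert_object(relit_rgba, background_rgb):
    """Alpha-composite a relit object rendering over a background plate.

    relit_rgba:     (H, W, 4) float32, straight (non-premultiplied) alpha;
                    RGB tonemapped to [0, 1], alpha in [0, 1].
    background_rgb: (H, W, 3) float32 plate from the target scene.
    """
    rgb, alpha = relit_rgba[..., :3], relit_rgba[..., 3:4]
    # Standard "over" operator.
    return rgb * alpha + background_rgb * (1.0 - alpha)
```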

Applications: Object Gallery

Multiple objects rendered in the same scene under three different lighting conditions.
Object Gallery (Lighting #1)
Object Gallery (Lighting #2)
Object Gallery (Lighting #3)