MERF: Memory-Efficient Radiance Fields for
Real-time View Synthesis in Unbounded Scenes
SIGGRAPH 2023
- Christian Reiser1,2,3
- Richard Szeliski1
- Dor Verbin1
- Pratul P. Srinivasan1
- Ben Mildenhall1
- Andreas Geiger2,3
- Jonathan T. Barron1
- Peter Hedman1
- Google Research1
- Tübingen AI Center2
- University of Tübingen3
Abstract
Neural radiance fields enable state-of-the-art photorealistic view synthesis. However, existing radiance field representations are either too compute-intensive for real-time rendering or require too much memory to scale to large scenes. We present a Memory-Efficient Radiance Field (MERF) representation that achieves real-time rendering of large-scale scenes in a browser. MERF reduces the memory consumption of prior sparse volumetric radiance fields using a combination of a sparse feature grid and high-resolution 2D feature planes. To support large-scale unbounded scenes, we introduce a novel contraction function that maps scene coordinates into a bounded volume while still allowing for efficient ray-box intersection. We design a lossless procedure for baking the parameterization used during training into a model that achieves real-time rendering while still preserving the photorealistic view synthesis quality of a volumetric radiance field.
Representation

For a location x in the contracted scene volume, MERF queries a low-resolution sparse 3D feature grid and three high-resolution 2D feature planes (one per axis-aligned projection). The resulting features are summed and decoded into volume density, diffuse color, and a view-dependent feature vector.
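The lookup can be sketched as follows. This is a minimal illustration, not the paper's implementation: resolutions and channel counts are placeholder values, the grid is dense rather than sparse, and nearest-neighbor lookup stands in for the interpolation used in practice.

```python
import numpy as np

# Hypothetical resolutions for illustration; MERF uses far higher ones.
L = 64    # low-resolution 3D grid side length
H = 512   # high-resolution 2D plane side length
C = 8     # feature channels

rng = np.random.default_rng(0)
grid3d = rng.standard_normal((L, L, L, C))                   # sparse in practice
planes = [rng.standard_normal((H, H, C)) for _ in range(3)]  # xy, xz, yz projections

def query_features(x):
    """Sum features from the low-res 3D grid and three high-res 2D planes.

    `x` is a point in the contracted, bounded domain [-1, 1]^3.
    Nearest-neighbor lookup stands in for trilinear/bilinear interpolation.
    """
    # Map [-1, 1] coordinates to integer grid/plane indices.
    g = np.clip(((x + 1) / 2 * (L - 1)).round().astype(int), 0, L - 1)
    p = np.clip(((x + 1) / 2 * (H - 1)).round().astype(int), 0, H - 1)
    feat = grid3d[g[0], g[1], g[2]].copy()
    # Add the feature from each axis-aligned 2D plane projection.
    for plane, (i, j) in zip(planes, [(0, 1), (0, 2), (1, 2)]):
        feat += plane[p[i], p[j]]
    return feat

f = query_features(np.array([0.25, -0.5, 0.75]))
print(f.shape)  # (8,)
```

The summed feature vector would then be fed to a small decoder to produce density, diffuse color, and a view-dependent feature.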
Piecewise-projective contraction

To model unbounded scenes we employ a contraction function. Existing works use a spherical contraction, which maps straight lines to curves (left). This makes it intractable to compute intersections between rays and axis-aligned bounding boxes, which is required for empty-space skipping. We propose a novel contraction function (right) that maps a straight line to a small number of line segments. Intersections can then be computed efficiently, making our contraction function more suitable for real-time rendering.
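The piecewise-projective contraction described above can be sketched as below, following the per-coordinate formula from the MERF paper: points inside the unit cube are unchanged; outside it, every coordinate is divided by the max-norm except the largest-magnitude one, which is squashed so that infinity maps to ±2.

```python
import numpy as np

def contract(x):
    """Piecewise-projective contraction mapping R^3 into [-2, 2]^3.

    Inside the unit cube (max-norm <= 1) points are left unchanged.
    Outside, each coordinate is divided by the max-norm, except the
    largest-magnitude coordinate, which is mapped via 2 - 1/|x_j|.
    Within each region the map is projective, so a straight ray stays
    piecewise-straight in contracted space.
    """
    x = np.asarray(x, dtype=float)
    n = np.abs(x).max()
    if n <= 1.0:
        return x
    out = x / n
    j = np.abs(x).argmax()
    out[j] = (2.0 - 1.0 / np.abs(x[j])) * np.sign(x[j])
    return out

print(contract([0.5, -0.3, 0.2]))  # unchanged inside the unit cube
print(contract([4.0, 1.0, -2.0]))  # [1.75, 0.25, -0.5]
```

Because rays remain piecewise-straight in contracted space, ray-AABB intersection tests for empty-space skipping reduce to a small number of standard slab tests.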
SNeRG++ vs MERF
Citation
If you want to cite our work, please use:
@article{Reiser2023SIGGRAPH,
  title   = {MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes},
  author  = {Christian Reiser and Richard Szeliski and Dor Verbin and Pratul P. Srinivasan and Ben Mildenhall and Andreas Geiger and Jonathan T. Barron and Peter Hedman},
  journal = {SIGGRAPH},
  year    = {2023}
}
Acknowledgements
We thank Marcos Seefelder, Julien Philip and Simon Rodriguez for their suggestions on shader optimization. This work was supported by the ERC Starting Grant LEGO3D (850533) and the DFG EXC number 2064/1 - project number 390727645. The website template was borrowed from Michaël Gharbi. Image sliders are based on dics.