MERF: Memory-Efficient Radiance Fields for
Real-time View Synthesis in Unbounded Scenes
SIGGRAPH 2023

Abstract

Neural radiance fields enable state-of-the-art photorealistic view synthesis. However, existing radiance field representations are either too compute-intensive for real-time rendering or require too much memory to scale to large scenes. We present a Memory-Efficient Radiance Field (MERF) representation that achieves real-time rendering of large-scale scenes in a browser. MERF reduces the memory consumption of prior sparse volumetric radiance fields using a combination of a sparse feature grid and high-resolution 2D feature planes. To support large-scale unbounded scenes, we introduce a novel contraction function that maps scene coordinates into a bounded volume while still allowing for efficient ray-box intersection. We design a lossless procedure for baking the parameterization used during training into a model that achieves real-time rendering while still preserving the photorealistic view synthesis quality of a volumetric radiance field.

Video

Real-Time Interactive Viewer Demos

Real Captured Scenes

Representation


Figure: Overview of the MERF representation.

For a location \(\mathbf{x}\) along a ray: (1) we query its eight neighbors on a low-resolution 3D grid, and we project it onto each of the three axis-aligned planes and query each projection’s four neighbors on a high-resolution 2D grid. (2) The eight low-resolution 3D neighbors are evaluated and trilinearly interpolated, the three sets of four high-resolution 2D neighbors are evaluated and bilinearly interpolated, and the resulting features are summed into a single feature vector \(\mathbf{t}\). (3) The feature vector is split and nonlinearly mapped into three components: density \(\tau\), diffuse RGB color \(\mathbf{c}_d\), and a feature vector \(\mathbf{f}\) encoding view-dependent effects.
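
The query procedure can be written down compactly. The following is a minimal JAX sketch, not the released implementation: the grid resolutions, the number of feature channels, the channel split, and the decoding nonlinearities (softplus/sigmoid) below are illustrative assumptions.

# Minimal JAX sketch of steps (1)-(3): query a low-resolution 3D feature grid
# and three high-resolution 2D feature planes at a point x in [0, 1]^3, sum the
# interpolated features, and decode them into density, diffuse RGB, and a
# view-dependence feature vector. Sizes and nonlinearities are assumptions.
import jax
import jax.numpy as jnp

C = 8  # number of feature channels (assumption)

def lerp_nd(grid, coord):
    """Multilinear interpolation of `grid` (res^d, C) at continuous `coord` (d,).

    Assumes equal resolution along each spatial axis (cubic grid / square plane).
    """
    d = coord.shape[0]
    res = grid.shape[0]
    pos = coord * (res - 1)
    lo = jnp.clip(jnp.floor(pos).astype(jnp.int32), 0, res - 2)
    frac = pos - lo
    out = jnp.zeros(grid.shape[-1])
    # Sum over the 2^d corner neighbors with their multilinear weights.
    for corner in range(2 ** d):
        offs = jnp.array([(corner >> k) & 1 for k in range(d)])
        w = jnp.prod(jnp.where(offs == 1, frac, 1.0 - frac))
        out = out + w * grid[tuple(lo + offs)]
    return out

def query_merf(x, grid3d, planes):
    """x: (3,) point in [0,1]^3; grid3d: (L,L,L,C); planes: three (H,H,C) arrays."""
    t = lerp_nd(grid3d, x)                     # trilinear, 8 neighbors
    for axis, plane in enumerate(planes):      # bilinear, 4 neighbors per plane
        uv = jnp.delete(x, axis)               # project onto the axis-aligned plane
        t = t + lerp_nd(plane, uv)
    # Split and nonlinearly map the summed features (illustrative split).
    tau = jax.nn.softplus(t[0])                # density
    c_d = jax.nn.sigmoid(t[1:4])               # diffuse RGB color
    f = jax.nn.sigmoid(t[4:])                  # view-dependence features
    return tau, c_d, f

# Usage with random parameters:
key = jax.random.PRNGKey(0)
grid3d = jax.random.normal(key, (16, 16, 16, C))
planes = [jax.random.normal(key, (64, 64, C)) for _ in range(3)]
tau, c_d, f = query_merf(jnp.array([0.3, 0.6, 0.2]), grid3d, planes)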

Piecewise-projective contraction

Figure: Spherical contraction (left) vs. our piecewise-projective contraction (right).

To model unbounded scenes we employ a contraction function. Existing works use a spherical contraction, which maps straight lines to curves (left). This makes it intractable to compute intersections between rays and axis-aligned bounding boxes, which is required for empty-space skipping. We propose a novel contraction function (right) that maps a line to a small number of straight segments. Intersections can then be computed efficiently, making our contraction function better suited to real-time rendering.
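
For concreteness, below is a minimal JAX sketch of a piecewise-projective contraction of this kind. The per-coordinate form (points inside the unit cube in the infinity norm are left unchanged, the largest-magnitude coordinate is mapped to \(2 - 1/|x_j|\), and the remaining coordinates are divided by the infinity norm) is our reading of the paper and may differ in detail from the released code.

# Minimal JAX sketch of a piecewise-projective contraction mapping R^3 into
# the cube (-2, 2)^3 while sending straight lines to piecewise-linear paths.
# The exact per-coordinate form is an assumption based on the paper.
import jax.numpy as jnp

def contract(x):
    """Contract a point x in R^3 into (-2, 2)^3."""
    abs_x = jnp.abs(x)
    inf_norm = jnp.max(abs_x)
    # Coordinates that are not the largest are scaled by 1 / ||x||_inf.
    # (jnp.maximum guards against division by values <= 1; those entries are unused.)
    scaled = x / jnp.maximum(inf_norm, 1.0)
    # The largest-magnitude coordinate is contracted as (2 - 1/|x_j|) * sign(x_j).
    projected = jnp.where(abs_x == inf_norm,
                          (2.0 - 1.0 / jnp.maximum(abs_x, 1.0)) * jnp.sign(x),
                          scaled)
    # Inside the unit cube (||x||_inf <= 1) the point is left unchanged.
    return jnp.where(inf_norm <= 1.0, x, projected)

# Usage: a point far from the origin lands just inside the (-2, 2)^3 cube.
print(contract(jnp.array([10.0, 0.5, -3.0])))   # approx [1.9, 0.05, -0.3]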

SNeRG++ vs MERF

SNeRG++ (210 MB) vs. MERF (220 MB)
SNeRG++ (213 MB) vs. MERF (233 MB)
SNeRG++ (117 MB) vs. MERF (198 MB)

Citation

If you want to cite our work, please use:

@article{Reiser2023SIGGRAPH,
    title={MERF: Memory-Efficient Radiance Fields for
        Real-time View Synthesis in Unbounded Scenes},
    author={Christian Reiser and Richard Szeliski and 
        Dor Verbin and Pratul P. Srinivasan and Ben Mildenhall
        and Andreas Geiger and Jonathan T. Barron and Peter Hedman},
    journal={SIGGRAPH},
    year={2023}
}

Acknowledgements

We thank Marcos Seefelder, Julien Philip and Simon Rodriguez for their suggestions on shader optimization. This work was supported by the ERC Starting Grant LEGO3D (850533) and the DFG EXC number 2064/1 - project number 390727645. The website template was borrowed from Michaël Gharbi. Image sliders are based on dics.