Overview
Block-NeRF is a variant of Neural Radiance Fields (NeRF) designed to represent and render large-scale environments, such as city blocks. The core idea is to decompose a large scene into many individually trained NeRFs, one per block. Because only the blocks near a target viewpoint need to be evaluated, this decomposition decouples rendering time from total scene size, supporting arbitrarily large environments and allowing individual blocks to be updated without retraining the whole scene. Each block NeRF incorporates architectural changes, including appearance embeddings, learned pose refinement, and controllable exposure, to handle data captured over long time spans and under varying environmental conditions. A procedure for aligning appearance between adjacent NeRFs allows their renderings to be combined seamlessly. The accompanying Waymo Block-NeRF Dataset supports reproduction of the results and encourages further research on large-scale scene reconstruction.
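The decomposition described above can be illustrated with a minimal sketch: at render time, the blocks whose coverage radius contains the camera are selected, and their outputs are blended with inverse-distance weights, as in the Block-NeRF compositing scheme. The function names, the specific distance exponent, and the 2D layout below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def select_blocks(camera_pos, block_centers, block_radii):
    """Indices of blocks whose coverage radius contains the camera.

    Illustrative sketch: real Block-NeRF additionally filters blocks
    using learned per-block visibility predictions.
    """
    dists = np.linalg.norm(block_centers - camera_pos, axis=1)
    return np.where(dists < block_radii)[0]

def composite_weights(camera_pos, block_centers, selected, p=4):
    """Inverse-distance blending weights for the selected blocks.

    The exponent p is a hypothetical choice; higher p concentrates
    weight on the nearest block.
    """
    dists = np.linalg.norm(block_centers[selected] - camera_pos, axis=1)
    w = 1.0 / np.maximum(dists, 1e-6) ** p
    return w / w.sum()

# Three blocks laid out along a street, each covering an 8 m radius.
centers = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0]])
radii = np.array([8.0, 8.0, 8.0])
camera = np.array([4.0, 0.0])

sel = select_blocks(camera, centers, radii)      # blocks 0 and 1 cover the camera
weights = composite_weights(camera, centers, sel)  # nearer block gets more weight
```

Only the two overlapping blocks around the camera contribute, so render cost depends on local block density rather than on how large the overall scene is.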
