Datasets

Blender

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

Authors:
Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng
Paper:

https://arxiv.org/pdf/2003.08934.pdf

Web:

https://www.matthewtancik.com/nerf

ID:
Blender
Evaluation protocol:

nerf (source code)

Blender (nerf-synthetic) is a synthetic dataset used to benchmark NeRF methods. It consists of 8 scenes, each showing a single object rendered on a white background, with cameras placed on the upper hemisphere around the object.
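Each Blender split ships with a transforms JSON file (e.g. transforms_train.json) listing a shared horizontal field of view and a per-frame camera-to-world matrix, following the format released with the original NeRF code. Below is a minimal sketch of reading one of these files; the function name and the default 800-pixel width are my own (the released renders are 800x800).

```python
import json
import math

def load_blender_split(path, width=800):
    """Parse a Blender (nerf-synthetic) transforms_*.json file.

    The file stores ``camera_angle_x`` (horizontal FoV in radians) and a
    list of ``frames``, each with a ``file_path`` and a 4x4
    ``transform_matrix`` (camera-to-world pose).
    """
    with open(path) as f:
        meta = json.load(f)
    # Pinhole focal length from the horizontal field of view.
    focal = 0.5 * width / math.tan(0.5 * meta["camera_angle_x"])
    frames = [(fr["file_path"], fr["transform_matrix"]) for fr in meta["frames"]]
    return focal, frames
```

The same focal length applies to all frames in a split, since the synthetic cameras share intrinsics.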

Mip-NeRF 360

Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields

Authors:
Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman
Paper:

https://arxiv.org/pdf/2111.12077.pdf

Web:

https://jonbarron.info/mipnerf360/

ID:
Mip-NeRF 360
Evaluation protocol:

nerf (source code)

Mip-NeRF 360 is a collection of four indoor and five outdoor object-centric scenes. In each scene, the camera orbits the central object at a fixed elevation and radius, and every n-th frame of the trajectory is held out as a test view.
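The hold-out scheme above can be sketched in a few lines. The function name is mine, and the default n=8 is an assumption based on the hold-out interval commonly used in the NeRF literature; the actual value depends on the evaluation protocol.

```python
def split_every_nth(image_paths, n=8):
    """Hold out every n-th frame of an ordered trajectory as a test view.

    n=8 is a common choice in the NeRF literature, assumed here for
    illustration; the evaluation protocol fixes the actual value.
    """
    test = [p for i, p in enumerate(image_paths) if i % n == 0]
    train = [p for i, p in enumerate(image_paths) if i % n != 0]
    return train, test
```

Because the frames are ordered along the orbit, this yields test views spread evenly around the object rather than clustered in one region.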

Nerfstudio

Nerfstudio: A Modular Framework for Neural Radiance Field Development

Authors:
Matthew Tancik, Ethan Weber, Evonne Ng, Ruilong Li, Brent Yi, Justin Kerr, Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi, Abhik Ahuja, David McAllister, Angjoo Kanazawa
Paper:

https://arxiv.org/pdf/2302.04264.pdf

Web:

https://nerf.studio

ID:
Nerfstudio
Evaluation protocol:

default (source code)

The Nerfstudio dataset includes 10 in-the-wild captures obtained with either a mobile phone or a mirrorless camera with a fisheye lens. The data were processed with either COLMAP or the Polycam app to obtain camera poses and intrinsic parameters.
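When COLMAP is used for this step, the recovered intrinsics end up in its sparse-reconstruction export, where cameras.txt lists one camera per non-comment line as CAMERA_ID MODEL WIDTH HEIGHT PARAMS... Below is a minimal sketch of reading that file; the function name is my own.

```python
def parse_colmap_cameras(text):
    """Parse the contents of COLMAP's cameras.txt into a dict of intrinsics.

    Each non-comment line has the form:
        CAMERA_ID MODEL WIDTH HEIGHT PARAMS...
    where the number and meaning of PARAMS depend on the camera MODEL
    (e.g. fx fy cx cy for PINHOLE).
    """
    cameras = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and header comments
        cam_id, model, width, height, *params = line.split()
        cameras[int(cam_id)] = {
            "model": model,
            "width": int(width),
            "height": int(height),
            "params": [float(p) for p in params],
        }
    return cameras
```

Camera poses live in the companion images.txt (or the binary .bin equivalents), which a full loader would parse the same way.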

Tanks and Temples

Tanks and Temples: Benchmarking Large-Scale Scene Reconstruction

Authors:
Arno Knapitsch, Jaesik Park, Qian-Yi Zhou, Vladlen Koltun
Paper:

https://storage.googleapis.com/t2-downloads/paper/tanks-and-temples.pdf

Web:

https://www.tanksandtemples.org/

ID:
Tanks and Temples
Evaluation protocol:

default (source code)

Tanks and Temples is a benchmark for image-based 3D reconstruction. The benchmark sequences were acquired outside the lab, in realistic conditions. Ground-truth data was captured using an industrial laser scanner. The benchmark includes both outdoor scenes and indoor environments. The dataset is split into three subsets: training, intermediate, and advanced.
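Tanks and Temples scores reconstructions against the laser-scanned ground truth with an F-score at a distance threshold: precision is the fraction of reconstructed points within the threshold of the ground truth, recall the fraction of ground-truth points within the threshold of the reconstruction. A brute-force sketch of that metric (function name mine; the official evaluation runs nearest-neighbor search on dense point clouds rather than a full distance matrix):

```python
import numpy as np

def f_score(reconstruction, ground_truth, tau):
    """F-score at threshold tau between two point clouds of shape (N, 3) / (M, 3).

    Precision: fraction of reconstructed points within tau of ground truth.
    Recall: fraction of ground-truth points within tau of the reconstruction.
    Brute-force pairwise distances, for illustration only.
    """
    d = np.linalg.norm(reconstruction[:, None, :] - ground_truth[None, :, :], axis=-1)
    precision = float((d.min(axis=1) < tau).mean())
    recall = float((d.min(axis=0) < tau).mean())
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

A perfect reconstruction gives precision = recall = 1, hence F-score 1; the harmonic mean penalizes methods that trade one for the other.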