Datasets
Blender
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
- Authors: Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng
- Paper:
- Web:
- ID: Blender
- Evaluation protocol:
Blender (nerf-synthetic) is a synthetic dataset used to benchmark NeRF methods. It consists of eight scenes, each showing a single object rendered on a white background. Cameras are placed on a hemisphere around the object.
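Each Blender scene ships its camera poses in a `transforms_*.json` file that stores a shared horizontal field of view (`camera_angle_x`) and one 4x4 camera-to-world matrix per frame. A minimal parsing sketch (the `load_nerf_synthetic` helper and the toy metadata dict are illustrative, not part of any library; real scenes have 100 training frames of 800x800 renders):

```python
import math

def load_nerf_synthetic(meta, width=800):
    """Parse a NeRF-synthetic transforms.json dict into (focal, poses).

    camera_angle_x is the horizontal FOV in radians, so the focal length
    in pixels is focal = 0.5 * W / tan(0.5 * fov_x).
    """
    focal = 0.5 * width / math.tan(0.5 * meta["camera_angle_x"])
    # Each transform_matrix is a 4x4 camera-to-world pose.
    poses = [frame["transform_matrix"] for frame in meta["frames"]]
    return focal, poses

# Toy single-frame metadata mimicking the real file layout (hypothetical values).
meta = {
    "camera_angle_x": 0.6911112070083618,
    "frames": [
        {"file_path": "./train/r_0",
         "transform_matrix": [[1, 0, 0, 0], [0, 1, 0, 0],
                              [0, 0, 1, 4], [0, 0, 0, 1]]},
    ],
}
focal, poses = load_nerf_synthetic(meta)
```

With the field of view above, the recovered focal length comes out to roughly 1111 pixels, the value commonly used for the 800x800 Blender renders.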
Mip-NeRF 360
Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields
- Authors: Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman
- Paper:
- Web:
- ID: Mip-NeRF 360
- Evaluation protocol:
Mip-NeRF 360 is a collection of four indoor and five outdoor object-centric scenes. In each scene, the camera trajectory orbits the central object at a roughly fixed elevation and radius. The test set holds out every n-th frame of the trajectory as a test view (n = 8 in the standard protocol).
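The every-n-th-frame split can be sketched in a few lines (the `mipnerf360_split` helper and file names are illustrative; `hold_every=8` is assumed as the standard setting):

```python
def mipnerf360_split(image_names, hold_every=8):
    """Hold out every `hold_every`-th frame as a test view; train on the rest."""
    test = image_names[::hold_every]
    train = [name for i, name in enumerate(image_names) if i % hold_every != 0]
    return train, test

# 20 hypothetical frames: frames 0, 8, and 16 become test views.
names = [f"frame_{i:04d}.jpg" for i in range(20)]
train, test = mipnerf360_split(names)
```

This deterministic split keeps evaluation comparable across methods, since every paper tests on exactly the same held-out views.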
Nerfstudio
Nerfstudio: A Modular Framework for Neural Radiance Field Development
- Authors: Matthew Tancik, Ethan Weber, Evonne Ng, Ruilong Li, Brent Yi, Justin Kerr, Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi, Abhik Ahuja, David McAllister, Angjoo Kanazawa
- Paper:
- Web:
- ID: Nerfstudio
- Evaluation protocol:
The Nerfstudio dataset includes 10 in-the-wild captures obtained with either a mobile phone or a mirrorless camera with a fisheye lens. The data were processed with either COLMAP or the Polycam app to obtain camera poses and intrinsic parameters.
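When COLMAP is used, the recovered intrinsics land in a `cameras.txt` file with one camera per line (`CAMERA_ID MODEL WIDTH HEIGHT PARAMS...`). A minimal parser sketch (the `parse_colmap_cameras` helper and the sample line are illustrative; the `OPENCV_FISHEYE` model matches the fisheye captures mentioned above and stores fx, fy, cx, cy followed by four distortion coefficients):

```python
def parse_colmap_cameras(text):
    """Parse COLMAP's cameras.txt text export into {camera_id: info}."""
    cams = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):  # skip blank and comment lines
            continue
        cam_id, model, width, height, *params = line.split()
        cams[int(cam_id)] = {
            "model": model,
            "width": int(width),
            "height": int(height),
            "params": [float(p) for p in params],
        }
    return cams

# Hypothetical single-camera export: fx fy cx cy k1 k2 k3 k4.
txt = ("# Camera list\n"
       "1 OPENCV_FISHEYE 3000 2000 1200.0 1200.0 1500.0 1000.0 0.01 0.0 0.0 0.0")
cams = parse_colmap_cameras(txt)
```

Captures that instead go through Polycam arrive with poses and intrinsics already embedded in its export, so no such parsing step is needed.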
Tanks and Temples
Tanks and Temples: Benchmarking Large-Scale Scene Reconstruction
- Authors: Arno Knapitsch, Jaesik Park, Qian-Yi Zhou, Vladlen Koltun
- Paper: https://storage.googleapis.com/t2-downloads/paper/tanks-and-temples.pdf
- Web:
- ID: Tanks and Temples
- Evaluation protocol:
Tanks and Temples is a benchmark for image-based 3D reconstruction. The benchmark sequences were acquired outside the lab, in realistic conditions. Ground-truth data was captured using an industrial laser scanner. The benchmark includes both outdoor scenes and indoor environments. The dataset is split into three subsets: training, intermediate, and advanced.
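The benchmark scores a reconstruction against the laser-scan ground truth with precision, recall, and their harmonic mean (F-score) at a per-scene distance threshold. A toy sketch of that scoring (the `f_score` helper and the toy distance lists are illustrative, not the official evaluation code):

```python
def f_score(dist_rec_to_gt, dist_gt_to_rec, tau):
    """F-score at threshold tau, Tanks and Temples style.

    precision: fraction of reconstructed points within tau of the ground truth;
    recall: fraction of ground-truth points within tau of the reconstruction.
    """
    precision = sum(d < tau for d in dist_rec_to_gt) / len(dist_rec_to_gt)
    recall = sum(d < tau for d in dist_gt_to_rec) / len(dist_gt_to_rec)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy nearest-neighbor distances in meters; tau on the order of centimeters.
f = f_score([0.01, 0.05, 0.002], [0.004, 0.2], tau=0.02)
```

Using both distance directions matters: precision alone rewards sparse but accurate reconstructions, while recall alone rewards dense but noisy ones; the F-score penalizes both failure modes.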