Methods

CamP

CamP: Camera Preconditioning for Neural Radiance Fields

Authors:
Keunhong Park, Philipp Henzler, Ben Mildenhall, Jonathan T. Barron, Ricardo Martin-Brualla
Paper:

https://arxiv.org/pdf/2308.10902.pdf

Web:

https://camp-nerf.github.io/

Licenses:

Apache 2.0

ID:
camp
Backends:
conda, docker, apptainer, python
Camera models:
pinhole, opencv_fisheye, opencv
Required features:
color
Supported outputs:
color, depth, accumulation

CamP is an extension of Zip-NeRF which adds camera pose refinement to the training process by preconditioning the camera parameters during optimization.
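
As a rough illustration of the preconditioning idea, the NumPy sketch below (with a toy pinhole projection and a made-up camera parameterization, not the one from the paper) builds a preconditioner from the Jacobian of projected proxy points with respect to the camera parameters, so that a unit optimization step moves the projections by a comparable amount regardless of which parameter it touches.

    import numpy as np

    def project(cam, pts):
        """Toy pinhole projection; cam = [tx, ty, tz, f] is a hypothetical parameterization."""
        t, f = cam[:3], cam[3]
        p = pts + t
        return f * p[:, :2] / p[:, 2:3]

    def preconditioner(cam, pts, eps=1e-6, h=1e-4):
        """P such that optimizing delta in cam = cam0 + P @ delta is well conditioned."""
        base = project(cam, pts).ravel()
        J = np.stack([
            (project(cam + h * np.eye(len(cam))[i], pts).ravel() - base) / h
            for i in range(len(cam))
        ], axis=1)                                   # d(projections) / d(camera params)
        sigma = J.T @ J + eps * np.eye(len(cam))
        w, V = np.linalg.eigh(sigma)                 # inverse matrix square root
        return V @ np.diag(w ** -0.5) @ V.T

    cam0 = np.array([0.0, 0.0, 0.0, 500.0])
    pts = np.random.default_rng(0).normal([0, 0, 4], 1.0, size=(100, 3))
    P = preconditioner(cam0, pts)
    delta = np.zeros(4)                              # optimized instead of the raw camera params
    cam = cam0 + P @ delta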

COLMAP

Pixelwise View Selection for Unstructured Multi-View Stereo

Authors:
Johannes Lutz Schönberger, Enliang Zheng, Marc Pollefeys, Jan-Michael Frahm
Paper:

https://demuc.de/papers/schoenberger2016mvs.pdf

Web:

https://colmap.github.io/

Licenses:

BSD

ID:
colmap
Backends:
conda, docker, apptainer, python
Camera models:
pinhole, opencv_fisheye, full_opencv, opencv
Required features:
images_points3D_indices, points3D_xyz, color, points3D_rgb
Supported outputs:
color, depth

COLMAP Multi-View Stereo (MVS) is a general-purpose, end-to-end image-based 3D reconstruction pipeline. It uses the sparse point cloud if one is available; otherwise, it first runs a sparse reconstruction to obtain it. The dense reconstruction then consists of a patch-match stereo step that estimates per-image depth maps, followed by a stereo fusion step that merges them into a dense point cloud. Finally, either Delaunay or Poisson meshing is used to obtain a mesh from the point cloud.
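
The same pipeline can be sketched with the pycolmap Python bindings; the calls below follow the example from the pycolmap README (names may differ slightly across versions), the paths are placeholders, and the dense steps require a CUDA-enabled build. The final meshing step uses the COLMAP binaries and is not shown.

    import pathlib
    import pycolmap

    image_dir = pathlib.Path("images")             # input images (placeholder path)
    output_path = pathlib.Path("reconstruction")   # output directory
    database_path = output_path / "database.db"
    mvs_path = output_path / "mvs"
    output_path.mkdir(exist_ok=True)

    # sparse reconstruction: feature extraction, matching, incremental mapping
    pycolmap.extract_features(database_path, image_dir)
    pycolmap.match_exhaustive(database_path)
    maps = pycolmap.incremental_mapping(database_path, image_dir, output_path)
    maps[0].write(output_path)

    # dense reconstruction: undistortion, patch-match stereo, fusion
    pycolmap.undistort_images(mvs_path, output_path, image_dir)
    pycolmap.patch_match_stereo(mvs_path)          # requires a CUDA-enabled build
    pycolmap.stereo_fusion(mvs_path / "dense.ply", mvs_path)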

Gaussian Opacity Fields

Gaussian Opacity Fields: Efficient and Compact Surface Reconstruction in Unbounded Scenes

Authors:
Zehao Yu, Torsten Sattler, Andreas Geiger
Paper:

https://arxiv.org/pdf/2404.10772.pdf

Web:

https://niujinshuchong.github.io/gaussian-opacity-fields/

Licenses:

custom, research only

ID:
gaussian-opacity-fields
Backends:
conda, docker, apptainer, python
Camera models:
pinhole
Required features:
points3D_xyz, color
Supported outputs:
color, normal, depth, accumulation, distortion_map

An improved variant of Mip-Splatting with better geometry, enabling surface reconstruction in unbounded scenes.

Gaussian Splatting

3D Gaussian Splatting for Real-Time Radiance Field Rendering

Authors:
Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, George Drettakis
Paper:

https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/3d_gaussian_splatting_low.pdf

Web:

https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/

Licenses:

custom, research only

ID:
gaussian-splatting
Backends:
conda, docker, apptainer, python
Camera models:
pinhole
Required features:
points3D_xyz, color
Supported outputs:
color

Official Gaussian Splatting implementation extended to support distorted camera models. It is fast to train (1 hour) and render (200 FPS).
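
For intuition, the toy NumPy sketch below shows the core compositing step: already-projected 2D Gaussians, sorted front to back, are alpha-blended per pixel with early termination. This only illustrates the math; the actual method uses a tile-based CUDA rasterizer, and the values here are made up.

    import numpy as np

    def composite_pixel(splats, pixel):
        """splats: list of (mean2d, cov2d, rgb, opacity), sorted front to back."""
        color = np.zeros(3)
        transmittance = 1.0
        for mean, cov, rgb, opacity in splats:
            d = pixel - mean
            alpha = opacity * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)
            color += transmittance * alpha * np.asarray(rgb, dtype=float)
            transmittance *= 1.0 - alpha
            if transmittance < 1e-4:       # early termination, as in the tile rasterizer
                break
        return color

    splats = [
        (np.array([5.0, 5.0]), np.eye(2) * 2.0, (1.0, 0.0, 0.0), 0.8),
        (np.array([6.0, 5.0]), np.eye(2) * 4.0, (0.0, 0.0, 1.0), 0.6),
    ]
    print(composite_pixel(splats, np.array([5.5, 5.0])))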

GS-W

Gaussian in the Wild: 3D Gaussian Splatting for Unconstrained Image Collections

Authors:
Dongbin Zhang, Chuming Wang, Weitao Wang, Peihao Li, Minghan Qin, Haoqian Wang
Paper:

https://arxiv.org/pdf/2403.15704.pdf

Web:

https://eastbeanzhang.github.io/GS-W/

Licenses:

unknown

ID:
gaussian-splatting-wild
Backends:
conda, docker, apptainer, python
Camera models:
pinhole
Required features:
points3D_xyz, color
Supported outputs:
color

Official GS-W implementation - 3DGS modified to handle appearance changes and transient objects. A reference view is used to provide appearance conditioning. Note that the method uses large per-Gaussian appearance embeddings, so the appearance modeling has a large memory footprint.

Instant NGP

Instant Neural Graphics Primitives with a Multiresolution Hash Encoding

Authors:
Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller
Paper:

https://nvlabs.github.io/instant-ngp/assets/mueller2022instant.pdf

Web:

https://nvlabs.github.io/instant-ngp/

Licenses:

custom, research only

ID:
instant-ngp
Backends:
docker, conda, apptainer, python
Camera models:
pinhole, opencv_fisheye, opencv
Required features:
color
Supported outputs:
color, accumulation

Instant-NGP is a method that uses a multiresolution hash grid and a shallow MLP to accelerate training and rendering. This method trains very fast (~6 min) and also renders fast (~3 FPS).
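
A minimal NumPy sketch of the multiresolution hash-encoding lookup is shown below. It uses a nearest-corner lookup for brevity; the real encoding trilinearly interpolates the eight voxel corners at every level and feeds the concatenated per-level features to the shallow MLP.

    import numpy as np

    PRIMES = (1, 2654435761, 805459861)          # spatial-hash primes from the paper

    def hash_index(corner, table_size):
        h = 0
        for c, p in zip(corner, PRIMES):
            h ^= int(c) * p
        return h % table_size

    def encode(x, tables, base_res=16, growth=1.5):
        """x: (3,) point in [0, 1]^3; tables: per-level (T, F) learned feature arrays."""
        feats = []
        for level, table in enumerate(tables):
            res = int(base_res * growth ** level)
            corner = np.floor(x * res).astype(int)       # nearest voxel corner only
            feats.append(table[hash_index(corner, len(table))])
        return np.concatenate(feats)                     # input to the shallow MLP

    tables = [np.random.default_rng(l).normal(size=(2 ** 14, 2)) for l in range(8)]
    print(encode(np.array([0.3, 0.7, 0.5]), tables).shape)   # (16,)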

K-Planes

K-Planes: Explicit Radiance Fields in Space, Time, and Appearance

Authors:
Sara Fridovich-Keil, Giacomo Meanti, Frederik Warburg, Benjamin Recht, Angjoo Kanazawa
Paper:

https://arxiv.org/pdf/2301.10241

Web:

https://sarafridov.github.io/K-Planes/

Licenses:

BSD 3

ID:
kplanes
Backends:
conda, docker, apptainer, python
Camera models:
pinhole
Required features:
images_points3D_indices, points3D_xyz, color
Supported outputs:
color, depth

K-Planes is a NeRF-based method representing d-dimensional space using (d choose 2) planes, allowing for a seamless way to go from static (d=3) to dynamic (d=4) scenes.
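
A toy sketch of the plane lookup for a static scene (d = 3, hence three planes: xy, xz, yz) is shown below. The paper uses bilinear sampling at multiple resolutions; this sketch uses a single nearest-cell lookup and random feature grids purely for illustration.

    import numpy as np
    from itertools import combinations

    def kplanes_features(x, planes, res):
        """x: (d,) point in [0, 1]^d; planes: dict {(i, j): (res, res, F) feature grids}."""
        feat = None
        for (i, j), grid in planes.items():
            u = min(int(x[i] * res), res - 1)
            v = min(int(x[j] * res), res - 1)
            f = grid[u, v]
            feat = f if feat is None else feat * f       # Hadamard product across planes
        return feat

    d, res, F = 3, 64, 8
    rng = np.random.default_rng(0)
    planes = {axes: rng.normal(size=(res, res, F)) for axes in combinations(range(d), 2)}
    print(kplanes_features(np.array([0.2, 0.5, 0.9]), planes, res).shape)   # (8,)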

Mip-Splatting

Mip-Splatting: Alias-free 3D Gaussian Splatting

Authors:
Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, Andreas Geiger
Paper:

https://arxiv.org/pdf/2311.16493.pdf

Web:

https://niujinshuchong.github.io/mip-splatting/

Licenses:

custom, research only

ID:
mip-splatting
Backends:
conda, docker, apptainer, python
Camera models:
pinhole
Required features:
points3D_xyz, color
Supported outputs:
color

A modification of Gaussian Splatting designed to better handle aliasing artifacts.

Mip-NeRF 360

Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields

Authors:
Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman
Paper:

https://arxiv.org/pdf/2111.12077.pdf

Web:

https://jonbarron.info/mipnerf360/

Licenses:

Apache 2.0

ID:
mipnerf360
Backends:
conda, docker, apptainer, python
Camera models:
pinhole, opencv_fisheye, opencv
Required features:
color
Supported outputs:
color, depth, accumulation

Official Mip-NeRF 360 implementation adapted to handle different camera distortion/intrinsic parameters. It was designed for unbounded object-centric 360-degree capture and handles anti-aliasing well. It is, however, slower to train and render compared to other approaches.
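
The scene contraction that makes unbounded scenes tractable fits in a few lines; the sketch below follows the contraction formula from the paper: points inside the unit ball are left unchanged, and everything else is squashed into a ball of radius 2.

    import numpy as np

    def contract(x):
        """Mip-NeRF 360 scene contraction for unbounded scenes."""
        n = np.linalg.norm(x)
        if n <= 1.0:
            return x
        return (2.0 - 1.0 / n) * (x / n)

    print(contract(np.array([0.3, 0.2, 0.1])))    # inside the unit ball: unchanged
    print(contract(np.array([50.0, 0.0, 0.0])))   # far away: mapped close to [2, 0, 0]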

NeRF

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

Authors:
Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng
Paper:

https://arxiv.org/pdf/2003.08934.pdf

Web:

https://www.matthewtancik.com/nerf

Licenses:

MIT

ID:
nerf
Backends:
conda, docker, apptainer, python
Camera models:
pinhole, opencv_fisheye, full_opencv, opencv
Required features:
color
Supported outputs:
color, depth, accumulation

Original NeRF method representing the radiance field using a large MLP.
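
The volume-rendering quadrature at the core of NeRF is easy to sketch: densities and colors predicted by the MLP at samples along a ray are alpha-composited into a pixel color, and the sum of the per-sample weights corresponds to the accumulation output listed above. The NumPy sketch below uses random values in place of MLP predictions.

    import numpy as np

    def render_ray(sigmas, colors, deltas):
        """sigmas: (N,) densities, colors: (N, 3) RGB, deltas: (N,) sample spacings."""
        alphas = 1.0 - np.exp(-sigmas * deltas)
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))   # transmittance T_i
        weights = trans * alphas
        rgb = (weights[:, None] * colors).sum(axis=0)
        return rgb, weights

    rng = np.random.default_rng(0)
    rgb, w = render_ray(rng.uniform(0, 2, 64), rng.uniform(0, 1, (64, 3)), np.full(64, 0.02))
    print(rgb, w.sum())   # w.sum() is the per-ray accumulation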

NerfStudio

Nerfstudio: A Modular Framework for Neural Radiance Field Development

Authors:
Matthew Tancik, Ethan Weber, Evonne Ng, Ruilong Li, Brent Yi, Justin Kerr, Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi, Abhik Ahuja, David McAllister, Angjoo Kanazawa
Paper:

https://arxiv.org/pdf/2302.04264.pdf

Web:

https://docs.nerf.studio/

ID:
nerfacto
Backends:
conda, docker, apptainer, python
Camera models:
pinhole, opencv_fisheye, opencv
Required features:
color
Supported outputs:
color, depth, accumulation

NerfStudio (Nerfacto) is a method based on Instant-NGP which combines several improvements from different papers to achieve good quality on real-world scenes captured under normal conditions. It is fast to train (~12 min) and renders at ~1 FPS.

NeRF On-the-go

NeRF On-the-go: Exploiting Uncertainty for Distractor-free NeRFs in the Wild

Authors:
Weining Ren, Zihan Zhu, Boyang Sun, Julia Chen, Marc Pollefeys, Songyou Peng
Paper:

https://arxiv.org/pdf/2405.18715.pdf

Web:

https://rwn17.github.io/nerf-on-the-go/

ID:
nerfonthego
Backends:
conda, docker, apptainer, python
Camera models:
pinhole, opencv_fisheye, opencv
Required features:
color
Supported outputs:
color, depth, accumulation

NeRF On-the-go enables novel view synthesis of in-the-wild scenes from casually captured images, exploiting uncertainty to handle distractors.

NeRF-W (reimplementation)

Licenses:

MIT

ID:
nerfw-reimpl
Backends:
conda, docker, apptainer, python
Camera models:
pinhole, opencv_fisheye, full_opencv, opencv
Required features:
points3D_xyz, color
Supported outputs:
color, depth

Unofficial reimplementation of NeRF-W. Does not reach the performance reported in the original paper, but is widely used for benchmarking.

SeaThru-NeRF

SeaThru-NeRF: Neural Radiance Fields in Scattering Media

Authors:
Deborah Levy, Amit Peleg, Naama Pearl, Dan Rosenbaum, Derya Akkaynak, Tali Treibitz, Simon Korman
Paper:

https://openaccess.thecvf.com/content/CVPR2023/papers/Levy_SeaThru-NeRF_Neural_Radiance_Fields_in_Scattering_Media_CVPR_2023_paper.pdf

Web:

https://sea-thru-nerf.github.io/

Licenses:

Apache 2.0

ID:
seathru-nerf
Backends:
conda, docker, apptainer, python
Camera models:
pinhole, opencv_fisheye, opencv
Required features:
color
Supported outputs:
color, depth, accumulation, depth_mean, color_clean, color_backscatter

Official SeaThru-NeRF implementation. It is based on Mip-NeRF 360 and was designed for underwater scenes.
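
For intuition, the sketch below shows a simplified per-channel version of the underwater image-formation model the method builds on: the clean signal is attenuated with distance while backscatter from the medium accumulates, which is why color_clean and color_backscatter can be reported separately. The parameter values are made up for illustration.

    import numpy as np

    def seathru(clean_rgb, depth, beta_attn, beta_backscatter, backscatter_inf):
        """Simplified scattering-media image formation: attenuated signal + backscatter."""
        direct = clean_rgb * np.exp(-beta_attn * depth)
        backscatter = backscatter_inf * (1.0 - np.exp(-beta_backscatter * depth))
        return direct + backscatter

    clean = np.array([0.8, 0.5, 0.4])
    print(seathru(clean, depth=5.0,
                  beta_attn=np.array([0.40, 0.15, 0.10]),         # red attenuates fastest
                  beta_backscatter=np.array([0.30, 0.20, 0.15]),
                  backscatter_inf=np.array([0.05, 0.20, 0.30])))  # bluish veiling light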

TensoRF

TensoRF: Tensorial Radiance Fields

Authors:
Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, Hao Su
Paper:

https://arxiv.org/pdf/2203.09517.pdf

Web:

https://apchenstu.github.io/TensoRF/

Licenses:

MIT

ID:
tensorf
Backends:
conda, docker, apptainer, python
Camera models:
pinhole, opencv_fisheye, full_opencv, opencv
Required features:
color
Supported outputs:
color, depth

TensoRF factorizes the radiance field into multiple compact low-rank tensor components. It was designed and tested primarily on the Blender, LLFF, and NSVF datasets.
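
A toy NumPy sketch of the vector-matrix (VM) factorization is shown below: a dense 3D feature grid is expressed as a sum of outer products between plane (matrix) and line (vector) factors, which is far more compact than storing the full grid at realistic resolutions. The sizes here are arbitrary.

    import numpy as np

    def vm_reconstruct(mats, vecs):
        """mats[k]: (R, N, N) plane factors; vecs[k]: (R, N) line factors for the remaining axis."""
        grid = 0.0
        grid += np.einsum('ryz,rx->xyz', mats[0], vecs[0])   # (y, z) plane with x line
        grid += np.einsum('rxz,ry->xyz', mats[1], vecs[1])   # (x, z) plane with y line
        grid += np.einsum('rxy,rz->xyz', mats[2], vecs[2])   # (x, y) plane with z line
        return grid

    R, N = 4, 32
    rng = np.random.default_rng(0)
    mats = [rng.normal(size=(R, N, N)) for _ in range(3)]
    vecs = [rng.normal(size=(R, N)) for _ in range(3)]
    print(vm_reconstruct(mats, vecs).shape)   # (32, 32, 32) grid reconstructed from compact factors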

Tetra-NeRF

Tetra-NeRF: Representing Neural Radiance Fields Using Tetrahedra

Authors:
Jonas Kulhanek, Torsten Sattler
Paper:

https://arxiv.org/pdf/2304.09987.pdf

Web:

https://jkulhanek.com/tetra-nerf

Licenses:

MIT

ID:
tetra-nerf
Backends:
docker, apptainer, python
Camera models:
pinhole, opencv_fisheye, opencv
Required features:
points3D_xyz, color, points3D_rgb
Supported outputs:
color, depth, accumulation

Tetra-NeRF is a method that represents the scene as a tetrahedral mesh obtained using Delaunay tetrahedralization. The input point cloud has to be provided (for COLMAP datasets the point cloud is automatically extracted). This is the official implementation from the paper.
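
A SciPy sketch of the field lookup is shown below (random points and feature sizes are placeholders): the point cloud is Delaunay-tetrahedralized, each vertex carries a learned feature vector, and a query point is encoded by barycentric interpolation of the features of its enclosing tetrahedron.

    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(0)
    points = rng.uniform(-1, 1, size=(500, 3))      # e.g. a COLMAP sparse point cloud
    features = rng.normal(size=(500, 16))           # learned per-vertex features
    tetra = Delaunay(points)

    def interpolate(x):
        simplex = tetra.find_simplex(x[None])[0]
        if simplex < 0:
            return np.zeros(features.shape[1])      # query outside the tetrahedralization
        verts = tetra.simplices[simplex]
        T = tetra.transform[simplex]                # barycentric coordinates within the tetrahedron
        b = T[:3] @ (x - T[3])
        bary = np.append(b, 1.0 - b.sum())
        return bary @ features[verts]

    print(interpolate(np.zeros(3)).shape)   # (16,)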

WildGaussians

WildGaussians: 3D Gaussian Splatting in the Wild

Paper:

https://arxiv.org/pdf/2407.08447.pdf

Web:

https://wild-gaussians.github.io/

Licenses:

MIT, custom, research only

ID:
wild-gaussians
Backends:
conda, docker, apptainer, python
Camera models:
pinhole
Required features:
points3D_xyz, color
Supported outputs:
color, accumulation, depth

WildGaussians adapts 3DGS to handle appearance changes and transient objects. Once the appearance is fixed, the model can be baked back into standard 3DGS.

Zip-NeRF

Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields

Authors:
Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman
Paper:

https://arxiv.org/pdf/2304.06706.pdf

Web:

https://jonbarron.info/zipnerf/

Licenses:

Apache 2.0

ID:
zipnerf
Backends:
conda, docker, apptainer, python
Camera models:
pinhole, opencv_fisheye, opencv
Required features:
color
Supported outputs:
color, depth, accumulation

Zip-NeRF is a radiance field method which addresses the aliasing problem of hash-grid-based methods (iNGP-based). Instead of sampling single points along the ray, it samples along a spiral path, approximating integration over the conical frustum of each ray.
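
The sketch below is a very loose illustration of that multisampling idea: a handful of samples is arranged on a spiral inside the conical frustum of a ray interval, so that encoding them approximates an integral over the frustum. The exact sample pattern, counts, and weighting used by Zip-NeRF differ from this toy version.

    import numpy as np

    def spiral_samples(origin, direction, t0, t1, pixel_radius, n=6):
        """Place n samples on a spiral inside the conical frustum between t0 and t1."""
        ts = np.linspace(t0, t1, n)
        angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        u = np.cross(direction, [0.0, 0.0, 1.0]); u /= np.linalg.norm(u)   # frame around the ray
        v = np.cross(direction, u)
        radii = pixel_radius * ts                                          # frustum widens with distance
        offsets = radii[:, None] * (np.cos(angles)[:, None] * u + np.sin(angles)[:, None] * v)
        return origin + ts[:, None] * direction + offsets

    pts = spiral_samples(np.zeros(3), np.array([1.0, 0.0, 0.0]), 1.0, 2.0, 0.01)
    print(pts.shape)   # (6, 3) multisamples for one ray interval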