nerfbaselines.datasets¶
- class nerfbaselines.datasets.Dataset[source]¶
Bases: _IncompleteDataset
- image_paths: List[str]¶
- image_paths_root: str¶
- images: ndarray | List[ndarray]¶
- images_points3D_indices: List[ndarray] | None¶
- metadata: Dict¶
- points3D_rgb: ndarray | None¶
- points3D_xyz: ndarray | None¶
- sampling_mask_paths: List[str] | None¶
- sampling_mask_paths_root: str | None¶
- sampling_masks: ndarray | List[ndarray] | None¶
- exception nerfbaselines.datasets.MultiDatasetError(errors, message)[source]¶
Bases: DatasetNotFoundError
- class nerfbaselines.datasets.UnloadedDataset[source]¶
Bases: _IncompleteDataset
- image_paths: List[str]¶
- image_paths_root: str¶
- images: NotRequired[ndarray | List[ndarray] | None]¶
- images_points3D_indices: List[ndarray] | None¶
- metadata: Dict¶
- points3D_rgb: ndarray | None¶
- points3D_xyz: ndarray | None¶
- sampling_mask_paths: List[str] | None¶
- sampling_mask_paths_root: str | None¶
- sampling_masks: ndarray | List[ndarray] | None¶
- nerfbaselines.datasets.dataset_index_select(dataset: TDataset, i: slice | int | list | ndarray) TDataset [source]¶
- nerfbaselines.datasets.dataset_load_features(dataset: UnloadedDataset, features=None, supported_camera_models=None) Dataset [source]¶
- nerfbaselines.datasets.experimental_parse_dataset_path(path: str) Tuple[str, Dict[str, Any]] [source]¶
- nerfbaselines.datasets.load_dataset(path: Path | str, split: str, features: FrozenSet[Literal['color', 'points3D_xyz', 'points3D_rgb']] | None = None, supported_camera_models: FrozenSet[Literal['pinhole', 'opencv', 'opencv_fisheye', 'full_opencv']] | None = None, load_features: Literal[True] = True, **kwargs) Dataset [source]¶
- nerfbaselines.datasets.load_dataset(path: Path | str, split: str, features: FrozenSet[Literal['color', 'points3D_xyz', 'points3D_rgb']] | None = None, supported_camera_models: FrozenSet[Literal['pinhole', 'opencv', 'opencv_fisheye', 'full_opencv']] | None = None, load_features: Literal[False] = False, **kwargs) UnloadedDataset
- nerfbaselines.datasets.new_dataset(*, cameras: Cameras, image_paths: Sequence[str], image_paths_root: str | None = None, images: ndarray | List[ndarray] | None = None, sampling_mask_paths: Sequence[str] | None = None, sampling_mask_paths_root: str | None = None, sampling_masks: ndarray | List[ndarray] | None = None, points3D_xyz: ndarray | None = None, points3D_rgb: ndarray | None = None, images_points3D_indices: Sequence[ndarray] | None = None, metadata: Dict) UnloadedDataset | Dataset [source]¶
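A Dataset behaves like a TypedDict whose per-image fields are indexed in lockstep by dataset_index_select, while global fields (points3D_*, metadata, the *_root paths) pass through unchanged. A hypothetical minimal re-implementation illustrating those semantics (key names mirror the fields above; the real function also indexes the cameras object, which is omitted here):

```python
import numpy as np

# Per-image fields that are subset by the index; everything else is global.
PER_IMAGE_KEYS = {"image_paths", "images", "sampling_mask_paths",
                  "sampling_masks", "images_points3D_indices"}

def index_select(dataset: dict, i) -> dict:
    """Select a subset of a dataset by int, slice, list, or ndarray index."""
    if isinstance(i, int):
        i = [i]
    out = {}
    for key, value in dataset.items():
        if key in PER_IMAGE_KEYS and value is not None:
            if isinstance(value, np.ndarray) or isinstance(i, slice):
                out[key] = value[i]
            else:
                out[key] = [value[j] for j in i]
        else:
            out[key] = value  # metadata, points3D_*, roots stay untouched
    return out
```

This is a sketch of the likely contract, not the library implementation; the real function returns the same TDataset type it receives.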
nerfbaselines.datasets.blender¶
- nerfbaselines.datasets.blender.camera_model_to_int(camera_model: Literal['pinhole', 'opencv', 'opencv_fisheye', 'full_opencv']) int [source]¶
- nerfbaselines.datasets.blender.get_default_viewer_transform(poses, dataset_type: str | None) Tuple[ndarray, ndarray] [source]¶
nerfbaselines.datasets.bundler¶
- nerfbaselines.datasets.bundler.camera_model_to_int(camera_model: Literal['pinhole', 'opencv', 'opencv_fisheye', 'full_opencv']) int [source]¶
- nerfbaselines.datasets.bundler.get_default_viewer_transform(poses, dataset_type: str | None) Tuple[ndarray, ndarray] [source]¶
nerfbaselines.datasets.colmap¶
- class nerfbaselines.datasets.colmap.Camera(id, model, width, height, params)¶
Bases: tuple
- class nerfbaselines.datasets.colmap.Image(id, qvec, tvec, camera_id, name, xys, point3D_ids)[source]¶
Bases: BaseImage
- class nerfbaselines.datasets.colmap.Point3D(id, xyz, rgb, error, image_ids, point2D_idxs)¶
Bases: tuple
- nerfbaselines.datasets.colmap.camera_model_to_int(camera_model: Literal['pinhole', 'opencv', 'opencv_fisheye', 'full_opencv']) int [source]¶
- nerfbaselines.datasets.colmap.get_default_viewer_transform(poses, dataset_type: str | None) Tuple[ndarray, ndarray] [source]¶
- nerfbaselines.datasets.colmap.load_colmap_dataset(path: Path | str, split: str | None = None, *, test_indices: Indices | None = None, features: FrozenSet[Literal['color', 'points3D_xyz', 'points3D_rgb']] | None = None, images_path: Path | str | None = None, colmap_path: Path | str | None = None, sampling_masks_path: Path | str | None = None)[source]¶
- nerfbaselines.datasets.colmap.new_cameras(*, poses: ndarray, intrinsics: ndarray, camera_types: ndarray, distortion_parameters: ndarray, image_sizes: ndarray, nears_fars: ndarray | None = None, metadata: ndarray | None = None) Cameras [source]¶
- nerfbaselines.datasets.colmap.padded_stack(tensors: ndarray | Tuple[ndarray, ...] | List[ndarray]) ndarray [source]¶
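padded_stack stacks arrays whose shapes differ, which is useful e.g. for per-camera distortion parameter vectors of varying length. A plausible numpy re-implementation, assuming zero-padding up to the largest shape along each axis (the library version may differ in padding value or axis handling):

```python
import numpy as np

def padded_stack(tensors):
    """Zero-pad a sequence of arrays to a common shape, then stack them
    along a new leading axis. A single ndarray is returned unchanged."""
    if isinstance(tensors, np.ndarray):
        return tensors
    max_shape = tuple(max(t.shape[d] for t in tensors)
                      for d in range(tensors[0].ndim))
    padded = []
    for t in tensors:
        pad = [(0, m - s) for s, m in zip(t.shape, max_shape)]
        padded.append(np.pad(t, pad, mode="constant"))
    return np.stack(padded)
```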
- nerfbaselines.datasets.colmap.read_cameras_binary(path_to_model_file)[source]¶
- see: src/base/reconstruction.cc
void Reconstruction::WriteCamerasBinary(const std::string& path)
void Reconstruction::ReadCamerasBinary(const std::string& path)
- nerfbaselines.datasets.colmap.read_cameras_text(path)[source]¶
- see: src/base/reconstruction.cc
void Reconstruction::WriteCamerasText(const std::string& path)
void Reconstruction::ReadCamerasText(const std::string& path)
- nerfbaselines.datasets.colmap.read_images_binary(path_to_model_file)[source]¶
- see: src/base/reconstruction.cc
void Reconstruction::ReadImagesBinary(const std::string& path)
void Reconstruction::WriteImagesBinary(const std::string& path)
- nerfbaselines.datasets.colmap.read_images_text(path)[source]¶
- see: src/base/reconstruction.cc
void Reconstruction::ReadImagesText(const std::string& path)
void Reconstruction::WriteImagesText(const std::string& path)
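The read_cameras_binary helper parses COLMAP's cameras.bin file. As a sketch of that on-disk layout (a little-endian uint64 camera count, then per camera an int32 id, int32 model id, uint64 width, uint64 height, and the model's parameters as float64; model ids and parameter counts follow COLMAP's convention, e.g. PINHOLE = 1 with fx, fy, cx, cy). Note the library returns Camera records keyed by id rather than the plain tuples used here:

```python
import io
import struct

# Parameter counts for a subset of COLMAP camera models:
# SIMPLE_PINHOLE=0, PINHOLE=1, SIMPLE_RADIAL=2, RADIAL=3,
# OPENCV=4, OPENCV_FISHEYE=5
NUM_PARAMS = {0: 3, 1: 4, 2: 4, 3: 5, 4: 8, 5: 8}

def write_cameras_binary(cameras, fh):
    """Write (id, model_id, width, height, params) tuples in COLMAP's layout."""
    fh.write(struct.pack("<Q", len(cameras)))
    for cam_id, model_id, width, height, params in cameras:
        fh.write(struct.pack("<iiQQ", cam_id, model_id, width, height))
        fh.write(struct.pack("<%dd" % len(params), *params))

def read_cameras_binary(fh):
    """Parse cameras back out of the binary stream."""
    (num_cameras,) = struct.unpack("<Q", fh.read(8))
    cameras = []
    for _ in range(num_cameras):
        cam_id, model_id, width, height = struct.unpack("<iiQQ", fh.read(24))
        n = NUM_PARAMS[model_id]
        params = list(struct.unpack("<%dd" % n, fh.read(8 * n)))
        cameras.append((cam_id, model_id, width, height, params))
    return cameras
```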
nerfbaselines.datasets.llff¶
- nerfbaselines.datasets.llff.camera_model_to_int(camera_model: Literal['pinhole', 'opencv', 'opencv_fisheye', 'full_opencv']) int [source]¶
nerfbaselines.datasets.mipnerf360¶
- nerfbaselines.datasets.mipnerf360.download_mipnerf360_dataset(path: str, output: Path | str)[source]¶
- nerfbaselines.datasets.mipnerf360.get_default_viewer_transform(poses, dataset_type: str | None) Tuple[ndarray, ndarray] [source]¶
- nerfbaselines.datasets.mipnerf360.get_scene_scale(cameras: Cameras, dataset_type: Literal['object-centric', 'forward-facing'] | None)[source]¶
- nerfbaselines.datasets.mipnerf360.load_colmap_dataset(path: Path | str, split: str | None = None, *, test_indices: Indices | None = None, features: FrozenSet[Literal['color', 'points3D_xyz', 'points3D_rgb']] | None = None, images_path: Path | str | None = None, colmap_path: Path | str | None = None, sampling_masks_path: Path | str | None = None)[source]¶
nerfbaselines.datasets.nerfonthego¶
- nerfbaselines.datasets.nerfonthego.load_colmap_dataset(path: Path | str, split: str | None = None, *, test_indices: Indices | None = None, features: FrozenSet[Literal['color', 'points3D_xyz', 'points3D_rgb']] | None = None, images_path: Path | str | None = None, colmap_path: Path | str | None = None, sampling_masks_path: Path | str | None = None)[source]¶
- nerfbaselines.datasets.nerfonthego.load_nerfonthego_dataset(path: str, split: str, **kwargs) UnloadedDataset [source]¶
nerfbaselines.datasets.nerfstudio¶
- nerfbaselines.datasets.nerfstudio.camera_model_to_int(camera_model: Literal['pinhole', 'opencv', 'opencv_fisheye', 'full_opencv']) int [source]¶
- nerfbaselines.datasets.nerfstudio.download_capture_name(output: Path, file_id_or_zip_url)[source]¶
Download a specific capture for a given dataset and capture name.
- nerfbaselines.datasets.nerfstudio.download_nerfstudio_dataset(path: str, output: Path | str)[source]¶
Download data in the Nerfstudio format. If you are interested in the Nerfstudio Dataset subset from the SIGGRAPH 2023 paper, you can obtain it by using --capture-name nerfstudio-dataset or by visiting Google Drive directly at: https://drive.google.com/drive/folders/19TV6kdVGcmg3cGZ1bNIUnBBMD-iQjRbG?usp=drive_link.
- nerfbaselines.datasets.nerfstudio.get_default_viewer_transform(poses, dataset_type: str | None) Tuple[ndarray, ndarray] [source]¶
- nerfbaselines.datasets.nerfstudio.get_scene_scale(cameras: Cameras, dataset_type: Literal['object-centric', 'forward-facing'] | None)[source]¶
- nerfbaselines.datasets.nerfstudio.get_train_eval_split_all(image_filenames: List) Tuple[ndarray, ndarray] [source]¶
Get the train/eval split where all indices are used for both train and eval.
- Parameters:
image_filenames – list of image filenames
- nerfbaselines.datasets.nerfstudio.get_train_eval_split_filename(image_filenames: List) Tuple[ndarray, ndarray] [source]¶
Get the train/eval split based on the filename of the images.
- Parameters:
image_filenames – list of image filenames
- nerfbaselines.datasets.nerfstudio.get_train_eval_split_fraction(image_filenames: List, train_split_fraction: float) Tuple[ndarray, ndarray] [source]¶
Get the train/eval split based on the number of images and the train split fraction.
- Parameters:
image_filenames – list of image filenames
train_split_fraction – fraction of images to use for training
- nerfbaselines.datasets.nerfstudio.get_train_eval_split_interval(image_filenames: List, eval_interval: float) Tuple[ndarray, ndarray] [source]¶
Get the train/eval split based on a fixed sampling interval over the images.
- Parameters:
image_filenames – list of image filenames
eval_interval – every eval_interval-th image is used for eval
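The fraction-based split presumably follows Nerfstudio's convention of spreading the train indices evenly over the image list and assigning the remainder to eval. A sketch under that assumption (the function name is local and the exact rounding is a guess):

```python
import math
import numpy as np

def train_eval_split_fraction(image_filenames, train_split_fraction):
    """Return (train_indices, eval_indices) spread evenly over the images."""
    num_images = len(image_filenames)
    num_train = math.ceil(num_images * train_split_fraction)
    i_all = np.arange(num_images)
    # Evenly spaced train indices; everything left over goes to eval.
    i_train = np.linspace(0, num_images - 1, num_train, dtype=int)
    i_eval = np.setdiff1d(i_all, i_train)
    return i_train, i_eval
```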
- nerfbaselines.datasets.nerfstudio.grab_file_id(zip_url: str) str [source]¶
Get the file id from the Google Drive zip URL.
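Google Drive share links have the form https://drive.google.com/file/d/<FILE_ID>/view?..., so a plausible implementation simply takes the second-to-last path component. This parsing rule is an assumption based on that URL shape, not the library source:

```python
def grab_file_id(zip_url: str) -> str:
    """Extract the file id from a Google Drive share URL of the
    .../file/d/<FILE_ID>/view form (assumed format)."""
    return zip_url.split("/")[-2]
```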
- nerfbaselines.datasets.nerfstudio.load_from_json(filename: Path)[source]¶
Load a dictionary from a JSON file.
- Parameters:
filename – The filename to load from.
- nerfbaselines.datasets.nerfstudio.load_nerfstudio_dataset(path: Path | str, split: str, downscale_factor: int | None = None, features: FrozenSet[Literal['color', 'points3D_xyz', 'points3D_rgb']] | None = None, **kwargs)[source]¶
- nerfbaselines.datasets.nerfstudio.new_cameras(*, poses: ndarray, intrinsics: ndarray, camera_types: ndarray, distortion_parameters: ndarray, image_sizes: ndarray, nears_fars: ndarray | None = None, metadata: ndarray | None = None) Cameras [source]¶
- nerfbaselines.datasets.nerfstudio.read_images_binary(path_to_model_file)[source]¶
- see: src/base/reconstruction.cc
void Reconstruction::ReadImagesBinary(const std::string& path)
void Reconstruction::WriteImagesBinary(const std::string& path)
- nerfbaselines.datasets.nerfstudio.read_images_text(path)[source]¶
- see: src/base/reconstruction.cc
void Reconstruction::ReadImagesText(const std::string& path)
void Reconstruction::WriteImagesText(const std::string& path)
- nerfbaselines.datasets.nerfstudio.read_points3D_binary(path_to_model_file)[source]¶
- see: src/base/reconstruction.cc
void Reconstruction::ReadPoints3DBinary(const std::string& path)
void Reconstruction::WritePoints3DBinary(const std::string& path)
nerfbaselines.datasets.phototourism¶
- class nerfbaselines.datasets.phototourism.EvaluationProtocol(*args, **kwargs)[source]¶
Bases: Protocol
- evaluate(predictions: Iterable[RenderOutput], dataset: Dataset) Iterable[Dict[str, float | int]] [source]¶
- render(method: Method, dataset: Dataset) Iterable[RenderOutput] [source]¶
- class nerfbaselines.datasets.phototourism.NerfWEvaluationProtocol[source]¶
Bases: EvaluationProtocol
- evaluate(predictions: Iterable[RenderOutput], dataset: Dataset) Iterable[Dict[str, float | int]] [source]¶
- render(method: Method, dataset: Dataset) Iterable[RenderOutput] [source]¶
- nerfbaselines.datasets.phototourism.download_phototourism_dataset(path: str, output: Path | str)[source]¶
- nerfbaselines.datasets.phototourism.get_default_viewer_transform(poses, dataset_type: str | None) Tuple[ndarray, ndarray] [source]¶
- nerfbaselines.datasets.phototourism.get_scene_scale(cameras: Cameras, dataset_type: Literal['object-centric', 'forward-facing'] | None)[source]¶
- nerfbaselines.datasets.phototourism.horizontal_half_dataset(dataset: Dataset, left: bool = True) Dataset [source]¶
- nerfbaselines.datasets.phototourism.image_to_srgb(tensor, dtype, color_space: str | None = None, allow_alpha: bool = False, background_color: ndarray | None = None)[source]¶
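image_to_srgb converts rendered images into sRGB for evaluation. The standard linear-to-sRGB transfer function it presumably applies when color_space indicates linear input (alpha and background_color handling in the library are not shown here):

```python
import numpy as np

def linear_to_srgb(x):
    """Standard IEC 61966-2-1 linear-to-sRGB transfer function,
    applied elementwise to values clipped into [0, 1]."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1 / 2.4) - 0.055)
```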
- nerfbaselines.datasets.phototourism.load_colmap_dataset(path: Path | str, split: str | None = None, *, test_indices: Indices | None = None, features: FrozenSet[Literal['color', 'points3D_xyz', 'points3D_rgb']] | None = None, images_path: Path | str | None = None, colmap_path: Path | str | None = None, sampling_masks_path: Path | str | None = None)[source]¶
nerfbaselines.datasets.tanksandtemples¶
- nerfbaselines.datasets.tanksandtemples.download_tanksandtemples_dataset(path: str, output: Path | str) None [source]¶
- nerfbaselines.datasets.tanksandtemples.get_default_viewer_transform(poses, dataset_type: str | None) Tuple[ndarray, ndarray] [source]¶
- nerfbaselines.datasets.tanksandtemples.get_scene_scale(cameras: Cameras, dataset_type: Literal['object-centric', 'forward-facing'] | None)[source]¶
- nerfbaselines.datasets.tanksandtemples.load_colmap_dataset(path: Path | str, split: str | None = None, *, test_indices: Indices | None = None, features: FrozenSet[Literal['color', 'points3D_xyz', 'points3D_rgb']] | None = None, images_path: Path | str | None = None, colmap_path: Path | str | None = None, sampling_masks_path: Path | str | None = None)[source]¶
- nerfbaselines.datasets.tanksandtemples.load_tanksandtemples_dataset(path: Path | str, split: str, downscale_factor: int = 2, **kwargs) UnloadedDataset [source]¶