nerfbaselines.datasets
- class nerfbaselines.datasets.Dataset(cameras: nerfbaselines.cameras.Cameras, file_paths: List[str], sampling_mask_paths: Optional[List[str]] = None, file_paths_root: Optional[pathlib.Path] = None, images: Optional[numpy.ndarray] = None, sampling_masks: Optional[numpy.ndarray] = None, points3D_xyz: Optional[numpy.ndarray] = None, points3D_rgb: Optional[numpy.ndarray] = None, metadata: Dict = <factory>, color_space: Optional[Literal['srgb', 'linear']] = None)[source]
Bases: object
- color_space: Literal['srgb', 'linear'] | None = None
- property expected_scene_scale
- file_paths: List[str]
- file_paths_root: Path | None = None
- images: ndarray | None = None
- metadata: Dict
- points3D_rgb: ndarray | None = None
- points3D_xyz: ndarray | None = None
- sampling_mask_paths: List[str] | None = None
- sampling_masks: ndarray | None = None
- exception nerfbaselines.datasets.MultiDatasetError(errors, message)[source]
Bases: DatasetNotFoundError
- nerfbaselines.datasets.load_dataset(path: Path | str, split: str, features: FrozenSet[Literal['color', 'points3D_xyz', 'points3D_rgb']]) → Dataset[source]
nerfbaselines.datasets.blender
nerfbaselines.datasets.colmap
- class nerfbaselines.datasets.colmap.Camera(id, model, width, height, params)
Bases: tuple
- class nerfbaselines.datasets.colmap.Image(id, qvec, tvec, camera_id, name, xys, point3D_ids)[source]
Bases: BaseImage
- class nerfbaselines.datasets.colmap.Point3D(id, xyz, rgb, error, image_ids, point2D_idxs)
Bases: tuple
- nerfbaselines.datasets.colmap.load_colmap_dataset(path: Path, images_path: Path | None = None, split: str | None = None, test_indices: Indices | None = None, features: FrozenSet[Literal['color', 'points3D_xyz', 'points3D_rgb']] | None = None)[source]
- nerfbaselines.datasets.colmap.read_cameras_binary(path_to_model_file)[source]
- see: src/base/reconstruction.cc
  void Reconstruction::WriteCamerasBinary(const std::string& path)
  void Reconstruction::ReadCamerasBinary(const std::string& path)
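The standard COLMAP cameras.bin layout (as used by COLMAP's own read_write_model.py) is a uint64 camera count followed by, per camera, an int32 id, int32 model id, uint64 width, uint64 height, and the model's parameters as doubles. A minimal reader sketch under that assumption — the function and table names here are illustrative, not the library's API:

```python
import struct
from collections import namedtuple

Camera = namedtuple("Camera", ["id", "model", "width", "height", "params"])

# Double-precision parameter counts for a subset of COLMAP camera models:
# 0 = SIMPLE_PINHOLE (f, cx, cy), 1 = PINHOLE (fx, fy, cx, cy),
# 2 = SIMPLE_RADIAL (f, cx, cy, k), 4 = OPENCV (fx, fy, cx, cy, k1, k2, p1, p2).
CAMERA_MODEL_NUM_PARAMS = {0: 3, 1: 4, 2: 4, 4: 8}

def read_cameras_binary_sketch(path):
    """Read a COLMAP cameras.bin file into a dict mapping camera id -> Camera."""
    cameras = {}
    with open(path, "rb") as f:
        (num_cameras,) = struct.unpack("<Q", f.read(8))
        for _ in range(num_cameras):
            cam_id, model_id, width, height = struct.unpack("<iiQQ", f.read(24))
            n = CAMERA_MODEL_NUM_PARAMS[model_id]
            params = struct.unpack("<" + "d" * n, f.read(8 * n))
            cameras[cam_id] = Camera(cam_id, model_id, width, height, params)
    return cameras
```

Camera models outside the table above would need their parameter counts added before this sketch could read them.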
- nerfbaselines.datasets.colmap.read_cameras_text(path)[source]
- see: src/base/reconstruction.cc
  void Reconstruction::WriteCamerasText(const std::string& path)
  void Reconstruction::ReadCamerasText(const std::string& path)
- nerfbaselines.datasets.colmap.read_images_binary(path_to_model_file)[source]
- see: src/base/reconstruction.cc
  void Reconstruction::ReadImagesBinary(const std::string& path)
  void Reconstruction::WriteImagesBinary(const std::string& path)
- nerfbaselines.datasets.colmap.read_images_text(path)[source]
- see: src/base/reconstruction.cc
  void Reconstruction::ReadImagesText(const std::string& path)
  void Reconstruction::WriteImagesText(const std::string& path)
nerfbaselines.datasets.mipnerf360
- nerfbaselines.datasets.mipnerf360.load_colmap_dataset(path: Path, images_path: Path | None = None, split: str | None = None, test_indices: Indices | None = None, features: FrozenSet[Literal['color', 'points3D_xyz', 'points3D_rgb']] | None = None)[source]
nerfbaselines.datasets.nerfstudio
- nerfbaselines.datasets.nerfstudio.download_capture_name(output: Path, file_id_or_zip_url)[source]
Download a specific capture for a given dataset and capture name.
- nerfbaselines.datasets.nerfstudio.download_nerfstudio_dataset(path: str, output: Path)[source]
Download data in the Nerfstudio format. If you are interested in the Nerfstudio Dataset subset from the SIGGRAPH 2023 paper, you can obtain it by using --capture-name nerfstudio-dataset or by visiting Google Drive directly at: https://drive.google.com/drive/folders/19TV6kdVGcmg3cGZ1bNIUnBBMD-iQjRbG?usp=drive_link.
- nerfbaselines.datasets.nerfstudio.get_train_eval_split_all(image_filenames: List) → Tuple[ndarray, ndarray][source]
Get the train/eval split where all indices are used for both train and eval.
- Parameters:
image_filenames – list of image filenames
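Since this split uses every index on both sides, the behavior can be sketched in a few lines (the function name is illustrative, not the library's API):

```python
import numpy as np

def train_eval_split_all_sketch(image_filenames):
    """Return (train_indices, eval_indices) where both cover every image."""
    idx = np.arange(len(image_filenames))
    return idx, idx.copy()
```

This split is mainly useful when evaluating on the training views themselves, e.g. to measure fitting quality rather than generalization.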
- nerfbaselines.datasets.nerfstudio.get_train_eval_split_filename(image_filenames: List) → Tuple[ndarray, ndarray][source]
Get the train/eval split based on the filename of the images.
- Parameters:
image_filenames – list of image filenames
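The docs only say the split comes from filenames. One plausible convention, assumed here purely for illustration (this page does not specify it), is an explicit `eval_` prefix on held-out images:

```python
import os
import numpy as np

def train_eval_split_filename_sketch(image_filenames):
    """Split by filename: basenames starting with 'eval_' go to eval,
    everything else to train. (The prefix convention is an assumption.)"""
    names = [os.path.basename(str(f)) for f in image_filenames]
    eval_idx = np.array([i for i, n in enumerate(names) if n.startswith("eval_")], dtype=int)
    train_idx = np.array([i for i, n in enumerate(names) if not n.startswith("eval_")], dtype=int)
    return train_idx, eval_idx
```

Consult the actual source ([source] link above) for the exact naming convention the library expects.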
- nerfbaselines.datasets.nerfstudio.get_train_eval_split_fraction(image_filenames: List, train_split_fraction: float) → Tuple[ndarray, ndarray][source]
Get the train/eval split fraction based on the number of images and the train split fraction.
- Parameters:
image_filenames – list of image filenames
train_split_fraction – fraction of images to use for training
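Only the fraction semantics come from the docs above; how the training subset is chosen is not stated. A sketch assuming evenly spaced training indices (a common choice for view synthesis, but an assumption here):

```python
import math
import numpy as np

def train_eval_split_fraction_sketch(image_filenames, train_split_fraction):
    """Pick ceil(n * fraction) evenly spaced indices for training; the rest go to eval.
    (Even spacing is an assumption; only the fraction semantics come from the docs.)"""
    n = len(image_filenames)
    n_train = math.ceil(n * train_split_fraction)
    train_idx = np.linspace(0, n - 1, n_train, dtype=int)  # evenly spaced, includes endpoints
    eval_idx = np.setdiff1d(np.arange(n), train_idx)
    return train_idx, eval_idx
```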
- nerfbaselines.datasets.nerfstudio.get_train_eval_split_interval(image_filenames: List, eval_interval: float) → Tuple[ndarray, ndarray][source]
Get the train/eval split based on the interval of the images.
- Parameters:
image_filenames – list of image filenames
eval_interval – interval of images to use for eval
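Reading "interval" as "every k-th image is held out for eval" (an assumption; the docs above do not pin down the exact rule), the split can be sketched as:

```python
import numpy as np

def train_eval_split_interval_sketch(image_filenames, eval_interval):
    """Every eval_interval-th image goes to eval; the rest to train.
    (The 'every k-th' reading of 'interval' is an assumption.)"""
    all_idx = np.arange(len(image_filenames))
    is_eval = all_idx % int(eval_interval) == 0
    return all_idx[~is_eval], all_idx[is_eval]
```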
- nerfbaselines.datasets.nerfstudio.grab_file_id(zip_url: str) → str[source]
Get the file id from the Google Drive zip URL.
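Google Drive share links for files usually take the shape https://drive.google.com/file/d/&lt;FILE_ID&gt;/view?...; a sketch of extracting the id under that assumption (the function name is illustrative):

```python
def grab_file_id_sketch(zip_url: str) -> str:
    """Extract <FILE_ID> from a https://drive.google.com/file/d/<FILE_ID>/... URL.
    (Assumes the /file/d/ URL shape; other Drive URL forms are not handled.)"""
    marker = "/file/d/"
    start = zip_url.index(marker) + len(marker)
    end = zip_url.find("/", start)
    return zip_url[start:] if end == -1 else zip_url[start:end]
```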
- nerfbaselines.datasets.nerfstudio.load_from_json(filename: Path)[source]
Load a dictionary from a JSON filename.
- Parameters:
filename – The filename to load from.
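Loading a dictionary from a JSON file is a thin wrapper over the standard library; a self-contained sketch (the function name is illustrative):

```python
import json
from pathlib import Path

def load_from_json_sketch(filename: Path) -> dict:
    """Load a dictionary from a JSON file, e.g. a Nerfstudio transforms.json."""
    with open(filename, "r", encoding="utf-8") as f:
        return json.load(f)
```

In the Nerfstudio data format, the file loaded this way is typically transforms.json, which holds the camera model and per-frame poses.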
- nerfbaselines.datasets.nerfstudio.load_nerfstudio_dataset(path: Path, split: str, downscale_factor: int | None = None, features: FrozenSet[Literal['color', 'points3D_xyz', 'points3D_rgb']] | None = None, **kwargs)[source]
nerfbaselines.datasets.tanksandtemples
- nerfbaselines.datasets.tanksandtemples.download_tanksandtemples_dataset(path: str, output: Path) → None[source]
- nerfbaselines.datasets.tanksandtemples.load_colmap_dataset(path: Path, images_path: Path | None = None, split: str | None = None, test_indices: Indices | None = None, features: FrozenSet[Literal['color', 'points3D_xyz', 'points3D_rgb']] | None = None)[source]