nerfbaselines.results

class nerfbaselines.results.DatasetInfo

Bases: TypedDict

default_metric: str
description: str
id: str
metrics: List[MetricInfo]
name: str
paper_authors: List[str]
paper_title: str
scenes: List[SceneInfo]

class nerfbaselines.results.MetricInfo

Bases: TypedDict

ascending: bool
description: str
id: str
name: str

class nerfbaselines.results.SceneInfo

Bases: TypedDict

id: str
name: str
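
The TypedDict classes above describe plain dictionaries, so instances are ordinary dicts. A minimal sketch of the expected shape (all ids, names, and other values below are illustrative placeholders, not entries from a real dataset):

    from nerfbaselines.results import DatasetInfo, MetricInfo, SceneInfo

    psnr: MetricInfo = {
        "id": "psnr",
        "name": "PSNR",
        "description": "Peak signal-to-noise ratio.",
        "ascending": True,  # assumed to mean larger values rank higher
    }
    garden: SceneInfo = {"id": "garden", "name": "Garden"}
    info: DatasetInfo = {
        "id": "example-dataset",  # placeholder id, not a registered dataset
        "name": "Example Dataset",
        "description": "An illustrative dataset entry.",
        "paper_title": "An Example Paper",
        "paper_authors": ["Jane Doe"],
        "default_metric": "psnr",
        "metrics": [psnr],
        "scenes": [garden],
    }
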
nerfbaselines.results.compile_dataset_results(results_path: Path | str, dataset: str, scenes: List[str] | None = None) → Dict[str, Any]

Compile the results.json data for a dataset from the results repository.

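A minimal usage sketch. The path and the dataset id below are placeholders; point results_path at a local checkout of the results repository and pass a registered dataset id (see get_benchmark_datasets()):

    from pathlib import Path
    from nerfbaselines.results import compile_dataset_results

    results_path = Path("./results")  # placeholder: local checkout of the results repository
    dataset_results = compile_dataset_results(results_path, "mipnerf360")  # placeholder dataset id

    # Optionally restrict the compilation to a subset of scenes (scene ids are placeholders).
    subset = compile_dataset_results(results_path, "mipnerf360", scenes=["garden", "bicycle"])
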
nerfbaselines.results.format_duration(seconds: float | None) → str

Format a duration given in seconds as a human-readable string.

nerfbaselines.results.format_memory(memory: float | None) → str

Format an amount of memory as a human-readable string.
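
A small sketch of the two formatting helpers. The exact output format is an implementation detail, and the unit expected by format_memory is an assumption here, so no concrete output is shown:

    from nerfbaselines.results import format_duration, format_memory

    print(format_duration(3725.0))  # a duration of 3725 seconds
    print(format_duration(None))    # None is accepted by the signature
    print(format_memory(2.5 * 1024**3))  # assumes a value in bytes; verify the expected unit
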
nerfbaselines.results.get_benchmark_datasets() → List[str]

Get the list of registered benchmark datasets.

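A minimal sketch listing the registered benchmark datasets:

    from nerfbaselines.results import get_benchmark_datasets

    for dataset in get_benchmark_datasets():
        print(dataset)
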
nerfbaselines.results.get_dataset_info(dataset: str) → DatasetInfo

Get the dataset info from the dataset repository.

Parameters:

dataset – The dataset name (type).

Returns:

The dataset info.

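A minimal sketch. The dataset id is a placeholder; use one of the ids returned by get_benchmark_datasets():

    from nerfbaselines.results import get_dataset_info

    info = get_dataset_info("mipnerf360")  # placeholder dataset id
    print(info["name"], info["default_metric"])
    for scene in info["scenes"]:
        print(scene["id"], scene["name"])
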
nerfbaselines.results.load_metrics_from_results(results: Dict) → Dict[str, List[float]]

Load the metrics from a results file (obtained from evaluation).

Parameters:

results – A dictionary of results.

Returns:

A dictionary containing the metrics.

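A minimal sketch, assuming a results file produced by evaluation is available locally (the file name is a placeholder):

    import json
    from nerfbaselines.results import load_metrics_from_results

    with open("results.json") as f:  # placeholder path to an evaluation results file
        results = json.load(f)

    metrics = load_metrics_from_results(results)
    for metric_id, values in metrics.items():
        print(metric_id, sum(values) / len(values))  # average the per-item values for illustration
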
nerfbaselines.results.render_markdown_dataset_results_table(results, method_links: Literal['paper', 'website', 'results', 'none'] = 'none') → str

Generate a markdown table from the output of the compile_dataset_results function.

Parameters:

results – Output of the nerfbaselines.results.compile_dataset_results function.

method_links – Which target, if any, each method name should link to: 'paper', 'website', 'results', or 'none' (no links).

Returns:

The rendered markdown table as a string.
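
A minimal end-to-end sketch combining this with compile_dataset_results (the path and dataset id are placeholders, as above):

    from pathlib import Path
    from nerfbaselines.results import compile_dataset_results, render_markdown_dataset_results_table

    dataset_results = compile_dataset_results(Path("./results"), "mipnerf360")  # placeholders
    table = render_markdown_dataset_results_table(dataset_results, method_links="website")
    print(table)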