nerfbaselines.evaluate

nerfbaselines.evaluate.compute_metrics(pred: ndarray, gt: ndarray, *, run_extras: bool = False, reduce: Literal[True] = True) → Dict[str, float][source]
nerfbaselines.evaluate.compute_metrics(pred: ndarray, gt: ndarray, *, run_extras: bool = False, reduce: Literal[False]) → Dict[str, ndarray]
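
With reduce=True the function returns one scalar per metric; with reduce=False it returns per-image arrays. A minimal sketch of what the reduced dictionary might look like, using PSNR as an illustrative metric (the actual metric names and set are not specified on this page, so "psnr" here is an assumption):

```python
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray) -> float:
    # Illustrative PSNR for float images in [0, 1]; not the library's implementation.
    mse = float(np.mean((pred - gt) ** 2))
    return float("inf") if mse == 0 else float(-10.0 * np.log10(mse))

gt = np.zeros((4, 4, 3), dtype=np.float64)
pred = gt + 0.1  # uniform error of 0.1 -> MSE = 0.01
metrics = {"psnr": psnr(pred, gt)}  # reduce=True shape: Dict[str, float], ≈ 20 dB here
```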
nerfbaselines.evaluate.evaluate(predictions: str | Path, output: Path, disable_extra_metrics: bool | None = None, description: str = 'evaluating')[source]

Evaluate a set of predictions.

Parameters:
  • predictions – Path to a directory containing the predictions.

  • output – Path to a json file where the results will be written.

  • disable_extra_metrics – If True, skip the evaluation of metrics requiring extra dependencies.

  • description – Description of the evaluation, used for the progress bar.

Returns:

A dictionary containing the results.
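
The contract above (read predictions, aggregate per-image metrics, write the results to a JSON file, and return them as a dictionary) can be sketched roughly as follows. This is an assumption-laden illustration, not the library's code: the metric ("mse") and the results schema are hypothetical.

```python
import json
import tempfile
from pathlib import Path

import numpy as np

def evaluate_sketch(pred_images, gt_images, output: Path) -> dict:
    # Average a per-image metric, write the results dict to `output` as JSON,
    # and return it -- mirroring the documented inputs and outputs.
    per_image = [float(np.mean((p - g) ** 2)) for p, g in zip(pred_images, gt_images)]
    results = {"metrics": {"mse": float(np.mean(per_image))}}
    output.write_text(json.dumps(results, indent=2))
    return results

# Usage with dummy data:
preds = [np.full((2, 2, 3), 0.5), np.full((2, 2, 3), 0.25)]
gts = [np.zeros((2, 2, 3)), np.zeros((2, 2, 3))]
with tempfile.TemporaryDirectory() as d:
    out = Path(d) / "results.json"
    results = evaluate_sketch(preds, gts, out)
```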

nerfbaselines.evaluate.get_predictions_hashes(predictions: Path, description: str = 'hashing predictions')[source]
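No description is given for this function; judging by the signature, it computes content hashes for the files in a predictions directory. A hypothetical sketch of such a helper (the hash algorithm and return shape are assumptions, not the library's API):

```python
import hashlib
import tempfile
from pathlib import Path

def hash_predictions(predictions: Path) -> dict:
    # Hypothetical: one SHA-256 digest per file, iterated in sorted
    # order so the result is deterministic across runs.
    return {
        f.name: hashlib.sha256(f.read_bytes()).hexdigest()
        for f in sorted(predictions.rglob("*"))
        if f.is_file()
    }

# Usage with a throwaway directory:
with tempfile.TemporaryDirectory() as d:
    Path(d, "color.png").write_bytes(b"\x89PNG")
    hashes = hash_predictions(Path(d))
```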
nerfbaselines.evaluate.test_extra_metrics()[source]

Test whether the extra metrics are available.
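
Extra metrics typically depend on optional packages (see disable_extra_metrics above). A minimal sketch of how such an availability check could work, assuming the extras are ordinary importable packages; the package names "torch" and "lpips" are illustrative guesses, not confirmed dependencies:

```python
import importlib.util

def extra_metrics_available(packages=("torch", "lpips")) -> bool:
    # Hypothetical check: report whether every optional dependency
    # needed by the extra metrics can be found on the import path.
    return all(importlib.util.find_spec(p) is not None for p in packages)
```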