evaluate_run

lightning_ir.base.validation_utils.evaluate_run(run: DataFrame, qrels: DataFrame, measures: Sequence[str]) → Dict[str, float]

Convenience function to evaluate a run against qrels using a set of measures.

Parameters:
  • run (pd.DataFrame) – Parsed TREC run

  • qrels (pd.DataFrame) – Parsed TREC qrels

  • measures (Sequence[str]) – Measures to compute, given as ir-measures measure strings

Returns:

Computed metrics, keyed by measure string

Return type:

Dict[str, float]
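
A minimal usage sketch; the run and qrels column names below (query_id, doc_id, score, relevance) follow common TREC / ir-measures conventions and are assumptions, not prescribed by this page:

    import pandas as pd

    from lightning_ir.base.validation_utils import evaluate_run

    # Toy run: system scores per (query, document) pair.
    # Column names are assumed, following TREC run conventions.
    run = pd.DataFrame(
        {
            "query_id": ["q1", "q1", "q2"],
            "doc_id": ["d1", "d2", "d3"],
            "score": [1.2, 0.8, 0.5],
        }
    )

    # Toy qrels: relevance judgments per (query, document) pair.
    # Column names are assumed, following TREC qrels conventions.
    qrels = pd.DataFrame(
        {
            "query_id": ["q1", "q2"],
            "doc_id": ["d1", "d3"],
            "relevance": [1, 1],
        }
    )

    # Measures are given as ir-measures measure strings.
    metrics = evaluate_run(run, qrels, measures=["nDCG@10", "RR@10"])
    print(metrics)  # e.g. {"nDCG@10": 1.0, "RR@10": 1.0}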