benchopt.run_benchmark

benchopt.run_benchmark(benchmark, solver_names=None, forced_solvers=None, dataset_names=None, objective_filters=None, max_runs=10, n_repetitions=1, timeout=100, n_jobs=1, slurm=None, plot_result=True, html=True, show_progress=True, pdb=False, output='None')

Run full benchmark.

Parameters
benchmark : benchopt.Benchmark object

Object to represent the benchmark.

solver_names : list | None

List of solvers to include in the benchmark. If None, all available solvers are run.

forced_solvers : list | None

List of solvers to include in the benchmark, for which recomputation of the results is forced.

dataset_names : list | None

List of datasets to include. If None, all available datasets are used.

objective_filters : list | None

Filters to select specific objective parameters. If None, all objective parameters are tested.

max_runs : int

The maximum number of solver runs to perform to estimate the convergence curve.

n_repetitions : int

The number of repetitions to run. Defaults to 1.

timeout : float

The maximum duration in seconds of each solver run.

n_jobs : int

Maximal number of workers used to run the benchmark in parallel.

slurm : Path | None

If not None, launch the runs on a SLURM cluster, using the given file to read the cluster configuration parameters.

plot_result : bool

If set to True (default), display the result plots and save them in the benchmark directory.

html : bool

If set to True (default), display the result plot as an HTML page; otherwise, use matplotlib figures.

show_progress : bool

If set to True, display the progress of the benchmark.

pdb : bool

If set to True, open a debugger on error.

output : str

Filename for the parquet output. If given, the results will be stored at <BENCHMARK>/outputs/<filename>.parquet.

Returns
df : instance of pandas.DataFrame

The benchmark results. If multiple metrics were computed, each one is stored in a separate column. If the number of metrics computed by the objective is not the same for all parameters, the missing data is set to NaN.
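
Examples

A minimal usage sketch. The benchmark path, solver names, and dataset name below are hypothetical placeholders, and the import path for the Benchmark class is assumed to be benchopt.benchmark; adjust both to your setup and benchopt version.

    from benchopt import run_benchmark
    from benchopt.benchmark import Benchmark  # import path assumed

    # Load the benchmark from its directory (hypothetical path).
    benchmark = Benchmark("./my_benchmark")

    # Run two solvers on one dataset, capping each solver at 50 runs
    # for the convergence curve and 100 seconds of runtime.
    df = run_benchmark(
        benchmark,
        solver_names=["solver-a", "solver-b"],  # hypothetical solver names
        dataset_names=["simulated"],            # hypothetical dataset name
        max_runs=50,
        timeout=100,
        n_jobs=1,
        plot_result=False,
    )

    # One column per metric computed by the objective.
    print(df.columns)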