benchopt.run_benchmark(benchmark_path, solver_names=None, forced_solvers=(), dataset_names=None, objective_filters=None, max_runs=10, n_repetitions=1, timeout=None, n_jobs=1, slurm=None, plot_result=True, display=True, html=True, collect=False, show_progress=True, pdb=False, output_name=None)

Run full benchmark.

Parameters

benchmark : benchopt.Benchmark object

Object representing the benchmark.

solver_names : list | None

List of solver names to include in the benchmark. If None, all available solvers are run.

forced_solvers : list | None

List of solvers to include in the benchmark and for which recomputation is forced.

dataset_names : list | None

List of dataset names to include. If None, all available datasets are used.

objective_filters : list | None

Filters to select specific objective parameters. If None, all objective parameters are tested.


max_runs : int

The maximum number of solver runs to perform to estimate the convergence curve.


n_repetitions : int

The number of repetitions to run. Defaults to 1.


timeout : float | None

The maximum duration in seconds of each solver run.


n_jobs : int

Maximal number of workers to use to run the benchmark in parallel.

slurm : Path | None

If not None, launch the job on a SLURM cluster using this file to get the cluster config parameters.


plot_result : bool

If set to True (default), generate the result plots and save them in the benchmark directory.


display : bool

If set to True (default), open the result plots at the end of the run; otherwise, simply save them.


html : bool

If set to True (default), display the result plots as HTML; otherwise, use matplotlib figures.


collect : bool

If set to True, only collect the results already in the cache and ignore results not yet computed. Defaults to False.


show_progress : bool

If set to True, display the progress of the benchmark.


pdb : bool

If set to True, open a debugger on error.


output_name : str

Filename for the parquet output. If given, the results are stored at <BENCHMARK>/outputs/<filename>.parquet.
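The parameters above can be combined into a single call. A minimal usage sketch follows; the benchmark path, solver name, and dataset name are hypothetical placeholders, and only run_benchmark and its keyword names come from the signature above.

```python
from pathlib import Path

# Hypothetical benchmark directory (replace with a real benchopt benchmark).
benchmark_path = Path("./my_benchmark")

run_kwargs = dict(
    solver_names=["solver-a"],    # hypothetical solver name
    dataset_names=["simulated"],  # hypothetical dataset name
    max_runs=25,                  # at most 25 points per convergence curve
    n_repetitions=5,              # repeat each run 5 times
    timeout=100,                  # stop a solver run after 100 seconds
    n_jobs=4,                     # run up to 4 workers in parallel
    output_name="my_run",         # -> <BENCHMARK>/outputs/my_run.parquet
)

# With benchopt installed, the actual call would be:
# import benchopt
# df = benchopt.run_benchmark(benchmark_path, **run_kwargs)
```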

Returns

df : instance of pandas.DataFrame

The benchmark results. If multiple metrics were computed, each one is stored in a separate column. If the number of metrics computed by the objective is not the same for all parameters, the missing data is set to NaN.
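The "one column per metric, NaN when missing" layout means the returned frame should be aggregated with NaN-aware reductions. A small illustration with a mocked frame (the solver and metric column names here are hypothetical, chosen only to mirror that layout):

```python
import numpy as np
import pandas as pd

# Mock of a results frame shaped like run_benchmark's output:
# one column per metric, NaN where a metric was not computed.
df = pd.DataFrame({
    "solver_name": ["solverA", "solverA", "solverB"],
    "objective_value": [1.2, 0.8, 1.5],
    "objective_grad_norm": [0.3, 0.1, np.nan],  # metric missing for solverB
})

# Pandas reductions like min() skip NaN by default, so aggregation
# stays correct even when some metrics are missing.
best = df.groupby("solver_name")["objective_value"].min()
print(best["solverA"])  # -> 0.8
```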