API references#

Here is a list of the Python classes and functions available to construct a new benchmark with benchopt:

List of base classes:#


benchopt.BaseObjective

Base class to define an objective function in a benchmark.


benchopt.BaseDataset

Base class to define a dataset in a benchmark.


benchopt.BaseSolver

Base class to define a solver wrapper in Benchopt.

Benchopt run hooks#

benchopt.BaseObjective.skip(): hook to allow skipping some objective configurations. It is executed before set_data, to skip the objective if it is not compatible with the dataset. It takes the same arguments as set_data.

benchopt.BaseSolver.skip(): hook to allow skipping some solver configurations. It is executed right before set_objective, to skip a solver that is not compatible with the objective and/or dataset parameters. It takes the same arguments as set_objective. Refer to Advanced usage for an example.
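Both skip hooks share the same contract: return a pair (should_skip, reason). A standalone sketch, not importing benchopt, with a hypothetical reg parameter standing in for whatever set_objective receives:

```python
# Standalone sketch of the skip() contract: skip() receives the same
# arguments as set_objective (or set_data for objectives) and returns
# a pair (should_skip, reason). The reg criterion is hypothetical.
class Solver:
    def skip(self, X, y, reg):
        if reg <= 0:
            return True, "solver requires a strictly positive reg"
        return False, None

solver = Solver()
print(solver.skip(X=[[1.0]], y=[0.0], reg=0))
# (True, 'solver requires a strictly positive reg')
```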

benchopt.BaseSolver.get_next(): hook called repeatedly after run to change the sampling points for a given solver. It is called with the previous stop_val (i.e., the tolerance or the number of iterations) and returns the value for the next run. Refer to Advanced usage for an example.
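A sketch of one possible get_next strategy — geometric growth of the iteration budget, so that sampling points are roughly evenly spaced on a log scale. The growth factor is illustrative, not benchopt's default:

```python
# Sketch of a get_next() strategy: grow the previous stop_val
# (here, a number of iterations) geometrically, always advancing
# by at least one so the sequence never stalls.
def get_next(stop_val):
    return max(stop_val + 1, int(1.5 * stop_val))

vals = [1]
for _ in range(5):
    vals.append(get_next(vals[-1]))
print(vals)  # [1, 2, 3, 4, 6, 9]
```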

benchopt.BaseSolver.warm_up(): hook called once before the solver runs. It is typically used to trigger JIT compilation of the solver, so that compilation time is not included in the reported timings.
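A standalone sketch of the pattern (the one-off compilation cost is simulated with time.sleep, purely for illustration): one cheap call to run inside warm_up pays the cost up front, so timed runs do not include it.

```python
import time

# Standalone sketch of warm_up(): a cheap run() call triggers the
# (simulated) one-off compilation so later timed runs are fast.
class Solver:
    def run(self, n_iter):
        if not getattr(self, "_compiled", False):
            time.sleep(0.05)  # stands in for a JIT compilation cost
            self._compiled = True

    def warm_up(self):
        self.run(n_iter=1)

solver = Solver()
solver.warm_up()
t0 = time.perf_counter()
solver.run(n_iter=100)
elapsed = time.perf_counter() - t0
print(elapsed < 0.05)  # True: compilation already happened in warm_up
```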

benchopt.BaseSolver.pre_run_hook(): hook called before each call to run, with the same arguments. It allows skipping computations that cannot be cached globally, such as precompilation of jitted JAX functions for different numbers of iterations.
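A standalone sketch, assuming run is parameterized by an n_iter budget: the hook builds (and caches) one "compiled" function per iteration count, mimicking the per-configuration precompilation described above. The cache key and the compiled function are hypothetical.

```python
# Standalone sketch of pre_run_hook(): it receives the same arguments
# as run() and prepares per-configuration state that cannot be cached
# globally, e.g. one compiled function per iteration count.
class Solver:
    def __init__(self):
        self._cache = {}

    def pre_run_hook(self, n_iter):
        if n_iter not in self._cache:
            # Hypothetical compilation step, keyed by n_iter.
            self._cache[n_iter] = lambda n=n_iter: f"compiled for {n} iterations"
        self._run_fn = self._cache[n_iter]

    def run(self, n_iter):
        return self._run_fn()

solver = Solver()
solver.pre_run_hook(10)
print(solver.run(10))  # compiled for 10 iterations
```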

Benchopt utils#

run_benchmark(benchmark[, solver_names, ...])

Run full benchmark.


safe_import_context()

Context used to manage import in benchmarks.
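A minimal stand-in sketching the idea behind benchopt's import-guard context (benchopt itself is not imported here; the real safe_import_context does more, but it does record failures in a failed_import attribute): a missing optional dependency inside the block marks the context as failed instead of crashing benchmark collection.

```python
# Minimal stand-in for an import-guard context: an ImportError raised
# inside the with-block is suppressed and recorded in failed_import.
class SafeImportContext:
    def __init__(self):
        self.failed_import = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is not None and issubclass(exc_type, ImportError):
            self.failed_import = True
            return True  # suppress the ImportError
        return False

with SafeImportContext() as import_ctx:
    import a_package_that_does_not_exist  # noqa: F401

print(import_ctx.failed_import)  # True
```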

plotting.plot_benchmark(fname, benchmark[, ...])

Plot convergence curve and bar chart for a given benchmark.


datasets.simulated.make_correlated_data(...)

Generate a linear regression with decaying correlation for the design matrix \(\rho^{|i-j|}\).
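As a sketch of what such simulated data looks like — assuming only NumPy and the \(\rho^{|i-j|}\) Toeplitz covariance stated above; the exact signature and defaults of benchopt's utility may differ:

```python
import numpy as np

# Sketch: regression design whose columns have Toeplitz covariance
# cov[i, j] = rho ** |i - j|, i.e. correlation decaying with distance.
rng = np.random.default_rng(0)
n_samples, n_features, rho = 100, 5, 0.6

idx = np.arange(n_features)
cov = rho ** np.abs(idx[:, None] - idx[None, :])

X = rng.multivariate_normal(np.zeros(n_features), cov, size=n_samples)
w = rng.standard_normal(n_features)
y = X @ w + 0.1 * rng.standard_normal(n_samples)
print(X.shape, y.shape)  # (100, 5) (100,)
```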


utils.profile()

Decorator to tell the line profiler which function to profile.