benchopt.BaseSolver#

class benchopt.BaseSolver(**parameters)#

A base class for solver wrappers in Benchopt.

Solvers that derive from this class should implement three methods:

  • set_objective(self, **objective_parameters): prepares the solver to be called on a given problem. **objective_parameters is the output of Objective.get_objective from the benchmark objective. In particular, for command-line solvers, this method should dump the parameters needed to compute the objective function to a file, so that the cost of writing data to disk does not affect the benchmark timings.

  • run(self, n_iter/tolerance/cb): performs the computation for the previously given objective function, after a call to set_objective. This method is the one timed in the benchmark and should not perform any operation unrelated to the optimization procedure.

  • get_result(self): returns all parameters of interest, as a dict. The output is passed to Objective.evaluate_result.
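
For illustration, here is a minimal sketch of a solver implementing these three methods, assuming a hypothetical least-squares benchmark whose Objective.get_objective returns a dict with keys X and y:

    import numpy as np

    from benchopt import BaseSolver


    class Solver(BaseSolver):
        name = "gradient-descent"  # hypothetical solver name
        sampling_strategy = "iteration"

        def set_objective(self, X, y):
            # Store the problem data provided by Objective.get_objective.
            self.X, self.y = X, y

        def run(self, n_iter):
            # Plain gradient descent on 0.5 * ||y - X @ beta||^2 for n_iter
            # iterations; only this method is timed by the benchmark.
            lr = 1.0 / np.linalg.norm(self.X, ord=2) ** 2
            beta = np.zeros(self.X.shape[1])
            for _ in range(n_iter):
                beta -= lr * self.X.T @ (self.X @ beta - self.y)
            self.beta = beta

        def get_result(self):
            # The returned dict is passed to Objective.evaluate_result.
            return dict(beta=self.beta)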

Optionally, the Solver can implement the following methods to change its behavior:

  • skip(self, **objective_parameters): decide if the solver is compatible with the given objective. Its inputs are the same as set_objective, and it should return a tuple (skip, reason) where skip is a boolean indicating whether the solver should be skipped for this configuration, and reason is a string used to explain why in the CLI output. If skip is False, reason should be None.

  • get_next(stop_val): Return the next iteration where the result will be evaluated. This is only necessary when sampling_strategy is set to ‘iteration’ or ‘tolerance’ and the default logarithmic spacing is not desired (see the sketch after this list).

  • warm_up(): User-specified warm-up step, called once before the runs. The time it takes to run this function is not taken into account. The function Solver.run_once can be used here for solvers that require jit compilation.

  • pre_run_hook(stop_val): Hook to run pre-run operations that are not timed in the benchmark. This is mostly needed to cache stop_val-dependent computations, for instance in jax when the number of iterations of a for loop changes.
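
For instance, if the default logarithmic spacing is not appropriate, a solver can override get_next. The sketch below, which assumes get_next can be declared as a static method, evaluates the curve every 10 iterations instead:

    class Solver(BaseSolver):
        sampling_strategy = "iteration"

        @staticmethod
        def get_next(stop_val):
            # Evaluate every 10 iterations instead of the default
            # logarithmic spacing.
            return stop_val + 10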

The Solver class also defines class attributes to specify how the benchmark curve should be sampled:

  • sampling_strategy: defines how the benchmark curve should be sampled. It should be one of the following strings: ‘iteration’, ‘tolerance’, ‘callback’ or ‘run_once’:

    • 'iteration': call the run method with a number of iterations max_iter increasing logarithmically, to get more and more precise points.

    • 'tolerance': call the run method with a tolerance decreasing logarithmically, to get more and more precise points.

    • 'callback': run is given a callable that should be called after each iteration or epoch; this callable periodically runs Objective.evaluate_result and returns False when the solver should stop (see the sketch below).

    • 'run_once': call the run method once to get a single point. This is typically used for ML benchmarks.

  • stopping_criterion: an instance of StoppingCriterion that defines when the solver should stop. If not set, a default stopping criterion is used depending on the sampling_strategy. See When are the solvers stopped? for available options.

Note that default values for these attributes can be set at the Objective level so that all solvers in a benchmark share the same default behavior. Typically, for ML benchmarks, all solvers can be run only once by setting sampling_strategy = 'run_once' in the benchmark’s Objective. More details on how the curves are sampled can be found in the performance_curves user guide.
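
As an illustration of the 'callback' strategy referenced above, here is a sketch of a run method. It reuses the set_objective and get_result of the first sketch and assumes the callback is invoked without arguments; depending on the benchopt version, the callback may instead expect the current iterate as argument, as described in the run method below.

    import numpy as np

    from benchopt import BaseSolver


    class Solver(BaseSolver):
        sampling_strategy = "callback"

        def run(self, cb):
            # The callback periodically evaluates the objective on the
            # current iterate (through get_result) and returns False once
            # the solver should stop.
            lr = 1.0 / np.linalg.norm(self.X, ord=2) ** 2
            beta = np.zeros(self.X.shape[1])
            while cb():
                beta -= lr * self.X.T @ (self.X @ beta - self.y)
            self.beta = beta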

abstractmethod get_result()#

Return the parameters computed by the previous run.

The parameters should be returned as a dictionary.

Returns:
parameters : dict

All quantities of interest to evaluate the objective.

pre_run_hook(stop_val)#

Hook to run pre-run operations.

This is mostly needed to cache stop_val-dependent computations, for instance in jax when the number of iterations of a for loop changes.

Parameters:
stop_val : int | float | callable

Value for the stopping criterion of the solver. It allows sampling the time/accuracy curve in the benchmark. If it is a callable, it should act as a callback: it is called once for each iteration with the current iterate parameters as argument, and it returns False when the computations should stop.
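
For instance, a sketch for a JAX solver whose compiled update loop depends on the number of iterations; self._one_step and self.x0 are hypothetical attributes assumed to be set up in set_objective:

    import jax

    from benchopt import BaseSolver


    class Solver(BaseSolver):
        sampling_strategy = "iteration"

        def pre_run_hook(self, n_iter):
            # Compile the update loop for this specific n_iter outside the
            # timed run, so jit compilation does not distort the curve.
            self._jitted_loop = jax.jit(
                lambda x0: jax.lax.fori_loop(0, n_iter, self._one_step, x0)
            )

        def run(self, n_iter):
            self.x = self._jitted_loop(self.x0)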

abstractmethod run(stop_val)#

Call the solver with the given stop_val.

This function should not return the parameters; they will be retrieved by a subsequent call to get_result.

If sampling_strategy is set to “callback”, then run should call the callback at each iteration. The callback computes the time and the objective function, and stores the relevant quantities for Benchopt. Otherwise, the stop_val parameter controls when the solver stops.

Parameters:
stop_val : int | float | callable

Value for the stopping criterion of the solver. It allows sampling the time/accuracy curve in the benchmark. If it is a callable, it should act as a callback: it is called once for each iteration with the current iterate parameters as argument, and it returns False when the computations should stop.
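
For the 'tolerance' strategy, here is a sketch of run wrapping scipy.optimize.minimize on a hypothetical least-squares problem; self.X and self.y are assumed to be stored by set_objective:

    import numpy as np
    from scipy.optimize import minimize

    from benchopt import BaseSolver


    class Solver(BaseSolver):
        sampling_strategy = "tolerance"

        def run(self, tolerance):
            # Benchopt calls run with a logarithmically decreasing tolerance
            # to sample the time/accuracy curve.
            def loss(beta):
                residual = self.X @ beta - self.y
                return 0.5 * residual @ residual

            x0 = np.zeros(self.X.shape[1])
            self.beta = minimize(loss, x0, tol=tolerance).x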

run_once(stop_val=1)#

Run the solver once, to cache warmup times (e.g. pre-compilations).

This function is intended to be called in the Solver.warm_up method so that a solver’s warm-up costs are not included in the benchmark timings.

Parameters:
stop_val : int or float (default: 1)

If sampling_strategy is ‘iteration’, this should be an integer corresponding to the number of iterations the solver is run for. If it is ‘callback’, it is an integer corresponding to the number of times the callback is called. If it is ‘tolerance’, it is a float which can be passed to call the solver on an easy-to-solve problem.

abstractmethod set_objective(**objective_dict)#

Prepare the objective for the solver.

Parameters:
**objective_parameters : dict

Dictionary obtained as the output of the method get_objective from the benchmark Objective.
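
As a sketch of the data-dumping pattern mentioned for command-line solvers, assuming hypothetical keys X, y and lmbd that must match the dict returned by Objective.get_objective:

    import tempfile

    import numpy as np

    from benchopt import BaseSolver


    class Solver(BaseSolver):
        def set_objective(self, X, y, lmbd):
            self.X, self.y, self.lmbd = X, y, lmbd
            # Dump the data once here so that the timed run only measures
            # the external solver, not the disk I/O.
            self.data_file = tempfile.NamedTemporaryFile(suffix=".npz")
            np.savez(self.data_file.name, X=X, y=y, lmbd=lmbd)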

skip(**objective_dict)#

Hook to decide if the Solver is compatible with the objective.

Parameters:
**objective_parameters : dict

Dictionary obtained as the output of the method get_objective from the benchmark Objective.

Returns:
skip : bool

Whether this solver should be skipped or not for this objective.

reason : str | None

The reason why it should be skipped for display purposes. If skip is False, the reason should be None.
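
Here is a sketch of this hook for a solver that, for instance, cannot handle an unregularized problem; the keys X, y and lmbd are hypothetical and must match the dict returned by Objective.get_objective:

    from benchopt import BaseSolver


    class Solver(BaseSolver):
        def skip(self, X, y, lmbd):
            # Skip the configurations this solver cannot handle, with a
            # reason displayed in the CLI output.
            if lmbd == 0:
                return True, "this solver requires lmbd > 0"
            return False, None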

warm_up()#

User-specified warm-up step, called once before the runs.

The time it takes to run this function is not taken into account. The function Solver.run_once can be used here for solvers that require jit compilation.
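
A sketch relying on Solver.run_once, for a solver whose first call triggers jit compilation:

    from benchopt import BaseSolver


    class Solver(BaseSolver):
        def warm_up(self):
            # Run the solver once so that jit compilation happens before
            # the timed runs.
            self.run_once()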