Write a benchmark

A benchmark is composed of three elements: an objective function, a list of datasets, and a list of solvers.

A benchmark is defined in a folder that must follow a specific structure. For example:

my_benchmark/
├── objective.py  # contains the definition of the objective
├── datasets/
│   ├── simulated.py  # some dataset
│   └── real.py  # some dataset
└── solvers/
    ├── solver1.py  # some solver
    └── solver2.py  # some solver

Examples of actual benchmarks are available in the benchopt organisation, such as Ordinary Least Squares (OLS), Lasso, or L1-regularized logistic regression.

Note

The simplest way to create a benchmark is to copy an existing folder and adapt its content. A benchmark template is also provided as a GitHub template repository.

1. Objective

The objective function is defined through a Python class. This class makes it possible to monitor the quantities of interest along the iterations of the solvers, typically by evaluating the objective function to be minimized. An objective class should define four methods:

  • get_one_solution(): returns one solution that a solver could return. This defines the shape of the solution and is used to test that the benchmark works properly.

  • set_data(**data): stores the data. The data is a dictionary of Python variables without any constraint.

  • compute(x): evaluates the objective for a given value of the iterate, here called x. This method should take a single parameter, the output returned by the solver; all other quantities should be stored in the class by the set_data method. It should return either a float (understood as the objective value) or a dictionary. If a dictionary is returned, it should contain a key called value (the objective value), and all other keys should map to floats, making it possible to track more than one quantity of interest (e.g. train and test errors).

  • get_objective(): returns a dictionary to be passed to the set_objective method of the solvers.

An objective class also needs to inherit from a base class, benchopt.BaseObjective.

Note

Multiple values can be computed in one objective as long as they are returned in a dictionary containing a key called value. This makes it possible to track several metrics at once.
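For instance, a compute method tracking both a train and a test error could look like the following sketch, assuming X_test and y_test were stored by set_data (they are not part of the example below):

    def compute(self, beta):
        # The key `value` is mandatory and holds the main objective value.
        train_loss = .5 * ((self.y - self.X @ beta) ** 2).sum()
        # Any extra float-valued key is tracked as an additional metric.
        test_loss = .5 * ((self.y_test - self.X_test @ beta) ** 2).sum()
        return dict(value=train_loss, test_loss=test_loss)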

Example

from benchopt.base import BaseObjective, safe_import_context

# All packages other than benchopt should be imported in this context.
# - This allows to list solvers even when a package is not installed,
#   in particular for listing dependencies to install.
# - This allows to skip imports when listing solvers and datasets
#   for auto completion.
with safe_import_context() as import_ctx:
    import numpy as np


class Objective(BaseObjective):
    # Name of the Objective function
    name = "Lasso Regression"

    # parametrization of the objective with various regularization parameters.
    # All parameters `p` defined here will be accessible as `self.p`.
    parameters = {
        'reg': [0.05, .1, .5]
    }

    def get_one_solution(self):
        "Return one solution for which the objective can be evaluated."
        return np.zeros(self.X.shape[1])

    def _get_lambda_max(self):
        "Helper to compute the scaling of lambda on the given data."
        return abs(self.X.T @ self.y).max()

    def set_data(self, X, y):
        """Set the data from a Dataset to compute the objective.

        The argument are the keys in the data dictionary returned by
        get_data.
        """
        self.X, self.y = X, y
        self.lmbd = self.reg * self._get_lambda_max()

    def compute(self, beta):
        "Compute the objective value given the output x of a solver."
        diff = self.y - self.X @ beta
        objective_value = .5 * diff @ diff + self.lmbd * abs(beta).sum()
        return objective_value  # or return dict(value=objective_value)

    def get_objective(self):
        "Returns a dict to pass to the set_objective method of a solver."
        return dict(X=self.X, y=self.y, lmbd=self.lmbd)

2. Datasets

A dataset defines what can be passed to an objective. More specifically, a dataset should implement one method:

  • get_data(): A method which outputs a dictionary that can be passed as keyword arguments **data to the set_data method of an objective.

A dataset class also needs to inherit from a base class called benchopt.BaseDataset.

Example using a real dataset

from benchopt import BaseDataset

from benchopt import safe_import_context


with safe_import_context() as import_ctx:
    from sklearn.datasets import fetch_openml
    from sklearn.preprocessing import LabelBinarizer


class Dataset(BaseDataset):

    name = "leukemia"

    install_cmd = 'conda'
    requirements = ['scikit-learn']

    def get_data(self):
        # Unlike libsvm[leukemia], this dataset corresponds to the whole
        # leukemia data, with train + test data (72 samples), not just
        # the training set.
        X, y = fetch_openml("leukemia", return_X_y=True)
        X = X.to_numpy()
        y = LabelBinarizer().fit_transform(y)[:, 0].astype(X.dtype)

        return dict(X=X, y=y)

You can also define a parametrized simulated dataset, for example to test across multiple problem dimensions.

Example of parametrized simulated dataset

Sometimes one wants to test solvers on variants of the same dataset, for example with different dataset sizes or noise levels. To make a dataset parametrizable, use a class attribute called parameters. This attribute must be a dictionary whose keys are passed to the __init__ of the dataset class. Benchopt then automatically allows you to test all combinations of these parameters.

import numpy as np

from benchopt import BaseDataset
from benchopt.datasets import make_correlated_data


class Dataset(BaseDataset):

    name = "Simulated"

    # List of parameters used to generate the datasets. The benchmark will
    # consider the cross product of all parameter values in this dictionary.
    parameters = {
        'n_samples, n_features': [
            (100, 100),
            (100, 200)
        ],
        'rho': [0],
    }

    def __init__(self, n_samples=10, n_features=50, rho=0.6, random_state=27):
        # Store the parameters of the dataset
        self.n_samples = n_samples
        self.n_features = n_features
        self.random_state = random_state
        self.rho = rho

    def get_data(self):
        rng = np.random.RandomState(self.random_state)

        X, y, _ = make_correlated_data(self.n_samples, self.n_features,
                                       rho=self.rho, random_state=rng)

        return dict(X=X, y=y)

Note, however, that not all of these variants are tested during the call to benchopt test. To test specific variants of the simulated dataset with benchopt test, use the test_parameters class attribute, which is constructed in the same way as parameters (see the sketch below). This makes it possible to test solvers that cannot run on a single default variant of the dataset.
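For instance, on the simulated dataset above, a test_parameters attribute could look like the following sketch (the chosen values are illustrative):

class Dataset(BaseDataset):

    name = "Simulated"

    parameters = {
        'n_samples, n_features': [
            (100, 100),
            (100, 200)
        ],
        'rho': [0],
    }

    # Variants used by `benchopt test`, with the same construction
    # as `parameters`.
    test_parameters = {
        'n_samples, n_features': [(100, 200)],
        'rho': [0, 0.6],
    }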

3. Solvers

A solver must define three methods:

  • set_objective(**objective_dict): This method will be called with the dictionary objective_dict returned by the method get_objective from the objective. The goal of this method is to provide all necessary information to the solver so it can optimize the objective function.

  • run(stop_value): This method takes only one parameter that controls the stopping condition of the solver. Typically this is either a number of iterations n_iter or a tolerance parameter tol. Alternatively, a callback function that will be called at each iteration can be passed; the callback should return False once the computation should stop. The parameter stop_value is controlled by the sampling_strategy; see below for details.

  • get_result(): This method returns a variable that can be passed to the compute method from the objective. This is the output of the solver.

Sampling strategy:

A solver should also define a sampling_strategy as a class attribute. This sampling_strategy can be:

  • 'iteration': in this case the run method of the solver is parametrized by the number of iterations computed. The parameter is called n_iter and should be an integer.

  • 'tolerance': in this case the run method of the solver is parametrized by a tolerance that decreases with the running time. The parameter is called tol and should be a positive float (see the sketch after this list).

  • 'callback': in this case, the run method of the solver should call the provided callback function at each iteration. The callback computes and stores the objective, and returns False once the computations should stop.
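The 'iteration' and 'callback' strategies are illustrated by the solver examples below. As the 'tolerance' strategy has no example in this page, here is a minimal sketch of a run method parametrized by a tolerance, for a hypothetical least-squares solver with a gradient-norm stopping criterion (numpy is assumed to be imported as np):

    def run(self, tol):
        # `tol` is decreased by benchopt across runs: the smaller it is,
        # the longer the solver runs before stopping.
        L = np.linalg.norm(self.X, ord=2) ** 2
        w = np.zeros(self.X.shape[1])
        while True:
            grad = self.X.T @ (self.X @ w - self.y)
            w -= grad / L
            # Hypothetical stopping criterion based on the gradient norm.
            if np.linalg.norm(grad) <= tol:
                break
        self.w = w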

Benchopt supports different types of solvers:

Python solver

The simplest solvers are written in pure Python without any compiled code, typically with NumPy and no other dependencies. Here is an example:

from benchopt import BaseSolver, safe_import_context
from benchopt.utils import profile

with safe_import_context() as import_ctx:
    import numpy as np


class Solver(BaseSolver):
    name = 'Python-PGD'  # proximal gradient, optionally accelerated

    # Any parameter defined here is accessible as an attribute of the solver.
    parameters = {'step_size': [1, 1.5]}

    # Store the information to compute the objective. The parameters of this
    # function are the keys of the dictionary obtained when calling
    # ``Objective.get_objective``.
    def set_objective(self, X, y, lmbd):
        self.X, self.y, self.lmbd = X, y, lmbd

    # Main function of the solver, which computes a solution estimate.
    # Here it is proximal gradient descent.
    @profile
    def run(self, n_iter):
        L = np.linalg.norm(self.X, ord=2) ** 2
        step_size = self.step_size / L
        mu = step_size * self.lmbd

        n_features = self.X.shape[1]
        w = np.zeros(n_features)

        for _ in range(n_iter):
            w -= step_size * self.X.T @ (self.X @ w - self.y)
            # Soft-thresholding: w <- sign(w) * max(|w| - mu, 0)
            w -= np.clip(w, -mu, mu)

        self.w = w

    # Return the solution estimate computed.
    def get_result(self):
        return self.w

For solvers that give access to each iterate of the solution, using "callback" as a sampling_strategy implies a slight modification of run: a callback should be called at each iteration with the current iterate as its argument. Here is the same solver as above adapted to this strategy:

    def run(self, callback):  # instead of a number of iterations or a tolerance
        # `compute_lipschitz_cst`, `use_acceleration` and `st` (defined
        # below) are assumed to be available on the solver.
        L = self.compute_lipschitz_cst()
        n_features = self.X.shape[1]
        w = np.zeros(n_features)
        if self.use_acceleration:
            z = np.zeros(n_features)

        t_new = 1
        while callback(w):
            if self.use_acceleration:
                t_old = t_new
                t_new = (1 + np.sqrt(1 + 4 * t_old ** 2)) / 2
                w_old = w.copy()
                z -= self.X.T @ (self.X @ z - self.y) / L
                w = self.st(z, self.lmbd / L)
                z = w + (t_old - 1.) / t_new * (w - w_old)
            else:
                w -= self.X.T @ (self.X @ w - self.y) / L
                w = self.st(w, self.lmbd / L)
        self.w = w
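This snippet assumes a soft-thresholding helper st on the solver; a minimal definition could be:

    @staticmethod
    def st(w, mu):
        # Soft-thresholding, the proximal operator of mu * ||.||_1.
        return np.sign(w) * np.maximum(np.abs(w) - mu, 0.)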

If your Python solver requires extra packages such as Numba, benchopt allows you to list these requirements. The necessary packages should be available via conda or pip.

In this case, the install_cmd class attribute needs to be set to 'conda', and the needed packages are listed in the requirements attribute, which must be a Python list. If a requirement starts with pip:, the package is installed from PyPI instead of conda-forge, as in the sketch below.
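For instance, a requirements list mixing conda-forge and PyPI packages could look like the following sketch (numba and celer are only illustrative choices):

from benchopt import BaseSolver


class Solver(BaseSolver):
    name = 'example-with-requirements'

    install_cmd = 'conda'
    # 'numba' is installed from conda-forge, while the 'pip:' prefix
    # makes benchopt install 'celer' from PyPI.
    requirements = ['numba', 'pip:celer']

Here is a complete example: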

import warnings
from benchopt import BaseSolver
from benchopt import safe_import_context


with safe_import_context() as import_ctx:
    from sklearn.linear_model import Lasso
    from sklearn.exceptions import ConvergenceWarning


class Solver(BaseSolver):
    name = 'sklearn'

    install_cmd = 'conda'
    requirements = ['scikit-learn']

    def set_objective(self, X, y, lmbd):
        self.X, self.y, self.lmbd = X, y, lmbd

        n_samples = self.X.shape[0]
        self.clf = Lasso(alpha=self.lmbd/n_samples, fit_intercept=False, tol=0)
        warnings.filterwarnings('ignore', category=ConvergenceWarning)

    def run(self, n_iter):
        self.clf.max_iter = n_iter
        self.clf.fit(self.X, self.y)

    def get_result(self):
        return self.clf.coef_.flatten()

Note

The install_cmd can either be 'conda' or 'shell'. If 'shell', a shell script is necessary to set up the required dependencies. See Solver from source.

Note

Specifying the dependencies is only necessary if you let benchopt manage the creation of a dedicated environment. If you use your local environment, the list of dependencies is not relevant. See Command line interface (CLI).

R solver

A solver written in R needs two files: a .R file that contains the solver and a .py file that calls the R solver using rpy2. Only the extension should differ between the two file names. Here is the Python file:

from pathlib import Path

from benchopt import BaseSolver
from benchopt import safe_import_context

with safe_import_context() as import_ctx:
    import numpy as np

    from rpy2 import robjects
    from rpy2.robjects import numpy2ri
    from benchopt.helpers.r_lang import import_func_from_r_file

    # Setup the system to allow rpy2 running
    R_FILE = str(Path(__file__).with_suffix('.R'))
    import_func_from_r_file(R_FILE)
    numpy2ri.activate()


class Solver(BaseSolver):
    name = "R-PGD"

    install_cmd = 'conda'
    requirements = ['r-base', 'rpy2']
    sampling_strategy = 'iteration'
    support_sparse = False
    references = [
        'I. Daubechies, M. Defrise and C. De Mol, '
        '"An iterative thresholding algorithm for linear inverse problems '
        'with a sparsity constraint", Comm. Pure Appl. Math., '
        'vol. 57, pp. 1413-1457, no. 11, Wiley Online Library (2004)',
        'A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding '
        'algorithm for linear inverse problems", SIAM J. Imaging Sci., '
        'vol. 2, no. 1, pp. 183-202 (2009)'
    ]

    def skip(self, X, y, lmbd, fit_intercept):
        if fit_intercept:
            return True, f"{self.name} does not handle fit_intercept"

        return False, None

    def set_objective(self, X, y, lmbd, fit_intercept):
        self.X, self.y, self.lmbd = X, y, lmbd
        self.fit_intercept = fit_intercept
        self.r_pgd = robjects.r['proximal_gradient_descent']

    def run(self, n_iter):
        coefs = self.r_pgd(
            self.X, self.y[:, None], self.lmbd,
            n_iter=n_iter)
        as_r = robjects.r['as']
        self.w = np.array(as_r(coefs, "vector"))

    def get_result(self):
        return self.w.flatten()

It uses the following R code:

##' Functions used in ISTA algorithm
##'
##' @title Functions used in ISTA algorithm
##' @author Thomas Moreau
##' @export


# Soft-thresholding operator
St <- function(lambda, X) {
    s <- function(lambda0, x) {
        if (x > lambda0) {
            result0 <- x - lambda0
        } else if (x < (-1) * lambda0) {
            result0 <- x + lambda0
        } else result0 <- 0

        return(result0)
    }
    result <- apply(X, 1, s, lambda0 = lambda)
    return(result)
}


# Main algorithm
proximal_gradient_descent <- function(X, Y, lambda, n_iter) {
    # --------- Initialize parameter ---------
    p <- ncol(X)
    parameters <- numeric(p)
    step_size <- 1 / norm(X, "2") ** 2
    for (i in 1:n_iter) {
        # Compute the gradient
        grad <- crossprod(-X, (Y - X %*% parameters))
        # Update the parameters
        parameters <- St(step_size * lambda, parameters - step_size * grad)
    }
    return(parameters)
}

Note

This uses the function benchopt.safe_import_context() to avoid a crash when R is not available. In that case, the solver is simply skipped.

Julia solver

A solver written in Julia needs two files: a .jl file that contains the solver and a .py file that calls the Julia solver using PyJulia. Only the extension should differ between the two file names. Here is the Python file:

from pathlib import Path
from benchopt import safe_import_context

from benchopt.helpers.julia import JuliaSolver
from benchopt.helpers.julia import get_jl_interpreter
from benchopt.helpers.julia import assert_julia_installed

with safe_import_context() as import_ctx:
    assert_julia_installed()


# File containing the function to be called from julia
JULIA_SOLVER_FILE = str(Path(__file__).with_suffix('.jl'))


class Solver(JuliaSolver):

    # Config of the solver
    name = 'Julia-PGD'
    sampling_strategy = 'iteration'
    references = [
        'I. Daubechies, M. Defrise and C. De Mol, '
        '"An iterative thresholding algorithm for linear inverse problems '
        'with a sparsity constraint", Comm. Pure Appl. Math., '
        'vol. 57, pp. 1413-1457, no. 11, Wiley Online Library (2004)',
        'A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding '
        'algorithm for linear inverse problems", SIAM J. Imaging Sci., '
        'vol. 2, no. 1, pp. 183-202 (2009)'
    ]

    def skip(self, X, y, lmbd, fit_intercept):
        # XXX - fit intercept is not yet implemented in julia.jl
        if fit_intercept:
            return True, f"{self.name} does not handle fit_intercept"

        return False, None

    def set_objective(self, X, y, lmbd, fit_intercept):
        self.X, self.y, self.lmbd = X, y, lmbd
        self.fit_intercept = fit_intercept

        jl = get_jl_interpreter()
        self.solve_lasso = jl.include(JULIA_SOLVER_FILE)

    def run(self, n_iter):
        self.beta = self.solve_lasso(self.X, self.y, self.lmbd, n_iter)

    def get_result(self):
        return self.beta.ravel()

It uses the following Julia code:

using Core
using LinearAlgebra

st(w::Vector{Float64}, t::Float64) = @. sign(w) * max(abs(w) - t, 0.)

function solve_lasso(
    X::Matrix{Float64}, 
    y::Vector{Float64}, 
    lambda::Float64,
    n_iter::Int
)
    L = opnorm(X)^2
    p = size(X, 2)
    w = zeros(p)
    residual = similar(y)
    diffvec = similar(w)
    gradient = similar(w)
    for i in 1:n_iter
        residual = X * w - y
        gradient = X' * residual
        diffvec = w - gradient / L
        w = st(diffvec, lambda / L)
    end

    return w
end

Installing Julia dependencies

Note that it is also possible to install Julia dependencies using benchopt install with the class attribute julia_requirements. This attribute should be a list of package names, whose strings are passed directly to Pkg.add.

In case it is necessary to install dependencies from a GitHub repository, one can use the following format: PkgName::https://github.com/org/Pkg.jl#branch_name. This is parsed to recover both the URL and the package name. Note that the branch_name is optional. Using :: to specify the PkgName is necessary so that benchopt can check whether it is installed in the targeted environment. See the sketch below.
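For instance, on the Julia solver above, the attribute could look like the following sketch (the package names and URL are illustrative):

class Solver(JuliaSolver):
    name = 'Julia-PGD'

    # Strings are passed directly to Julia's Pkg.add.
    julia_requirements = [
        'ProximalOperators',
        # Install from a GitHub repository; '#branch_name' is optional.
        'MyPkg::https://github.com/org/MyPkg.jl#main',
    ]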

Solver from source

You can install a package from source in case it is not available as a binary from the Python, R, or Julia package managers.

Note

A package available from source may require a C++ or Fortran compiler.

Here is an example that uses pip to install a Python package from its GitHub repository, using the pip: prefix with a git+ URL:

import warnings
from benchopt import BaseSolver
from benchopt import safe_import_context


with safe_import_context() as import_ctx:
    from sklearn.linear_model import Lasso
    from sklearn.exceptions import ConvergenceWarning


class Solver(BaseSolver):
    name = 'sklearn'

    install_cmd = 'conda'
    # Install scikit-learn from source, directly from its GitHub repository.
    requirements = ['pip:git+https://github.com/scikit-learn/scikit-learn']

    def set_objective(self, X, y, lmbd):
        self.X, self.y, self.lmbd = X, y, lmbd

        n_samples = self.X.shape[0]
        self.clf = Lasso(alpha=self.lmbd/n_samples, fit_intercept=False, tol=0)
        warnings.filterwarnings('ignore', category=ConvergenceWarning)

    def run(self, n_iter):
        self.clf.max_iter = n_iter
        self.clf.fit(self.X, self.y)

    def get_result(self):
        return self.clf.coef_.flatten()

Note

See the L1-regularized logistic regression benchmark for an example that uses 'shell' as install_cmd.