Benchopt: Benchmark repository for optimization


Benchopt is a package to make the comparison of optimization algorithms simple, transparent and reproducible.

It is written in Python, but it can benchmark solvers written in many programming languages. So far it has been tested with Python, R, Julia and compiled binaries written in C/C++ that are available via a terminal command. If a solver can be installed via conda, it should just work in benchopt!

Benchopt is used through a command line interface, as documented in the Command Line Interface (CLI) Documentation. Once benchopt is installed, running and replicating an optimization benchmark is as simple as:

$ git clone https://github.com/benchopt/benchmark_logreg_l2
$ benchopt install --env ./benchmark_logreg_l2
$ benchopt run --env ./benchmark_logreg_l2

Running these commands will fetch the benchmark files, install the benchmark requirements in a dedicated conda environment called benchopt_benchmark_logreg_l2, and produce a benchmark plot for l2-regularized logistic regression:

[Figure: convergence plot produced by the l2-regularized logistic regression benchmark]

Learn how to Write a benchmark.
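
To give a taste of what writing a benchmark involves, here is a minimal, hedged sketch of a solver file. The class name Solver and the set_objective/run/get_result hooks follow benchopt conventions, but the objective parameters (X, y, lmbd), the solver name, the iteration-based budget and the return convention of get_result are assumptions made for this illustration and depend on the benchmark and the benchopt version; see the Write a benchmark guide for the authoritative template.

# A minimal, hypothetical solver sketch (illustration only, not a benchopt template).
import numpy as np
from benchopt import BaseSolver


class Solver(BaseSolver):
    # Name shown in plots and used with `benchopt run -s`.
    name = 'gradient-descent'

    def set_objective(self, X, y, lmbd):
        # Receives the problem data from the benchmark's Objective; not timed.
        self.X, self.y, self.lmbd = X, y, lmbd

    def run(self, n_iter):
        # Only this method is timed. benchopt calls it with an increasing
        # budget (here assumed to be a number of iterations).
        X, y, lmbd = self.X, self.y, self.lmbd
        L = np.linalg.norm(X, ord=2) ** 2 / 4 + lmbd  # crude step-size bound
        w = np.zeros(X.shape[1])
        for _ in range(n_iter):
            # Gradient of the l2-regularized logistic regression objective.
            grad = -X.T @ (y / (1 + np.exp(y * (X @ w)))) + lmbd * w
            w -= grad / L
        self.w = w

    def get_result(self):
        # Passed back to the Objective to compute the reported metric.
        return self.w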

Install

This package can be installed through pip. To get the latest release, use:

$ pip install benchopt

And to get the latest development version, you can use:

$ pip install -U https://github.com/benchopt/benchopt/archive/master.zip

This will install the command line tool used to run the benchmarks. Then, existing benchmarks can be retrieved from GitHub or created locally. To discover which benchmarks are currently available, look for benchmark_* repositories on GitHub, such as the Lasso benchmark (l1-regularized linear regression). This benchmark can be retrieved locally with:

$ git clone https://github.com/benchopt/benchmark_lasso.git

Quickstart: command line usage on the Lasso benchmark

This section illustrates benchopt’s command line interface on the Lasso benchmark; the syntax applies to any benchmark. Throughout this section, we assume that you are in the parent folder of the benchmark_lasso folder. The --env flag specifies that everything is run in the benchopt_benchmark_lasso conda environment.

Installing benchmark dependencies: to install all requirements of the benchmark, run:

$ benchopt install --env ./benchmark_lasso

Run a benchmark: to run the benchmark on all datasets and with all solvers, run:

$ benchopt run --env ./benchmark_lasso

Run only some solvers and datasets: to run only the sklearn and celer solvers, on the simulated and finance datasets, run:

$ benchopt run --env ./benchmark_lasso -s sklearn -s celer -d simulated -d finance

Run a solver or dataset with specific parameters: some solvers and datasets have parameters; by default all combinations are run. If you want to run a specific configuration, pass it explicitly, e.g., to run the python-pgd solver only with its parameter use_acceleration set to True, use:

$ benchopt run --env ./benchmark_lasso -s python-pgd[use_acceleration=True]

Set the number of repetitions: the benchmark is repeated 5 times by default for greater precision. To run the benchmark 10 times, run:

$ benchopt run --env ./benchmark_lasso -r 10

Getting help: use

$ benchopt run -h

to get more details about the different options. You can also read the Command Line Interface (CLI) Documentation.

Some available benchmarks

Notation: In what follows, n (or n_samples) stands for the number of samples and p (or n_features) stands for the number of features.

\[y \in \mathbb{R}^n, \quad X = [x_1^\top, \dots, x_n^\top]^\top \in \mathbb{R}^{n \times p}\]

Ordinary Least Squares (OLS):

\[\min_w \frac{1}{2} \|y - Xw\|^2_2\]

Non-Negative Least Squares (NNLS):

\[\min_{w \geq 0} \frac{1}{2} \|y - Xw\|^2_2\]

Lasso (l1-regularized least squares):

\[\min_w \frac{1}{2} \|y - Xw\|^2_2 + \lambda \|w\|_1\]

L2-regularized logistic regression:

\[\min_w \sum_{i=1}^{n} \log(1 + \exp(-y_i x_i^\top w)) + \frac{\lambda}{2} \|w\|_2^2\]

L1-regularized logistic regression:

\[\min_w \sum_{i=1}^{n} \log(1 + \exp(-y_i x_i^\top w)) + \lambda \|w\|_1\]

Huber regression:

\[\min_{w, \sigma} \sum_{i=1}^n \left(\sigma + H_{\epsilon}\left(\frac{x_i^\top w - y_{i}}{\sigma}\right)\sigma\right) + \lambda \|w\|_2^2\]

where

\[\begin{split}H_{\epsilon}(z) = \begin{cases} z^2, & \text {if } |z| < \epsilon, \\ 2\epsilon|z| - \epsilon^2, & \text{otherwise} \end{cases}\end{split}\]
L1-regularized quantile regression:

\[\min_{w} \frac{1}{n} \sum_{i=1}^{n} PB_q(y_i - x_i^\top w) + \lambda \|w\|_1,\]

where \(PB_q\) is the pinball loss:

\[\begin{split}PB_q(t) = q \max(t, 0) + (1 - q) \max(-t, 0) = \begin{cases} q t, & t > 0, \\ 0, & t = 0, \\ (q - 1) t, & t < 0 \end{cases}\end{split}\]
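
To make the notation above concrete, here is a small, hedged NumPy sketch (not part of any benchmark) that evaluates the Lasso and l1-regularized quantile regression objectives on random data; all variable names are chosen for this illustration.

import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 20
X = rng.normal(size=(n, p))   # design matrix, one row per sample
y = rng.normal(size=n)        # targets
w = rng.normal(size=p)        # candidate coefficients
lmbd, q = 0.1, 0.5            # regularization strength and quantile level

def lasso_objective(w):
    return 0.5 * np.sum((y - X @ w) ** 2) + lmbd * np.sum(np.abs(w))

def pinball(t, q):
    # PB_q(t) = q * max(t, 0) + (1 - q) * max(-t, 0)
    return q * np.maximum(t, 0) + (1 - q) * np.maximum(-t, 0)

def quantile_objective(w):
    return np.mean(pinball(y - X @ w, q)) + lmbd * np.sum(np.abs(w))

print(lasso_objective(w), quantile_objective(w))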

Linear independent component analysis (ICA): given some data \(X \in \mathbb{R}^{d \times n}\) assumed to be linearly related to unknown independent sources \(S \in \mathbb{R}^{d \times n}\) with

\[X = A S\]

where \(A \in \mathbb{R}^{d \times d}\) is also unknown, the goal of linear ICA is to recover \(A\) up to permutation and scaling of its columns. In this benchmark, the quality of the estimate of \(A\) is quantified with the so-called Amari distance.
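
As an illustration, the following hedged NumPy sketch (again, not benchopt code) simulates the model \(X = AS\) and scores an estimate of the unmixing matrix with one common form of the Amari distance, which vanishes exactly when \(A\) is recovered up to permutation and scaling.

import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 1000
S = rng.laplace(size=(d, n))   # independent non-Gaussian sources
A = rng.normal(size=(d, d))    # unknown mixing matrix
X = A @ S                      # observed mixtures

def amari_distance(W, A):
    # One common form of the Amari distance between an unmixing estimate W
    # and the true mixing matrix A; it is zero iff W @ A is a scaled
    # permutation matrix.
    P = np.abs(W @ A)
    col = (P / P.max(axis=0)).sum() - P.shape[1]
    row = (P.T / P.max(axis=1)).sum() - P.shape[0]
    return (col + row) / (2 * P.shape[0])

print(amari_distance(np.linalg.inv(A), A))  # ~0 for a perfect estimate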

See benchmark_* repositories on GitHub for more.

Benchmark results

All the public benchmark results are available at Benchopt Benchmarks results.

Publish results: you can directly publish the results of a benchopt run to Benchopt Benchmarks results. See Publish benchmark results for how to do so.
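Publishing is also done from the command line. Assuming you have already run the benchmark and created a GitHub token as explained on that page, the call looks roughly like the following (check benchopt publish -h for the exact options):

$ benchopt publish ./benchmark_lasso -t <GITHUB_TOKEN>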

Contents