Benchmark repository for optimization

BenchOpt requires Python 3.6+.

BenchOpt is a package that makes comparisons of optimization algorithms simpler, more transparent, and more reproducible.

BenchOpt is written in Python, but it can benchmark solvers written in many programming languages. So far it has been tested with Python, R, Julia, and compiled C/C++ binaries exposed via a terminal command. If a solver can be installed via conda, it should just work!

BenchOpt is used through the command line, as documented in the Command Line Interface (CLI) Documentation. Ultimately, running and replicating an optimization benchmark should be as simple as:

$ git clone https://github.com/benchopt/benchmark_logreg_l2
$ benchopt run --env ./benchmark_logreg_l2

Running these commands fetches the benchmark files and produces a benchmark plot for l2-regularized logistic regression:


Learn how to Write a benchmark.
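A benchmark repository contains an objective definition and one file per solver, and each solver exposes `set_objective`, `run`, and `get_result` methods. The sketch below mimics that interface in plain NumPy so it runs standalone; in an actual benchmark the class would derive from `benchopt.BaseSolver`, and the names here are illustrative rather than BenchOpt's exact current API (which, as noted below, is still evolving).

```python
import numpy as np


class GradientDescentSolver:
    """Standalone sketch of a solver following a BenchOpt-style interface."""

    name = "GD"  # illustrative solver name

    def set_objective(self, X, y):
        # Receive the problem data from the objective.
        self.X, self.y = X, y
        # Step size from the Lipschitz constant of the least-squares gradient.
        self.step = 1.0 / np.linalg.norm(X, ord=2) ** 2

    def run(self, n_iter):
        # The benchmark runner calls run() with increasing iteration
        # budgets to trace a convergence curve.
        w = np.zeros(self.X.shape[1])
        for _ in range(n_iter):
            w -= self.step * self.X.T @ (self.X @ w - self.y)
        self.w = w

    def get_result(self):
        return self.w
```

The runner instantiates each solver, calls `run` with growing budgets, and records the objective value after each call, which is what produces the convergence plots.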

Install

This package can be installed through pip. To get the latest release, use:

$ pip install benchopt

And to get the latest development version, you can use:

$ pip install -U https://github.com/benchopt/benchOpt/archive/master.zip

Note: due to major API modifications, the latest benchopt release does not match the current version of the documentation. Until further notice, using the development version is recommended.

This will install the command line tool to run the benchmarks. Existing benchmarks can then be retrieved from git or created locally. To discover which benchmarks are presently available, look for benchmark_* repositories on GitHub, such as the one for the Lasso (l1-regularized least-squares). That benchmark can be retrieved locally with:

$ git clone https://github.com/benchopt/benchmark_lasso.git

Command line usage

To run Lasso benchmarks on all datasets and with all solvers, run:

$ benchopt run --env benchmark_lasso

Use

$ benchopt run -h

to get more details about the different options or read the Python API Documentation.

Benchmarks available

Notation: in what follows, n (or n_samples) denotes the number of samples and p (or n_features) the number of features, with target vector and design matrix

\[y \in \mathbb{R}^n, X = [x_1^\top, \dots, x_n^\top]^\top \in \mathbb{R}^{n \times p}\]
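Concretely, a synthetic dataset matching this notation can be generated as follows (a plain NumPy illustration, not BenchOpt code; the noise level is arbitrary):

```python
import numpy as np

n, p = 100, 5  # n_samples, n_features
rng = np.random.default_rng(0)
X = rng.standard_normal((n, p))           # design matrix, rows are the x_i^T
w = rng.standard_normal(p)                # ground-truth coefficients
y = X @ w + 0.1 * rng.standard_normal(n)  # noisy linear targets
```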
  • ols: ordinary least-squares. This consists in solving the following program:

\[\min_w \frac{1}{2} \|y - Xw\|^2_2\]
  • nnls: non-negative least-squares. This consists in solving the following program:

\[\min_{w \geq 0} \frac{1}{2} \|y - Xw\|^2_2\]
  • lasso: l1-regularized least-squares. This consists in solving the following program:

\[\min_w \frac{1}{2} \|y - Xw\|^2_2 + \lambda \|w\|_1\]
  • logreg_l2: l2-regularized logistic regression. This consists in solving the following program:

\[\min_w \sum_i \log(1 + \exp(-y_i x_i^\top w)) + \frac{\lambda}{2} \|w\|_2^2\]
  • logreg_l1: l1-regularized logistic regression. This consists in solving the following program:

\[\min_w \sum_i \log(1 + \exp(-y_i x_i^\top w)) + \lambda \|w\|_1\]
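As a standalone illustration of the problems above (plain NumPy, not part of BenchOpt), here are the lasso and logreg_l2 objectives written out, together with ISTA, a textbook proximal-gradient baseline for the lasso:

```python
import numpy as np


def lasso_objective(X, y, w, lmbd):
    """0.5 * ||y - Xw||_2^2 + lmbd * ||w||_1"""
    r = y - X @ w
    return 0.5 * r @ r + lmbd * np.abs(w).sum()


def logreg_l2_objective(X, y, w, lmbd):
    """sum_i log(1 + exp(-y_i x_i^T w)) + lmbd/2 * ||w||_2^2, with y_i in {-1, 1}."""
    return np.log1p(np.exp(-y * (X @ w))).sum() + 0.5 * lmbd * w @ w


def ista(X, y, lmbd, n_iter=300):
    """Proximal gradient descent (ISTA) for the lasso."""
    L = np.linalg.norm(X, ord=2) ** 2  # Lipschitz constant of the smooth part
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w -= X.T @ (X @ w - y) / L  # gradient step on the quadratic term
        # soft-thresholding: proximal operator of the scaled l1 penalty
        w = np.sign(w) * np.maximum(np.abs(w) - lmbd / L, 0.0)
    return w
```

A benchmark for one of these problems compares solvers like `ista` against faster alternatives by tracking the corresponding objective value over time.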

Benchmark results

All the public benchmark results are available at BenchOpt Benchmarks results.

Publish results: you can publish the results of your own benchopt runs to BenchOpt Benchmarks results; see Publish benchmark results for instructions.