What’s new

Version 1.3 - in development

CLI

  • Add support for custom parameters in the CLI for objectives, datasets, and solvers, through the syntax -s solver_name[parameter=value]. See the CLI documentation for more details on the syntax, and the example after this list. By Tom Dupré la Tour (#362).

  • Add a --slurm option to benchopt run to allow running the benchmark on a SLURM cluster. See the documentation on running the benchmark on a SLURM cluster for more details on the configuration. By Thomas Moreau (#407).

  • Add benchopt archive to create a tar.gz archive with the benchmark’s files for sharing with others or as supplementary materials for papers. By Thomas Moreau (#408).
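
For instance, assuming a benchmark folder ./my_benchmark with a solver named my-solver exposing a step_size parameter, and a SLURM configuration file slurm_config.yml (all hypothetical names), these features can be combined as follows:

    # Override a solver parameter from the CLI; the quotes keep the
    # shell from interpreting the brackets.
    benchopt run ./my_benchmark -s "my-solver[step_size=0.1]"

    # Run the same benchmark on a SLURM cluster.
    benchopt run ./my_benchmark --slurm slurm_config.yml

    # Create a tar.gz archive of the benchmark files for sharing.
    benchopt archive ./my_benchmark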

Version 1.2 - 2022-05-06

Changelog

  • New benchopt info command to display information about solvers and datasets of a benchmark, by Ghislain Durif (#140).

  • New --profile option for the run command to profile all functions decorated with benchopt.utils.profile() using the line-profiler package (see the sketch after this list), by Alexandre Gramfort (#186).

  • Replace SufficientDescentCriterion by SufficientProgressCriterion, which measures progress relative to the best value attained so far instead of the previous one, by Thomas Moreau (#176).

  • All values returned by Objective.compute are now included in the reports, by Thomas Moreau and Alexandre Gramfort (#200).

  • New --n-jobs, -j option to run the benchmark in parallel with joblib, by Thomas Moreau (#265).
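
As a sketch of how the profiling hook is meant to be used, a function decorated with benchopt.utils.profile is profiled when the benchmark is run with --profile (the solver and benchmark names below are hypothetical):

    from benchopt import BaseSolver
    from benchopt.utils import profile


    class Solver(BaseSolver):
        name = "my-solver"  # hypothetical solver

        @profile  # profiled when running with the --profile option
        def run(self, n_iter):
            ...  # the solver's iterations go here

The decorated functions can then be profiled, with solvers running in parallel, via benchopt run ./my_benchmark --profile -j 4.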

API

  • When returning a dict, Objective.compute should at least include a value key, which replaces objective_value (see the sketch after this list), by Thomas Moreau and Alexandre Gramfort (#200).

  • The stop_strategy attribute is replaced by stopping_strategy to harmonize with stopping_criterion, by Benoît Malézieux (#274).

  • Add import_from method in safe_import_context to allow importing common files and packages from BENCHMARK_DIR/utils without installing them, by Thomas Moreau (#286).

  • Add X_density argument to datasets.make_correlated_data to simulate sparse design matrices, by Mathurin Massias (#289).

  • Dataset.get_data should now return a dict and not a tuple. A point for testing should be returned by the dedicated method Objective.get_one_solution, by Thomas Moreau (#345).
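
Put together, these API changes look as follows (a minimal sketch; the least-squares objective and all names are illustrative, and make_correlated_data is imported as referenced above):

    import numpy as np
    from benchopt import BaseDataset, BaseObjective
    from benchopt.datasets import make_correlated_data


    class Dataset(BaseDataset):
        name = "simulated"

        def get_data(self):
            # X_density < 1 simulates a sparse design matrix (#289).
            X, y, _ = make_correlated_data(
                n_samples=100, n_features=50, X_density=0.5, random_state=0
            )
            # get_data now returns a dict, not a tuple (#345).
            return dict(X=X, y=y)


    class Objective(BaseObjective):
        name = "least-squares"  # hypothetical objective

        def set_data(self, X, y):
            self.X, self.y = X, y

        def get_one_solution(self):
            # The point used for testing (#345).
            return np.zeros(self.X.shape[1])

        def compute(self, beta):
            # The returned dict must at least contain the `value` key
            # (#200); every returned value is included in the reports.
            residual = self.y - self.X @ beta
            return dict(value=0.5 * residual @ residual,
                        beta_norm=np.linalg.norm(beta))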

Version 1.1 - 2021-04-22

Changelog

API

  • Objective.compute can now return a dictionary with multiple outputs to monitor several metrics at once, by Thomas Moreau (#84).

  • Solver.skip can now be used to skip objectives that are incompatible with the Solver, by Thomas Moreau (#113).

  • Solver can now use stop_strategy = 'callback' to construct the full convergence curve in a single solver run (see the sketch after this list), by Tanguy Lefort and Thomas Moreau (#137).

  • Add StoppingCriterion to reliably and flexibly assess a solver's convergence. For now, only SufficientDescentCriterion is implemented; a better API to set the criterion per benchmark should be implemented in a future release, by Thomas Moreau (#151).
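
A solver combining both mechanisms could look like this (a minimal sketch with hypothetical names; the gradient step and the skip condition are only illustrative):

    import numpy as np
    from scipy import sparse

    from benchopt import BaseSolver


    class Solver(BaseSolver):
        name = "my-solver"
        stop_strategy = 'callback'  # renamed stopping_strategy in 1.2 (#274)

        def set_objective(self, X, y):
            self.X, self.y = X, y

        def skip(self, X, y):
            # Skip objectives this solver cannot handle (#113).
            if sparse.issparse(X):
                return True, "this solver does not support sparse X"
            return False, None

        def run(self, callback):
            # With the 'callback' strategy, the convergence curve is
            # built in a single run: the callback records the current
            # iterate and returns False once the StoppingCriterion
            # decides to stop (#137, #151).
            beta = np.zeros(self.X.shape[1])
            while callback(beta):
                beta -= 1e-3 * self.X.T @ (self.X @ beta - self.y)
            self.beta = beta

        def get_result(self):
            return self.beta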

CLI

  • Add --version option for benchopt, by Thomas Moreau (#83).

  • Add --pdb option for benchopt run to open a debugger on error and ease benchmark debugging, by Thomas Moreau (#86).

  • Change the default run to local mode. A run can be launched in a dedicated environment with the --env option, or with --env-name ENV_NAME to specify the environment (see the examples after this list), by Thomas Moreau (#94).

  • Add benchopt publish command to push benchmark results to GitHub, by Thomas Moreau (#110).

  • Add benchopt clean command to remove cached files and output files locally, by Thomas Moreau (#128).

  • Add benchopt config command to allow easy configuration of benchopt using the CLI, by Thomas Moreau (#128).

  • Add benchopt install command to install benchmark requirements (this is no longer done in benchopt run), by Ghislain Durif (#135).

  • Add benchopt info command to print information about a benchmark (including solvers, datasets, dependencies, etc.), by Ghislain Durif (#140).
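
A typical development workflow with these commands (the benchmark folder and environment name are hypothetical):

    # Install the requirements, then run the benchmark in a dedicated env.
    benchopt install ./my_benchmark --env-name my_env
    benchopt run ./my_benchmark --env-name my_env

    # Debug a failing solver by dropping into a debugger on error.
    benchopt run ./my_benchmark --pdb

    # Publish the results to GitHub, then clean local cache and outputs.
    benchopt publish ./my_benchmark
    benchopt clean ./my_benchmark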

BUG

  • Throw a warning when the benchopt version in the conda environment does not match the one of the calling benchopt, by Thomas Moreau (#83).

  • Fix LAPACK issue with R code, by Tanguy Lefort (#97).

Version 1.0 - 2020-09-25

Release highlights

  • Provide a command line interface for benchmarking optimisation algorithm implementations, as illustrated after this list:

    • benchopt run to run the benchmarks

    • benchopt plot to display the results

    • benchopt test to test that a benchmark folder is correctly structured.
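
A typical session with a benchmark folder (the folder name ./my_benchmark is hypothetical):

    benchopt test ./my_benchmark   # check that the folder is correctly structured
    benchopt run ./my_benchmark    # run the benchmark
    benchopt plot ./my_benchmark   # display the results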
