Configure Benchopt#

Benchopt can be configured using settings files. These files can either be created directly or generated and modified with the benchopt config command.

There are two configuration levels. The first level is the global config for the benchopt client. It contains system-specific tweaks, user info such as the GitHub token, and the output levels. The second level is the configuration of the benchmarks. Each benchmark can have its own config for the kind of plots it displays by default and other display tweaks.

To get the benchopt global config file used by the benchopt command, run benchopt config. The --benchmark/-b <benchmark> option displays the config file for a specific benchmark instead. See Config File Location for more details on how the config file path is resolved.

The structure of the files follows the Microsoft Windows INI file format and is described in Config File Structure. The available settings are listed in Config Settings.

The value of each setting can be accessed with the CLI using benchopt config [-b <benchmark>] get <name>. Similarly, the setting value can be set using benchopt config [-b <benchmark>] set <name> <value>.

Config File Location#

For the global configuration file, the resolution order is the following:

  1. The file pointed to by the BENCHOPT_CONFIG environment variable, if it is set and the file exists,

  2. A benchopt.ini file in the current directory,

  3. The default file, $HOME/.config/benchopt.ini.

Benchmark configuration files are usually located in the benchmark folder and named benchopt.ini. If no such file exists, the global config file is used as a fallback.
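The resolution order above can be sketched in Python. This is a minimal illustration using only the standard library; get_global_config_file is a hypothetical helper name, not benchopt's actual API:

```python
import os
from pathlib import Path


def get_global_config_file() -> Path:
    """Resolve the global benchopt config file, mirroring the order above."""
    # 1. Explicit override through the environment, if it points to a file.
    env = os.environ.get("BENCHOPT_CONFIG")
    if env and Path(env).is_file():
        return Path(env)
    # 2. A benchopt.ini file in the current directory.
    local = Path("benchopt.ini")
    if local.is_file():
        return local
    # 3. Fall back to the default user-level config file.
    return Path.home() / ".config" / "benchopt.ini"
```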

Config File Structure#

The config files for benchopt follow the Microsoft Windows INI file format. The global settings are grouped in a [benchopt] section:

[benchopt]
debug = true  # Enable or disable debug logs
raise_install_error = no  # Raise or ignore install errors. Default is ignore.
github_token = 0...0  # Token used to publish results on benchopt/results
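Because the file follows the INI format, it can be read with Python's standard configparser, which understands boolean spellings such as true and no. This is a standalone sketch, not benchopt's internal loader; note that configparser only strips the inline "# ..." comments when inline_comment_prefixes is set explicitly:

```python
import configparser

config_text = """
[benchopt]
debug = true  # Activate or not debug logs
raise_install_error = no  # Raise/ignore install error.
"""

# inline_comment_prefixes makes configparser ignore the trailing "# ..." parts.
parser = configparser.ConfigParser(inline_comment_prefixes="#")
parser.read_string(config_text)

debug = parser.getboolean("benchopt", "debug")                    # True
raise_err = parser.getboolean("benchopt", "raise_install_error")  # False
```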

Benchmark settings are grouped in a section named after the benchmark. For a benchmark named benchmark_bench, the config structure is:

[benchmark_bench]
plots =
    suboptimality_curve
    bar_chart
    objective_curve
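In the INI format, an indented continuation block like the one above is read as a single multi-line value; splitting it on newlines recovers the list. A standalone configparser sketch, not benchopt's own parsing code:

```python
import configparser

config_text = """
[benchmark_bench]
plots =
    suboptimality_curve
    bar_chart
    objective_curve
"""

parser = configparser.ConfigParser()
parser.read_string(config_text)

# The indented lines form one multi-line string value; split it into a list.
raw = parser.get("benchmark_bench", "plots")
plots = [line.strip() for line in raw.splitlines() if line.strip()]
# plots == ["suboptimality_curve", "bar_chart", "objective_curve"]
```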

Config Settings#

This section lists the available settings.

Global settings

benchopt.config.DEFAULT_GLOBAL_CONFIG = {'cache': None, 'conda_cmd': 'conda', 'data_dir': './data/', 'debug': False, 'github_token': None, 'raise_install_error': False, 'shell': 'bash'}#
  • debug, boolean: If set to true, enable debug logs.

  • raise_install_error, boolean: If set to true, raise an error when an install fails.

  • github_token, str: Token used to publish results on benchopt/results via GitHub.

  • conda_cmd, str: Can be used to give the path to conda if it is not directly available on $PATH. This can also be used to install benchmarks with mamba instead of conda. See Using mamba to install packages.

  • shell, str: Can be used to specify the shell to use. Defaults to the SHELL environment variable if it is set, and 'bash' otherwise.

  • cache, str: Can be used to specify where the cache for the benchmarks is stored. By default, the cache files are stored in the benchmark directory, under the folder __cache__. Setting this option results in the cache for benchmark B1 being stored in ${cache}/B1/.
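The cache location rule can be summarized in a small sketch. Here cache_dir is a hypothetical helper written for illustration, not part of benchopt's API:

```python
from pathlib import Path
from typing import Optional


def cache_dir(benchmark_dir: Path, cache: Optional[str] = None) -> Path:
    """Where the cache files for one benchmark are stored, per the rule above."""
    if cache is None:
        # Default: a __cache__ folder inside the benchmark directory.
        return benchmark_dir / "__cache__"
    # With the `cache` setting: ${cache}/<benchmark_name>/
    return Path(cache) / benchmark_dir.name
```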

Benchmark settings

benchopt.config.DEFAULT_BENCHMARK_CONFIG = {'plot_configs': {}, 'plots': ['objective_curve', 'suboptimality_curve', 'relative_suboptimality_curve', 'bar_chart']}#
  • plots, list: Select the plots to display for the benchmark. The values should be valid plot kinds. The list can simply be written one item per line, with each item indented, as:

    plots:
    - objective_curve
    - suboptimality_curve
    - relative_suboptimality_curve
    - bar_chart
    
  • plot_configs: Saved views that can easily be displayed for the plot. Each view corresponds to a name, with specified values to select any of:

    dataset, objective, objective_column, kind, scale, with_quantiles, xaxis_type, xlim, ylim

    Values that are not specified by the view are left as is when selecting the view in the interface. An example of views is:

    plot_configs:
      linear_objective:
          kind: objective_curve
          ylim: [0.0, 1.0]
          scale: linear
      view2:
          objective_column: objective_score_train
          kind: suboptimality_curve
          ylim: [1e-10, 1.0]
          scale: loglog
    

    These views can easily be created from the interactive HTML page, by hitting the Save as view button in the plot controls and then downloading either the new HTML file to save them, or the config file to place in the repo of the benchmark, so that these saved views are embedded in the next plot results automatically.

Using mamba to install packages#

When many packages need to be installed, conda can be slow or even fail to resolve the dependency graph. Using mamba can speed up this process and make it more reliable.

To use mamba instead of conda when installing benchmark requirements, mamba must be installed in the base conda environment, e.g. with conda install -n base mamba. Then, benchopt can be configured to use it instead of conda, either with benchopt config set conda_cmd mamba or by setting the environment variable BENCHOPT_CONDA_CMD=mamba.