Command line interface (CLI) references#
The following commands are built with the click package, which provides tab completion for the command options. However, you first need to activate shell completion by following the instructions given in the click documentation. For example, with the Bash shell, you need to run:
eval "$(_BENCHOPT_COMPLETE=bash_source benchopt)"
The benchopt command also comes with tab completion for the solver name and the dataset name.
Optional parameters syntax
For some CLI parameters (solver, objective, dataset), additional values can be given with the following syntax:
# Run a particular solver with a particular set of parameters:
--solver solver_name[param_1=method_name, param_2=100]
# To select a grid of parameters, the following syntax is allowed:
--solver solver_name[param_1=[True, False]]
# For objects with only one parameter, the name can be omitted:
--solver solver_name[True, False]
# For more advanced selections over multiple parameters, use:
--solver solver_name["param_1,param_2"=[(True, 100), (False, 1000)]]
benchopt#
Command line interface to benchopt
benchopt [OPTIONS] COMMAND [ARGS]...
Options
- -v, --version#
Print version
- --check-editable#
Output more info for version checking, and format as: BENCHOPT_VERSION:<version>:<is_editable>.
Main commands#
Main commands that are used in benchopt.
install#
Install the requirements (solvers/datasets) for a benchmark.
benchopt install [OPTIONS] [BENCHMARK]
Options
- -f, --force#
If this flag is set, the reinstallation of the benchmark requirements is forced.
- --minimal#
If this flag is set, only install requirements for the benchmark’s objective.
- -s, --solver <solver_name>#
Include <solver_name> in the installation. By default, all solvers are included, except when the -d flag is used; in that case, no solver is included by default. When -s is used, only the listed solvers are included. To include multiple solvers, use multiple -s options. To include all solvers, use the -s 'all' option.
- -d, --dataset <dataset_name>#
Install the dataset <dataset_name>. By default, all datasets are included, except when the -s flag is used; in that case, no dataset is included by default. When -d is used, only the listed datasets are included. Note that <dataset_name> can include parameters with the syntax dataset[parameter=value]. To include multiple datasets, use multiple -d options. To include all datasets, use the -d 'all' option.
- --config <config_file>#
YAML configuration file containing benchmark options, whose solvers and datasets will be installed.
- --download#
If this flag is set, call Dataset.get_data for all datasets, to make sure the data are present on the system.
- -e, --env#
Install all requirements in a dedicated conda environment for the benchmark. The environment is named ‘benchopt_<BENCHMARK>’ and all solver dependencies and datasets are installed in it.
- --env-name <env_name>#
Install the benchmark requirements in the conda environment named <env_name>. If it does not exist, it will be created by this command.
- --recreate#
If this flag is set, start with a fresh conda environment. It can only be used in combination with the -e/--env or --env-name options.
- -q, --quiet#
If this flag is set, conda’s output is silenced.
- Default: False
- -y, --yes#
If this flag is set, the user is not asked to confirm before installing requirements in the current environment. This option has no effect when combined with -e/--env or --env-name.
Arguments
- BENCHMARK#
Optional argument
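For example, the following installs the requirements for two solvers and one dataset in a dedicated conda environment (the benchmark path and the solver and dataset names are placeholders):
# Hypothetical example: create or update the benchmark's dedicated environment
benchopt install ./my_benchmark -e -s solver_1 -s solver_2 -d dataset_1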
run#
Run a benchmark with benchopt.
benchopt run [OPTIONS] [BENCHMARK]
Options
- -o, --objective <objective_filter>#
Select the objective based on its parameters, with the syntax objective[parameter=value]. This can be used to only include one set of parameters.
- -s, --solver <solver_name>#
Include <solver_name> in the benchmark. By default, all solvers are included. When -s is used, only listed solvers are included. Note that <solver_name> can include parameters, with the syntax solver[parameter=value]. To include multiple solvers, use multiple -s options.
- -f, --force-solver <solver_name>#
Force the re-run for <solver_name>. This avoids caching effect when adding a solver. To select multiple solvers, use multiple -f options.
- -d, --dataset <dataset_name>#
Run the benchmark on <dataset_name>. By default, all datasets are included. When -d is used, only listed datasets are included. Note that <dataset_name> can include parameters, with the syntax dataset[parameter=value]. To include multiple datasets, use multiple -d options.
- -j, --n-jobs <int>#
Maximal number of workers to run the benchmark in parallel.
- Default: 1
- --slurm <slurm_config.yml>#
Run the computation using submitit on a SLURM cluster. The YAML file provided to this argument is used to set up the SLURM job. See Running the benchmark on a SLURM cluster for a detailed description.
- -n, --max-runs <int>#
Maximal number of runs for each solver. This corresponds to the number of points in the time/accuracy curve.
- Default: 100
- -r, --n-repetitions <int>#
Number of repetitions that are averaged to estimate the runtime.
- --timeout <timeout>#
Stop a solver when it has run for more than <timeout> seconds. The syntax 10h or 10m can be used to denote 10 hours or 10 minutes respectively. Not compatible with the --no-timeout option.
- --no-timeout#
If set, prevent solvers from stopping after running for a long time. Not compatible with the --timeout option.
- --collect#
If set, this run will only collect results that are already available in the cache. This allows gathering partial results while some solvers have not finished yet.
- --config <config_file>#
YAML configuration file containing benchmark options.
- --plot, --no-plot#
Whether or not to create plots from the results. Default is True.
- --display, --no-display#
Whether or not to display the plot on the screen. Default is True.
- --html, --no-html#
If set to True (default), render the results as an HTML page, otherwise create matplotlib figures, saved as PNG.
- --pdb#
Launch a debugger if there is an error. This will launch ipdb if it is installed and default to pdb otherwise.
- -l, --local#
Run the benchmark in the local conda environment.
- --profile#
Run line profiling on all functions decorated with @profile. This requires the line-profiler package. The profile decorator must be imported with: from benchopt.utils import profile
- -e, --env#
Run the benchmark in a dedicated conda environment for the benchmark. The environment is named benchopt_<BENCHMARK>.
- --env-name <env_name>#
Run the benchmark in the conda environment named <env_name>. To install the required solvers and datasets, see the command benchopt install.
- --output <output>#
Filename for the result output. If given, the results will be stored at <BENCHMARK>/outputs/<filename>.parquet. If another result file has the same name, a number is appended to distinguish them (e.g. <BENCHMARK>/outputs/<filename>_1.parquet). If not provided, the output will be saved as <BENCHMARK>/outputs/benchopt_run_<timestamp>.parquet.
Arguments
- BENCHMARK#
Optional argument
To (re-)install the required solvers and datasets in a benchmark-dedicated conda environment or in your own conda environment, see the command benchopt install.
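As an illustration, the following runs two solvers on one dataset with a bounded budget (the benchmark path and the solver and dataset names are placeholders):
# Hypothetical example: 5 repetitions, at most 100 points per curve, 10-minute timeout, 4 workers
benchopt run ./my_benchmark -s solver_1 -s solver_2 -d dataset_1 -r 5 -n 100 --timeout 10m -j 4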
test#
Test a benchmark for benchopt. The benchmark must feature a simulated dataset to test for all solvers. For more info about the benchmark tests configuration and requirements, see Testing a benchmark.
benchopt test [OPTIONS] [BENCHMARK]
[PYTEST_ARGS]...
Options
- --env-name <NAME>#
Environment to run the test in. If it is not provided, a temporary one is created for the test.
Arguments
- BENCHMARK#
Optional argument
- PYTEST_ARGS#
Optional argument(s)
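Extra arguments placed after the benchmark path are forwarded to pytest. For example, assuming pytest arguments can be passed directly this way (the benchmark path, environment name, and test selector are placeholders):
# Hypothetical example: run only the tests matching 'solver_1' in an existing environment
benchopt test ./my_benchmark --env-name my_env -k solver_1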
Process results#
Utilities to process benchmark outputs produced by benchopt.
generate-results#
Generate a result website from a list of benchmarks.
benchopt generate-results [OPTIONS]
Options
- -b, --benchmark <bench>#
Folders containing benchmarks to include.
- -k, --pattern <pattern>#
Include results matching <pattern>.
- --root <root>#
If no benchmark is provided, include all benchmarks in sub-directories of <root>. Defaults to the current directory.
- --display, --no-display#
Whether or not to display the plot on the screen.
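For instance, to build a single result website from all benchmarks found under a root folder (the path is a placeholder):
# Hypothetical example: collect results from every benchmark found under ./benchmarks
benchopt generate-results --root ./benchmarks --no-display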
plot#
Plot the result from a previously run benchmark.
benchopt plot [OPTIONS] [BENCHMARK]
Options
- -f, --filename <filename>#
Specify the result file to plot. If it is not specified, the latest one in the benchmark output folder is used.
- -k, --kind <kinds>#
Specify the type of figure to plot: objective_curve, suboptimality_curve, relative_suboptimality_curve, bar_chart, or boxplot.
- --display, --no-display#
Whether or not to display the plot on the screen. Default is True.
- --html, --no-html#
If set to True (default), render the results as an HTML page, otherwise create matplotlib figures, saved as PNG.
- --plotly#
If this flag is set, generate the figure as HTML with plotly. This option does not work with all plot kinds and requires plotly to be installed.
- --all#
If this flag is set, generate the plot for all existing runs of a benchmark at once.
Arguments
- BENCHMARK#
Optional argument
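For example, to plot a specific kind of figure from a given result file (the benchmark path and filename are placeholders):
# Hypothetical example: suboptimality curve rendered as matplotlib figures saved as PNG
benchopt plot ./my_benchmark -f my_run.parquet -k suboptimality_curve --no-html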
publish#
Publish the result from a previously run benchmark.
See the documentation on publishing results for more info on how to use this command.
benchopt publish [OPTIONS] [BENCHMARK]
Options
- -t, --token <token>#
GitHub token used to access the result repository.
- -f, --filename <filename>#
Specify the result file to publish. If it is not specified, the latest one in the benchmark output folder is used.
Arguments
- BENCHMARK#
Optional argument
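For example (the benchmark path is a placeholder and <token> stands for a personal GitHub access token):
# Hypothetical example: publish the latest result file of the benchmark
benchopt publish ./my_benchmark -t <token>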
Helpers#
Helpers to clean and configure benchopt.
archive#
Create an archive of the benchmark that can easily be shared.
benchopt archive [OPTIONS] [BENCHMARK]
Options
- --with-outputs#
If this flag is set, also store the outputs of the benchmark in the archive.
Arguments
- BENCHMARK#
Optional argument
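For example (the benchmark path is a placeholder):
# Hypothetical example: archive the benchmark together with its output files
benchopt archive ./my_benchmark --with-outputs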
clean#
Clean the cache and the outputs from a benchmark.
benchopt clean [OPTIONS] [BENCHMARK]
Options
- -f, --filename <filename>#
Name of the output file to remove.
Arguments
- BENCHMARK#
Optional argument
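For example (the benchmark path and output filename are placeholders):
# Hypothetical example: remove a single result file; without -f, the cache and outputs are cleaned
benchopt clean ./my_benchmark -f my_run.parquet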
config#
Configuration helper for benchopt. The configuration of benchopt is detailed in the configuration documentation.
benchopt config [OPTIONS] COMMAND [ARGS]...
Options
- -b, --benchmark <benchmark>#
get#
Get config value for setting <name>.
benchopt config get [OPTIONS] <name>
Arguments
- <name>#
Required argument
set#
Set value of setting <name> to <val>.
Multiple values can be provided as separate arguments. This will generate a list of values in the config file.
benchopt config set [OPTIONS] <name> <val>
Options
- -a, --append#
Can be used to append values to the existing ones for settings that take a list of values.
Arguments
- <name>#
Required argument
- <val>#
Required argument(s)
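For example, reading and then overwriting a setting (the setting name data_dir is only used for illustration and assumes such a setting exists in your benchopt configuration):
# Hypothetical example: query a setting, then set a new value
benchopt config get data_dir
benchopt config set data_dir ~/benchopt_data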
info#
List information (solvers/datasets) and corresponding requirements for a given benchmark.
benchopt info [OPTIONS] [BENCHMARK]
Options
- -s, --solver <solver_name>#
Display information about <solver_name>. By default, all solvers are included, except when the -d flag is used; in that case, no solver is included by default. When -s is used, only the listed solvers are included. To include multiple solvers, use multiple -s options. To include all solvers, use the -s 'all' option. Using a -s option will trigger the verbose output.
- -d, --dataset <dataset_name>#
Display information about <dataset_name>. By default, all datasets are included, except when the -s flag is used; in that case, no dataset is included by default. When -d is used, only the listed datasets are included. Note that <dataset_name> can be a regexp. To include multiple datasets, use multiple -d options. To include all datasets, use the -d 'all' option. Using a -d option will trigger the verbose output.
- -e, --env#
Additionally check that the requirements are available in the benchmark's dedicated conda environment, named 'benchopt_<BENCHMARK>'.
- --env-name <env_name>#
Additionally check that the requirements are available in the conda environment named <env_name>.
- -v, --verbose#
If used, list solver/dataset parameters, dependencies and availability.
Arguments
- BENCHMARK#
Optional argument
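For example, to list the requirements of a single solver and check their availability in the benchmark's dedicated conda environment (the benchmark path and solver name are placeholders):
# Hypothetical example: verbose information about one solver
benchopt info ./my_benchmark -s solver_1 -e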
sys-info#
Get details on the system (processor, RAM, etc.).
benchopt sys-info [OPTIONS]