Demo benchmark with Julia/R/Python

Demo benchmark in which we compare Lasso solvers implemented in Python, R and Julia on a simulated dataset.

[Six result figures for Lasso Regression[reg=0.5,fit_intercept=False] on Simulated[n_samples=100,n_features=5000,rho=0]: convergence curves of the objective value, support size and duality gap; suboptimality and relative suboptimality curves; and an objective value histogram.]

Out:

BenchOpt is running
Simulated[n_samples=100,n_features=5000,rho=0]
|--Lasso Regression[reg=0.5,fit_intercept=True]
|----Julia-PGD: skip
Reason: Julia-PGD does not handle fit_intercept
|----R-PGD: skip
Reason: R-PGD does not handle fit_intercept
|--Lasso Regression[reg=0.5,fit_intercept=False]
/home/circleci/miniconda/lib/python3.8/site-packages/julia/core.py:703: FutureWarning: Accessing `Julia().<name>` to obtain Julia objects is deprecated.  Use `from julia import Main; Main.<name>` or `jl = Julia(); jl.eval('<name>')`.
  warnings.warn(
|----Julia-PGD: done (timeout)
|----R-PGD: done (timeout)
Saving result in: /home/circleci/project/benchmarks/benchmark_lasso/outputs/benchopt_run_2021-11-25_14h59m38.csv
Save objective_curve plot of objective_value for Simulated[n_samples=100,n_features=5000,rho=0] and Lasso Regression[reg=0.5,fit_intercept=False] as: /home/circleci/project/benchmarks/benchmark_lasso/outputs/f366400cd3c8eeb6add02cab8b906b30_objective_value_objective_curve.pdf
Save objective_curve plot of objective_support_size for Simulated[n_samples=100,n_features=5000,rho=0] and Lasso Regression[reg=0.5,fit_intercept=False] as: /home/circleci/project/benchmarks/benchmark_lasso/outputs/f366400cd3c8eeb6add02cab8b906b30_objective_support_size_objective_curve.pdf
Save objective_curve plot of objective_duality_gap for Simulated[n_samples=100,n_features=5000,rho=0] and Lasso Regression[reg=0.5,fit_intercept=False] as: /home/circleci/project/benchmarks/benchmark_lasso/outputs/f366400cd3c8eeb6add02cab8b906b30_objective_duality_gap_objective_curve.pdf
Save suboptimality_curve plot of objective_value for Simulated[n_samples=100,n_features=5000,rho=0] and Lasso Regression[reg=0.5,fit_intercept=False] as: /home/circleci/project/benchmarks/benchmark_lasso/outputs/f366400cd3c8eeb6add02cab8b906b30_objective_value_suboptimality_curve.pdf
Save relative_suboptimality_curve plot of objective_value for Simulated[n_samples=100,n_features=5000,rho=0] and Lasso Regression[reg=0.5,fit_intercept=False] as: /home/circleci/project/benchmarks/benchmark_lasso/outputs/f366400cd3c8eeb6add02cab8b906b30_objective_value_relative_suboptimality_curve.pdf
Solver R-PGD did not reach precision 1e-06.
Save histogram plot of objective_value for Simulated[n_samples=100,n_features=5000,rho=0] and Lasso Regression[reg=0.5,fit_intercept=False] as: /home/circleci/project/benchmarks/benchmark_lasso/outputs/f366400cd3c8eeb6add02cab8b906b30_objective_value_histogram.pdf
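
In the output above, Julia-PGD and R-PGD are skipped for the fit_intercept=True objective and stop on the 100-second timeout for the fit_intercept=False one. The skip lines come from the solvers themselves: a benchopt solver can implement a skip method, which receives the same arguments as set_objective and returns a (skip, reason) pair. Below is a minimal sketch of this mechanism, assuming the Lasso objective passes X, y, lmbd and fit_intercept; the solver name and the plain ISTA update are illustrative, not the actual Julia or R implementations.

import numpy as np
from benchopt import BaseSolver


class Solver(BaseSolver):
    # Illustrative proximal gradient solver without intercept support.
    name = 'Demo-PGD'  # hypothetical solver name, for illustration only

    def skip(self, X, y, lmbd, fit_intercept):
        # Declaring an unsupported configuration makes benchopt print
        # "skip" with this reason, as for Julia-PGD and R-PGD above.
        if fit_intercept:
            return True, f'{self.name} does not handle fit_intercept'
        return False, None

    def set_objective(self, X, y, lmbd, fit_intercept):
        self.X, self.y, self.lmbd = X, y, lmbd

    def run(self, n_iter):
        X, y, lmbd = self.X, self.y, self.lmbd
        step = 1 / np.linalg.norm(X, ord=2) ** 2  # 1 / Lipschitz constant
        w = np.zeros(X.shape[1])
        for _ in range(n_iter):
            z = w - step * (X.T @ (X @ w - y))  # gradient step
            w = np.sign(z) * np.maximum(np.abs(z) - step * lmbd, 0.)  # prox
        self.w = w

    def get_result(self):
        return self.w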

from pathlib import Path

import matplotlib.pyplot as plt

from benchopt import run_benchmark
from benchopt.benchmark import Benchmark
from benchopt.tests import SELECT_ONE_SIMULATED
from benchopt.plotting import plot_benchmark, PLOT_KINDS


# Path to the Lasso benchmark, expected to be cloned next to this example.
BENCHMARK_PATH = Path().resolve().parent / 'benchmarks' / 'benchmark_lasso'

try:
    # Run the Python, R and Julia PGD solvers on one simulated dataset,
    # keeping only the objectives with regularisation reg=0.5.
    save_file = run_benchmark(
        Benchmark(BENCHMARK_PATH),
        ['Python-PGD[^-]*use_acceleration=False', 'R-PGD', 'Julia-PGD'],
        dataset_names=[SELECT_ONE_SIMULATED],
        objective_filters=['*reg=0.5'],
        max_runs=100, timeout=100, n_repetitions=5,
        plot_result=False, show_progress=False
    )
except RuntimeError:
    raise RuntimeError(
        "This example can only work when the Lasso benchmark is cloned in "
        "the example folder. Please run:\n"
        "$ git clone https://github.com/benchopt/benchmark_lasso "
        f"{BENCHMARK_PATH.resolve()}"
    )

# Generate one figure per available plot kind from the saved results.
kinds = list(PLOT_KINDS.keys())
figs = plot_benchmark(save_file, benchmark=Benchmark(BENCHMARK_PATH),
                      kinds=kinds, html=False)
plt.show()
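
The save_file returned by run_benchmark is the path to the CSV listed in the output above, so the raw results can also be inspected directly. Here is a quick look with pandas; the column names 'solver_name', 'time' and 'objective_value' are assumptions about the stored result format, so check df.columns if they differ:

import pandas as pd

# Load the raw benchmark results written by run_benchmark.
df = pd.read_csv(save_file)
# Summarise the recorded times and objective values per solver
# (column names assumed; inspect df.columns to confirm).
print(df.groupby('solver_name')[['time', 'objective_value']].describe())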

Total running time of the script: (4 minutes 42.774 seconds)
