Demo benchmark with R/Python
Console output from running the example below:
Simulated[n_features=5000,n_samples=100,rho=0]
|--Lasso Regression[fit_intercept=False,reg=0.5]
|--Python-PGD[use_acceleration=False]: done
|--R-PGD: done (timeout)
Saving result in: /home/circleci/project/benchmarks/benchmark_lasso/outputs/benchopt_run_2024-12-31_02h05m46.parquet
Save objective_curve plot of objective_value for Simulated[n_features=5000,n_samples=100,rho=0] and Lasso Regression[fit_intercept=False,reg=0.5] as: /home/circleci/project/benchmarks/benchmark_lasso/outputs/3ebdde1738d5255ff1b6b4a7ea598289_objective_value_objective_curve.pdf
Save objective_curve plot of objective_support_size for Simulated[n_features=5000,n_samples=100,rho=0] and Lasso Regression[fit_intercept=False,reg=0.5] as: /home/circleci/project/benchmarks/benchmark_lasso/outputs/3ebdde1738d5255ff1b6b4a7ea598289_objective_support_size_objective_curve.pdf
Save objective_curve plot of objective_duality_gap for Simulated[n_features=5000,n_samples=100,rho=0] and Lasso Regression[fit_intercept=False,reg=0.5] as: /home/circleci/project/benchmarks/benchmark_lasso/outputs/3ebdde1738d5255ff1b6b4a7ea598289_objective_duality_gap_objective_curve.pdf
Save suboptimality_curve plot of objective_value for Simulated[n_features=5000,n_samples=100,rho=0] and Lasso Regression[fit_intercept=False,reg=0.5] as: /home/circleci/project/benchmarks/benchmark_lasso/outputs/3ebdde1738d5255ff1b6b4a7ea598289_objective_value_suboptimality_curve.pdf
Save relative_suboptimality_curve plot of objective_value for Simulated[n_features=5000,n_samples=100,rho=0] and Lasso Regression[fit_intercept=False,reg=0.5] as: /home/circleci/project/benchmarks/benchmark_lasso/outputs/3ebdde1738d5255ff1b6b4a7ea598289_objective_value_relative_suboptimality_curve.pdf
Solver R-PGD did not reach precision 1e-06.
Save bar_chart plot of objective_value for Simulated[n_features=5000,n_samples=100,rho=0] and Lasso Regression[fit_intercept=False,reg=0.5] as: /home/circleci/project/benchmarks/benchmark_lasso/outputs/3ebdde1738d5255ff1b6b4a7ea598289_objective_value_bar_chart.pdf
Save boxplot plot of objective_value for Simulated[n_features=5000,n_samples=100,rho=0] and Lasso Regression[fit_intercept=False,reg=0.5] as: /home/circleci/project/benchmarks/benchmark_lasso/outputs/3ebdde1738d5255ff1b6b4a7ea598289_objective_value_boxplot.pdf
Save boxplot plot of objective_support_size for Simulated[n_features=5000,n_samples=100,rho=0] and Lasso Regression[fit_intercept=False,reg=0.5] as: /home/circleci/project/benchmarks/benchmark_lasso/outputs/3ebdde1738d5255ff1b6b4a7ea598289_objective_support_size_boxplot.pdf
Save boxplot plot of objective_duality_gap for Simulated[n_features=5000,n_samples=100,rho=0] and Lasso Regression[fit_intercept=False,reg=0.5] as: /home/circleci/project/benchmarks/benchmark_lasso/outputs/3ebdde1738d5255ff1b6b4a7ea598289_objective_duality_gap_boxplot.pdf
from pathlib import Path

import matplotlib.pyplot as plt

from benchopt import run_benchmark
from benchopt.benchmark import Benchmark
from benchopt.plotting import plot_benchmark, PLOT_KINDS
from benchopt.plotting.plot_objective_curve import reset_solver_styles_idx


# The Lasso benchmark is expected to be cloned next to the example folder.
BENCHMARK_PATH = Path().resolve().parent / 'benchmarks' / 'benchmark_lasso'

if not BENCHMARK_PATH.exists():
    raise RuntimeError(
        "This example can only work when the Lasso benchmark is cloned in "
        "the example folder. Please run:\n"
        "$ git clone https://github.com/benchopt/benchmark_lasso "
        f"{BENCHMARK_PATH.resolve()}"
    )

# Run the Python and R proximal gradient descent (PGD) solvers on one
# simulated dataset, for the Lasso objective with reg=0.5 and no intercept.
# The raw results are written to a parquet file whose path is returned.
save_file = run_benchmark(
    BENCHMARK_PATH,
    solver_names=['Python-PGD[use_acceleration=False]', 'R-PGD'],
    dataset_names=["Simulated[n_features=5000,n_samples=100,rho=0]"],
    objective_filters=['*[fit_intercept=False,reg=0.5]'],
    max_runs=100, timeout=100, n_repetitions=5,
    plot_result=False, show_progress=False
)

# Plot the stored results with every available plot kind. With html=False,
# matplotlib figures are produced and saved as PDF files (see output above).
kinds = list(PLOT_KINDS.keys())
reset_solver_styles_idx()  # restart solver style/color cycling
figs = plot_benchmark(
    save_file, benchmark=Benchmark(BENCHMARK_PATH), kinds=kinds, html=False
)
plt.show()
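As an optional follow-up (not part of the original example), the saved results and figures can be reused directly. The sketch below is a minimal illustration, assuming pandas with a parquet engine is installed and that, with html=False, plot_benchmark returns a list of matplotlib figures, as the call to plt.show() above suggests.

import pandas as pd

# Load the raw benchmark measurements from the parquet file written by
# run_benchmark (its path is printed in the output above).
results = pd.read_parquet(save_file)
print(results.columns.tolist())  # inspect which metrics were recorded
print(results.head())

# Save each generated figure to its own PDF file next to the script.
for i, fig in enumerate(figs):
    fig.savefig(f"lasso_benchmark_plot_{i}.pdf")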
Total running time of the script: (2 minutes 15.897 seconds)