Aggregated with starz:
Last time: 2019/04/10
$ starz sile
Total: 2087
jsone ★ 250
train.batch_size = 10
train.learning_rate = 0.1
# A solver implementation based on the Random Search algorithm.
from kurobako import problem
from kurobako import solver

import numpy as np


class RandomSolverFactory(solver.SolverFactory):
    def specification(self):
        return solver.SolverSpec(name='Random Search')

    def create_solver(self, seed, problem):
        ...
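For reference, a factory like this is usually turned into a runnable solver script by handing it to kurobako-py's SolverRunner. The entry point below is a sketch assuming the kurobako-py conventions; the body of create_solver (the actual random-search solver object it returns) is omitted above and not reconstructed here.

if __name__ == '__main__':
    # kurobako starts this script as a subprocess and communicates with it through the runner.
    runner = solver.SolverRunner(RandomSolverFactory())
    runner.run()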
# A solver implementation based on the Simulated Annealing algorithm.
from kurobako import solver
from kurobako.solver.optuna import OptunaSolverFactory

import optuna


class SimulatedAnnealingSampler(optuna.samplers.BaseSampler):
    # Please refer to
    # https://github.com/optuna/optuna/blob/v1.0.0/examples/samplers/simulated_annealing_sampler.py
    # for the implementation.
    ...
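The sampler above is exposed to kurobako via OptunaSolverFactory. The following entry point is likewise a sketch, assuming OptunaSolverFactory accepts a callable that builds an Optuna study (the seed argument is ignored here for brevity):

def create_study(seed):
    # Build a study that uses the custom sampler; seeding is omitted in this sketch.
    return optuna.create_study(sampler=SimulatedAnnealingSampler())


if __name__ == '__main__':
    runner = solver.SolverRunner(OptunaSolverFactory(create_study))
    runner.run()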
$ kurobako solver command python random.py > solvers.json
$ kurobako solver command python sa.py >> solvers.json
$ kurobako studies --problems $(cat problems.json) --solvers $(cat solvers.json) | kurobako run > result.json
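The resulting result.json can then be turned into a Markdown report (for example via kurobako's report subcommand, roughly `$ kurobako report < result.json`), which is presumably how the report excerpted below was produced.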
best value -> AUC
Please refer to ["A Strategy for Ranking Optimizers using Multiple Criteria"][Dewancker, Ian, et al., 2016] for the ranking strategy used in this report.
# 1. Download kurobako binary.
$ curl -L https://github.com/sile/kurobako/releases/download/0.2.6/kurobako-0.2.6.linux-amd64 -o kurobako
$ chmod +x kurobako && sudo mv kurobako /usr/local/bin/

# 2. Download the data file for HPOBench (note that the file size is about 700MB).
$ curl -OL http://ml4aad.org/wp-content/uploads/2019/01/fcnet_tabular_benchmarks.tar.gz
$ tar xf fcnet_tabular_benchmarks.tar.gz && cd fcnet_tabular_benchmarks/

# 3. Specify problems used in this benchmark.
#
The aim of this benchmark is to compare the performance of Optuna's pruners (i.e., NopPruner, MedianPruner, SuccessiveHalvingPruner, and the still-in-development HyperbandPruner). All of the pruners were used with their default settings in this benchmark.
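Using a pruner "with the default settings" here just means passing a default-constructed pruner object to optuna.create_study. The snippet below only illustrates that construction; the toy objective is not part of the benchmark itself.

import optuna


def objective(trial):
    # Toy objective, purely to show each pruner being used as-is.
    x = trial.suggest_uniform('x', -10, 10)
    return x ** 2


# The four pruners compared in this benchmark, all default-constructed.
pruners = [
    optuna.pruners.NopPruner(),
    optuna.pruners.MedianPruner(),
    optuna.pruners.SuccessiveHalvingPruner(),
    optuna.pruners.HyperbandPruner(),
]

for pruner in pruners:
    study = optuna.create_study(pruner=pruner)
    study.optimize(objective, n_trials=10)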
The commands to execute this benchmark are as follows:
// (1) Downloads the `kurobako` (BBO benchmark tool) binary.
$ curl -L https://github.com/sile/kurobako/releases/download/0.1.3/kurobako-0.1.3.linux-amd64 -o kurobako
$ chmod +x kurobako && sudo mv kurobako /usr/local/bin/
// (2) Downloads the HPOBench data files (note that the total size is over 700MB).
$ curl -OL http://ml4aad.org/wp-content/uploads/2019/01/fcnet_tabular_benchmarks.tar.gz

// Install `kurobako` (any one of the following):
$ cargo install kurobako
// or (Linux only)
$ wget https://github.com/sile/kurobako/releases/download/0.0.15/kurobako-0.0.15.linux-amd64 -O kurobako && chmod +x kurobako
// or
$ git clone git://github.com/sile/kurobako.git && cd kurobako && git checkout 0.0.14 && cargo install --path .
// For a custom sampler
$ kurobako benchmark --problems (kurobako problem-suite sigopt auc) --solvers (kurobako solver command -- python3 /tmp/optuna_solver_example.py ) --budget 100 --iterations 10 | kurobako run > /tmp/sigopt-my-sampler.json