Hydra-MIP: Automated Algorithm Configuration and Selection for Mixed Integer Programming
Lin Xu, Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown
Department of Computer Science, University of British Columbia
Solving MIP more effectively
• Portfolio-based algorithm selection (SATzilla) [Xu et al., 2007; 2008; 2009]
• Where do the solvers come from? Parameter settings of a single solver (e.g., CPLEX)
• How to find good settings? Automated algorithm configuration tools [Hutter et al., 2007; 2009]
• How to find good candidates for algorithm selection? Algorithm configuration with a dynamic performance metric [Xu et al., 2010]
Hydra
• Portfolio-based algorithm selection. Some particularly related work: [Rice, 1976]; [Leyton-Brown, Nudelman & Shoham, 2003; 2009]; [Guerri & Milano, 2004]; [Nudelman, Leyton-Brown, Shoham & Hoos, 2004]
• Automated algorithm configuration. Some particularly related work: [Gratch & Dejong, 1992]; [Balaprakash, Birattari & Stuetzle, 2007]; [Hutter, Babic, Hoos & Hu, 2007]; [Hutter, Hoos, Stuetzle & Leyton-Brown, 2009]
Hydra combines the two: configuration supplies new models (solvers) to the portfolio, and selection makes better use of them.
Outline
• Improve algorithm selection
  • SATzilla
  • Drawback of SATzilla
  • New SATzilla with cost-sensitive classification
  • Results
• Reduce the construction cost
  • Hydra
  • The cost
  • Make full use of configurations
  • Results
• Conclusion
SATzilla: Portfolio-Based Algorithm Selection [Xu, Hutter, Hoos, Leyton-Brown, 2007; 2008]
• Given: training set of instances, performance metric, candidate solvers, portfolio builder (incl. instance features)
• Training: collect performance data; the portfolio builder learns predictive models
• At runtime: predict the performance of each candidate solver on the novel instance; select a solver and run it
(Diagram: training set, metric, and candidate solvers feed the portfolio builder, which outputs a portfolio-based algorithm selector; for a novel instance, the selector picks the solver to run.)
Drawback of SATzilla
Algorithm selection in SATzilla is based on regression:
• predict each solver's performance independently
• select the solver with the best prediction
i.e., classification via regression (sketched below).
Goal of regression: accurately predict each solver's performance.
Goal of algorithm selection: pick solvers on a per-instance basis so as to minimize an overall performance metric.
Better regression therefore does not necessarily mean better algorithm selection.
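For concreteness, here is a minimal Python sketch (not the authors' implementation) of this regression-based scheme: one model per solver predicts log runtime from instance features, and the solver with the best prediction is chosen. The ridge regressor and the `features`/`runtimes` containers are illustrative assumptions.

```python
# Illustrative sketch of regression-based selection (not the SATzilla code).
import numpy as np
from sklearn.linear_model import Ridge

def train_runtime_models(features, runtimes):
    """features: (n_instances, n_features) array; runtimes: {solver: (n_instances,) array}."""
    models = {}
    for solver, y in runtimes.items():
        # Predict log runtime, since runtimes span orders of magnitude.
        models[solver] = Ridge(alpha=1.0).fit(features, np.log10(y))
    return models

def select_by_regression(models, x):
    """Pick the solver with the lowest predicted (log) runtime for one instance x."""
    preds = {s: m.predict(x.reshape(1, -1))[0] for s, m in models.items()}
    return min(preds, key=preds.get)
```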
Cost-sensitive classification for SATzilla
• Loss function: the performance difference between solvers, so misclassifications are punished in direct proportion to their impact on portfolio performance
• No need to predict runtime
Implementation: binary cost-sensitive classifiers, realized as decision forests (DF)
• build a DF for each pair of candidate solvers; each DF casts one vote for the better solver of its pair
• the solver with the most votes is selected (see the sketch below)
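A minimal sketch of this pairwise voting scheme, assuming the same `features`/`runtimes` layout as in the previous sketch; the use of scikit-learn random forests and the hyperparameters shown are assumptions, not the paper's exact setup.

```python
# Sketch of pairwise cost-sensitive classification with decision forests.
from itertools import combinations
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_pairwise_forests(features, runtimes):
    """Train one forest per solver pair (a, b) to predict which solver is faster,
    weighting each training instance by |runtime_a - runtime_b|, i.e. by the cost
    the portfolio pays if that instance is misclassified."""
    forests = {}
    for a, b in combinations(runtimes.keys(), 2):
        labels = (runtimes[a] <= runtimes[b]).astype(int)   # 1 means solver a is better
        weights = np.abs(runtimes[a] - runtimes[b])
        forests[(a, b)] = RandomForestClassifier(n_estimators=99).fit(
            features, labels, sample_weight=weights)
    return forests

def vote_for_solver(forests, x):
    """Each pairwise forest casts one vote; the solver with the most votes wins."""
    votes = {}
    for (a, b), forest in forests.items():
        winner = a if forest.predict(x.reshape(1, -1))[0] == 1 else b
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)
```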
SATzillaDF performance (performance plots omitted)
LR: linear regression as used in previous SATzilla; DF: cost-sensitive decision forest
MIPzillaDF performance (performance plots omitted)
Hydra Procedure: Iterations 1, 2, 3, ...
In each iteration, the algorithm configurator tunes the parameterized algorithm on the training set with respect to the performance metric, the resulting candidate solver is added to the candidate solver set, and the portfolio builder rebuilds the portfolio-based algorithm selector from the updated set. From the second iteration on, configuration uses a dynamic performance metric that credits a configuration only where it improves on the current portfolio [Xu et al., 2010].
Hydra Procedure: After Termination
Output: a portfolio-based algorithm selector that, given a novel instance, selects a solver to run.
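The whole procedure can be summarized in a short Python sketch. The callables `configure`, `build_selector`, and `portfolio_time` are placeholders for the algorithm configurator, the portfolio builder, and evaluation of the current selector; the dynamic-metric detail follows the description in [Xu et al., 2010] rather than anything shown on these slides.

```python
# High-level sketch of the Hydra loop; all components are passed in as callables.
def hydra(configure, build_selector, portfolio_time,
          parameterized_algorithm, training_set, metric, n_iterations):
    """configure(algo, instances, metric) -> one configuration;
    build_selector(solvers, instances, metric) -> portfolio-based selector;
    portfolio_time(selector, instance) -> cost of the current portfolio on instance."""
    candidate_solvers, selector = [], None
    for _ in range(n_iterations):
        def dynamic_metric(config, instance):
            # Score a configuration by min(own cost, current portfolio's cost):
            # it is rewarded only where it improves on the existing portfolio.
            own = metric(config, instance)
            return own if selector is None else min(own, portfolio_time(selector, instance))
        candidate_solvers.append(
            configure(parameterized_algorithm, training_set, dynamic_metric))
        selector = build_selector(candidate_solvers, training_set, metric)
    return selector
```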
We are wasting configuration results!
Each Hydra iteration runs the algorithm configurator on the parameterized algorithm over the training set, yet only a single candidate solver is kept from all of that configuration effort.
Make full use of configurations
Instead of keeping only one configuration, let each iteration contribute k candidate solvers produced by the algorithm configurator.
Make full use of configurations
Advantages:
• k solvers instead of 1 are added in each iteration (good for algorithm selection)
• no validation step is needed during configuration (saves time)
Disadvantage:
• runtime data must be collected for more solvers (costs time)
In our experiments, the time saved roughly equaled the extra cost at k = 4. (A sketch of the modified loop follows below.)
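Improvement II changes only the inner step of the loop above: each iteration keeps the configurations from k independent configurator runs (k = 4 in the experiments here) and skips the validation step that would otherwise pick a single one of them. A sketch with the same placeholder components:

```python
# Sketch of the multi-configuration variant: add k configurations per iteration.
def hydra_k(configure, build_selector, portfolio_time,
            parameterized_algorithm, training_set, metric, n_iterations, k=4):
    candidate_solvers, selector = [], None
    for _ in range(n_iterations):
        def dynamic_metric(config, instance):
            own = metric(config, instance)
            return own if selector is None else min(own, portfolio_time(selector, instance))
        # k independent configurator runs; all k results are kept (no validation step).
        candidate_solvers.extend(
            configure(parameterized_algorithm, training_set, dynamic_metric)
            for _ in range(k))
        selector = build_selector(candidate_solvers, training_set, metric)
    return selector
```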
Experimental Setup: Hydra's Inputs
• Portfolio builders: MIPzillaLR (SATzilla for MIP) [Xu et al., 2008] and MIPzillaDF (MIPzilla using cost-sensitive DF)
• Parameterized solver: CPLEX 12.1
• Algorithm configurator: FocusedILS 2.4.3 [Hutter, Hoos, Leyton-Brown, 2009]
• Performance metric: penalized average runtime (PAR)
• Instance sets: 4 heterogeneous sets obtained by combining homogeneous subsets [Hutter et al., 2010]; [Kadioglu et al., 2010]; [Ahmadizadeh et al., 2010]
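PAR counts runs that hit the cutoff at a multiple of the cutoff time; the penalty factor of 10 (PAR-10) in the sketch below is a common choice and an assumption here, since the slide does not state it.

```python
# Sketch of penalized average runtime (PAR): runs at or above the cutoff
# count as penalty_factor * cutoff (penalty_factor=10 gives PAR-10).
def penalized_average_runtime(runtimes, cutoff, penalty_factor=10):
    penalized = [t if t < cutoff else penalty_factor * cutoff for t in runtimes]
    return sum(penalized) / len(penalized)

# e.g. penalized_average_runtime([3.2, 10.0, 300.0], cutoff=300)
#      -> (3.2 + 10.0 + 3000.0) / 3 = 1004.4
```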
Three versions of Hydra for MIP
• HydraLR,1: original Hydra for MIP [Xu et al., 2010]
• HydraDF,1: Hydra for MIP with Improvement I (cost-sensitive DF selection)
• HydraDF,4: Hydra for MIP with Improvements I and II (k = 4 configurations per iteration)
MIP-Hydra performance on MIX
• HydraDF,* performs better than HydraLR,1
• HydraDF,4 performs similarly to HydraDF,1, but converges faster
• performance is close to that of the Oracle and MIPzillaDF
Conclusion
• Cost-sensitive-classification-based SATzilla outperforms the original SATzilla
• The new Hydra-MIP outperforms the CPLEX default, algorithm configuration alone, and the original Hydra on four heterogeneous MIP sets
Technical contributions:
• cost-sensitive classification yields better algorithm selection for SAT and MIP
• using multiple configurations per iteration speeds up the convergence of Hydra