Abstract
We define a general parallelization scheme for randomized convex optimization algorithms that optimize functions which are not directly observable. The scheme is consistent in the sense that it maintains the probabilistic performance guarantees of the underlying algorithm. At the same time, due to the parallelization, it achieves a speedup over the optimization algorithm of Θ(c^(1/lg d)), where c is the number of employed processors and d is the dimension of the solutions. A particular consequence is that all convex optimization problems that can be solved in polynomial time with probabilistic performance guarantees can also be solved efficiently in parallel, i.e., in polylogarithmic time on polynomially many processors. To achieve this, the scheme runs the algorithm in parallel to produce intermediate solutions with a weak guarantee and improves them by iteratively replacing sets of solutions with their Radon point.
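The aggregation step the abstract describes can be sketched compactly. The following is a minimal illustrative sketch (not the authors' implementation), assuming NumPy, that the number of parallel runs is c = r^h for the Radon number r = d + 2 of R^d, and that each weak solution is a point in R^d; the names `radon_point` and `radon_machine` are hypothetical.

```python
import numpy as np

def radon_point(points):
    """Compute a Radon point of r >= d + 2 points in R^d.

    Radon's theorem: any d + 2 points in R^d admit coefficients a,
    not all zero, with sum_i a_i * x_i = 0 and sum_i a_i = 0. The
    points with positive and negative coefficients then have
    intersecting convex hulls; a common point is returned.
    """
    points = np.asarray(points, dtype=float)   # shape (r, d)
    r, d = points.shape
    assert r >= d + 2, "need at least d + 2 points in R^d"
    # Homogeneous system [x_i; 1] a = 0, coefficient matrix (d+1, r).
    A = np.vstack([points.T, np.ones(r)])
    # A null-space vector: last right-singular vector of A.
    _, _, vt = np.linalg.svd(A)
    a = vt[-1]
    pos = a > 0                                 # nonempty, since sum(a) = 0
    return points[pos].T @ a[pos] / a[pos].sum()

def radon_machine(weak_solutions, r):
    """Aggregate c = r**h weak solutions into one strong solution
    by repeatedly replacing groups of r solutions by their Radon point."""
    solutions = list(weak_solutions)
    while len(solutions) > 1:
        solutions = [radon_point(solutions[i:i + r])
                     for i in range(0, len(solutions), r)]
    return solutions[0]
```

Within each iteration of the while loop, the Radon-point computations are independent of one another, so all groups at one level can be processed in parallel; this is where the speedup over the sequential algorithm comes from.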
Original language | English |
---|---|
Title of host publication | 8th NIPS Workshop on Optimization for Machine Learning (OPT) |
Editors | Miroslav Dudik, Martin Jaggi, Zaid Harchaoui, Aaditya Ramdas |
Place of Publication | San Diego, CA, USA |
Publisher | Neural Information Processing Systems (NIPS) |
Number of pages | 5 |
Publication status | Published - 2015 |
Externally published | Yes |
Event | Optimization for Machine Learning 2015, Vancouver, Canada. Duration: 11 Dec 2015 → 11 Dec 2015. Conference number: 8. http://www.opt-ml.org/oldopt/opt15/cfp.html |
Conference
Conference | Optimization for Machine Learning 2015 |
---|---|
Abbreviated title | OPT 2015 |
Country/Territory | Canada |
City | Vancouver |
Period | 11/12/15 → 11/12/15 |
Internet address | http://www.opt-ml.org/oldopt/opt15/cfp.html |