Parallelizing randomized convex optimization

Michael Kamp, Mario Boley

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › Research › peer-review

Abstract

We define a general parallelization scheme for randomized convex optimization algorithms that optimize functions which are not directly observable. The scheme is consistent in the sense that it maintains the probabilistic performance guarantees of the underlying algorithm. At the same time, due to the parallelization, it achieves a speedup over the optimization algorithm of Θ(c^(1/lg d)), where c is the number of employed processors and d is the dimension of the solutions. A particular consequence is that all convex optimization problems that can be solved in polynomial time with probabilistic performance guarantees can also be solved efficiently in parallel, i.e., in polylogarithmic time on polynomially many processors. To achieve this, the scheme runs the algorithm in parallel to produce intermediate solutions with a weak guarantee and improves them by iteratively replacing sets of solutions by their Radon point.
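The iterative Radon-point aggregation described in the abstract can be sketched in a few lines of Python. The following is a minimal illustration only, not the authors' implementation: the function names radon_point and radon_machine, and the assumption that the base learners supply (d+2)^h weak solutions, are chosen here for the example. A Radon point of d+2 points in R^d is obtained from a nontrivial solution of the homogeneous system sum_i a_i x_i = 0, sum_i a_i = 0.

import numpy as np

def radon_point(points):
    """Return a Radon point of d+2 points in R^d: a point lying in the
    intersection of the convex hulls of the two parts of a Radon partition."""
    points = np.asarray(points, dtype=float)
    n, d = points.shape
    assert n == d + 2, "need exactly d+2 points in R^d"
    # Solve sum_i a_i * x_i = 0 and sum_i a_i = 0 for a nontrivial a
    # via the null space (last right-singular vector) of the stacked system.
    system = np.vstack([points.T, np.ones(n)])      # shape (d+1, d+2)
    _, _, vt = np.linalg.svd(system)
    a = vt[-1]
    pos = a > 0
    return (a[pos] @ points[pos]) / a[pos].sum()

def radon_machine(base_solutions, dim):
    """Aggregate weak solutions by repeatedly replacing groups of d+2
    solutions with their Radon point, as sketched in the abstract.
    Assumes len(base_solutions) == (dim + 2) ** h for some integer h."""
    solutions = [np.asarray(s, dtype=float) for s in base_solutions]
    group = dim + 2
    while len(solutions) > 1:
        solutions = [radon_point(solutions[i:i + group])
                     for i in range(0, len(solutions), group)]
    return solutions[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, h = 3, 2
    # Hypothetical weak solutions: noisy estimates of a true optimum.
    true_opt = np.array([1.0, -2.0, 0.5])
    weak = true_opt + rng.normal(scale=1.0, size=((d + 2) ** h, d))
    print(radon_machine(weak, d))

In this sketch each of the h aggregation rounds shrinks the number of candidate solutions by a factor of d+2, which is what allows the overall scheme to finish in a polylogarithmic number of rounds on polynomially many processors.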
Original language: English
Title of host publication: 8th NIPS Workshop on Optimization for Machine Learning (OPT)
Editors: Miroslav Dudik, Martin Jaggi, Zaid Harchaoui, Aaditya Ramdas
Place of publication: San Diego CA USA
Publisher: Neural Information Processing Systems (NIPS)
Number of pages: 5
Publication status: Published - 2015
Externally published: Yes
Event: Optimization for Machine Learning 2015 - Vancouver, Canada
Duration: 11 Dec 2015 - 11 Dec 2015
Conference number: 8th
http://www.opt-ml.org/oldopt/opt15/cfp.html

Conference

Conference: Optimization for Machine Learning 2015
Abbreviated title: OPT 2015
Country/Territory: Canada
City: Vancouver
Period: 11/12/15 - 11/12/15
Internet address: http://www.opt-ml.org/oldopt/opt15/cfp.html