Abstract
We consider distributed online learning protocols that control the exchange of information between local learners in a round-based learning scenario. The learning performance of such a protocol is intuitively optimal if it incurs approximately the same loss as a hypothetical serial setting. However, a protocol that accomplishes this cannot simultaneously achieve a strong communication bound: in the worst case, every input is essential for the learning performance, even in the serial setting, and thus must be exchanged between the local learners. It is nevertheless reasonable to demand a bound that scales with the hardness of the serialized prediction problem, as measured by the loss incurred by a serial online learning algorithm. We provide formal criteria based on this intuition and show that they hold for a simplified version of a previously published protocol.
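To make the trade-off concrete, the following is a minimal sketch of such a round-based setting, not the paper's protocol: `m` local learners run online gradient descent on a hinge loss, and a hypothetical loss-adaptive trigger (`sync_threshold`) exchanges models only on rounds where the incurred loss is high. All names and the linear-model/hinge-loss setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hinge_grad(w, x, y):
    # Subgradient of the hinge loss max(0, 1 - y * <w, x>).
    return -y * x if y * (w @ x) < 1.0 else np.zeros_like(w)

def distributed_rounds(m=4, d=10, T=200, eta=0.1, sync_threshold=0.5):
    """Round-based protocol sketch: m local learners update independently
    and exchange (average) their models only when the mean local loss in a
    round exceeds sync_threshold, so easy rounds cost no communication."""
    w = np.zeros((m, d))          # one linear model per local learner
    w_star = rng.normal(size=d)   # hidden target used to label the stream
    total_loss, messages = 0.0, 0
    for _ in range(T):
        round_loss = 0.0
        for i in range(m):
            x = rng.normal(size=d)
            y = np.sign(w_star @ x)
            round_loss += max(0.0, 1.0 - y * (w[i] @ x))
            w[i] -= eta * hinge_grad(w[i], x, y)  # local OGD step
        total_loss += round_loss
        if round_loss / m > sync_threshold:   # communicate only on hard rounds
            w[:] = w.mean(axis=0)             # synchronize to the average model
            messages += m                     # each learner sends its model once
    return total_loss, messages

if __name__ == "__main__":
    loss, msgs = distributed_rounds()
    print(f"cumulative loss: {loss:.1f}, models exchanged: {msgs}")
```

Lowering `sync_threshold` drives the sketch toward per-round synchronization (serial-like loss, maximal communication); raising it saves messages on easy streams, mirroring the abstract's demand that communication scale with the loss of the serial learner.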
| Original language | English |
| --- | --- |
| Title of host publication | OPT2014 |
| Subtitle of host publication | Optimization for Machine Learning |
| Editors | Suvrit Sra, Alekh Agarwal, Miro Dudik, Zaid Harchaoui, Martin Jaggi, Aaditya Ramdas |
| Place of Publication | Montreal, Quebec, Canada |
| Publisher | Neural Information Processing Systems (NIPS) |
| Number of pages | 5 |
| Publication status | Published - 2014 |
| Externally published | Yes |
| Event | Optimization for Machine Learning 2014, Montreal, Canada (Duration: 12 Dec 2014 → 12 Dec 2014) |
Conference
| Conference | Optimization for Machine Learning 2014 |
| --- | --- |
| Abbreviated title | OPT2014 |
| Country/Territory | Canada |
| City | Montreal |
| Period | 12/12/14 → 12/12/14 |
| Internet address | http://opt-ml.org/oldopt/opt14/index.html |