Adaptive communication bounds for distributed online learning

Michael Kamp, Mario Boley, Daniel Keren, Assaf Schuster, Izchak Sharfman

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Other


We consider distributed online learning protocols that control the exchange of information between local learners in a round-based learning scenario. The learning performance of such a protocol is intuitively optimal if it incurs approximately the same loss as a hypothetical serial setting. If a protocol accomplishes this, however, it is inherently impossible to simultaneously achieve a strong communication bound: in the worst case, every input is essential for the learning performance, even in the serial setting, and thus needs to be exchanged between the local learners. It is nevertheless reasonable to demand a bound that scales with the hardness of the serialized prediction problem, as measured by the loss incurred by a serial online learning algorithm. We provide formal criteria based on this intuition and show that they hold for a simplified version of a previously published protocol.
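The adaptive-communication idea in the abstract can be illustrated with a minimal, hypothetical sketch (this is an illustration of the general technique, not the authors' exact protocol): each local learner performs online gradient updates on its own stream and triggers a synchronization, here simple model averaging, only when some local model drifts beyond a threshold `delta` from the last shared reference model. On easy streams the models stabilize quickly and little communication occurs; on hard streams synchronizations happen often, so communication adapts to the hardness of the prediction problem.

```python
import random

def local_update(w, x, y, lr=0.1):
    """One online gradient step on the squared loss for a linear model."""
    pred = sum(wi * xi for wi, xi in zip(w, x))
    g = pred - y
    return [wi - lr * g * xi for wi, xi in zip(w, x)]

def dist2(u, v):
    """Squared Euclidean distance between two weight vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def run_protocol(streams, delta, dim=2):
    """Round-based protocol with adaptive synchronization.

    `streams` is a list of per-node example streams, each a list of
    (x, y) pairs. Nodes update locally each round; a full synchronization
    (averaging all local models) is triggered only when some local model
    drifts more than `delta` (in squared distance) from the reference.
    Returns the final reference model and a message counter.
    """
    k = len(streams)
    models = [[0.0] * dim for _ in range(k)]
    reference = [0.0] * dim           # last synchronized model
    messages = 0                      # crude communication counter
    for round_examples in zip(*streams):
        models = [local_update(w, x, y)
                  for w, (x, y) in zip(models, round_examples)]
        if any(dist2(w, reference) > delta for w in models):
            # full synchronization: average all local models
            reference = [sum(ws) / k for ws in zip(*models)]
            models = [list(reference) for _ in range(k)]
            messages += 2 * k         # k uploads plus k broadcasts
    return reference, messages
```

Running the sketch on a noiseless linear target shows the trade-off: a tight threshold tracks the serial behavior closely at the cost of frequent synchronization, while a loose threshold communicates less.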
Original language: English
Title of host publication: OPT2014
Subtitle of host publication: Optimization for Machine Learning
Editors: Suvrit Sra, Alekh Agarwal, Miro Dudik, Zaid Harchaoui, Martin Jaggi, Aaditya Ramdas
Place of publication: Montreal, Quebec, CA
Publisher: Neural Information Processing Systems (NIPS)
Number of pages: 5
Publication status: Published - 2014
Externally published: Yes
Event: Optimization for Machine Learning 2014 - Montreal, Canada
Duration: 12 Dec 2014 - 12 Dec 2014


Conference: Optimization for Machine Learning 2014
Abbreviated title: OPT2014
