Abstract
One of the most challenging problems in kernel online learning is bounding the model size. Budgeted kernel online learning addresses this issue by limiting the model size to a predefined budget; however, determining an appropriate value for such a budget is arduous. In this paper, we propose Nonparametric Budgeted Stochastic Gradient Descent, which allows the model size to grow automatically with the data in a principled way. We provide a theoretical analysis showing that our framework is guaranteed to converge for a large collection of loss functions (e.g. Hinge, Logistic, L2, L1, and ε-insensitive), which enables the proposed algorithm to perform both classification and regression tasks without hurting the ideal O(1/T) convergence rate of standard Stochastic Gradient Descent. We validate our algorithm on real-world datasets to consolidate the theoretical claims.
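To make the setting concrete, the following is a minimal, hypothetical sketch of budgeted kernel online learning with stochastic gradient descent, where the set of support vectors grows only when a new point is far enough (in feature space) from the stored ones, rather than being capped at a fixed budget. The class name `KernelOnlineSGD`, the coverage threshold `nu`, and the growth rule are assumptions for illustration only and are not the authors' algorithm; the update uses a Pegasos-style step size with hinge loss and an RBF kernel.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

class KernelOnlineSGD:
    """Illustrative online kernel SGD for binary classification with hinge loss.

    The model f(x) = sum_i alpha_i * k(x_i, x) keeps a growing support set;
    a new point is stored only if its feature-space distance to the current
    support set exceeds the coverage threshold `nu` (a stand-in for a
    data-driven growth rule, not the rule from the paper).
    """

    def __init__(self, gamma=1.0, lam=0.01, nu=0.5):
        self.gamma = gamma   # RBF kernel width
        self.lam = lam       # regularisation constant (step size is 1/(lam*t))
        self.nu = nu         # coverage threshold controlling model growth
        self.support = []    # stored support vectors
        self.alpha = []      # their coefficients

    def decision(self, x):
        return sum(a * rbf_kernel(s, x, self.gamma)
                   for s, a in zip(self.support, self.alpha))

    def partial_fit(self, x, y, t):
        eta = 1.0 / (self.lam * t)           # standard SGD step size at round t
        # shrink existing coefficients (gradient of the regulariser)
        self.alpha = [(1.0 - eta * self.lam) * a for a in self.alpha]
        if y * self.decision(x) < 1.0:       # hinge loss is active
            # ||phi(x) - phi(s)||^2 = 2 - 2 k(x, s) for a normalised RBF kernel
            if not self.support or min(
                2.0 - 2.0 * rbf_kernel(s, x, self.gamma) for s in self.support
            ) > self.nu ** 2:
                self.support.append(x)       # grow the model with the data
                self.alpha.append(eta * y)
            else:
                # absorb the update into the nearest existing support vector
                j = int(np.argmin([np.sum((s - x) ** 2) for s in self.support]))
                self.alpha[j] += eta * y

# Example usage on a toy stream (assumed data, for illustration only):
rng = np.random.default_rng(0)
model = KernelOnlineSGD(gamma=0.5, lam=0.01, nu=0.6)
for t in range(1, 201):
    x = rng.normal(size=2)
    y = 1.0 if x[0] + x[1] > 0 else -1.0
    model.partial_fit(x, y, t)
print(len(model.support), "support vectors retained")
```

In this sketch the hand-tuned constant `nu` controls how quickly the model grows; in the paper, the growth of the model size is governed in a principled, data-driven way rather than by such a fixed threshold.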
Original language | English |
---|---|
Title of host publication | Proceedings of Machine Learning Research |
Subtitle of host publication | Artificial Intelligence and Statistics, 9-11 May 2016, Cadiz, Spain |
Editors | Arthur Gretton, Christian C. Robert |
Place of Publication | USA |
Publisher | Proceedings of Machine Learning Research (PMLR) |
Pages | 564-572 |
Number of pages | 9 |
Volume | 51 |
Publication status | Published - 2016 |
Externally published | Yes |
Event | International Conference on Artificial Intelligence and Statistics 2016 (19th), Cadiz, Spain, 9 May 2016 → 11 May 2016, http://proceedings.mlr.press/v51/ (Proceedings) |
Publication series
Name | Proceedings of Machine Learning Research |
---|---|
ISSN (Print) | 1938-7228 |
Conference
Conference | International Conference on Artificial Intelligence and Statistics 2016 |
---|---|
Abbreviated title | AISTATS 2016 |
Country/Territory | Spain |
City | Cadiz |
Period | 9/05/16 → 11/05/16 |
Internet address | http://proceedings.mlr.press/v51/ |