Robust Online Learning Method Based on Dynamical Linear Quadratic Regulator

Hanwen Ning, Jiaming Zhang, Xingjian Jing, Tianhai Tian

Research output: Contribution to journal › Article › Research › peer-review

2 Citations (Scopus)

Abstract

In this paper, a novel algorithm is proposed for solving online learning tasks efficiently. Through a carefully designed scheme, the online learning problem is first formulated as a state feedback control problem for a series of finite-dimensional systems. The online linear quadratic regulator (OLQR) learning algorithm is then developed to obtain the optimal parameter update. A rigorous mathematical analysis of the convergence and rationality of the method is also provided. Compared with conventional learning methods, this learning framework takes a fundamentally different approach based on optimal control techniques and introduces no assumptions about the characteristics of the noise or the learning rate. The proposed method not only guarantees fast and robust convergence but also achieves better learning efficiency and accuracy, especially for data streams with complex noise disturbances. Moreover, under the proposed framework, new robust algorithms can potentially be developed for various machine learning tasks by exploiting powerful optimal control techniques. Numerical results on benchmark datasets and practical applications confirm the advantages of the new method.
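The abstract does not reproduce the paper's exact OLQR formulation, but the core idea — treating the parameter update as a state-feedback control input whose gain is obtained from a linear quadratic regulator — can be illustrated with a minimal, hypothetical sketch. Below, a scalar parameter of the model y = w·x is learned online: the estimation error is taken as the state of a simple integrator system (A = B = 1), the scalar discrete algebraic Riccati equation is solved in closed form, and the resulting LQR gain K drives each update. All names, the system matrices, and the cost weights q and r are illustrative assumptions, not the paper's actual algorithm.

```python
import math
import random

def scalar_dare(q, r):
    # Scalar discrete algebraic Riccati equation for A = B = 1:
    #   P = q + P - P^2 / (r + P)  =>  P^2 - q*P - q*r = 0
    # Return the positive root.
    return (q + math.sqrt(q * q + 4.0 * q * r)) / 2.0

def lqr_gain(q, r):
    # Optimal state-feedback gain K = (r + B*P*B)^{-1} * B*P*A with A = B = 1,
    # which lies in (0, 1), guaranteeing a contracting error dynamic.
    P = scalar_dare(q, r)
    return P / (r + P)

def olqr_style_fit(stream, q=1.0, r=0.1):
    """Online estimation of w in y = w*x + noise.

    The deviation e_k of the current estimate from the instantaneous
    target y/x plays the role of the state; the LQR feedback u_k = -K*e_k
    is applied as the parameter update (illustrative sketch only).
    """
    K = lqr_gain(q, r)
    w = 0.0
    for x, y in stream:
        if abs(x) > 1e-8:          # skip uninformative samples
            e = w - y / x          # state: deviation from instantaneous target
            w = w - K * e          # state-feedback update u_k = -K * e_k
    return w

# Synthetic noisy data stream with true parameter w = 2.0.
random.seed(0)
true_w = 2.0
data = []
for _ in range(500):
    x = random.uniform(0.5, 2.0)
    data.append((x, true_w * x + random.gauss(0.0, 0.05)))

w_hat = olqr_style_fit(data)
```

Because the closed-loop error obeys e_{k+1} = (1 - K)·e_k plus a noise term and K ∈ (0, 1), the estimate contracts geometrically toward the true parameter regardless of the noise distribution, which mirrors the robustness claim in the abstract; larger r (penalizing aggressive updates) yields a smaller gain and smoother, more conservative learning.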

Original language: English
Pages (from-to): 117780-117795
Number of pages: 16
Journal: IEEE Access
Volume: 7
DOIs
Publication status: Published - 2019

Keywords

  • complex noise disturbances
  • linear quadratic regulator
  • online machine learning
  • optimal control
