Approximate, computationally efficient online learning in Bayesian spiking neurons

Levin Kuhlmann, Michael Hauser-Raspe, Jonathan H. Manton, David B. Grayden, Jonathan Tapson, André van Schaik

Research output: Contribution to journal › Article › Research › peer-review

4 Citations (Scopus)

Abstract

Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM), which is computationally slow and limits the potential for studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable.
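As context for the abstract's comparison, the sketch below illustrates the kind of model involved: a Deneve-style BSN that integrates Poisson input spikes into a log-odds estimate of a hidden binary state, with an online rule tracking the input rate parameters. The gated running-average update shown is a hypothetical stand-in, not the paper's FL or ML-EM algorithm, and all rates and constants are illustrative assumptions.

```python
# Minimal illustrative sketch, assuming a Deneve-style Bayesian spiking
# neuron (BSN): the neuron tracks the log-odds L of a hidden binary state x
# driven by Poisson inputs whose rates depend on x. The learning step is a
# hypothetical gated running average, NOT the paper's FL or ML-EM algorithm.
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 20                              # inputs to the BSN (as in the abstract's example)
dt = 0.001                                 # time step (s)
n_steps = 20000                            # 20 s of simulated time
q_on = rng.uniform(20.0, 60.0, n_inputs)   # true input rates when x = 1 (Hz)
q_off = rng.uniform(1.0, 10.0, n_inputs)   # true input rates when x = 0 (Hz)
r_on, r_off = 2.0, 2.0                     # hidden-state transition rates (Hz)

q_on_hat = np.full(n_inputs, 20.0)         # online estimates of the rate parameters
q_off_hat = np.full(n_inputs, 20.0)
eta = 0.05                                 # learning rate (illustrative)

L = 0.0                                    # log-odds log P(x=1) / P(x=0)
x = 0                                      # true hidden state (used only to generate spikes)

for _ in range(n_steps):
    # Hidden state evolves as a two-state Markov process.
    if x == 0 and rng.random() < r_on * dt:
        x = 1
    elif x == 1 and rng.random() < r_off * dt:
        x = 0

    # Poisson input spikes at rate q_on or q_off depending on x.
    spikes = rng.random(n_inputs) < (q_on if x else q_off) * dt

    # Prior drift of the belief from the transition rates, applied in
    # probability space for numerical stability.
    p = 1.0 / (1.0 + np.exp(-L))
    p = np.clip(p + dt * (r_on * (1.0 - p) - r_off * p), 1e-6, 1.0 - 1e-6)
    L = np.log(p / (1.0 - p))

    # Evidence from the inputs: synaptic weights and threshold implied by
    # the current rate estimates.
    w = np.log(q_on_hat / q_off_hat)
    L += w @ spikes - np.sum(q_on_hat - q_off_hat) * dt

    # Hypothetical learning step: nudge each rate estimate toward the
    # observed instantaneous rate, gated by the neuron's belief P(x=1).
    p = 1.0 / (1.0 + np.exp(-L))
    inst_rate = spikes / dt
    q_on_hat += eta * dt * p * (inst_rate - q_on_hat)
    q_off_hat += eta * dt * (1.0 - p) * (inst_rate - q_off_hat)

print("mean |q_on  error| (Hz):", np.mean(np.abs(q_on_hat - q_on)))
print("mean |q_off error| (Hz):", np.mean(np.abs(q_off_hat - q_off)))
```

The probability-space clip keeps the log-odds bounded, which avoids the overflow that a naive Euler step on L itself can produce when the belief saturates; this is a numerical convenience of the sketch, not a claim about the paper's method.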

Original language: English
Pages (from-to): 472-496
Number of pages: 25
Journal: Neural Computation
Volume: 26
Issue number: 3
DOIs
Publication status: Published - Mar 2014
Externally published: Yes
