Abstract
Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM), which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable […]
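The abstract reports the cost comparison but not the algorithms themselves, so the sketch below is only a toy illustration of the asymmetry it describes, not the paper's ML-EM or FL: an online EM-style estimator of per-input firing rates for a hidden on/off process (posterior filtering as the E-step, posterior-weighted statistics as the M-step) against a deliberately cheap hard-assignment running-average rule standing in for a fast-learning-style update. Every name, rate, and update rule here (`em_style`, `fast_style`, `q_on`, `q_off`, the 20-input setup) is an illustrative assumption.

```python
import time

import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all assumptions): N Poisson inputs fire at q_on_true Hz while a
# hidden binary state is "on" and at q_off_true Hz while it is "off"; each
# estimator must recover the per-input rates online from the spikes alone.
N, T, dt = 20, 20_000, 0.001
q_on_true, q_off_true, p_switch = 40.0, 5.0, 1e-3

x = np.zeros(T, dtype=bool)
for t in range(1, T):
    x[t] = x[t - 1] ^ (rng.random() < p_switch)
spikes = rng.random((T, N)) < np.where(x[:, None], q_on_true, q_off_true) * dt


def em_style(spikes):
    """Online EM-style estimator: Bayes-filter the hidden state (E-step),
    then refresh rate estimates from posterior-weighted counts (M-step)."""
    pseudo_t = 0.1  # pseudo-observation time (s); acts as a smoothing prior
    s_on, t_on = np.full(N, 30.0 * pseudo_t), pseudo_t
    s_off, t_off = np.full(N, 10.0 * pseudo_t), pseudo_t
    q_on, q_off = s_on / t_on, s_off / t_off
    post = 0.5
    for s in spikes:
        # E-step: one-step prediction, then Bayes update of p(state = on).
        prior = post * (1 - p_switch) + (1 - post) * p_switch
        ll_on = np.sum(np.where(s, np.log(q_on * dt), np.log1p(-q_on * dt)))
        ll_off = np.sum(np.where(s, np.log(q_off * dt), np.log1p(-q_off * dt)))
        odds = prior / (1 - prior) * np.exp(ll_on - ll_off)
        post = odds / (1 + odds)
        # M-step: posterior-weighted spike counts and occupancy times.
        s_on += post * s
        t_on += post * dt
        s_off += (1 - post) * s
        t_off += (1 - post) * dt
        q_on, q_off = s_on / t_on, s_off / t_off
    return q_on, q_off


def fast_style(spikes, eta=2e-3):
    """Cheap stand-in for a fast rule: hard-classify the state from a leaky
    pooled spike count, then exponentially average the rates per class."""
    q_on, q_off = np.full(N, 30.0), np.full(N, 10.0)
    thresh = 0.5 * (q_on_true + q_off_true) * N * dt  # assumes known midpoint
    c = 0.0
    for s in spikes:
        c += 0.05 * (s.sum() - c)  # leaky estimate of the pooled count
        if c > thresh:
            q_on += eta * (s / dt - q_on)
        else:
            q_off += eta * (s / dt - q_off)
    return q_on, q_off


for name, fn in [("EM-style", em_style), ("fast-style", fast_style)]:
    start = time.perf_counter()
    q_on, q_off = fn(spikes)
    print(f"{name:10s} {time.perf_counter() - start:5.2f} s   "
          f"mean q_on ~ {q_on.mean():4.1f} Hz   "
          f"mean q_off ~ {q_off.mean():4.1f} Hz")
```

The EM-style loop evaluates two length-N log-likelihoods and a posterior normalization every time step, while the cheap rule performs a single fused update per step; that per-step gap, growing with the number of inputs, is the kind of run-time difference the abstract's comparison concerns, though this toy makes no claim to reproduce the reported 16.5-times figure.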
| Original language | English |
| --- | --- |
| Pages (from-to) | 472-496 |
| Number of pages | 25 |
| Journal | Neural Computation |
| Volume | 26 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - Mar 2014 |
| Externally published | Yes |