3 Jan
In this tutorial, Dr. Liam Paninski discusses the Expectation-Maximization (EM) algorithm and illustrates it with a variety of examples from neural data analysis.
Key topics addressed:
- Example: Mixture models and spike sorting
- Bound optimization via auxiliary functions, a useful alternative optimization technique
- The EM algorithm for maximizing the likelihood given hidden data may be derived as a bound optimization algorithm (a sketch of this derivation follows the list)
- EM may easily be adapted to optimize the log-posterior instead of the log-likelihood
- Example: Deriving the EM algorithm for the mixture model (spike sorting) case (see the code sketch after this list)
- Example: Spike sorting given stimulus observations
- Example: Generalized linear point-process models with spike-timing jitter
- Example: Fitting hierarchical generalized linear models for spike trains
- Example: Latent-variable models of overdispersion and common-input correlations in spike counts
- Example: Iterative proportional fitting
- The E-step may be used to compute the gradients of the marginal likelihood (Fisher's identity, stated after this list)
- The convergence rate of the EM algorithm depends on the “ratio of missing information” (made precise after this list)
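
To give a feel for the bound-optimization view, here is the standard derivation, sketched rather than quoted from the tutorial: for any distribution $q(z)$ over the hidden data $z$, Jensen's inequality lower-bounds the log-likelihood,

$$
\log p(y \mid \theta) \;=\; \log \sum_z q(z)\,\frac{p(y, z \mid \theta)}{q(z)} \;\ge\; \sum_z q(z) \log \frac{p(y, z \mid \theta)}{q(z)} \;=:\; F(q, \theta).
$$

The E-step sets $q(z) = p(z \mid y, \theta^{(t)})$, which makes the bound tight at the current parameters; the M-step maximizes $F$ over $\theta$ with $q$ held fixed, i.e. maximizes $\sum_z q(z) \log p(y, z \mid \theta)$. Since each iteration re-tightens and then raises the bound, the log-likelihood never decreases.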
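As a concrete illustration of the mixture-model (spike-sorting) example, here is a minimal numpy sketch of EM for a $K$-component Gaussian mixture over spike-waveform features. All function and variable names are mine, not the tutorial's, and this is a toy rather than a production spike sorter.

```python
# A minimal EM sketch for the mixture-model (spike-sorting) example:
# spike-waveform features are modeled as draws from one of K units.
# Names here are illustrative; this is a toy, not a production sorter.
import numpy as np

def em_gaussian_mixture(X, K, n_iter=100, seed=0):
    """Fit a K-component Gaussian mixture to feature vectors X (n x d) by EM."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)                 # mixing weights
    mu = X[rng.choice(n, K, replace=False)]  # means: random data points
    Sigma = np.stack([np.cov(X.T) + 1e-6 * np.eye(d)] * K)  # covariances
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] = p(z_i = k | x_i, theta).
        log_r = np.empty((n, K))
        for k in range(K):
            diff = X - mu[k]
            L = np.linalg.cholesky(Sigma[k])
            sol = np.linalg.solve(L, diff.T)  # L^{-1} (x - mu)^T
            log_r[:, k] = (np.log(pi[k])
                           - 0.5 * np.sum(sol ** 2, axis=0)
                           - np.log(np.diag(L)).sum()
                           - 0.5 * d * np.log(2 * np.pi))
        log_r -= log_r.max(axis=1, keepdims=True)  # stabilize before exp
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates.
        Nk = r.sum(axis=0)
        pi = Nk / n
        mu = (r.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            Sigma[k] = (r[:, k, None] * diff).T @ diff / Nk[k] + 1e-6 * np.eye(d)
    return pi, mu, Sigma, r
```

Run on, say, the first two principal components of detected waveforms, the soft assignments `r` can be hard-thresholded into unit labels; each iteration is one E-step/M-step pass of the bound-optimization scheme sketched above.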
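The bullet about gradients is the classical Fisher identity (again a standard result, sketched here for reference): differentiating the marginal likelihood and dividing through by $p(y \mid \theta)$ gives

$$
\nabla_\theta \log p(y \mid \theta) \;=\; \sum_z p(z \mid y, \theta)\, \nabla_\theta \log p(y, z \mid \theta),
$$

so the expected complete-data score under the E-step posterior equals the score of the marginal likelihood, and the same E-step computations can feed gradient-based optimizers.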
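The “ratio of missing information” statement can be made precise via the classical rate result of Dempster, Laird, and Rubin: near a fixed point $\theta^*$, the EM update contracts linearly,

$$
\theta^{(t+1)} - \theta^* \;\approx\; M\,(\theta^{(t)} - \theta^*), \qquad M = I_c(\theta^*)^{-1}\, I_m(\theta^*),
$$

where $I_c$ is the expected complete-data information and $I_m = I_c - I_{\mathrm{obs}}$ is the missing information; the larger the fraction of information carried by the hidden data, the closer the spectral radius of $M$ is to one and the slower EM converges.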
Published by Dimitrios A. Adamos in: Tutorials