Talk

Sparse coding and invariance in neural systems

  • Bruno Olshausen (Redwood Center for Theoretical Neuroscience, University of California at Berkeley, USA)
A3 01 (Sophus-Lie room)

Abstract

There is now accumulating evidence that cortical neurons represent sensory input in a sparse format. Such sparse representations are useful because they make explicit the features contained in sensory data, and they facilitate the learning of associations and higher-order statistical relationships at higher levels of analysis. However, the neural mechanisms of sparse coding are not well understood. Here, I will describe a model neural circuit that computes sparse representations efficiently using a network of recurrently connected leaky integrator + threshold units (essentially a Hopfield network that minimizes a weighted combination of reconstruction error and an activity cost function). When applied to video sequences, the resulting sparse codes exhibit inertia: they evolve more smoothly and predictably over time than the representations produced by greedy algorithms such as matching pursuit. I will also describe how sparse representations can be factorized into amplitude and phase components, which allows higher levels of analysis to learn invariances from natural image sequences.
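
To make the described dynamics concrete, the following is a minimal Python/NumPy sketch of a locally-competitive, LCA-style sparse coding circuit of the kind the abstract alludes to: leaky integrator units driven by the input, soft-thresholded to produce sparse outputs, and coupled by recurrent inhibition so that the network descends a weighted sum of reconstruction error and an activity cost. The dictionary `Phi`, the parameters `lam`, `tau`, and `n_steps`, and the function name are illustrative assumptions, not the speaker's actual implementation.

```python
import numpy as np

def lca_sparse_code(x, Phi, lam=0.1, tau=0.05, n_steps=300):
    """Sketch of LCA-style inference of sparse coefficients a with x ~= Phi @ a.

    Each unit is a leaky integrator with internal state u; its output a is a
    soft-thresholded version of u, and units inhibit one another in proportion
    to the overlap of their dictionary elements. (Hypothetical parameters.)
    """
    n_atoms = Phi.shape[1]
    u = np.zeros(n_atoms)                  # membrane potentials (internal states)
    b = Phi.T @ x                          # feedforward drive from the input
    G = Phi.T @ Phi - np.eye(n_atoms)      # recurrent (lateral) inhibition weights

    def threshold(u):
        # Soft threshold: small potentials produce zero output (sparsity).
        return np.where(np.abs(u) > lam, u - lam * np.sign(u), 0.0)

    for _ in range(n_steps):
        a = threshold(u)
        du = b - u - G @ a                 # leak + input drive + competition
        u = u + tau * du
    return threshold(u)

# Usage with a random dictionary and a two-atom signal, just to illustrate the call.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))
Phi /= np.linalg.norm(Phi, axis=0)         # unit-norm dictionary elements
x = Phi[:, [3, 97]] @ np.array([1.5, -0.8])
a = lca_sparse_code(x, Phi)
print("active units:", np.nonzero(a)[0])
print("reconstruction error:", np.linalg.norm(x - Phi @ a))
```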
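
The amplitude/phase factorization can likewise be illustrated with a small, hedged example. The abstract does not specify how the complex coefficients are obtained, so this sketch simply uses Fourier coefficients of a shifted signal as a stand-in for complex sparse coefficients: the amplitude component stays (nearly) invariant under the shift while the phase component carries the transformation, which is the property a higher level of analysis can exploit to learn invariances.

```python
import numpy as np

def split_amplitude_phase(z):
    """Split complex coefficients into amplitude and phase components.

    Amplitudes change slowly under small transformations of the input, so a
    higher layer can learn invariances from them; phase tracks the
    transformation itself. (Illustrative stand-in, not the speaker's model.)
    """
    return np.abs(z), np.angle(z)

# A shifted sinusoid keeps its Fourier amplitude but changes its phase.
t = np.arange(128)
x0 = np.cos(2 * np.pi * 5 * t / 128)
x1 = np.cos(2 * np.pi * 5 * (t - 3) / 128)        # same pattern, shifted in time
a0, p0 = split_amplitude_phase(np.fft.rfft(x0))
a1, p1 = split_amplitude_phase(np.fft.rfft(x1))
print("max amplitude change:", np.max(np.abs(a0 - a1)))        # ~0 (invariant)
print("phase change at the active frequency:", p1[5] - p0[5])  # nonzero (the shift)
```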