Mathematical Problems in Engineering
Volume 2010 (2010), Article ID 621670, 34 pages
Research Article

Generalised Filtering

1Wellcome Trust Centre for Neuroimaging, University College London, Queen Square, London WC1N 3BG, UK
2Laboratory for Social and Neural Systems Research, Institute of Empirical Research in Economics, University of Zurich, 8006 Zurich, Switzerland
3College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha, Hunan 410073, China

Received 29 January 2010; Accepted 17 March 2010

Academic Editor: Massimo Scalia

Copyright © 2010 Karl Friston et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


We describe a Bayesian filtering scheme for nonlinear state-space models in continuous time. This scheme is called Generalised Filtering and furnishes posterior (conditional) densities on hidden states and unknown parameters generating observed data. Crucially, the scheme operates online, assimilating data to optimise the conditional density on time-varying states and time-invariant parameters. In contrast to Kalman and particle smoothing, Generalised Filtering does not require a backwards pass. In contrast to variational schemes, it does not assume conditional independence between the states and parameters. Generalised Filtering optimises the conditional density with respect to a free-energy bound on the model's log-evidence. This optimisation uses the generalised motion of hidden states and parameters, under the prior assumption that the motion of the parameters is small. We describe the scheme, present comparative evaluations with a fixed-form variational version, and conclude with an illustrative application to a nonlinear state-space model of brain imaging time-series.
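The core idea sketched in the abstract — online assimilation by gradient descent of a free-energy bound, with states represented in generalised coordinates of motion — can be illustrated with a deliberately simple toy model. The following sketch is not the paper's scheme (which handles nonlinear models, parameters, and higher embedding orders); it assumes a scalar linear state-space model, generalised coordinates truncated at first order, a hand-chosen descent rate `kappa`, and hand-chosen noise precisions, all of which are illustrative choices rather than anything specified in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model (not the paper's fMRI model):
#   dx/dt = -x + w   (hidden dynamics, state noise w)
#   y     =  x + z   (observation, sensory noise z)
dt, T = 0.01, 500
x_true = np.empty(T)
x_true[0] = 1.0
for t in range(1, T):
    x_true[t] = x_true[t - 1] + dt * (-x_true[t - 1])
y = x_true + 0.05 * rng.standard_normal(T)

# Precisions (inverse variances) of sensory and state noise -- chosen by hand.
Pi_z, Pi_w = 1.0 / 0.05**2, 1.0

# Conditional mean in generalised coordinates, mu = (x, x'), truncated at order 1.
mu = np.zeros(2)
kappa = 0.1  # descent rate on the free-energy gradient (illustrative)
est = np.empty(T)
for t in range(T):
    # Precision-weighted prediction errors: sensory (y - g(mu)) and dynamical (mu' - f(mu)).
    e_z = y[t] - mu[0]
    e_w = mu[1] - (-mu[0])
    # Gradient of the (quadratic) free energy with respect to the generalised states.
    dF = np.array([-Pi_z * e_z + Pi_w * e_w,  # dF/dx
                   Pi_w * e_w])               # dF/dx'
    # Generalised motion of the mean: d(mu)/dt = D*mu - kappa*dF,
    # where D is the shift (derivative) operator mapping (x, x') -> (x', 0).
    Dmu = np.array([mu[1], 0.0])
    mu = mu + dt * (Dmu - kappa * dF)
    est[t] = mu[0]

rmse = np.sqrt(np.mean((est[50:] - x_true[50:]) ** 2))
print(f"tracking RMSE after burn-in: {rmse:.4f}")
```

Because the update integrates the mean's generalised motion alongside a descent on prediction errors, the estimate tracks the hidden state online, without a backwards (smoothing) pass; the same gradient-flow logic, extended to parameters with priors that their motion is small, is what the full scheme develops.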