# Particle filter
[Figure: result of particle filtering (red line) based on observed data generated from the blue line.]

Particle filters, also known as sequential Monte Carlo (SMC) methods, are sophisticated model estimation techniques based on simulation.

They are usually used to estimate Bayesian models and are the sequential ('on-line') analogue of Markov chain Monte Carlo (MCMC) batch methods; they are often similar to importance sampling methods. If well designed, particle filters can be much faster than MCMC. They are often an alternative to the extended Kalman filter (EKF) or unscented Kalman filter (UKF), with the advantage that, with sufficient samples, they approach the Bayesian optimal estimate, so they can be made more accurate than the EKF or UKF. The approaches can also be combined by using a version of the Kalman filter as a proposal distribution for the particle filter.

The particle filter aims to estimate the sequence of hidden parameters $x_k$ for $k=0,1,2,3,\ldots$, based only on the observed data $y_k$ for $k=0,1,2,3,\ldots$. All Bayesian estimates of $x_k$ follow from the posterior distribution $p(x_k|y_0,y_1,\ldots,y_k)$. In contrast, the MCMC or importance sampling approach would model the full posterior $p(x_0,x_1,\ldots,x_k|y_0,y_1,\ldots,y_k)$.

## Model

Particle methods assume that $x_k$ and the observations $y_k$ can be modeled in this form:

• $x_0, x_1, \ldots$ is a first-order Markov process such that
$x_k|x_{k-1} \sim p_{x_k|x_{k-1}}(x|x_{k-1})$

and with an initial distribution $p(x_0)$.

• The observations $y_0, y_1, \ldots$ are conditionally independent provided that $x_0, x_1, \ldots$ are known.
In other words, each $y_k$ depends only on $x_k$:
$y_k|x_k \sim p_{y|x}(y|x_k)$

One example form of this scenario is

$x_k = f(x_{k-1}) + v_k$
$y_k = h(x_k) + w_k$

where both $v_k$ and $w_k$ are mutually independent and identically distributed sequences with known probability density functions, and $f(\cdot)$ and $h(\cdot)$ are known functions. These two equations can be viewed as state-space equations and look similar to the state-space equations for the Kalman filter. If the functions $f(\cdot)$ and $h(\cdot)$ were linear, and if both $v_k$ and $w_k$ were Gaussian, the Kalman filter would find the exact Bayesian filtering distribution. If not, Kalman-filter-based methods are a first-order approximation. Particle filters are also an approximation, but with enough particles they can be much more accurate.
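As a concrete illustration, the state-space form above can be simulated directly. The particular choices of $f$, $h$ and the Gaussian noise scales below are illustrative assumptions, not part of the general model:

```python
import random

random.seed(0)

def f(x):
    # Assumed nonlinear state transition (illustrative choice)
    return 0.5 * x + 25.0 * x / (1.0 + x * x)

def h(x):
    # Assumed nonlinear observation function (illustrative choice)
    return x * x / 20.0

def simulate(N, q=1.0, r=1.0):
    """Generate hidden states x_k and observations y_k = h(x_k) + w_k."""
    xs, ys = [], []
    x = random.gauss(0.0, 1.0)  # draw x_0 from the initial distribution
    for _ in range(N):
        x = f(x) + random.gauss(0.0, q)         # x_k = f(x_{k-1}) + v_k
        xs.append(x)
        ys.append(h(x) + random.gauss(0.0, r))  # y_k = h(x_k) + w_k
    return xs, ys

xs, ys = simulate(50)
```

Only the observations `ys` would be available to the filter; the states `xs` are the hidden quantities to be estimated.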

## Monte Carlo approximation

Particle methods, like all sampling-based approaches (e.g., MCMC), generate a set of samples that approximate the filtering distribution $p(x_k|y_0,\ldots,y_k)$. So, with $P$ samples, expectations with respect to the filtering distribution are approximated by

$\int f(x_k)\,p(x_k|y_0,\ldots,y_k)\,dx_k \approx \frac{1}{P}\sum_{L=1}^P f(x_k^{(L)})$

and $f(\cdot)$, in the usual way for Monte Carlo, can give all the moments etc. of the distribution up to some degree of approximation.

Generally, the algorithm is repeated iteratively for a specific number of $k$ values (call this $N$). Initializing $x_{0|0}$ for all particles provides a starting place to generate $x_1$, which can then be used to generate $x_2$, and so on up to $k = N$. When done, the mean of $x_k$ over all the particles, $\frac{1}{P}\sum_{L=1}^P x_k^{(L)}$, is approximately the actual value of $x_k$.
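The Monte Carlo approximation of an expectation can be sketched in a few lines. Here the "particles" are simply i.i.d. Gaussian draws, a toy stand-in for samples from the filtering distribution, so the sample averages should approach the true moments:

```python
import random

random.seed(1)

P = 100_000
# Toy particle set: i.i.d. N(2, 1) draws standing in for samples
# from the filtering distribution p(x_k | y_0, ..., y_k)
particles = [random.gauss(2.0, 1.0) for _ in range(P)]

# E[f(x_k)] ≈ (1/P) * sum_L f(x^(L)); with f(x) = x this estimates the mean
mean_est = sum(particles) / P

# With f(x) = x**2 we get the second moment (true value μ² + σ² = 5)
second_moment_est = sum(x * x for x in particles) / P
```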

## Sampling Importance Resampling (SIR)

Sampling importance resampling (SIR) is a very commonly used particle filtering algorithm, which approximates the filtering distribution $p(x_k|y_0,\ldots,y_k)$ by a weighted set of particles

$\{(w^{(L)}_k, x^{(L)}_k) : L=1,\ldots,P\}$.

The importance weights $w^{(L)}_k$ are approximations to the relative posterior probabilities (or densities) of the particles such that $\sum_{L=1}^P w^{(L)}_k = 1$.

SIR is a sequential (i.e., recursive) version of importance sampling. As in importance sampling, the expectation of a function $f(\cdot)$ can be approximated as a weighted average:

$\int f(x_k)\, p(x_k|y_0,\ldots,y_k)\, dx_k \approx \sum_{L=1}^P w^{(L)}_k f(x_k^{(L)}).$

The performance of the algorithm depends on the choice of the importance distribution

$\pi(x_k|x_{0:k-1},y_{0:k}).$

The optimal importance distribution is given as

$\pi(x_k|x_{0:k-1},y_{0:k}) = p(x_k|x_{k-1},y_k).$

However, the transition prior is often used as the importance function, since it is easier to draw particles (or samples) and perform the subsequent importance weight calculations:

$\pi(x_k|x_{0:k-1},y_{0:k}) = p(x_k|x_{k-1}).$

With this choice, the transition density cancels in the weight update, which reduces to multiplying the previous weight by the likelihood $p(y_k|x^{(L)}_k)$.

Sampling importance resampling (SIR) filters with the transition prior as importance function are commonly known as the bootstrap filter and the condensation algorithm.

Resampling is used to avoid the problem of degeneracy of the algorithm, that is, the situation in which all but one of the importance weights are close to zero. The performance of the algorithm can also be affected by the choice of resampling method; the stratified resampling proposed by Kitagawa (1996) is optimal in terms of variance.
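A minimal sketch of stratified resampling: one uniform draw per stratum $[(i-1)/P, i/P)$, mapped through the cumulative weights. This is a common way to implement the scheme; details vary across references:

```python
import random

def stratified_resample(weights, rng=random.Random(0)):
    """Return P indices drawn with probability proportional to `weights`,
    using one uniform position inside each of the P equal strata."""
    P = len(weights)
    # One point placed uniformly at random inside each stratum [(i)/P, (i+1)/P)
    positions = [(i + rng.random()) / P for i in range(P)]
    indices = []
    cum, i = weights[0], 0
    for u in positions:
        # Advance through the cumulative weight function until it covers u
        while u > cum:
            i += 1
            cum += weights[i]
        indices.append(i)
    return indices

idx = stratified_resample([0.1, 0.2, 0.3, 0.4])
```

Because the positions are increasing, the returned indices are nondecreasing, and each particle is copied a number of times close to $P\,w^{(L)}$, which is what keeps the variance low.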

A single step of sequential importance resampling is as follows:

1) For $L=1,\ldots,P$ draw samples from the importance distribution:
$x^{(L)}_k \sim \pi(x_k|x^{(L)}_{0:k-1},y_{0:k})$
2) For $L=1,\ldots,P$ evaluate the importance weights up to a normalizing constant:
$\hat{w}^{(L)}_k = w^{(L)}_{k-1} \frac{p(y_k|x^{(L)}_k)\, p(x^{(L)}_k|x^{(L)}_{k-1})}{\pi(x_k^{(L)}|x^{(L)}_{0:k-1},y_{0:k})}.$
3) For $L=1,\ldots,P$ compute the normalized importance weights:
$w^{(L)}_k = \frac{\hat{w}^{(L)}_k}{\sum_{J=1}^P \hat{w}^{(J)}_k}$
4) Compute an estimate of the effective number of particles as
$\hat{N}_\mathit{eff} = \frac{1}{\sum_{L=1}^P \left(w^{(L)}_k\right)^2}$
5) If the effective number of particles is less than a given threshold, $\hat{N}_\mathit{eff} < N_{thr}$, then perform resampling:
a) Draw $P$ particles from the current particle set with probabilities proportional to their weights. Replace the current particle set with this new one.
b) For $L=1,\ldots,P$ set $w^{(L)}_k = 1/P$.

The term Sequential Importance Resampling is also sometimes used when referring to SIR filters.
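The steps above can be sketched as a bootstrap-style SIR filter, using the transition prior as the importance function (so the weight update is just multiplication by the likelihood). The model functions, noise scales, and observations below are illustrative assumptions, and multinomial resampling is used for brevity in place of the stratified scheme:

```python
import math
import random

rng = random.Random(42)

def f(x):
    return 0.5 * x  # assumed state transition function

def h(x):
    return x        # assumed observation function

def gauss_pdf(y, mean, sigma):
    """Gaussian density, used for the likelihood p(y_k | x_k)."""
    return math.exp(-0.5 * ((y - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def sir_step(particles, weights, y, q=1.0, r=1.0, n_thr=None):
    P = len(particles)
    # 1) Propose from the transition prior: x_k ~ p(x_k | x_{k-1})
    particles = [f(x) + rng.gauss(0.0, q) for x in particles]
    # 2) Unnormalized weights: w_{k-1} * p(y_k | x_k) (transition terms cancel)
    w_hat = [w * gauss_pdf(y, h(x), r) for w, x in zip(weights, particles)]
    # 3) Normalize the weights
    total = sum(w_hat)
    weights = [w / total for w in w_hat]
    # 4) Effective number of particles
    n_eff = 1.0 / sum(w * w for w in weights)
    # 5) Resample (multinomial, for brevity) if N_eff falls below the threshold
    if n_thr is None:
        n_thr = P / 2
    if n_eff < n_thr:
        particles = rng.choices(particles, weights=weights, k=P)
        weights = [1.0 / P] * P
    return particles, weights

P = 500
particles = [rng.gauss(0.0, 1.0) for _ in range(P)]
weights = [1.0 / P] * P
for y in [0.5, 0.4, 0.6, 0.5, 0.45]:  # toy observation sequence
    particles, weights = sir_step(particles, weights, y)
estimate = sum(w * x for w, x in zip(weights, particles))  # posterior mean of x_k
```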

## Sequential Importance Sampling (SIS)

• The same as sampling importance resampling (SIR), but without the resampling stage.

## "Direct version" algorithm

The "direct version" algorithm is rather simple (compared to other particle filtering algorithms); it uses composition and rejection. To generate a single sample $x$ at $k$ from $p_{x_k|y_{1:k}}(x|y_{1:k})$:

1) Set $p=1$
2) Uniformly generate $L$ from $\{1,\ldots,P\}$
3) Generate a test $\hat{x}$ from its distribution $p_{x_k|x_{k-1}}(x|x_{k-1|k-1}^{(L)})$
4) Compute $\hat{y} = p_{y|x}(y_k|\hat{x})$, the probability of the measured value $y_k$ given $\hat{x}$
5) Generate another uniform $u$ from $[0, m_k]$
6) Compare $u$ and $\hat{y}$
6a) If $u$ is larger, then repeat from step 2
6b) If $u$ is smaller, then save $\hat{x}$ as $x_{k|k}^{(p)}$ and increment $p$
7) If $p > P$ then quit

The goal is to generate $P$ "particles" at $k$ using only the particles from $k-1$. This requires that a Markov equation can be written (and computed) to generate an $x_k$ based only upon $x_{k-1}$. This algorithm uses composition of the $P$ particles from $k-1$ to generate a particle at $k$ and repeats (steps 2-6) until $P$ particles are generated at $k$.

This can be more easily visualized if $x$ is viewed as a two-dimensional array. One dimension is $k$ and the other dimension is the particle number. For example, $x(k,L)$ would be the $L$th particle at $k$, and can also be written $x_k^{(L)}$ (as done above in the algorithm). Step 3 generates a potential $x_k$ based on a randomly chosen particle $x_{k-1}^{(L)}$ at time $k-1$ and rejects or accepts it in step 6. In other words, the $x_k$ values are generated using the previously generated $x_{k-1}$.
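The composition-and-rejection loop can be sketched as follows. The bound $m_k$ is taken here as the peak of a Gaussian observation density, and the model functions and noise scales are illustrative assumptions:

```python
import math
import random

rng = random.Random(7)

def f(x):
    return 0.5 * x  # assumed state transition function

def h(x):
    return x        # assumed observation function

R = 1.0  # assumed observation noise standard deviation

def obs_density(y, x):
    """p(y | x) for Gaussian observation noise."""
    return math.exp(-0.5 * ((y - h(x)) / R) ** 2) / (R * math.sqrt(2 * math.pi))

def direct_step(prev_particles, y_k, q=1.0):
    """Generate P particles at k from the P particles at k-1 by composition
    (pick a random parent) and rejection (accept with prob p(y_k|x)/m_k)."""
    P = len(prev_particles)
    m_k = 1.0 / (R * math.sqrt(2 * math.pi))  # upper bound on p(y|x)
    new_particles = []
    while len(new_particles) < P:              # step 7: stop when p > P
        parent = rng.choice(prev_particles)    # step 2: uniform L
        x_hat = f(parent) + rng.gauss(0.0, q)  # step 3: draw from transition
        y_hat = obs_density(y_k, x_hat)        # step 4: likelihood of y_k
        u = rng.uniform(0.0, m_k)              # step 5: uniform on [0, m_k]
        if u < y_hat:                          # step 6: accept, else retry
            new_particles.append(x_hat)
    return new_particles

particles = [rng.gauss(0.0, 1.0) for _ in range(200)]
particles = direct_step(particles, 0.3)
```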

## Other Particle Filters

• Auxiliary particle filter
• Gaussian particle filter
• Unscented particle filter
• Monte Carlo particle filter
• Gauss-Hermite particle filter
• Cost Reference particle filter


## References

• Sequential Monte Carlo Methods in Practice, by A Doucet, N de Freitas and N Gordon. Springer, 2001. ISBN 978-0387951461
• On Sequential Monte Carlo Sampling Methods for Bayesian Filtering, by A Doucet, C Andrieu and S. Godsill, Statistics and Computing, vol. 10, no. 3, pp. 197-208, 2000 CiteSeer link
• Tutorial on Particle Filters for On-line Nonlinear/Non-Gaussian Bayesian Tracking (2001); S. Arulampalam, S. Maskell, N. Gordon and T. Clapp; CiteSeer link
• Particle Methods for Change Detection, System Identification, and Control, by Andrieu C., Doucet A., Singh S.S., and Tadic V.B., Proceedings of the IEEE, Vol 92, No 3, March 2004. SMC Homepage link
• A Tutorial on Bayesian Estimation and Tracking Techniques Applicable to Nonlinear and Non-Gaussian Processes, A.J. Haug, The MITRE Corporation; Mitre link
• Distributed Implementations of Particle Filters, Anwer Bashi, Vesselin Jilkov, Xiao Rong Li, Huimin Chen, Available from the University of New Orleans
• A survey of convergence results on particle filtering methods for practitioners, Crisan, D. and Doucet, A., IEEE Transactions on Signal Processing 50(2002)736-746 doi:10.1109/78.984773
• Beyond the Kalman Filter: Particle Filters for Tracking Applications, by B Ristic, S Arulampalam, N Gordon. Artech House, 2004. ISBN 1-58053-631-X
• Filtering via simulation: auxiliary particle filter, by M. K. Pitt and N. Shephard, Journal of the American Statistical Association, 94 (1999), 590-599.
