Recursive Bayesian estimation

Recursive Bayesian estimation is a general probabilistic approach for estimating an unknown probability density function recursively over time using incoming measurements and a mathematical process model.


The true state is assumed to be an unobserved Markov process, and the measurements are the observed states of a hidden Markov model.

Hidden Markov Model

Because of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state.

p(\textbf{x}_k|\textbf{x}_0,\dots,\textbf{x}_{k-1}) = p(\textbf{x}_k|\textbf{x}_{k-1})

Similarly, the measurement at the kth timestep depends only on the current state, so it is conditionally independent of all other states given the current state.

p(\textbf{z}_k|\textbf{x}_0,\dots,\textbf{x}_{k}) = p(\textbf{z}_k|\textbf{x}_{k})

Using these assumptions the probability distribution over all states of the HMM can be written simply as:

p(\textbf{x}_0,\dots,\textbf{x}_k,\textbf{z}_1,\dots,\textbf{z}_k) = p(\textbf{x}_0)\prod_{i=1}^k p(\textbf{z}_i|\textbf{x}_i)\,p(\textbf{x}_i|\textbf{x}_{i-1})
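This factorisation can be checked numerically on a small discrete HMM. The sketch below uses a hypothetical two-state model (the initial distribution, transition matrix, and emission matrix are made-up numbers for illustration only):

```python
import numpy as np

# A tiny two-state HMM with two possible observations (hypothetical numbers).
p0 = np.array([0.6, 0.4])            # p(x_0)
A  = np.array([[0.7, 0.3],           # A[i, j] = p(x_k = j | x_{k-1} = i)
               [0.2, 0.8]])
B  = np.array([[0.9, 0.1],           # B[i, z] = p(z_k = z | x_k = i)
               [0.3, 0.7]])

def joint_probability(states, observations):
    """p(x_0..x_k, z_1..z_k) = p(x_0) * prod_i p(z_i|x_i) p(x_i|x_{i-1})."""
    p = p0[states[0]]
    for i in range(1, len(states)):
        p *= A[states[i - 1], states[i]] * B[states[i], observations[i - 1]]
    return p

# Example: state sequence x_0=0, x_1=1, x_2=1 with observations z_1=1, z_2=0.
prob = joint_probability([0, 1, 1], [1, 0])   # 0.6 * (0.3*0.7) * (0.8*0.3)
```

Summing `joint_probability` over every possible state and observation sequence of a fixed length gives 1, confirming it is a valid joint distribution.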

However, when using the Kalman filter to estimate the state x, the probability distribution of interest is that of the current state conditioned on the measurements up to the current timestep. (This is obtained by marginalising out the previous states and dividing by the probability of the measurement set.)

This leads to the predict and update steps of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the product of the probability distribution associated with the transition from the (k - 1)th timestep to the kth and the probability distribution associated with the previous state, with the true state at (k - 1) integrated out.

p(\textbf{x}_k|\textbf{Z}_{k-1}) = \int p(\textbf{x}_k | \textbf{x}_{k-1})\, p(\textbf{x}_{k-1} | \textbf{Z}_{k-1} )\, d\textbf{x}_{k-1}
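For a discrete state space the prediction integral becomes a sum, which is just a matrix-vector product with the transition matrix. A minimal sketch, assuming a hypothetical two-state transition model:

```python
import numpy as np

# Hypothetical two-state transition model: A[i, j] = p(x_k = j | x_{k-1} = i).
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])

def predict(posterior_prev):
    """p(x_k | Z_{k-1}) = sum over x_{k-1} of p(x_k|x_{k-1}) p(x_{k-1}|Z_{k-1})."""
    return A.T @ posterior_prev

# Starting from a uniform posterior over the two states.
prior = predict(np.array([0.5, 0.5]))   # [0.45, 0.55]
```

Because each row of A sums to 1, the predicted distribution remains normalised.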

The measurement set up to time t is

\textbf{Z}_{t} = \left\{ \textbf{z}_{1},\dots,\textbf{z}_{t} \right\}

The probability distribution of the updated state is proportional to the product of the measurement likelihood and the predicted state distribution.

p(\textbf{x}_k|\textbf{Z}_{k}) = \frac{p(\textbf{z}_k|\textbf{x}_k)\, p(\textbf{x}_k|\textbf{Z}_{k-1})}{p(\textbf{z}_k|\textbf{Z}_{k-1})}

The denominator

p(\textbf{z}_k|\textbf{Z}_{k-1}) = \int p(\textbf{z}_k|\textbf{x}_k)\, p(\textbf{x}_k|\textbf{Z}_{k-1})\, d\textbf{x}_k

is a normalisation constant: it does not depend on the state, so in practice it is computed simply by normalising the numerator.
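The update step can likewise be sketched for a discrete state space, with the denominator obtained by summing the unnormalised numerator. The observation matrix below is a hypothetical example:

```python
import numpy as np

# Hypothetical observation model: B[i, z] = p(z_k = z | x_k = i).
B = np.array([[0.9, 0.1],
              [0.3, 0.7]])

def update(predicted, z):
    """p(x_k | Z_k) = p(z_k|x_k) p(x_k|Z_{k-1}) / p(z_k|Z_{k-1})."""
    likelihood = B[:, z]            # p(z_k | x_k) for each state
    numerator = likelihood * predicted
    evidence = numerator.sum()      # p(z_k | Z_{k-1}), the normalisation term
    return numerator / evidence

# Observing z = 1 sharpens a predicted distribution of [0.45, 0.55]
# towards state 1, which is far more likely to emit that measurement.
posterior = update(np.array([0.45, 0.55]), z=1)
```

Alternating `predict` and `update` over successive measurements implements the full recursive Bayes filter on this discrete model.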


The Kalman filter is an efficient recursive filter that estimates the state of a dynamic system from a series of incomplete and noisy measurements; it evaluates the predict and update distributions above in closed form when the models are linear and the noise is Gaussian (multivariate normal). Particle filter methods, also known as Sequential Monte Carlo (SMC), are simulation-based estimation techniques that approximate the same recursion for nonlinear or non-Gaussian models.
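A minimal bootstrap particle filter illustrates the simulation-based approach. Everything here is a hypothetical scalar model chosen for illustration (the dynamics, noise levels, and measurements are made up), not a specific published algorithm configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar model:
#   x_k = 0.9 * x_{k-1} + process noise,   z_k = x_k + measurement noise.
N = 1000                                  # number of particles

def particle_filter_step(particles, weights, z, q=0.5, r=0.5):
    """One predict/update cycle of a bootstrap particle filter."""
    # Predict: sample each particle from p(x_k | x_{k-1}).
    particles = 0.9 * particles + rng.normal(0.0, q, size=particles.shape)
    # Update: reweight by the Gaussian measurement likelihood p(z_k | x_k).
    weights = weights * np.exp(-0.5 * ((z - particles) / r) ** 2)
    weights = weights / weights.sum()     # normalise (the evidence term)
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles, weights

particles = rng.normal(0.0, 1.0, N)       # samples from the prior p(x_0)
weights = np.full(N, 1.0 / N)
for z in [0.8, 1.1, 0.9]:                 # made-up measurement sequence
    particles, weights = particle_filter_step(particles, weights, z)
estimate = np.sum(weights * particles)    # posterior mean estimate of x_k
```

The weighted particle cloud approximates p(x_k | Z_k); its weighted mean serves as the state estimate, and resampling keeps the approximation from degenerating onto a few particles.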

External links

  • On sequential Monte Carlo sampling methods for Bayesian filtering, Statistics and Computing (2000)
  • A Tutorial on Particle Filters for On-line Non-linear/Non-Gaussian Bayesian Tracking, IEEE Transactions on Signal Processing (2001)
  • Special Issue on Sequential State Estimation - Proceedings of the IEEE (Mar 2004)



Share your thoughts, questions and commentary here
Your name
Your comments

Want to know more?
Search encyclopedia, statistics and forums:


Press Releases |  Feeds | Contact
The Wikipedia article included on this page is licensed under the GFDL.