Posterior probability

The posterior probability can be calculated by Bayes' theorem from the prior probability and the likelihood function, where the likelihood function is the conditional probability of the observed data regarded as a function of the unknown quantity.
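As a minimal sketch of this calculation, the following example applies Bayes' theorem to a single hypothesis. The numbers (a diagnostic test with 99% sensitivity, a 5% false-positive rate, and 1% prior prevalence) are hypothetical and chosen only for illustration:

```python
# Hypothetical numbers, for illustration only
prior = 0.01                 # P(disease): prior probability
likelihood = 0.99            # P(positive | disease)
false_positive = 0.05        # P(positive | no disease)

# Normalizing constant: total probability of observing a positive result
evidence = likelihood * prior + false_positive * (1 - prior)

# Bayes' theorem: posterior = likelihood * prior / evidence
posterior = likelihood * prior / evidence   # P(disease | positive)
print(round(posterior, 4))                  # → 0.1667
```

Even with a highly accurate test, the low prior keeps the posterior probability of disease at about 1/6, illustrating how the prior and the likelihood combine.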

Similarly, a posterior probability distribution is the conditional probability distribution of the uncertain quantity given the data. It can be calculated by multiplying the prior probability distribution by the likelihood function and then dividing by the normalizing constant. For example,

f_{X|Y=y}(x) = f_X(x) L_{X|Y=y}(x) / ∫ f_X(u) L_{X|Y=y}(u) du

gives the posterior probability density function for a random variable X given the data Y = y, where

• f_X(x) is the prior density of X,
• L_{X|Y=y}(x) = f_{Y|X=x}(y) is the likelihood function as a function of x,
• ∫ f_X(u) L_{X|Y=y}(u) du is the normalizing constant, and
• f_{X|Y=y}(x) is the posterior density of X.
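The formula above can be sketched numerically by evaluating prior × likelihood on a grid and dividing by the normalizing constant. The setup here is an assumption, not from the article: a standard normal prior on X and a single observation y = 1.2 with Y | X = x ~ Normal(x, 0.5²):

```python
import numpy as np

# Assumed example setup: prior X ~ Normal(0, 1), observation y = 1.2,
# Y | X = x ~ Normal(x, 0.5^2)
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

prior = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)       # f_X(x)
y, sigma = 1.2, 0.5
likelihood = np.exp(-(y - x)**2 / (2 * sigma**2))    # L_{X|Y=y}(x), up to a constant

# Multiply prior by likelihood, then divide by the normalizing constant
unnormalized = prior * likelihood
normalizing_constant = unnormalized.sum() * dx       # ≈ ∫ f_X(u) L_{X|Y=y}(u) du
posterior = unnormalized / normalizing_constant      # f_{X|Y=y}(x)

print(posterior.sum() * dx)                          # ≈ 1, as a density should be
```

Any constant factor in the likelihood cancels in the division, which is why the likelihood only needs to be known up to proportionality. For this conjugate normal-normal setup, the grid posterior matches the known closed form (posterior mean 0.96).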

