Prior probability

A prior probability is a marginal probability, interpreted as a description of what is known about a variable in the absence of some evidence. The posterior probability is then the conditional probability of the variable taking the evidence into account. The posterior probability is computed from the prior and the likelihood function via Bayes' theorem.
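
As a concrete illustration (a minimal sketch in Python, not part of the original article; the hypotheses and probabilities are made-up), the posterior is obtained by multiplying the prior by the likelihood and renormalizing:

# A minimal sketch: posterior from prior and likelihood via Bayes' theorem
# for two competing hypotheses about a coin. All numbers are illustrative.
prior = {"fair coin": 0.5, "biased coin": 0.5}        # P(hypothesis)
likelihood = {"fair coin": 0.5, "biased coin": 0.9}   # P(heads observed | hypothesis)

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
evidence = sum(unnormalized.values())                  # marginal probability of the data
posterior = {h: unnormalized[h] / evidence for h in unnormalized}

print(posterior)   # {'fair coin': ~0.357, 'biased coin': ~0.643}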


As prior and posterior are not terms used in frequentist analyses, this article uses the vocabulary of Bayesian probability and Bayesian inference.


Throughout this article, for the sake of brevity the term variable encompasses observable variables, latent (unobserved) variables, parameters, and hypotheses.



Prior probability distribution

In Bayesian statistical inference, a prior probability distribution, often called simply the prior, of an uncertain quantity p (for example, suppose p is the proportion of voters who will vote for the politician named Smith in a future election) is the probability distribution that would express one's uncertainty about p before the data (for example, an opinion poll) are taken into account. It is meant to attribute uncertainty, rather than randomness, to the uncertain quantity. The posterior probability distribution is then the conditional distribution of the uncertain quantity given the data.


A prior is often the purely subjective assessment of an experienced expert. Some will choose a conjugate prior when they can, to make calculation of the posterior distribution easier; a conjugate prior is a family of prior distributions with the property that the posterior distribution also belongs to the same family.
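
The classic example is the Beta prior paired with a Binomial likelihood. The short Python sketch below (not from the original article; the prior parameters and counts are made-up) shows how conjugacy reduces the posterior calculation to simple parameter arithmetic:

# Beta-Binomial conjugate update: the posterior for a Binomial proportion p
# under a Beta prior is again a Beta, with the observed counts added in.
from scipy.stats import beta

a_prior, b_prior = 2.0, 2.0     # illustrative Beta prior parameters
successes, failures = 7, 3      # illustrative observed data

a_post = a_prior + successes    # conjugacy: just add the counts
b_post = b_prior + failures
posterior = beta(a_post, b_post)

print(posterior.mean())         # posterior mean of p: (2 + 7) / (2 + 2 + 7 + 3) ~ 0.643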


Informative priors

An informative prior expresses specific, definite information about a variable. An example is a prior distribution for the temperature at noon tomorrow. A reasonable approach is to make the prior a normal distribution with expected value equal to today's noontime temperature, with variance equal to the day-to-day variance of atmospheric temperature.


This example has a property in common with many priors, namely, that the posterior from one problem (today's temperature) becomes the prior for another problem (tomorrow's temperature): pre-existing evidence which has already been taken into account is part of the prior, and, as more evidence accumulates, the prior is determined largely by the evidence rather than by any original assumption, provided that the original assumption admitted the possibility of what the evidence is suggesting. The terms "prior" and "posterior" are generally relative to a specific datum or observation.
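
For the temperature example above, the normal prior and the posterior-becomes-prior updating can be written out explicitly. The Python sketch below is only an illustration (the temperatures, variances, and the assumption of a normal likelihood with known observation variance are all made-up for the example):

# Sequential normal updating: a normal prior on the noon temperature is updated
# with each day's observation, and each posterior serves as the next day's prior.
def update_normal(prior_mean, prior_var, obs, obs_var):
    """Posterior of a normal mean given a normal prior and one noisy observation."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

mean, var = 20.0, 9.0             # prior: today's noon temperature, day-to-day variance
for obs in [22.0, 19.5, 21.0]:    # later noon observations (illustrative)
    mean, var = update_normal(mean, var, obs, obs_var=4.0)
    print(mean, var)              # yesterday's posterior has become today's prior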


Uninformative priors

An uninformative prior expresses vague or general information about a variable. The term "uninformative prior" is a misnomer; such a prior might be called a not very informative prior. Uninformative priors can express information such as "the variable is positive" or "the variable is less than some limit". Some authorities prefer the term objective prior.


In parameter estimation problems, the use of an uninformative prior typically yields results which are not too different from conventional statistical analysis, as the likelihood function often yields more information than the uninformative prior.


Some attempts have been made at finding probability distributions in some sense logically required by the nature of one's state of uncertainty; these are a subject of philosophical controversy. For example, Edwin T. Jaynes has published an argument (Jaynes 1968) based on Lie groups that suggests that the prior for the proportion p of voters voting for a candidate, given no other information, should be p^(−1)(1 − p)^(−1). If one is so uncertain about the value of the aforementioned proportion p that one knows only that at least one voter will vote for Smith and at least one will not, then the conditional probability distribution of p given this information alone is the uniform distribution on the interval [0, 1], which is obtained by applying Bayes' theorem to the data set consisting of one vote for Smith and one vote against, using the above prior.
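
Written out (a sketch of the calculation just described, in LaTeX notation), the likelihood of one vote for Smith and one vote against is proportional to p(1 − p), so

\pi(p \mid \text{data}) \;\propto\; \underbrace{p^{-1}(1-p)^{-1}}_{\text{prior}} \times \underbrace{p(1-p)}_{\text{likelihood}} \;=\; 1, \qquad 0 < p < 1,

which is indeed the uniform distribution on [0, 1].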


Priors can be constructed which are proportional to the Haar measure if the parameter space X carries a natural group structure. For example, in physics we might expect that an experiment will give the same results regardless of our choice of the origin of a coordinate system. This induces the group structure of the translation group on X, and the resulting prior is a constant improper prior. Similarly, some measurements are naturally invariant to the choice of an arbitrary scale (i.e., it doesn't matter if we use centimeters or inches, we should get results that are physically the same). In such a case, the scale group is the natural group structure, and the corresponding prior on X is proportional to 1/x. It sometimes matters whether we use the left-invariant or right-invariant Haar measure. For example, the left and right invariant Haar measures on the affine group are not equal. Berger (1985, p. 413) argues that the right-invariant Haar measure is the correct choice.
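
As a brief check of the scale-invariance claim (a sketch added here, not part of the original article), suppose the measurement scale changes from x to y = cx with c > 0. A prior density proportional to 1/x then transforms as

p_Y(y) = p_X(y/c) \left| \frac{dx}{dy} \right| \;\propto\; \frac{c}{y} \cdot \frac{1}{c} = \frac{1}{y},

so the prior has the same form whether lengths are measured in centimeters or inches.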


Another idea, championed by Edwin T. Jaynes, is to use the principle of maximum entropy. The motivation is that the Shannon entropy of a probability distribution measures the amount of uncertainty the distribution expresses: the larger the entropy, the less information is provided by the distribution. Thus, by maximizing the entropy over a suitable set of probability distributions on X, one finds the distribution that is least informative in the sense that it contains the least amount of information consistent with the constraints that define the set. For example, the maximum entropy prior on a discrete space, given only that the probability is normalized to 1, is the prior that assigns equal probability to each state. And in the continuous case, the maximum entropy prior given that the density is normalized with mean zero and variance unity is the standard normal distribution.
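
The discrete case can be checked numerically. The following Python sketch (not from the original article) maximizes the Shannon entropy over distributions on a five-point space subject only to normalization, and recovers the uniform distribution:

# Maximum entropy on a discrete space with only the normalization constraint:
# the optimizer should recover the uniform distribution, as stated in the text.
import numpy as np
from scipy.optimize import minimize

n = 5                                     # a small discrete space with 5 states

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)            # guard against log(0)
    return float(np.sum(p * np.log(p)))   # negative Shannon entropy

constraints = [{"type": "eq", "fun": lambda p: np.sum(p) - 1.0}]
bounds = [(0.0, 1.0)] * n
start = np.array([0.4, 0.1, 0.2, 0.1, 0.2])   # arbitrary starting distribution

result = minimize(neg_entropy, start, bounds=bounds, constraints=constraints)
print(np.round(result.x, 3))              # approximately [0.2 0.2 0.2 0.2 0.2]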


A related idea, reference priors, was introduced by Jose M. Bernardo. Here, the idea is to maximize the expected Kullback-Leibler divergence of the posterior distribution relative to the prior. This maximizes the expected posterior information about x when the prior density is p(x). The reference prior is defined in the asymptotic limit, i.e., one considers the limit of the priors so obtained as the number of data points goes to infinity. Reference priors are often the objective prior of choice in multivariate problems, since other rules (e.g., Jeffreys' rule) may result in priors with problematic behavior.


Philosophical problems associated with uninformative priors concern the choice of an appropriate metric, or measurement scale. Suppose we want a prior for the running speed of a runner who is unknown to us. We could specify, say, a normal distribution as the prior for his speed, but alternatively we could specify a normal prior for the time he takes to complete 100 metres, which is proportional to the reciprocal of the first prior. These are very different priors, but it is not clear which is to be preferred. Similarly, if asked to estimate an unknown proportion between 0 and 1, we might say that all proportions are equally likely and use a uniform prior. Alternatively, we might say that all orders of magnitude for the proportion are equally likely, which gives a prior that is uniform on the logarithm of the proportion. The Jeffreys prior attempts to solve this problem by computing a prior which expresses the same belief no matter which metric is used. The Jeffreys prior for an unknown proportion p is p^(−1/2)(1 − p)^(−1/2), which differs from Jaynes' recommendation.
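
The Jeffreys prior for a proportion is, up to normalization, the Beta(1/2, 1/2) distribution. The short Python check below (not part of the original article) confirms that the unnormalized density p^(−1/2)(1 − p)^(−1/2) integrates to B(1/2, 1/2) = pi:

# Numerical check: the Jeffreys prior for a Bernoulli/Binomial proportion is
# Beta(1/2, 1/2); the unnormalized density integrates to pi.
import numpy as np
from scipy.integrate import quad
from scipy.stats import beta

def unnormalized(p):
    return p ** -0.5 * (1.0 - p) ** -0.5

normalizer, _ = quad(unnormalized, 0.0, 1.0)
print(normalizer, np.pi)                                     # both ~3.14159

p = 0.3
print(unnormalized(p) / normalizer, beta(0.5, 0.5).pdf(p))   # identical densities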


Practical problems associated with uninformative priors include the requirement that the posterior distribution be proper. The usual uninformative priors on continuous, unbounded variables are improper. This need not be a problem if the posterior distribution is proper. Another issue of importance is that if an uninformative prior is to be used routinely, i.e., with many different data sets, it should have good frequentist properties. Normally a Bayesian would not be concerned with such issues, but they can be important in this situation. For example, one would want any decision rule based on the posterior distribution to be admissible under the adopted loss function. Unfortunately, admissibility is often difficult to check, although some results are known (e.g., Berger and Strawderman 1996). The issue is particularly acute with hierarchical Bayes models; the usual priors (e.g., Jeffreys' prior) may give badly inadmissible decision rules if employed at the higher levels of the hierarchy.


Improper priors

If Bayes' theorem is written as

P(Ai | B) = P(B | Ai) P(Ai) / Σj P(B | Aj) P(Aj),

then it is clear that it would remain true if all the prior probabilities P(Ai) and P(Aj) were multiplied by a given constant; the same would be true for a continuous random variable. The posterior probabilities will still sum (or integrate) to 1 even if the prior values do not, and so the priors only need be specified in the correct proportion.


Taking this idea further, in many cases the sum or integral of the prior values may not even need to be finite to get sensible answers for the posterior probabilities. When this is the case, the prior is called an improper prior. Some statisticians use improper priors as uninformative priors. For example, if they need a prior distribution for the mean and variance of a random variable, they may assume p(m, v) ~ 1/v (for v > 0), which would suggest that any value for the mean is equally likely and that a value for the positive variance becomes less likely in inverse proportion to its value. Since the integral of 1 dm over (−∞, ∞) and the integral of (1/v) dv over (0, ∞) are both infinite, this would be an improper prior both for the mean and for the variance.
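
Despite being improper, such a prior can still lead to a proper posterior once data are observed. The Python sketch below (not from the original article; the data, the known-variance assumption, and the use of a flat prior on the mean alone are illustrative simplifications) checks this numerically:

# With a flat, improper prior p(m) ~ 1 on the mean of a normal likelihood with
# known variance, the posterior is nevertheless a proper normal distribution.
import numpy as np
from scipy.integrate import quad

data = np.array([4.8, 5.1, 5.3, 4.9, 5.0])   # made-up observations
sigma2 = 0.04                                 # assumed known variance

def unnormalized_posterior(m):
    # flat prior (constant 1) times the normal likelihood of the data
    return np.exp(-np.sum((data - m) ** 2) / (2.0 * sigma2))

lo, hi = data.mean() - 1.0, data.mean() + 1.0          # window covering the posterior mass
normalizer, _ = quad(unnormalized_posterior, lo, hi)
print(np.isfinite(normalizer) and normalizer > 0.0)    # True: the posterior is proper

post_mean, _ = quad(lambda m: m * unnormalized_posterior(m), lo, hi)
print(post_mean / normalizer, data.mean())             # both approximately 5.02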


References

  • Andrew Gelman, John B. Carlin, Hal S. Stern, and Donald B. Rubin. Bayesian Data Analysis, 2nd edition. CRC Press, 2003. ISBN 1-58488-388-X
  • James O. Berger, Statistical Decision Theory and Bayesian Analysis, Second Edition. Springer-Verlag, 1985. ISBN 0-387-96098-8
  • James O. Berger and William E. Strawderman, Choice of hierarchical priors: admissibility in estimation of normal means, Annals of Statistics, 24, pp. 931-95, 1996.
  • Jose M. Bernardo, Reference Posterior Distributions for Bayesian Inference, Journal of the Royal Statistical Society, Series B, 41, 113-147, 1979.
  • Edwin T. Jaynes, "Prior Probabilities," IEEE Transactions on Systems Science and Cybernetics, SSC-4, 227-241, Sept. 1968. Reprinted in Roger D. Rosenkrantz, Compiler, E. T. Jaynes: Papers on Probability, Statistics and Statistical Physics. Dordrecht, Holland: Reidel Publishing Company, pp. 116-130, 1983. ISBN 90-277-1448-7

 
 
