A priori (statistics)

In statistics, a priori knowledge refers to actual knowledge of the underlying situation, rather than knowledge estimated from observation. In Bayesian inference it is common to make inferences conditional upon this knowledge, and the integration of a priori knowledge is the central difference between the Bayesian and the frequentist approach to statistics. It is generally not advisable to condition estimation on mere assumptions, since doing so will skew the results. A priori knowledge often consists of knowledge of the domain of a parameter (for example, that it is positive), which can be incorporated to improve an estimate. Within this domain the distribution is usually assumed to be uniform in order to avoid bias.


Examples

Basic example

Suppose that we pick two red beads and three black beads from a bag; what is the probability that the next bead we pick out will be red? Without a priori knowledge, we cannot answer the question. But if we knew, a priori, that there were only two red beads in the bag, then we would know for certain that the probability of picking out another red bead was in fact zero.
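As a minimal sketch of this reasoning (the function and its structure are ours, not part of the original article), the following Python snippet makes the role of the a priori bead count explicit:

def prob_next_red(drawn_red, total_red_in_bag=None):
    """Probability that the next bead drawn is red.

    Without a priori knowledge of how many red beads the bag held,
    the question has no answer; with it, we can at least detect the
    certain case in which every red bead has already been drawn.
    """
    if total_red_in_bag is None:
        return None  # not answerable from the draws alone
    remaining_red = total_red_in_bag - drawn_red
    if remaining_red == 0:
        return 0.0  # a priori certainty: no red beads remain
    # Otherwise the probability also depends on how many beads of
    # other colours remain, which the example does not specify.
    return None

# Two red and three black beads drawn; a priori the bag held two reds.
print(prob_next_red(drawn_red=2))                      # None: unanswerable
print(prob_next_red(drawn_red=2, total_red_in_bag=2))  # 0.0: certain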


More theoretical example

Suppose that we are trying to estimate the coefficients of an autoregressive (AR) stochastic process from recorded data, and we know beforehand that the process is stationary. Recall that an AR(2) process can be written:

$X_k + \theta_1 X_{k-1} + \theta_2 X_{k-2} = \varepsilon_k$

Normally we would proceed with maximum likelihood (ML) estimation, as in the frequentist approach, but instead we can integrate our knowledge into the likelihood function and maximize that likelihood conditional upon the fact that the process is stationary. We can assign prior distributions to the AR coefficients $\theta_1, \theta_2$ that are uniform across the limited domain allowed by the constraints on stationary process coefficients. For an AR(2) process, the constraints are:

$|\theta_2| < 1$,
$\theta_2 + 1 > |\theta_1|$
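These two inequalities define a triangular region in the $(\theta_1, \theta_2)$ plane. A one-line membership check in Python (the function name is ours):

def is_stationary_ar2(theta1, theta2):
    # Stationarity triangle for X_k + theta1*X_{k-1} + theta2*X_{k-2} = eps_k
    return abs(theta2) < 1 and theta2 + 1 > abs(theta1)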

Adding this information changes the likelihood function, and when we now use ML estimation to estimate the coefficients we will in general obtain a better estimate. This is particularly true if we suspect that the coefficients lie near the boundary of the stationary domain. Note that the distribution over the domain is uniform, so we have made no assumptions about what the coefficients actually are, only about their domain.
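As a sketch of what such a constrained estimation might look like (assuming Gaussian unit-variance innovations, conditioning on the first two observations, and using NumPy/SciPy; the function names are ours, not part of the article):

import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, x):
    # Conditional Gaussian negative log-likelihood of the AR(2) model
    # X_k + theta1*X_{k-1} + theta2*X_{k-2} = eps_k, up to an additive constant.
    t1, t2 = theta
    eps = x[2:] + t1 * x[1:-1] + t2 * x[:-2]
    return 0.5 * np.sum(eps ** 2)

def constrained_ml_ar2(x):
    # ML estimate of (theta1, theta2) restricted to the stationarity
    # triangle, i.e. under a prior that is uniform there and zero outside.
    # (The boundary is allowed in this sketch.)
    constraints = [
        {"type": "ineq", "fun": lambda th: 1 - abs(th[1])},          # |theta2| < 1
        {"type": "ineq", "fun": lambda th: th[1] + 1 - abs(th[0])},  # theta2 + 1 > |theta1|
    ]
    result = minimize(neg_log_likelihood, x0=np.zeros(2), args=(x,),
                      constraints=constraints)
    return result.x

SciPy's default constrained solver (SLSQP) copes with the non-smooth absolute-value constraints well enough for a sketch; a careful implementation would replace them with smooth equivalents or reparametrize onto the triangle.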

