# Uniform distribution (continuous)
| Property | Value |
|---|---|
| Parameters | $a, b \in (-\infty, \infty)$ |
| Support | $a \le x \le b$ |
| Probability density function | $\frac{1}{b-a}$ for $a \le x \le b$, $0$ otherwise (maximum convention) |
| Cumulative distribution function | $0$ for $x < a$; $\frac{x-a}{b-a}$ for $a \le x < b$; $1$ for $x \ge b$ |
| Mean | $\frac{a+b}{2}$ |
| Median | $\frac{a+b}{2}$ |
| Mode | any value in $[a, b]$ |
| Variance | $\frac{(b-a)^2}{12}$ |
| Skewness | $0$ |
| Excess kurtosis | $-\frac{6}{5}$ |
| Entropy | $\ln(b-a)$ |
| Moment-generating function | $\frac{e^{tb}-e^{ta}}{t(b-a)}$ |
| Characteristic function | $\frac{e^{itb}-e^{ita}}{it(b-a)}$ |

### Probability density function

The probability density function of the continuous uniform distribution is:

$f(x)=\begin{cases} \frac{1}{b-a} & \text{for } a \le x \le b, \\ 0 & \text{for } x < a \text{ or } x > b. \end{cases}$

The values at the two boundaries a and b are usually unimportant, because they do not alter the value of $\int f(x)\,dx$ over any interval, nor of $\int x f(x)\,dx$ or the like. Sometimes they are chosen to be zero, and sometimes chosen to be 1/(b − a). The latter is appropriate in the context of estimation by the method of maximum likelihood. In the context of Fourier analysis, one may take the value of f(a) or f(b) to be 1/(2(b − a)), since then the inverse transform of many integral transforms of this uniform function will yield back the function itself, rather than a function which is equal "almost everywhere", i.e. except on a set of points with zero measure.
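The piecewise definition above translates directly into code. A minimal sketch (the function name is illustrative, not from any library), using the 1/(b − a) boundary convention:

```python
def uniform_pdf(x, a, b):
    """Density of U(a, b), with f(a) = f(b) = 1/(b - a)."""
    if a <= x <= b:
        return 1.0 / (b - a)
    return 0.0

print(uniform_pdf(0.5, 0.0, 2.0))  # 0.5 inside the support
print(uniform_pdf(3.0, 0.0, 2.0))  # 0.0 outside
```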

### Cumulative distribution function

The cumulative distribution function is:

$F(x)=\begin{cases} 0 & \text{for } x < a, \\ \frac{x-a}{b-a} & \text{for } a \le x < b, \\ 1 & \text{for } x \ge b. \end{cases}$
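The three cases of the cdf can be sketched as a small helper (illustrative name, not a library function):

```python
def uniform_cdf(x, a, b):
    """P(X <= x) for X ~ U(a, b): 0 below a, linear on [a, b), 1 at and above b."""
    if x < a:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return 1.0

print(uniform_cdf(1.0, 0.0, 2.0))  # 0.5, the midpoint of the support
```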

### Generating functions

#### Moment-generating function

The moment-generating function is

$M_X(t) = \operatorname{E}(e^{tX}) = \frac{e^{tb}-e^{ta}}{t(b-a)},$

from which we may calculate the raw moments $m_k$:

$m_1=\frac{a+b}{2},$

$m_2=\frac{a^2+ab+b^2}{3},$

$m_k=\frac{1}{k+1}\sum_{i=0}^k a^i b^{k-i}.$

For a random variable following this distribution, the expected value is then $m_1 = (a+b)/2$ and the variance is $m_2 - m_1^2 = (b-a)^2/12$.
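The raw-moment formula can be checked numerically against the stated mean and variance. A small sketch (names are illustrative) for a = 2, b = 5:

```python
def raw_moment(k, a, b):
    """m_k = (1/(k+1)) * sum_{i=0}^{k} a^i * b^(k-i), as given above."""
    return sum(a**i * b**(k - i) for i in range(k + 1)) / (k + 1)

a, b = 2.0, 5.0
m1 = raw_moment(1, a, b)   # (a + b) / 2 = 3.5
m2 = raw_moment(2, a, b)   # (a^2 + a*b + b^2) / 3 = 13.0
var = m2 - m1**2           # (b - a)^2 / 12 = 0.75
print(m1, var)
```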

#### Cumulant-generating function

For n ≥ 2, the nth cumulant of the uniform distribution on the interval [0, 1] is $B_n/n$, where $B_n$ is the nth Bernoulli number.

## Properties

### Generalization to Borel sets

This distribution can be generalized to more complicated sets than intervals. If S is a Borel set of positive, finite measure, the uniform probability distribution on S can be specified by defining the pdf to be zero outside S and constantly equal to 1/K on S, where K is the Lebesgue measure of S.

### Order statistics

Let X1, ..., Xn be an i.i.d. sample from U(0,1), and let X(k) be the kth order statistic from this sample. Then the probability distribution of X(k) is a beta distribution with parameters k and n − k + 1. The expected value is

$\operatorname{E}(X_{(k)}) = \frac{k}{n+1}.$

This fact is useful when making Q-Q plots.

The variances are

$\operatorname{Var}(X_{(k)}) = \frac{k (n-k+1)}{(n+1)^2 (n+2)}.$
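The expected-value formula is easy to check by simulation. A Monte Carlo sketch (sample sizes and seed are arbitrary choices) for the 2nd order statistic of n = 5 draws from U(0,1), whose mean should be 2/6 ≈ 0.333:

```python
import random

random.seed(0)
n, k, trials = 5, 2, 20000
total = 0.0
for _ in range(trials):
    sample = sorted(random.random() for _ in range(n))
    total += sample[k - 1]          # k-th smallest, 1-indexed
mean_kth = total / trials           # should approximate k/(n+1) = 2/6
print(mean_kth)
```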

### Uniformity

The probability that a uniformly distributed random variable falls within any interval of fixed length is independent of the location of the interval (though it does depend on the interval's length), so long as the interval is contained in the distribution's support.

To see this, if X ~ U(a,b) and [x, x + d] is a subinterval of [a, b] with fixed d > 0, then

$P\left(X \in [x, x+d]\right) = \int_{x}^{x+d} \frac{\mathrm{d}y}{b-a} = \frac{d}{b-a},$

which is independent of x. This fact motivates the distribution's name.
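This location-independence can be seen empirically. A simulation sketch (distribution, interval length, and seed are arbitrary illustrative choices) estimating the probability mass of two same-length intervals at different positions inside U(0, 10):

```python
import random

def prob_in_interval(x, d, a, b, trials=100000, seed=1):
    """Estimate P(x <= X <= x + d) for X ~ U(a, b) by simulation."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if x <= a + (b - a) * rng.random() <= x + d)
    return hits / trials

p_low = prob_in_interval(0.0, 1.0, 0.0, 10.0)   # interval near the left edge
p_high = prob_in_interval(7.5, 1.0, 0.0, 10.0)  # interval near the right edge
print(p_low, p_high)  # both should be close to d/(b - a) = 0.1
```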

## Standard uniform

Restricting a = 0 and b = 1, the resulting distribution U(0,1) is called a standard uniform distribution.

One interesting property of the standard uniform distribution is that if u1 has a standard uniform distribution, then so does 1-u1.

## Related distributions

If X has a standard uniform distribution,

• Y = −ln(X)/λ has an exponential distribution with rate parameter λ.
• Y = 1 − X^{1/n} has a beta distribution with parameters 1 and n. (Note this implies that the standard uniform distribution is a special case of the beta distribution, with parameters 1 and 1.)
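The second transformation can be checked against the known mean of a Beta(1, n) variable, which is 1/(n + 1). A simulation sketch (n, sample size, and seed are arbitrary choices):

```python
import random

random.seed(2)
n = 4
# Y = 1 - X**(1/n) for X ~ U(0,1) should follow Beta(1, n), mean 1/(n+1) = 0.2.
ys = [1.0 - random.random() ** (1.0 / n) for _ in range(50000)]
mean_y = sum(ys) / len(ys)
print(mean_y)  # close to 0.2
```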


## Relationship to other functions

As long as the same conventions are followed at the transition points, the probability density function may also be expressed in terms of the Heaviside step function:

$f(x)=\frac{\operatorname{H}(x-a)-\operatorname{H}(x-b)}{b-a},$

or in terms of the rectangle function:

$f(x)=\frac{1}{b-a}\,\operatorname{rect}\left(\frac{x-\frac{a+b}{2}}{b-a}\right).$

There is no ambiguity at the transition points of the sign function. Using the half-maximum convention at the transition points, the uniform distribution may be expressed in terms of the sign function as:

$f(x)=\frac{\operatorname{sgn}(x-a)-\operatorname{sgn}(x-b)}{2(b-a)}.$

## Applications

In statistics, when a p-value is used as a test statistic for a simple null hypothesis, and the distribution of the test statistic is continuous, then the test statistic is uniformly distributed between 0 and 1 if the null hypothesis is true.
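This can be illustrated by simulating p-values under a true null. A sketch (test, sample sizes, and seed are illustrative choices) using a one-sided z-test of zero mean with known unit variance; if the p-values are uniform on (0, 1), their average should be near 0.5:

```python
import math
import random

random.seed(3)

def p_value(sample):
    """Upper-tail p-value of a z-test for mean 0 with known sigma = 1."""
    z = sum(sample) / math.sqrt(len(sample))  # N(0,1) under the null
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Generate data from the null and collect the resulting p-values.
pvals = [p_value([random.gauss(0.0, 1.0) for _ in range(10)])
         for _ in range(5000)]
mean_p = sum(pvals) / len(pvals)
print(mean_p)  # close to 0.5 if the p-values are uniform
```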

### Sampling from a uniform distribution

There are many applications in which it is useful to run simulation experiments. Many programming languages have the ability to generate pseudo-random numbers which are effectively distributed according to the standard uniform distribution.

If u is a value sampled from the standard uniform distribution, then the value a + (b − a)u follows the uniform distribution parametrised by a and b, as described above.
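A sketch of this rescaling (endpoints, sample size, and seed are arbitrary choices), checking that the draws stay in [a, b) and average near (a + b)/2:

```python
import random

random.seed(4)
a, b = -3.0, 7.0
# Map standard-uniform draws u onto U(a, b) via a + (b - a) * u.
draws = [a + (b - a) * random.random() for _ in range(50000)]
sample_mean = sum(draws) / len(draws)
print(sample_mean)  # should be near (a + b)/2 = 2.0
```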

### Sampling from an arbitrary distribution

The uniform distribution is useful for sampling from arbitrary distributions. A general method is the inverse transform sampling method, which uses the cumulative distribution function (CDF) of the target random variable. This method is very useful in theoretical work. Since simulations using this method require inverting the CDF of the target variable, alternative methods have been devised for the cases where the cdf is not known in closed form. One such method is rejection sampling.
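As a concrete instance of inverse transform sampling, the exponential distribution with rate λ has the closed-form inverse CDF F⁻¹(u) = −ln(1 − u)/λ. A sketch (rate, sample size, and seed are illustrative choices), checking the sample mean against the known mean 1/λ:

```python
import math
import random

random.seed(5)
lam = 1.5
# Apply the inverse exponential CDF to standard-uniform draws.
# 1 - random.random() lies in (0, 1], so the log is always defined.
samples = [-math.log(1.0 - random.random()) / lam for _ in range(50000)]
mean_s = sum(samples) / len(samples)
print(mean_s)  # should be near 1/lam ≈ 0.667
```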

The normal distribution is an important example where the inverse transform method is not efficient. However, there is an exact method, the Box-Muller transformation, which uses the inverse transform to convert two independent uniform random variables into two independent normally distributed random variables.

