# Negative binomial distribution
| | NegBin(r, p) |
|---|---|
| Parameters | $r > 0$ (real), $p \in (0,1)$ (real) |
| Support | $k \in \{0, 1, 2, \ldots\}$ |
| Probability mass function | $\frac{\Gamma(r+k)}{k!\,\Gamma(r)}\,p^r\,(1-p)^k$ |
| Cumulative distribution function | $I_p(r, k+1)$ |
| Mean | $\frac{r\,(1-p)}{p}$ |
| Mode | $\lfloor (r-1)(1-p)/p \rfloor$ if $r > 1$; $0$ if $r \leq 1$ |
| Variance | $\frac{r\,(1-p)}{p^2}$ |
| Skewness | $\frac{2-p}{\sqrt{r\,(1-p)}}$ |
| Excess kurtosis | $\frac{6}{r} + \frac{p^2}{r\,(1-p)}$ |
| Moment-generating function | $\left(\frac{p}{1-(1-p)\,e^t}\right)^r$ |
| Characteristic function | $\left(\frac{p}{1-(1-p)\,e^{it}}\right)^r$ |

In probability and statistics, the negative binomial distribution is a discrete probability distribution. The Pascal distribution and the Pólya distribution are special cases of the negative binomial. There is a convention among engineers, climatologists, and others to reserve "negative binomial" in a strict sense, or "Pascal" (after Blaise Pascal), for the case of an integer-valued parameter r, and to use "Pólya" (for George Pólya) for the real-valued case. The Pólya distribution more accurately models occurrences of "contagious" discrete events, such as tornado outbreaks, than does the Poisson distribution.

## Specification of the negative binomial distribution

### Probability mass function

The family of negative binomial distributions is a two-parameter family; several parameterizations are in common use. One very common parameterization employs two real-valued parameters p and r with 0 < p < 1 and r > 0. Under this parameterization, the probability mass function of a random variable with a NegBin(r, p) distribution takes the following form:

$f(k; r, p) = \frac{\Gamma(r+k)}{k!\,\Gamma(r)} \; p^r \, (1-p)^k$

for k = 0,1,2,... (Γ is the gamma function).
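As a numerical sanity check, this mass function is straightforward to evaluate; the sketch below (plain Python, working in log space via `lgamma` so that large k does not overflow the gamma function) follows the failures-before-the-rth-success convention used here. The helper name `negbin_pmf` is our own.

```python
from math import exp, lgamma, log

def negbin_pmf(k, r, p):
    """P(X = k) for X ~ NegBin(r, p): the probability of exactly k
    failures before the r-th success, with per-trial success probability p."""
    # compute in log space: Gamma(r + k) overflows for even moderate k
    return exp(lgamma(r + k) - lgamma(k + 1) - lgamma(r)
               + r * log(p) + k * log(1 - p))

# the probabilities sum to 1, even for non-integer r
total = sum(negbin_pmf(k, 2.5, 0.3) for k in range(400))
```

For k = 0 the expression collapses to $p^r$, the probability of r immediate successes.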

Under an alternative parameterization, let

$p = \frac{\omega}{\lambda+\omega}$ and $r = \omega$

and so the mass function becomes

$g(k) = \frac{\lambda^k}{k!} \times \frac{\Gamma(\omega+k)}{\Gamma(\omega)\,(\lambda+\omega)^k} \times \frac{1}{\left(1+\frac{\lambda}{\omega}\right)^{\omega}}$

where λ and ω are positive real parameters. Under this parameterization, we have

$\lim_{\omega\to\infty} g(k) = \frac{\lambda^k}{k!} \times 1 \times \frac{1}{\exp(\lambda)}$

which is precisely the mass function of a Poisson-distributed random variable with rate λ. In other words, the alternatively parameterized negative binomial distribution converges to the Poisson distribution, with ω controlling the deviation from the Poisson. This makes the negative binomial distribution suitable as a robust alternative to the Poisson: it approaches the Poisson for large ω but has larger variance than the Poisson for small ω.
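This convergence can be checked numerically. The sketch below (helper names our own) compares the alternatively parameterized mass function against the Poisson mass function for a large ω:

```python
from math import exp, factorial, lgamma, log

def negbin_alt_pmf(k, lam, omega):
    # alternative parameterization: p = omega / (lam + omega), r = omega
    p = omega / (lam + omega)
    return exp(lgamma(omega + k) - lgamma(k + 1) - lgamma(omega)
               + omega * log(p) + k * log(1 - p))

def poisson_pmf(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

# for omega = 1e6 the two mass functions agree to several decimal places
gap = max(abs(negbin_alt_pmf(k, 4.0, 1e6) - poisson_pmf(k, 4.0))
          for k in range(30))
```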

A third way to arrive at the negative binomial distribution is as a continuous mixture of Poisson distributions, where the mixing distribution of the Poisson rate is a gamma distribution. Formally, this means that the mass function of the negative binomial distribution can also be written as

$$
\begin{align}
f(k) &= \int_0^{\infty} \mathrm{Poisson}(k \mid \lambda) \times \mathrm{Gamma}(\lambda \mid r, (1-p)/p) \; \mathrm{d}\lambda \\
&= \int_0^{\infty} \frac{\lambda^k}{k!} \exp(-\lambda) \times \frac{\lambda^{r-1} \exp(-\lambda p/(1-p))}{\Gamma(r)\,((1-p)/p)^r} \; \mathrm{d}\lambda \\
&= \frac{1}{k!\,\Gamma(r)} \; p^r \; \frac{1}{(1-p)^r} \int_0^{\infty} \lambda^{(r+k)-1} \exp(-\lambda/(1-p)) \; \mathrm{d}\lambda \\
&= \frac{1}{k!\,\Gamma(r)} \; p^r \; \frac{1}{(1-p)^r} \; (1-p)^{r+k} \; \Gamma(r+k) \\
&= \frac{\Gamma(r+k)}{k!\,\Gamma(r)} \; p^r \, (1-p)^k.
\end{align}
$$

Because of this, the negative binomial distribution is also known as the gamma-Poisson (mixture) distribution.

### Cumulative distribution function

The cumulative distribution function can be expressed in terms of the regularized incomplete beta function:

$F(k) = I_{p}(r, k+1).$
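The identity can be verified numerically by comparing a direct sum of the mass function against a quadrature estimate of the regularized incomplete beta function. The helper below is a rough illustration (our own names; a crude midpoint rule, not a production-quality implementation of $I_x(a,b)$):

```python
from math import exp, lgamma, log

def negbin_cdf(k, r, p):
    # F(k): direct sum of the mass function up to k
    def pmf(j):
        return exp(lgamma(r + j) - lgamma(j + 1) - lgamma(r)
                   + r * log(p) + j * log(1 - p))
    return sum(pmf(j) for j in range(k + 1))

def reg_inc_beta(x, a, b, n=100000):
    # crude midpoint-rule quadrature for I_x(a, b)
    h = x / n
    s = sum(((i + 0.5) * h) ** (a - 1) * (1 - (i + 0.5) * h) ** (b - 1)
            for i in range(n))
    return s * h / exp(lgamma(a) + lgamma(b) - lgamma(a + b))

# F(k) = I_p(r, k + 1), e.g. with r = 2, p = 0.4, k = 3
```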

## Occurrence

### Waiting time in a Bernoulli process

The NegBin(r, p) distribution is the probability distribution of a certain number of failures and successes in a series of independent and identically distributed Bernoulli trials. Specifically, for k+r Bernoulli trials with success probability p, the negative binomial gives the probability of k failures and r successes, with success on the last trial. In other words, the negative binomial distribution is the probability distribution of the number of failures before the rth success in a Bernoulli process, with probability p of success on each trial.

Consider the following example. Suppose we repeatedly throw a die, and consider a "1" to be a "success". The probability of success on each trial is 1/6. The number of trials needed to get three successes belongs to the infinite set { 3, 4, 5, 6, ... }. That number of trials is a (displaced) negative-binomially distributed random variable. The number of failures before the third success belongs to the infinite set { 0, 1, 2, 3, ... }. That number of failures is also a negative-binomially distributed random variable.
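The die example is easy to simulate. A minimal sketch (helper name, seed, and sample size are ours):

```python
import random

random.seed(1)

def failures_before_rth_success(r, p):
    # run Bernoulli(p) trials until the r-th success; count the failures
    failures = successes = 0
    while successes < r:
        if random.random() < p:
            successes += 1
        else:
            failures += 1
    return failures

# throw a die until three 1s appear; count the non-1 rolls
samples = [failures_before_rth_success(3, 1 / 6) for _ in range(100_000)]
mean = sum(samples) / len(samples)
# theoretical mean of NegBin(3, 1/6): r(1-p)/p = 3 * (5/6) / (1/6) = 15
```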

A Bernoulli process is a discrete-time process, so the numbers of trials, failures, and successes are integers. For the special case where r is an integer, the negative binomial distribution is known as the Pascal distribution. In this case the gamma function is not needed to express the probability mass function; factorials or binomial coefficients can be used instead:

$f(k) = \frac{(k+r-1)!}{k!\,(r-1)!} \; p^r \, (1-p)^k = {k+r-1 \choose r-1} \; p^r \, (1-p)^k$

A further specialization occurs when r = 1: in this case we get the probability distribution of the number of failures before the first success (i.e. the probability of success on the (k+1)th trial), which is a geometric distribution. To wit:

$f(k) = {k+1-1 \choose 1-1} \; p^1 \, (1-p)^k = p \, (1-p)^k$

### Overdispersed Poisson

The negative binomial distribution, especially in its alternative parameterization described above, can be used as an alternative to the Poisson distribution. It is especially useful for discrete data over an unbounded positive range whose sample variance exceeds the sample mean. If a Poisson distribution is used to model such data, the model's mean and variance are equal, and the observations are overdispersed with respect to the Poisson model. Since the negative binomial distribution has one more parameter than the Poisson, the second parameter can be used to adjust the variance independently of the mean.
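To make the variance adjustment concrete, the sketch below computes the mean and variance of the alternative parameterization directly from the mass function; with λ = 3 and ω = 2 (arbitrary illustrative values) the mean stays at λ while the variance is λ(1 + λ/ω):

```python
from math import exp, lgamma, log

def negbin_alt_pmf(k, lam, omega):
    # alternative parameterization: p = omega / (lam + omega), r = omega
    p = omega / (lam + omega)
    return exp(lgamma(omega + k) - lgamma(k + 1) - lgamma(omega)
               + omega * log(p) + k * log(1 - p))

lam, omega = 3.0, 2.0
mean = sum(k * negbin_alt_pmf(k, lam, omega) for k in range(400))
var = sum((k - mean) ** 2 * negbin_alt_pmf(k, lam, omega) for k in range(400))
# mean == lam, var == lam * (1 + lam/omega): overdispersed relative to Poisson
```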

## Related distributions

$\mathrm{Geometric}(p) = \mathrm{NegBin}(1, p)$
$\mathrm{Poisson}(\lambda) = \lim_{r \to \infty} \mathrm{NegBin}(r, r/(\lambda+r))$


## Properties

### Relation to other distributions

If $X_r$ is a random variable following the negative binomial distribution with parameters r and p, then $X_r$ is a sum of r independent variables following the geometric distribution with parameter p. As a result of the central limit theorem, $X_r$ is therefore approximately normal for sufficiently large r.

Furthermore, if $Y_{s+r}$ is a random variable following the binomial distribution with parameters s + r and p, then

$$
\begin{align}
\Pr(X_r \leq s) &= I_p(r, s+1) \\
&= 1 - I_{1-p}(s+1, r) \\
&= 1 - I_{1-p}((s+r)-(r-1), (r-1)+1) \\
&= 1 - \Pr(Y_{s+r} \leq r-1) \\
&= \Pr(Y_{s+r} \geq r) \\
&= \Pr(\text{after } s+r \text{ trials, there are at least } r \text{ successes}).
\end{align}
$$

In this sense, the negative binomial distribution is the "inverse" of the binomial distribution.
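For integer r, the endpoints of this chain of identities can be checked directly, without the incomplete beta function, by summing both mass functions (the parameter values below are arbitrary):

```python
from math import comb

r, s, p = 3, 6, 0.35

# Pr(X_r <= s): at most s failures before the r-th success
negbin_cdf = sum(comb(k + r - 1, r - 1) * p ** r * (1 - p) ** k
                 for k in range(s + 1))

# Pr(Y_{s+r} >= r): at least r successes in s + r binomial trials
binom_tail = sum(comb(s + r, j) * p ** j * (1 - p) ** (s + r - j)
                 for j in range(r, s + r + 1))
```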

The sum of independent negative-binomially distributed random variables with the same value of the parameter p but the "r-values" $r_1$ and $r_2$ is negative-binomially distributed with the same p but with "r-value" $r_1 + r_2$.
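A quick simulation illustrates the additivity (helper name, seed, and sample size are ours):

```python
import random

random.seed(7)

def negbin_sample(r, p):
    # failures accumulated before the r-th success (integer r)
    failures = successes = 0
    while successes < r:
        if random.random() < p:
            successes += 1
        else:
            failures += 1
    return failures

n = 100_000
sums = [negbin_sample(2, 0.3) + negbin_sample(3, 0.3) for _ in range(n)]
mean_sum = sum(sums) / n
var_sum = sum((x - mean_sum) ** 2 for x in sums) / n
# NegBin(2, 0.3) + NegBin(3, 0.3) should behave like NegBin(5, 0.3):
# mean 5 * 0.7 / 0.3 = 11.67, variance 5 * 0.7 / 0.09 = 38.9
```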

The negative binomial distribution is infinitely divisible, i.e., if X has a negative binomial distribution, then for any positive integer n, there exist independent identically distributed random variables X1, ..., Xn whose sum has the same distribution that X has. These will not be negative-binomially distributed in the sense defined above unless n is a divisor of r (more on this below).

### Sampling and point estimation of p

Suppose p is unknown and an experiment is conducted where it is decided ahead of time that sampling will continue until r successes are found. The sufficient statistic for this experiment is k, the number of failures.

In estimating p, the minimum variance unbiased estimator is $\hat{p} = \frac{r-1}{r+k-1}$ (a result due to Haldane). One might instead expect the estimator $\tilde{p} = \frac{r}{r+k}$, but that estimator is biased.
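The bias of the naive estimator is easy to see in simulation. The sketch below repeats the stop-at-r-successes experiment and averages both estimators (helper name, seed, and sample size are arbitrary choices of ours):

```python
import random

random.seed(3)

def failures_until_r_successes(r, p):
    # run Bernoulli(p) trials until the r-th success; return the failure count
    failures = successes = 0
    while successes < r:
        if random.random() < p:
            successes += 1
        else:
            failures += 1
    return failures

r, p, n = 5, 0.4, 100_000
unbiased_vals, naive_vals = [], []
for _ in range(n):
    k = failures_until_r_successes(r, p)
    unbiased_vals.append((r - 1) / (r + k - 1))  # (r-1)/(r+k-1)
    naive_vals.append(r / (r + k))               # r/(r+k)

mean_unbiased = sum(unbiased_vals) / n
mean_naive = sum(naive_vals) / n
# mean_unbiased sits at p = 0.4; mean_naive overshoots it
```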

### Relation to the binomial theorem

Suppose X is a random variable with a negative binomial distribution with parameters r and p. The statement that the sum from x = r to infinity of the probability Pr[X = x] is equal to 1 can be shown by a bit of algebra to be equivalent to the statement that $(1-p)^{-r}$ is what Newton's binomial theorem says it should be.

Suppose Y is a random variable with a binomial distribution with parameters n and p. The statement that the sum from y = 0 to n of the probability Pr[Y = y] is equal to 1 says that $1 = (p + (1-p))^n$ is what the strictly finitary binomial theorem of rudimentary algebra says it should be.

Thus the negative binomial distribution bears the same relationship to the negative-integer-exponent case of the binomial theorem that the binomial distribution bears to the positive-integer-exponent case.

Assume p + q = 1. Then the binomial theorem of elementary algebra implies that

$1 = 1^n = (p+q)^n = \sum_{x=0}^n {n \choose x} p^x q^{n-x}.$

This can be rewritten in a way that may at first appear incorrect, and perhaps perverse even if correct:

$(p+q)^n = \sum_{x=0}^\infty {n \choose x} p^x q^{n-x},$

in which the upper bound of summation is infinite. If the binomial coefficient is defined by

${n \choose x} = {n! \over x!\,(n-x)!}$

then it does not make sense when x > n, since factorials of negative numbers are not defined. But one may also read it as

${n \choose x} = {n(n-1)(n-2)\cdots(n-x+1) \over x!}.$

In that case it is defined even when n is negative or is not an integer. But in our case of the binomial distribution it is zero when x > n. So why would we write the result in that form, with a seemingly needless sum of infinitely many zeros? The answer comes when we generalize the binomial theorem of elementary algebra to Newton's binomial theorem. Then we can say, for example:

$(p+q)^{8.3} = \sum_{x=0}^\infty {8.3 \choose x} p^x q^{8.3 - x}.$

Now suppose r > 0 and we use a negative exponent:

$1 = p^r p^{-r} = p^r (1-q)^{-r} = p^r \sum_{x=0}^\infty {-r \choose x} (-q)^x.$

Then all of the terms are positive, and the term

$p^r {-r \choose x} (-q)^x$

is just the probability that the number of failures before the rth success is equal to x, provided r is an integer. (If r is a negative non-integer, so that the exponent is a positive non-integer, then some of the terms in the sum above are negative, so we do not have a probability distribution on the set of all nonnegative integers.)

Now we also allow non-integer values of r. Then we have a proper negative binomial distribution, which is a generalization of the Pascal distribution, which coincides with the Pascal distribution when r happens to be a positive integer.

Recall from above that

The sum of independent negative-binomially distributed random variables with the same value of the parameter p but the "r-values" $r_1$ and $r_2$ is negative-binomially distributed with the same p but with "r-value" $r_1 + r_2$.

This property persists when the definition is thus generalized, and affords a quick way to see that the negative binomial distribution is infinitely divisible.

## Examples

(After a problem by Dr. Diane Evans, professor of mathematics at Rose-Hulman Institute of Technology)

Pat is required to sell candy bars to raise money for the 6th grade field trip. There are thirty houses in the neighborhood, and Pat is not supposed to return home until five candy bars have been sold. So the child goes door to door, selling candy bars. At each house, there is a 0.4 probability of selling one candy bar and a 0.6 probability of selling nothing.

What's the probability mass function for selling the last candy bar at the nth house?

Recall that the NegBin(r, p) distribution describes the probability of k failures and r successes in k+r Bernoulli(p) trials with success on the last trial. Selling five candy bars means getting five successes. The number of trials (i.e. houses) this takes is therefore k+5 = n. The random variable we are interested in is the number of houses, so we substitute k = n − 5 into a NegBin(5, 0.4) mass function and obtain the following mass function of the distribution of houses (for n ≥ 5):

$f(n) = {(n-5) + 5 - 1 \choose 5-1} \; 0.4^5 \; 0.6^{n-5} = {n-1 \choose 4} \; 2^5 \; \frac{3^{n-5}}{5^n}$
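This mass function can be evaluated directly; a minimal sketch (the helper name is ours):

```python
from math import comb

def houses_pmf(n, r=5, p=0.4):
    # P(the fifth sale closes exactly at house n): NegBin with k = n - 5 failures
    k = n - r
    return comb(k + r - 1, r - 1) * p ** r * (1 - p) ** k
```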

What's the probability that Pat finishes on the tenth house?

$f(10) = 0.1003290624$

What's the probability that Pat finishes on or before reaching the eighth house?

To finish on or before the eighth house, Pat must finish at the fifth, sixth, seventh, or eighth house. Sum those probabilities:

$f(5) = 0.01024$
$f(6) = 0.03072$
$f(7) = 0.055296$
$f(8) = 0.0774144$
$\sum_{j=5}^{8} f(j) = 0.17367$

What's the probability that Pat exhausts all 30 houses in the neighborhood?

$1 - \sum_{j=5}^{30} f(j) = 1 - I_{0.4}(5, 30-5+1) \approx 1 - 0.99849 = 0.00151$
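The same arithmetic can be done by complementing a finite sum of the mass function (helper name ours):

```python
from math import comb

def houses_pmf(n, r=5, p=0.4):
    # P(the fifth sale closes exactly at house n)
    k = n - r
    return comb(k + r - 1, r - 1) * p ** r * (1 - p) ** k

# probability the fifth sale still has not happened after house 30
prob_exhausts = 1 - sum(houses_pmf(n) for n in range(5, 31))
```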
