# Multinomial distribution

In probability theory, the **multinomial distribution** is a generalization of the binomial distribution.

The binomial distribution is the probability distribution of the number of "successes" in n independent Bernoulli trials, with the same probability of "success" on each trial. In a multinomial distribution, each trial results in exactly one of some fixed finite number k of possible outcomes, with probabilities $p_1, \ldots, p_k$ (so that $p_i \ge 0$ for $i = 1, \ldots, k$ and $\sum_{i=1}^k p_i = 1$), and there are n independent trials. Let the random variable $X_i$ indicate the number of times outcome number i was observed over the n trials. Then $X = (X_1, \ldots, X_k)$ follows a multinomial distribution with parameters n and $p = (p_1, \ldots, p_k)$.
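The definition above can be sketched directly: draw a multinomial vector by running n independent categorical trials and counting how often each outcome occurs. This is a minimal standard-library sketch; the parameter values are illustrative, not from the article.

```python
# Sketch: sampling a multinomial vector via n independent categorical
# trials (illustrative parameters; pure standard library).
import random
from collections import Counter

def multinomial_sample(n, p, rng=random):
    """Draw one vector (x_1, ..., x_k) from a multinomial(n, p)."""
    k = len(p)
    outcomes = rng.choices(range(k), weights=p, k=n)  # n categorical trials
    counts = Counter(outcomes)
    return [counts.get(i, 0) for i in range(k)]

x = multinomial_sample(10, [0.2, 0.3, 0.5])
# The component counts always sum to the number of trials n.
assert sum(x) == 10
```
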


## Probability mass function

The probability mass function of the multinomial distribution is:

$f(x_1,\ldots,x_k; n, p_1,\ldots,p_k) = \begin{cases} \dfrac{n!}{x_1!\cdots x_k!}\, p_1^{x_1}\cdots p_k^{x_k} & \text{when } \sum_{i=1}^k x_i = n \\ 0 & \text{otherwise,} \end{cases}$

for non-negative integers x1, ..., xk.
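The probability mass function above translates directly into code. A minimal standard-library sketch (the example values are illustrative):

```python
# Sketch of the multinomial pmf, using only the standard library.
from math import factorial, prod

def multinomial_pmf(x, n, p):
    """P(X_1 = x_1, ..., X_k = x_k) for a multinomial(n, p)."""
    if sum(x) != n:
        return 0.0
    coef = factorial(n)               # n! / (x_1! ... x_k!)
    for xi in x:
        coef //= factorial(xi)
    return coef * prod(pi ** xi for pi, xi in zip(p, x))

# Example: 3 trials, each of three outcomes observed once:
# 3!/(1!1!1!) * 0.2 * 0.3 * 0.5 = 6 * 0.03 = 0.18
print(multinomial_pmf([1, 1, 1], 3, [0.2, 0.3, 0.5]))  # prints approx. 0.18
```
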

## Properties

The expected value is

$\operatorname{E}(X_i) = n p_i.$

The covariance matrix is as follows. Each diagonal entry is the variance of a binomially distributed random variable, and is therefore

$\operatorname{var}(X_i) = n p_i (1 - p_i).$

The off-diagonal entries are the covariances:

$\operatorname{cov}(X_i, X_j) = -n p_i p_j$

for i, j distinct.

All covariances are negative because, for a fixed number of trials n, an increase in one component of a multinomial vector requires a decrease in another component.

This is a k × k nonnegative-definite matrix of rank k − 1.

The off-diagonal entries of the corresponding correlation matrix are

$\rho(X_i, X_j) = -\sqrt{\frac{p_i p_j}{(1-p_i)(1-p_j)}}.$

Note that the number of trials n drops out of this expression.
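The moment formulas above can be checked numerically: build the covariance matrix from $\operatorname{var}(X_i) = n p_i (1-p_i)$ and $\operatorname{cov}(X_i, X_j) = -n p_i p_j$, then verify that each row sums to zero (which is why the matrix has rank k − 1) and that the correlation between distinct components does not depend on n. The parameter values below are illustrative.

```python
# Numerical sketch of the covariance and correlation formulas above
# (illustrative parameters; pure standard library).
from math import sqrt, isclose

def multinomial_cov(n, p):
    """Covariance matrix of a multinomial(n, p) vector."""
    k = len(p)
    return [[n * p[i] * (1 - p[i]) if i == j else -n * p[i] * p[j]
             for j in range(k)] for i in range(k)]

n, p = 10, [0.2, 0.3, 0.5]
cov = multinomial_cov(n, p)

# Each row sums to zero: X_1 + ... + X_k = n is constant, so its
# covariance with every X_i vanishes. Hence the matrix is singular.
for row in cov:
    assert isclose(sum(row), 0.0, abs_tol=1e-12)

# The correlation between distinct components matches the closed form
# and is independent of n.
i, j = 0, 1
rho = cov[i][j] / sqrt(cov[i][i] * cov[j][j])
assert isclose(rho, -sqrt(p[i] * p[j] / ((1 - p[i]) * (1 - p[j]))))
```
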

Each of the k components separately has a binomial distribution with parameters n and pi, for the appropriate value of the subscript i.

The support of the multinomial distribution is the set $\{(x_1, \ldots, x_k) \in \mathbb{N}^k \mid x_1 + \cdots + x_k = n\}$. Its number of elements is $\binom{n+k-1}{k-1}$, the number of n-combinations of a multiset with k types (a multiset coefficient).
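The support-size formula can be verified by brute force for small n and k: enumerate all nonnegative integer vectors summing to n and compare the count against the binomial coefficient (a sketch with illustrative values; the enumeration is exponential in k).

```python
# Sketch: check the stars-and-bars count of the multinomial support
# (small illustrative values; pure standard library).
from math import comb
from itertools import product

def support_size_by_enumeration(n, k):
    """Count vectors (x_1, ..., x_k) of nonnegative ints with sum n."""
    return sum(1 for x in product(range(n + 1), repeat=k) if sum(x) == n)

n, k = 4, 3
assert support_size_by_enumeration(n, k) == comb(n + k - 1, k - 1)  # 15
```
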

## Related distributions

When k = 2, the multinomial distribution reduces to the binomial distribution. In Bayesian statistics, the Dirichlet distribution is the conjugate prior of the multinomial distribution.

