
Negative binomial observations appear naturally in many biological applications, such as genomics. This article describes the details of the likelihood-ratio test used to determine whether two sets of observations come from the same negative binomial distribution. Given the probability of failure p and the number of failures r, the negative binomial distribution is expressed as

 f(k;r,p) = \frac{\Gamma(k + r)}{\Gamma(r)\Gamma(k+1)}p^r(1-p)^k

where k denotes the number of successes.
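
As a quick numerical check, the pmf above agrees with scipy.stats.nbinom, which uses the same (r, p) convention; a minimal sketch:

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import nbinom

def nb_pmf(k, r, p):
    # Gamma(k+r) / (Gamma(r) Gamma(k+1)) * p^r * (1-p)^k, computed in log space
    return np.exp(gammaln(k + r) - gammaln(r) - gammaln(k + 1)
                  + r * np.log(p) + k * np.log(1 - p))

k = np.arange(10)
print(np.allclose(nb_pmf(k, r=2.5, p=0.4), nbinom.pmf(k, 2.5, 0.4)))  # True
```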

For the likelihood-ratio test, one needs the maximum likelihood estimates of r and p. Given samples \{k_i\}_{i=1}^n, p is found as p = \frac{r}{\sum_i k_i/n + r}. On the other hand, r does not have a closed-form solution, and it is found iteratively by solving the following equation

\sum_{i=1}^n \psi(k_i + r) - n\psi(r) + n\log\frac{r}{r + \sum_i k_i / n} = 0

where \psi is the digamma function.
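
For concreteness, here is a minimal Python sketch that solves this equation for r with a bracketed root search and then recovers p (the function name and bracket are my own choices, not from the article):

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def fit_nb_rp(k, r_max=1e6):
    """MLE of (r, p); assumes the samples k are overdispersed."""
    k = np.asarray(k, dtype=float)
    n, kbar = len(k), k.mean()

    # Profile score in r: sum_i psi(k_i + r) - n psi(r) + n log(r / (r + kbar))
    def score(r):
        return digamma(k + r).sum() - n * digamma(r) + n * np.log(r / (r + kbar))

    # brentq needs a sign change over the bracket; for underdispersed or
    # nearly-Poisson data the score never crosses zero and this search
    # fails -- exactly the breakdown discussed below.
    r = brentq(score, 1e-8, r_max)
    p = r / (kbar + r)
    return r, p
```
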
However, this approach is problematic when the observations are only slightly overdispersed, i.e., when they resemble a Poisson distribution. In this case r \to \infty, and it becomes hard to find r via optimization. Moreover, since p depends on r, it approaches one, and the overall estimation breaks down.

To tackle this issue, a reparameterization of the negative binomial distribution in terms of the dispersion parameter \alpha = 1/r and the mean \mu = r(1-p)/p has been suggested [1]. Under this reparameterization the probability distribution of the negative binomial is

f(k;\alpha,\mu) = \frac{\Gamma\left(k+\frac{1}{\alpha}\right)}{\Gamma\left(\frac{1}{\alpha}\right)\Gamma(k+1)} \left(\frac{\alpha\mu}{1 + \alpha\mu}\right)^k\left(\frac{1}{1 + \alpha\mu}\right)^\frac{1}{\alpha},

and the maximum likelihood estimator of \mu is the sample mean, which does not depend on \alpha. Given \mu, \alpha can then be found by maximizing the likelihood. Unfortunately, this does not have a closed-form solution either; however, one can use an iterative method such as Newton's method. For Newton's method the gradient and Hessian can be computed as

\mathrm{G} = \frac{1}{\alpha^2}\left[-\sum_i \psi\left(k_i + \frac{1}{\alpha}\right) + n\psi\left(\frac{1}{\alpha}\right) + n\log(1 + \alpha\mu)\right],

\mathrm{H} = \frac{1}{\alpha^4}\left[\sum_i\psi'\left( k_i +\frac{1}{\alpha} \right) + 2\alpha \sum_i \psi\left( k_i + \frac{1}{\alpha} \right) - n \psi'\left(\frac{1}{\alpha}\right) - 2n \alpha \psi\left(\frac{1}{\alpha}\right) + n\alpha \left( \frac{\mu\alpha}{1 + \mu\alpha} - 2\log(1 + \mu\alpha)\right)\right],

and Newton’s update rule can then be written as

\alpha_t = \alpha_{t-1} - \frac{\mathrm{G}(\alpha_{t-1})}{\mathrm{H}(\alpha_{t-1})},

where t indexes the iterations (n is already taken by the sample size).
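
A minimal Python sketch of this iteration, directly implementing the gradient and Hessian above (the function name, tolerance, and overshoot safeguard are my own choices, not from the article):

```python
import numpy as np
from scipy.special import digamma, polygamma

def fit_alpha_newton(k, alpha0, tol=1e-8, max_iter=100):
    """Newton's method for the dispersion alpha, with mu fixed at the sample mean."""
    k = np.asarray(k, dtype=float)
    n, mu = len(k), k.mean()
    alpha = alpha0
    for _ in range(max_iter):
        r = 1.0 / alpha
        # Gradient G and Hessian H of the log-likelihood in alpha (formulas above)
        G = (-digamma(k + r).sum() + n * digamma(r)
             + n * np.log1p(alpha * mu)) / alpha**2
        H = (polygamma(1, k + r).sum() + 2 * alpha * digamma(k + r).sum()
             - n * polygamma(1, r) - 2 * n * alpha * digamma(r)
             + n * alpha * (mu * alpha / (1 + mu * alpha)
                            - 2 * np.log1p(mu * alpha))) / alpha**4
        alpha_new = alpha - G / H
        if alpha_new <= 0:
            alpha_new = alpha / 2  # crude safeguard against overshooting
        if abs(alpha_new - alpha) < tol * alpha:
            return alpha_new
        alpha = alpha_new
    return alpha
```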

The iteration can be initiated with the moments estimate

\hat{\alpha} = \frac{\mathrm{var}(\{k_i\}) - \mathrm{mean}(\{k_i\})}{\mathrm{mean}(\{k_i\})^2}.

Notice that when the observations are underdispersed, this quantity is negative. In such cases, and when the observations are only slightly overdispersed, i.e., this quantity is very close to zero, the estimates of the gradient and Hessian can be unstable. Therefore, it is safer to set \alpha to a small value without carrying out the optimization; the distribution will then resemble a Poisson distribution.
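
Putting the pieces together, a minimal sketch of this initialization and safeguard, reusing fit_alpha_newton from the sketch above (the cutoff alpha_min = 1e-6 is an arbitrary choice of mine, not from the article):

```python
import numpy as np

def fit_alpha(k, alpha_min=1e-6):
    """Moment-based initialization with the small-alpha fallback described above."""
    k = np.asarray(k, dtype=float)
    m, v = k.mean(), k.var()
    alpha0 = (v - m) / m**2  # moments estimate
    if alpha0 <= alpha_min:
        # Under- or only slightly over-dispersed: skip the optimization
        # and fix alpha at a small value (nearly Poisson).
        return alpha_min
    return fit_alpha_newton(k, alpha0)
```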

A Matlab implementation of this test can be found here.

[1] Mohanad F. Al-Khasawneh, "Estimating the Negative Binomial Dispersion Parameter", Asian Journal of Mathematics and Statistics 3(1): 1-15, 2010. ISSN 1994-5418.