In statistics, the Cramér-Rao inequality, named in honor of Harald Cramér and Calyampudi Radhakrishna Rao, states that the reciprocal of the Fisher information [itex]\mathcal{I}(\theta)[itex] of a parameter [itex]\theta[itex] is a lower bound on the variance of any unbiased estimator of the parameter (denoted [itex]\hat{\theta}[itex]):

[itex]

\mathrm{var} \left(\hat{\theta}\right) \geq \frac{1}{\mathcal{I}(\theta)} = \frac{1} {

\mathrm{E}
\left[
\left[
\frac{d}{d\theta} \log f(X;\theta)
\right]^2
\right]

} [itex]

In some cases, no unbiased estimator exists that realizes the lower bound.

The Cramér-Rao inequality is also known as the Cramér-Rao bound (CRB) or the Cramér-Rao lower bound (CRLB) because it puts a lower bound on the variance of an estimator [itex]\hat{\theta}[itex].
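The scalar bound can be illustrated numerically. The following sketch (illustrative Python; the normal model, constants, and seed are assumptions added here, not from the text) compares the Monte Carlo variance of the sample mean of [itex]n[itex] observations from [itex]N(\mu, \sigma^2)[itex] with the bound [itex]\sigma^2 / n[itex]:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, trials = 3.0, 2.0, 50, 20000

# For n i.i.d. N(mu, sigma^2) samples, the Fisher information about mu
# is I(mu) = n / sigma^2, so the CRLB for an unbiased estimator is sigma^2 / n.
crlb = sigma**2 / n

# The sample mean is unbiased; estimate its variance by simulation.
estimates = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
print(crlb, estimates.var())  # the two numbers should be close (~0.08)
```

The sample mean attains the bound here, i.e., it is an efficient estimator of [itex]\mu[itex].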


## Regularity conditions

This inequality relies on two weak regularity conditions on the probability density function, [itex]f(x; \theta)[itex], and the estimator [itex]T(X)[itex]:

• The Fisher information is always defined; equivalently, for all [itex]x[itex] such that [itex]f(x; \theta) > 0[itex],
[itex] \frac{\partial}{\partial\theta} \ln f(x;\theta)[itex]
is finite.
• The operations of integration with respect to [itex]x[itex] and differentiation with respect to [itex]\theta[itex] can be interchanged in the expectation of [itex]T[itex]; that is,
[itex]
\frac{\partial}{\partial\theta}
\left[
\int T(x) f(x;\theta) \,dx
\right]
=
\int T(x)
\left[
\frac{\partial}{\partial\theta} f(x;\theta)
\right]
\,dx

[itex]

whenever the right-hand side is finite.
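These conditions are not vacuous. A standard illustration (a Monte Carlo sketch in Python; the uniform model and constants are assumptions added here, not from the text) is the uniform distribution on [itex][0, \theta][itex], whose support depends on [itex]\theta[itex], so the interchange of integration and differentiation fails; the unbiased estimator [itex]\frac{n+1}{n} \max_i X_i[itex] then has variance of order [itex]\theta^2 / n^2[itex], faster than the [itex]O(1/n)[itex] rate a Cramér-Rao-type bound would suggest:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, trials = 1.0, 100, 20000

# U(0, theta) violates the regularity conditions: the support depends on
# theta, so integration and differentiation cannot be interchanged.
# The unbiased estimator (n+1)/n * max(X) has variance ~ theta^2 / n^2,
# below any O(1/n) rate.
samples = rng.uniform(0.0, theta, size=(trials, n))
est = (n + 1) / n * samples.max(axis=1)
print(est.mean(), est.var())  # mean ~ theta, variance ~ 1e-4
```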

In some cases, a biased estimator can have both a variance and a mean squared error that are below the Cramér-Rao lower bound (the lower bound applies only to estimators that are unbiased). See bias (statistics).

If the second regularity condition also holds for the second derivative, then an alternative form of the Fisher information can be used, yielding an equivalent Cramér-Rao inequality:

[itex]

\mathrm{var} \left(\hat{\theta}\right) \geq \frac{1}{\mathcal{I}(\theta)} = \frac{1} {

-\mathrm{E}
\left[
\frac{d^2}{d\theta^2} \log f(X;\theta)
\right]

} [itex]

In some cases, it may be easier to take the expectation with respect to the second derivative than to take the expectation of the square of the first derivative.
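As a concrete check (an illustrative Python sketch; the Bernoulli model is an assumption added here, not from the text), both forms can be evaluated exactly for a single Bernoulli(p) observation, where each expectation is a two-term sum:

```python
# Exact comparison of the two forms of Fisher information for one
# Bernoulli(p) observation, f(x; p) = p^x (1 - p)^(1 - x), x in {0, 1}.
p = 0.3

# First form: E[(d/dp log f)^2]; the score is x/p - (1 - x)/(1 - p).
first_form = p * (1 / p) ** 2 + (1 - p) * (1 / (1 - p)) ** 2

# Second form: -E[d^2/dp^2 log f]; the second derivative of log f is
# -x/p^2 - (1 - x)/(1 - p)^2.
second_form = p * (1 / p**2) + (1 - p) * (1 / (1 - p) ** 2)

print(first_form, second_form)  # both equal 1 / (p (1 - p))
```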

## Multiparameter

Extending the Cramér-Rao inequality to multiple parameters, define a parameter column vector

[itex]\boldsymbol{\theta} = \left[ \theta_1, \theta_2, \dots, \theta_d \right]^T \in \mathbb{R}^d[itex]

with probability density function (pdf), [itex]f(x; \boldsymbol{\theta})[itex], that satisfies the above two regularity conditions.

The Fisher information matrix is a [itex]d \times d[itex] matrix with element [itex]\mathcal{I}_{m, k}[itex] defined as

[itex]

\mathcal{I}_{m, k} = \mathrm{E} \left[

\frac{\partial}{\partial\theta_m} \log f\left(x; \boldsymbol{\theta}\right)
\frac{\partial}{\partial\theta_k} \log f\left(x; \boldsymbol{\theta}\right)

\right] [itex]

then the Cramér-Rao inequality is

[itex]

\mathrm{cov}_{\boldsymbol{\theta}}\left(\boldsymbol{T}(X)\right) \geq \frac

{\partial \boldsymbol{\psi} \left(\boldsymbol{\theta}\right)}
{\partial \boldsymbol{\theta}^T}

\mathcal{I}\left(\boldsymbol{\theta}\right)^{-1} \frac

{\partial \boldsymbol{\psi}\left(\boldsymbol{\theta}\right)^T}
{\partial \boldsymbol{\theta}}

[itex]

where

• [itex]

\boldsymbol{T}(X) = \begin{bmatrix} T_1(X) & T_2(X) & \cdots & T_d(X) \end{bmatrix}^T [itex]

• [itex]

\boldsymbol{\psi}\left(\boldsymbol{\theta}\right) = \mathrm{E}\left[\boldsymbol{T}(X)\right] = \begin{bmatrix} \psi_1\left(\boldsymbol{\theta}\right) &

\psi_2\left(\boldsymbol{\theta}\right) &
\cdots &
\psi_d\left(\boldsymbol{\theta}\right)

\end{bmatrix}^T [itex]

• [itex]\frac{\partial \boldsymbol{\psi}\left(\boldsymbol{\theta}\right)}{\partial \boldsymbol{\theta}^T}

= \begin{bmatrix}

\psi_1 \left(\boldsymbol{\theta}\right) \\
\psi_2 \left(\boldsymbol{\theta}\right) \\
\vdots \\
\psi_d \left(\boldsymbol{\theta}\right)

\end{bmatrix} \begin{bmatrix}

\frac{\partial}{\partial \theta_1} &
\frac{\partial}{\partial \theta_2} &
\cdots &
\frac{\partial}{\partial \theta_d}

\end{bmatrix} = \begin{bmatrix}

\frac{\partial \psi_1 \left(\boldsymbol{\theta}\right)}{\partial \theta_1} &
\frac{\partial \psi_1 \left(\boldsymbol{\theta}\right)}{\partial \theta_2} &
\cdots &
\frac{\partial \psi_1 \left(\boldsymbol{\theta}\right)}{\partial \theta_d} \\
\frac{\partial \psi_2 \left(\boldsymbol{\theta}\right)}{\partial \theta_1} &
\frac{\partial \psi_2 \left(\boldsymbol{\theta}\right)}{\partial \theta_2} &
\cdots &
\frac{\partial \psi_2 \left(\boldsymbol{\theta}\right)}{\partial \theta_d} \\
\vdots &
\vdots &
\ddots &
\vdots \\
\frac{\partial \psi_d \left(\boldsymbol{\theta}\right)}{\partial \theta_1} &
\frac{\partial \psi_d \left(\boldsymbol{\theta}\right)}{\partial \theta_2} &
\cdots &
\frac{\partial \psi_d \left(\boldsymbol{\theta}\right)}{\partial \theta_d}

\end{bmatrix} [itex]

• [itex]

\frac{\partial \boldsymbol{\psi}\left(\boldsymbol{\theta}\right)^T}{\partial \boldsymbol{\theta}} = \begin{bmatrix}

\frac{\partial}{\partial \theta_1} \\
\frac{\partial}{\partial \theta_2} \\
\vdots \\
\frac{\partial}{\partial \theta_d}

\end{bmatrix} \begin{bmatrix}

\psi_1 \left(\boldsymbol{\theta}\right) &
\psi_2 \left(\boldsymbol{\theta}\right) &
\cdots &
\psi_d \left(\boldsymbol{\theta}\right)

\end{bmatrix} = \begin{bmatrix}

\frac{\partial \psi_1 \left(\boldsymbol{\theta}\right)}{\partial \theta_1} &
\frac{\partial \psi_2 \left(\boldsymbol{\theta}\right)}{\partial \theta_1} &
\cdots &
\frac{\partial \psi_d \left(\boldsymbol{\theta}\right)}{\partial \theta_1} \\
\frac{\partial \psi_1 \left(\boldsymbol{\theta}\right)}{\partial \theta_2} &
\frac{\partial \psi_2 \left(\boldsymbol{\theta}\right)}{\partial \theta_2} &
\cdots &
\frac{\partial \psi_d \left(\boldsymbol{\theta}\right)}{\partial \theta_2} \\
\vdots &
\vdots &
\ddots &
\vdots \\
\frac{\partial \psi_1 \left(\boldsymbol{\theta}\right)}{\partial \theta_d} &
\frac{\partial \psi_2 \left(\boldsymbol{\theta}\right)}{\partial \theta_d} &
\cdots &
\frac{\partial \psi_d \left(\boldsymbol{\theta}\right)}{\partial \theta_d}

\end{bmatrix} [itex]

Here the matrix inequality [itex]A \geq B[itex] is understood to mean that [itex]A - B[itex] is positive semidefinite; that is,

[itex] x^{T} \left( A - B \right) x \geq 0 \quad \forall x \in \mathbb{R}^d[itex]

If [itex]\boldsymbol{T}(X) = \begin{bmatrix} T_1(X) & T_2(X) & \cdots & T_d(X) \end{bmatrix}^T[itex] is an unbiased estimator (i.e., [itex]\boldsymbol{\psi}\left(\boldsymbol{\theta}\right) = \boldsymbol{\theta}[itex]) then the Cramér-Rao inequality is

[itex]

\mathrm{cov}_{\boldsymbol{\theta}}\left(\boldsymbol{T}(X)\right) \geq \mathcal{I}\left(\boldsymbol{\theta}\right)^{-1} [itex]
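As a worked multiparameter example (a Python sketch; the Gaussian model, constants, and Monte Carlo setup are assumptions added here, not from the text), for [itex]n[itex] i.i.d. samples from [itex]N(\mu, \sigma^2)[itex] with [itex]\boldsymbol{\theta} = (\mu, \sigma^2)[itex], the Fisher information matrix is [itex]\mathrm{diag}\left(n/\sigma^2, n/(2\sigma^4)\right)[itex]; the sketch estimates it as the expectation of the score outer product from simulated data:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, s2, n, trials = 1.0, 4.0, 30, 200000

# Analytic Fisher information matrix for theta = (mu, sigma^2):
# diag(n / sigma^2, n / (2 sigma^4)).
analytic = np.diag([n / s2, n / (2 * s2**2)])

# Monte Carlo estimate of E[score score^T]; the score of the joint
# log-density is (sum(x - mu)/s2, -n/(2 s2) + sum((x - mu)^2)/(2 s2^2)).
x = rng.normal(mu, np.sqrt(s2), size=(trials, n))
s_mu = (x - mu).sum(axis=1) / s2
s_s2 = -n / (2 * s2) + ((x - mu) ** 2).sum(axis=1) / (2 * s2**2)
scores = np.stack([s_mu, s_s2], axis=1)
mc = scores.T @ scores / trials
print(np.round(mc, 2), np.round(analytic, 2))  # the matrices should agree
```

The diagonal entries give the familiar scalar bounds [itex]\sigma^2/n[itex] for [itex]\mu[itex] and [itex]2\sigma^4/n[itex] for [itex]\sigma^2[itex].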

## Single-parameter proof

First, a more general version of the inequality will be proven: if the expectation of a statistic [itex]T = t(X)[itex] is denoted by [itex]\psi (\theta)[itex], then for all [itex]\theta[itex]

[itex]{\rm var}(T) \geq \frac{[\psi^\prime(\theta)]^2}{I(\theta)}[itex]

The Cramér-Rao inequality will then follow as a consequence.

Let [itex]X[itex] be a random variable with probability density function [itex]f(x; \theta)[itex]. Here [itex]T = t(X)[itex] is a statistic, which is used as an estimator for [itex]\theta[itex]. If [itex]V[itex] is the score, i.e.

[itex]V = \frac{\partial}{\partial\theta} \log f(X;\theta)[itex]

then the expectation of [itex]V[itex], written [itex]{\rm E}(V)[itex], is zero. If we consider the covariance [itex]{\rm cov}(V, T)[itex] of [itex]V[itex] and [itex]T[itex], we have [itex]{\rm cov}(V, T) = {\rm E}(V T)[itex], because [itex]{\rm E}(V) = 0[itex]. Expanding this expression we have

[itex]

{\rm cov}(V,T) = {\rm E} \left(

T \cdot \frac{\partial}{\partial\theta} \ln f(X;\theta)

\right) [itex]

This may be expanded using the chain rule

[itex]\frac{\partial}{\partial\theta} \ln Q = \frac{1}{Q}\frac{\partial Q}{\partial\theta}[itex]

and the definition of expectation gives, after cancelling [itex]f(x; \theta)[itex],

[itex]

{\rm E} \left(

T \cdot \frac{\partial}{\partial\theta} \ln f(X;\theta)

\right) = \int

t(x)
\left[
\frac{\partial}{\partial\theta} f(x;\theta)
\right]

\, dx = \frac{\partial}{\partial\theta} \left[

\int t(x)f(x;\theta)\,dx

\right] = \psi^\prime(\theta) [itex]

because the integration and differentiation operations commute (second condition).
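The zero-mean property of the score used above follows from the same interchange: [itex]{\rm E}(V) = \int \left[ \frac{\partial}{\partial\theta} f(x;\theta) \right] dx = \frac{\partial}{\partial\theta} \int f(x;\theta) \, dx = 0[itex], since the density integrates to one. A direct numerical check (Python; the Bernoulli model is an illustrative assumption added here, not from the text):

```python
# Check that the score has zero mean for a Bernoulli(p) observation:
# V = d/dp log f(X; p) = X/p - (1 - X)/(1 - p).
p = 0.25
mean_score = p * (1 / p) + (1 - p) * (-1 / (1 - p))
print(mean_score)  # 0 up to floating-point rounding
```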

The Cauchy-Schwarz inequality shows that

[itex]

\sqrt{ {\rm var} (T) {\rm var} (V)} \geq {\rm cov}(V,T) = \psi^\prime (\theta) [itex]

therefore

[itex]

{\rm var}(T) \geq \frac{[\psi^\prime(\theta)]^2}{{\rm var} (V)} = \frac{[\psi^\prime(\theta)]^2}{I(\theta)} = \left[

\frac{\partial}{\partial\theta}
{\rm E} (T)

\right]^2 \frac{1}{I(\theta)} [itex]

If [itex]T[itex] is an unbiased estimator of [itex]\theta[itex], that is, [itex]{\rm E}(T) =\theta[itex], then [itex]\psi'(\theta) = 1[itex]; the inequality then becomes

[itex]

{\rm var}(T) \geq \frac{1}{I(\theta)} [itex]

This is the Cramér-Rao inequality.

The efficiency of [itex]T[itex] is defined as

[itex]e(T) = \frac{\frac{1}{I(\theta)}}{{\rm var}(T)}[itex]

or the minimum possible variance for an unbiased estimator divided by its actual variance. The Cramér-Rao lower bound thus gives [itex]e(T) \le 1[itex].
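For instance (a Monte Carlo sketch in Python; the normal model, constants, and seed are assumptions added here, not from the text), the sample median of normal data is an inefficient estimator of the mean: its asymptotic variance is [itex]\pi \sigma^2 / (2n)[itex], so its efficiency approaches [itex]2/\pi \approx 0.64[itex]:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, n, trials = 1.0, 101, 50000

# CRLB for the mean of N(mu, sigma^2) from n samples is sigma^2 / n.
# The sample median's asymptotic variance is pi sigma^2 / (2 n), giving
# efficiency 2 / pi ~ 0.637.
x = rng.normal(0.0, sigma, size=(trials, n))
medians = np.median(x, axis=1)
eff = (sigma**2 / n) / medians.var()
print(eff)  # roughly 2 / pi
```

With [itex]n = 101[itex] the simulated efficiency is already close to the asymptotic value [itex]2/\pi[itex].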

## Multivariate normal distribution

For the case of a [itex]d[itex]-variate normal distribution

[itex]

\boldsymbol{x} \sim N_d \left(

\boldsymbol{\mu} \left( \boldsymbol{\theta} \right)
,
C \left( \boldsymbol{\theta} \right)

\right) [itex]

[itex]

f\left( \boldsymbol{x}; \boldsymbol{\theta} \right) = \frac{1}{\sqrt{ (2\pi)^d \left| C \right| }} \exp \left(

-\frac{1}{2}
\left(
\boldsymbol{x} - \boldsymbol{\mu}
\right)^{T}
C^{-1}
\left(
\boldsymbol{x} - \boldsymbol{\mu}
\right)

\right). [itex]

The Fisher information matrix has elements

[itex]

\mathcal{I}_{m, k} = \frac{\partial \boldsymbol{\mu}^T}{\partial \theta_m} C^{-1} \frac{\partial \boldsymbol{\mu}}{\partial \theta_k} + \frac{1}{2} \mathrm{tr} \left(

C^{-1}
\frac{\partial C}{\partial \theta_m}
C^{-1}
\frac{\partial C}{\partial \theta_k}

\right) [itex]

where "tr" is the trace.

Let [itex]w[n][itex] be white Gaussian noise (a sample of [itex]N[itex] independent observations) with variance [itex]\sigma^2[itex]:

[itex]w[n] \sim N_N \left( 0, \sigma^2 I \right).[itex]

Then the Fisher information matrix is 1 × 1

[itex]

\mathcal{I}(\sigma^2) = \frac{1}{2} \mathrm{tr} \left(

C^{-1}
\frac{\partial C}{\partial \sigma^2}
C^{-1}
\frac{\partial C}{\partial \sigma^2}

\right) = \frac{1}{2 \sigma^4} \mathrm{tr} \left(I\right) = \frac{N}{2 \sigma^4}, [itex]

and so the Cramér-Rao inequality is

[itex]

\mathrm{var}\left(\widehat{\sigma^2}\right) \geq \frac{2 \sigma^4}{N}. [itex]
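This bound is attained by the maximum-likelihood estimator [itex]\widehat{\sigma^2} = \frac{1}{N} \sum_n w[n]^2[itex], which is unbiased when the mean is known to be zero. A Monte Carlo check (illustrative Python; the estimator, constants, and seed are assumptions added here, not from the text):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma2, N, trials = 2.0, 100, 50000

# The ML estimator of sigma^2 from N zero-mean white Gaussian samples is
# (1/N) sum w[n]^2; its variance equals the CRLB 2 sigma^4 / N.
w = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
est = (w**2).mean(axis=1)
crlb = 2 * sigma2**2 / N
print(est.var(), crlb)  # both ~0.08
```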
