Differential entropy of the Gaussian distribution

For $X \sim \mathcal{N}(\mu, \sigma^2)$ with density $p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$, the differential entropy is derived as follows:

$$h(X) = -\int p(x)\ln p(x)\,dx = -\int p(x)\left[-\frac{1}{2}\ln(2\pi\sigma^2) - \frac{(x-\mu)^2}{2\sigma^2}\right]dx$$

Moreover, since $\int p(x)\,dx = 1$ and $\int (x-\mu)^2\,p(x)\,dx = \sigma^2$, this becomes

$$h(X) = \frac{1}{2}\ln(2\pi\sigma^2) + \frac{\sigma^2}{2\sigma^2} = \frac{1}{2}\ln(2\pi\sigma^2) + \frac{1}{2}$$

Therefore

$$h(X) = \frac{1}{2}\ln(2\pi e\sigma^2)$$
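As a sanity check, the closed form $h(X) = \frac{1}{2}\ln(2\pi e\sigma^2)$ can be compared against a direct numerical evaluation of $-\int p\ln p\,dx$. This is only a sketch: the $\pm 8\sigma$ truncation, grid size, and parameter values are arbitrary choices.

```python
import numpy as np

# Numerical check of h(X) = 0.5 * ln(2*pi*e*sigma^2) for X ~ N(mu, sigma^2).
mu, sigma = 1.0, 2.0
xs = np.linspace(mu - 8 * sigma, mu + 8 * sigma, 200_001)  # +/- 8 sigma holds ~all mass
dx = xs[1] - xs[0]
p = np.exp(-((xs - mu) ** 2) / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
h_numeric = -np.sum(p * np.log(p)) * dx  # Riemann sum for -∫ p ln p dx
h_closed = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
```

The two values agree to several decimal places; note that $h(X)$ depends only on $\sigma^2$, not on $\mu$.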

Probability

Bounds on tail probabilities

Markov’s inequality: For any r.v. $X$ and constant $a > 0$,

$$P(|X| \ge a) \le \frac{E|X|}{a}$$

Let $Y = \frac{|X|}{a}$. We need to show that $P(Y \ge 1) \le E[Y]$. Note that

$$I(Y \ge 1) \le Y,$$

since if $Y < 1$ then $I(Y \ge 1) = 0 \le Y$, and if $Y \ge 1$ then $I(Y \ge 1) = 1 \le Y$ (because the indicator condition says $Y \ge 1$). Taking the expectation of both sides, we have $P(Y \ge 1) = E[I(Y \ge 1)] \le E[Y] = \frac{E|X|}{a}$, which is Markov’s inequality.
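The inequality is easy to see in a simulation (a sketch; the exponential distribution, sample size, and threshold are arbitrary choices). For $X \sim \text{Expo}(1)$, $E|X| = 1$, so Markov gives $P(X \ge 3) \le 1/3$, while the true tail is $e^{-3} \approx 0.0498$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)  # X >= 0, so |X| = X and E|X| = 1
a = 3.0
empirical_tail = np.mean(x >= a)  # true value is e^{-3} ≈ 0.0498
markov_bound = np.mean(x) / a     # E|X| / a ≈ 1/3
```

The bound holds but is loose: Markov uses only the mean of $|X|$, nothing about its shape.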

Chebyshev’s inequality: Let $X$ have mean $\mu$ and variance $\sigma^2$. Then for any $a > 0$,

$$P(|X - \mu| \ge a) \le \frac{\sigma^2}{a^2}$$

By Markov’s inequality,

$$P(|X - \mu| \ge a) = P\left((X - \mu)^2 \ge a^2\right) \le \frac{E\left[(X - \mu)^2\right]}{a^2} = \frac{\sigma^2}{a^2}$$
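A quick empirical check (a sketch; the standard normal and the constant $a = 2$ are arbitrary choices): the true two-sided tail of $\mathcal{N}(0,1)$ at $a = 2$ is about $0.0455$, comfortably below the Chebyshev bound $\sigma^2/a^2 = 0.25$.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, a = 0.0, 1.0, 2.0
x = rng.normal(mu, sigma, size=1_000_000)
empirical = np.mean(np.abs(x - mu) >= a)  # true value ≈ 0.0455
chebyshev_bound = sigma**2 / a**2         # = 0.25
```

Chebyshev is tighter than Markov when variance information is available, but it is still distribution-free and therefore conservative.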

Chernoff inequality: For any r.v. $X$ and constants $a \in \mathbb{R}$ and $t > 0$,

$$P(X \ge a) \le \frac{E\left[e^{tX}\right]}{e^{ta}}$$

The transformation $g(x) = e^{tx}$ with $t > 0$ is invertible and strictly increasing. So by Markov’s inequality, we have

$$P(X \ge a) = P\left(e^{tX} \ge e^{ta}\right) \le \frac{E\left[e^{tX}\right]}{e^{ta}}$$
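Since the bound holds for every $t > 0$, one typically minimizes over $t$. Here is a sketch for $X \sim \mathcal{N}(0,1)$, where $E[e^{tX}] = e^{t^2/2}$ is known in closed form (the grid of $t$ values is an arbitrary choice):

```python
import math

import numpy as np

# Chernoff bound for X ~ N(0,1): P(X >= a) <= exp(t^2/2 - t*a) for any t > 0.
a = 2.0
ts = np.linspace(0.01, 10.0, 2000)
bounds = np.exp(ts**2 / 2 - ts * a)
best_bound = bounds.min()                      # minimized at t = a, giving e^{-a^2/2}
true_tail = 0.5 * math.erfc(a / math.sqrt(2))  # exact P(X >= 2) ≈ 0.0228
```

The optimal $t = a$ gives $e^{-a^2/2} \approx 0.135$, much tighter than Chebyshev's $0.25$ here, though still above the exact tail.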

Law of large numbers

Assume we have i.i.d. $X_1, X_2, \dots$ with finite mean $\mu$ and finite variance $\sigma^2$. For all positive integers $n$, let

$$\bar{X}_n = \frac{X_1 + \cdots + X_n}{n}$$

be the sample mean of $X_1$ through $X_n$. The sample mean is itself an r.v., with mean $\mu$ and variance $\frac{\sigma^2}{n}$:

$$E\left[\bar{X}_n\right] = \frac{1}{n}\sum_{i=1}^{n} E[X_i] = \mu, \qquad \operatorname{Var}\left(\bar{X}_n\right) = \frac{1}{n^2}\sum_{i=1}^{n} \operatorname{Var}(X_i) = \frac{\sigma^2}{n}$$
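Both facts are easy to check by simulation (a sketch; the values $\mu = 5$, $\sigma = 2$, $n = 50$, and the number of replications are arbitrary choices): generate many independent sample means and compare their empirical mean and variance to $\mu$ and $\sigma^2/n$.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n = 5.0, 2.0, 50
# 20,000 independent sample means, each computed from n i.i.d. normal draws.
xbar = rng.normal(mu, sigma, size=(20_000, n)).mean(axis=1)
mean_of_xbar = xbar.mean()  # ≈ mu = 5
var_of_xbar = xbar.var()    # ≈ sigma^2 / n = 0.08
```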

Strong law of large numbers: The sample mean $\bar{X}_n$ converges to the true mean $\mu$ pointwise as $n \to \infty$, with probability 1. In other words,

$$P\left(\bar{X}_n \to \mu\right) = 1$$

Weak law of large numbers: For all $\epsilon > 0$, $P\left(|\bar{X}_n - \mu| > \epsilon\right) \to 0$ as $n \to \infty$. (This is called convergence in probability.) In other words,

$$\lim_{n \to \infty} P\left(|\bar{X}_n - \mu| > \epsilon\right) = 0$$

Fix $\epsilon > 0$; by Chebyshev’s inequality,

$$P\left(|\bar{X}_n - \mu| > \epsilon\right) \le \frac{\operatorname{Var}\left(\bar{X}_n\right)}{\epsilon^2} = \frac{\sigma^2}{n\epsilon^2}$$

As $n \to \infty$, the right-hand side goes to 0, and so must the left-hand side.
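The shrinking tail probability can be watched numerically (a sketch; the Bernoulli(1/2) distribution, $\epsilon = 0.05$, and the sample sizes are arbitrary choices). With $\sigma^2 = 1/4$, the Chebyshev bound is $0.25/(n\epsilon^2)$, and the empirical tail probabilities fall below it and toward 0 as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, var, eps = 0.5, 0.25, 0.05  # fair Bernoulli: mean 1/2, variance 1/4

def tail_prob(n, trials=5000):
    """Empirical P(|Xbar_n - mu| > eps) from `trials` independent sample means."""
    draws = rng.integers(0, 2, size=(trials, n), dtype=np.int8)
    xbar = draws.mean(axis=1)
    return np.mean(np.abs(xbar - mu) > eps)

# Tail probability shrinks as n grows, always below var / (n * eps^2).
probs = {n: tail_prob(n) for n in (100, 1000, 10000)}
```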

References

  1. Blitzstein, Joseph K., and Jessica Hwang. “Introduction to Probability.” Chapman & Hall/CRC.