Exponential Distribution
Story
The Exponential distribution is the continuous counterpart to the Geometric distribution. The story of the Exponential distribution is analogous, but we are now waiting for a success in continuous time, where successes arrive at a rate of $\lambda$ successes per unit of time. The average number of successes in a time interval of length $t$ is $\lambda t$, though the actual number of successes varies randomly. An Exponential random variable represents the waiting time until the first arrival of a success. ——adapted from Book BH
Basic
Definition: A continuous r.v. $X$ is said to have the Exponential distribution with parameter $\lambda > 0$, denoted $X \sim \text{Expo}(\lambda)$, if its PDF is
$$
f(x) = \lambda e^{-\lambda x}, \quad x > 0.
$$
The corresponding CDF is $F(x) = 1 - e^{-\lambda x}$ for $x > 0$, so the survival function is $P(X > x) = e^{-\lambda x}$.
To calculate the expectation and variance, we first consider the standardized case $Y \sim \text{Expo}(1)$. Integration by parts gives
$$
E(Y) = \int_0^{\infty} y e^{-y} \, dy = 1, \qquad E(Y^2) = \int_0^{\infty} y^2 e^{-y} \, dy = 2,
$$
so $\text{Var}(Y) = E(Y^2) - (E(Y))^2 = 1$. Now let $Y = \lambda X$, or equivalently $X = Y/\lambda$, where $X \sim \text{Expo}(\lambda)$. Hence, we can get
$$
E(X) = \frac{1}{\lambda}, \qquad \text{Var}(X) = \frac{1}{\lambda^2}.
$$
- MGF (moment generating function): $M(t) = E(e^{tX}) = \frac{\lambda}{\lambda - t}$ for $t < \lambda$.
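As a quick sanity check of these moments, here is a minimal simulation sketch (my own addition; the rate $\lambda = 2.5$, the seed, and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.5                                      # arbitrary rate for the check
# NumPy parametrizes the exponential by scale = 1/lambda
x = rng.exponential(scale=1/lam, size=10**6)

print(x.mean(), 1/lam)      # sample mean vs. E(X) = 1/lambda
print(x.var(), 1/lam**2)    # sample variance vs. Var(X) = 1/lambda^2
```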
Memoryless Property
The memoryless property says that $P(X \ge s+t ~|~ X \ge s) = P(X \ge t)$ for all $s, t \ge 0$. Indeed,
$$
\begin{split}
P(X \ge s+t | X \ge s) &= \frac{P(X \ge s+t, ~X \ge s)}{P(X \ge s)} \newline
&= \frac{P(X \ge s+t)}{P(X \ge s)} \newline
&= \frac{e^{-\lambda (s+t)}}{e^{-\lambda s}} = e^{-\lambda t} \newline
&= P(X \ge t)
\end{split}
$$
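A small simulation makes this concrete (an illustrative sketch of mine; the values of $\lambda$, $s$, and $t$ are arbitrary): among samples that exceed $s$, the fraction that also exceed $s+t$ matches the unconditional $P(X \ge t) = e^{-\lambda t}$.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, s, t = 1.5, 0.4, 0.7                      # arbitrary rate and time points
x = rng.exponential(scale=1/lam, size=10**6)

# Empirical P(X >= s+t | X >= s) vs. theoretical P(X >= t)
cond = (x >= s + t).sum() / (x >= s).sum()
print(cond, np.exp(-lam * t))                  # the two should nearly agree
```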
Theorem: If $X$ is a positive continuous random variable with the memoryless property, then $X \sim \text{Expo}(\lambda)$ for some $\lambda > 0$.
Proof idea: use the survival function and solve a differential equation.
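A sketch of that argument (reconstructed here for completeness; it assumes the survival function is differentiable): let $G(t) = P(X \ge t)$. The memoryless property gives $G(s+t) = G(s)G(t)$, so
$$
\begin{split}
G'(t) &= G'(0) G(t) \quad \text{(differentiate in } s \text{, then set } s=0\text{)} \newline
\implies G(t) &= e^{G'(0) t} = e^{-\lambda t}, \quad \text{with } \lambda := -G'(0) > 0,
\end{split}
$$
which is exactly the survival function of $\text{Expo}(\lambda)$.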
Examples
eg.1 (Competing exponentials) Let $X_1 \sim \text{Expo}(\lambda_1)$ and $X_2 \sim \text{Expo}(\lambda_2)$ be independent. Then $P(X_1 < X_2) = \frac{\lambda_1}{\lambda_1 + \lambda_2}$.
Proof: By LOTP (law of total probability),
$$
\begin{split}
P(X_1 < X_2) &= \int_0^{\infty} f_{X_1}(x) P(X_2 > X_1 | X_1=x) dx \newline
&= \int_0^{\infty} f_{X_1}(x) P(X_2 > x | X_1=x) dx \newline
&= \int_0^{\infty} f_{X_1}(x) P(X_2 > x) dx \quad \text{(independence)} \newline
&= \int_0^{\infty} \lambda_1 e^{-\lambda_1 x} e^{-\lambda_2 x} dx \newline
&= \lambda_1 \int_0^{\infty} e^{-(\lambda_1 + \lambda_2) x} dx \newline
&= \frac{\lambda_1}{\lambda_1 + \lambda_2}
\end{split}
$$
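A Monte Carlo check of this result (my own sketch; the rates $\lambda_1 = 1$, $\lambda_2 = 3$ are arbitrary, chosen so the answer is the round number $1/4$):

```python
import numpy as np

rng = np.random.default_rng(2)
lam1, lam2 = 1.0, 3.0                          # arbitrary rates for the check
x1 = rng.exponential(scale=1/lam1, size=10**6)
x2 = rng.exponential(scale=1/lam2, size=10**6)

# Empirical P(X1 < X2) vs. lam1 / (lam1 + lam2) = 0.25
print((x1 < x2).mean(), lam1 / (lam1 + lam2))
```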
eg.2 (Minimum of exponentials) Let $X \sim \text{Expo}(\lambda_1)$ and $Y \sim \text{Expo}(\lambda_2)$ be independent, and let $L = \min(X, Y)$. Then $L \sim \text{Expo}(\lambda_1 + \lambda_2)$.
Proof: Using the survival function and independence,
$$
P(L > t) = P(X > t)P(Y > t) = e^{-\lambda_1 t} e^{-\lambda_2 t} = e^{-(\lambda_1 + \lambda_2) t},
$$
which is exactly the survival function of the $\text{Expo}(\lambda_1 + \lambda_2)$ distribution.
The intuition of this result is that if you consider $X$ as the waiting time for a green car to pass by and $Y$ as the waiting time for a red car, then $L$ is the waiting time until a car of either color passes by, and cars of either color arrive at the combined rate $\lambda_1 + \lambda_2$.
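A simulation sketch of this fact (my own addition; rates, seed, and sample size are arbitrary), using a Kolmogorov–Smirnov test to compare $\min(X, Y)$ against $\text{Expo}(\lambda_1 + \lambda_2)$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
lam1, lam2 = 1.0, 2.0                          # arbitrary rates
x = rng.exponential(scale=1/lam1, size=10**5)
y = rng.exponential(scale=1/lam2, size=10**5)
m = np.minimum(x, y)

# KS test against Expo(lam1 + lam2); SciPy uses (loc, scale) with scale = 1/rate.
# A large p-value means the simulated minimum is consistent with the claim.
print(stats.kstest(m, 'expon', args=(0, 1/(lam1 + lam2))))
```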
eg.3 (Difference of two exponentials) Let $X \sim \text{Expo}(\lambda)$ and $Y \sim \text{Expo}(\mu)$ be independent. Find the PDF of $Z = X - Y$.
Solution:
Recall the story of the Exponential: one can think of $X$ as the waiting time for a red car to pass by and $Y$ as the waiting time for a blue car. If we see a blue car passing by first, then by memorylessness the further waiting time for a red car still has the same distribution as $X$.
The above intuition says that the conditional distribution of $Z$ given $X > Y$ is $\text{Expo}(\lambda)$, and symmetrically, the conditional distribution of $-Z = Y - X$ given $X < Y$ is $\text{Expo}(\mu)$.
To make full use of our intuition, we treat the two cases separately.
If $z > 0$, which means $X > Y$, then $Z ~|~ X>Y$ has the same distribution as $X$, so
$$
\begin{gathered}
f_Z(z ~|~ X>Y) = \lambda e^{-\lambda z}, \newline
\text{and since } f_Z(z ~|~ X<Y) = 0 \text{ for } z > 0, \newline
\implies f_Z(z) = f_Z(z ~|~ X>Y)P(X>Y) = \frac{\mu}{\lambda + \mu}\lambda e^{-\lambda z}.
\end{gathered}
$$
If $z < 0$, which means $X < Y$, then $Z ~|~ X<Y$ has the same distribution as $-Y'$ with $Y' \sim \text{Expo}(\mu)$, so by a change of variables ($y = -z$),
$$
\begin{gathered}
f_Z(z ~|~ X<Y) = f_{Y'}(y(z))\left|\frac{dy}{dz}\right| = \mu e^{\mu z}, \newline
\implies f_Z(z) = f_Z(z ~|~ X<Y)P(X<Y) = \frac{\lambda}{\lambda + \mu} \mu e^{\mu z}.
\end{gathered}
$$
However, this is just a sketch. Later we will see how to derive these formulas rigorously.
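Before the rigorous derivation, a quick simulation supports the sketch (my own addition; rates $\lambda = 1$, $\mu = 2$ and the seed are arbitrary): restricted to samples with $X > Y$, the differences $X - Y$ look $\text{Expo}(\lambda)$, and the fraction of such samples matches $P(X > Y)$ from eg.1.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
lam, mu = 1.0, 2.0                             # arbitrary rates
x = rng.exponential(scale=1/lam, size=10**6)
y = rng.exponential(scale=1/mu, size=10**6)
z = x - y

# Conditional on X > Y, the difference should be Expo(lam); expect a large p-value.
print(stats.kstest(z[z > 0], 'expon', args=(0, 1/lam)))
# Empirical P(X > Y) vs. mu / (lam + mu) from eg.1
print((z > 0).mean(), mu / (lam + mu))
```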
From the above point of view, we can now derive the PDF of $Z$ rigorously.
- If $z > 0$, which implies that the event $\{Z > z\}$ can occur only together with $\{X > Y\}$, then
$$
\begin{split}
P(Z > z) &= P(X-Y>z | X>Y)P(X>Y) + P(Z>z~|~X<Y)P(X<Y) \newline
&= P(X>z)P(X>Y) + 0 \quad \text{(memoryless)} \newline
&= \frac{\mu}{\lambda + \mu} e^{-\lambda z} \quad \text{(by eg.1)} \newline
\implies f_Z(z) &= \frac{\lambda\mu}{\lambda + \mu} e^{-\lambda z} \quad \text{for }z>0
\end{split}
$$
- If $z < 0$, which implies that the event $\{Z < z\}$ can occur only together with $\{X < Y\}$, then
$$
\begin{split}
P(Z < z) &= P(Z<z | X>Y)P(X>Y) + P(X-Y<z~|~X<Y)P(X<Y) \newline
&= 0 + P(Y-X > -z | Y>X)P(Y>X) \newline
&= P(Y>X)P(Y > -z) \quad \text{(memoryless)} \newline
&= \frac{\lambda}{\lambda + \mu}e^{\mu z} \quad \text{(by eg.1)} \newline
\implies f_Z(z) &= \frac{\lambda\mu}{\lambda + \mu}e^{\mu z} \quad \text{for }z<0
\end{split}
$$
Therefore, the PDF of $Z$ is $f_Z(z) = \frac{\lambda\mu}{\lambda + \mu} e^{-\lambda z}$ for $z \ge 0$, and $f_Z(z) = \frac{\lambda\mu}{\lambda + \mu} e^{\mu z}$ for $z < 0$.
Note: $P(X = Y) = 0$, since the integration domain is a line ($\{(x, y) : x = y\}$) whose two-dimensional measure is 0. That is, $P(Z = 0) = 0$. This is why we can ignore the case $X = Y$ throughout.
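To close the loop, a short simulation (an illustrative sketch of mine, again with arbitrary rates $\lambda = 1$, $\mu = 2$) compares a histogram of $Z = X - Y$ against the derived piecewise PDF:

```python
import numpy as np

rng = np.random.default_rng(5)
lam, mu = 1.0, 2.0                             # arbitrary rates
x = rng.exponential(scale=1/lam, size=10**6)
y = rng.exponential(scale=1/mu, size=10**6)
z = x - y

def f_Z(z):
    """Derived piecewise PDF of Z = X - Y."""
    c = lam * mu / (lam + mu)
    return np.where(z >= 0, c * np.exp(-lam * z), c * np.exp(mu * z))

# Density histogram of simulated Z vs. the formula at the bin midpoints
hist, edges = np.histogram(z, bins=50, range=(-4, 4), density=True)
mids = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(hist - f_Z(mids))))        # small: histogram matches the formula
```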