4.4 Quiz 4

Let \(f\left ( x\right ) \) be the pdf of \(X\). From the definition of the expected value of a continuous random variable we write

\[ E\left ( X\right ) =\int _{-\infty }^{\infty }xf\left ( x\right ) dx \]

Now break the integral into the sum of integrals as follows

\[ E\left ( X\right ) =\cdots +\int _{\xi -2\delta }^{\xi -\delta }xf\left ( x\right ) dx+\int _{\xi -\delta }^{\xi }xf\left ( x\right ) dx+\int _{\xi }^{\xi +\delta }xf\left ( x\right ) dx+\int _{\xi +\delta }^{\xi +2\delta }xf\left ( x\right ) dx+\cdots \]

In the limit, as \(\delta \) is made very small, the above can be written as a Riemann sum of areas, each of width \(\delta \), as follows

\begin{align} E\left ( X\right ) & =\cdots +\left ( \xi -2\delta \right ) f\left ( \xi -2\delta \right ) \delta +\left ( \xi -\delta \right ) f\left ( \xi -\delta \right ) \delta +\xi f_{\xi }\delta \nonumber \\ & \quad +\left ( \xi +\delta \right ) f\left ( \xi +\delta \right ) \delta +\left ( \xi +2\delta \right ) f\left ( \xi +2\delta \right ) \delta +\cdots \nonumber \\ & =\delta \left [ \cdots +\left ( \xi -2\delta \right ) f\left ( \xi -2\delta \right ) +\left ( \xi -\delta \right ) f\left ( \xi -\delta \right ) +\xi f_{\xi }\right . \nonumber \\ & \quad \left . +\left ( \xi +\delta \right ) f\left ( \xi +\delta \right ) +\left ( \xi +2\delta \right ) f\left ( \xi +2\delta \right ) +\cdots \right ] \nonumber \\ & =\delta \left [ \cdots +\left ( \xi f\left ( \xi -2\delta \right ) -2\delta f\left ( \xi -2\delta \right ) \right ) +\left ( \xi f\left ( \xi -\delta \right ) -\delta f\left ( \xi -\delta \right ) \right ) +\xi f_{\xi }\right . \nonumber \\ & \quad \left . +\left ( \xi f\left ( \xi +\delta \right ) +\delta f\left ( \xi +\delta \right ) \right ) +\left ( \xi f\left ( \xi +2\delta \right ) +2\delta f\left ( \xi +2\delta \right ) \right ) +\cdots \right ] \tag {1}\end{align}

But by symmetry around \(\xi \),

\[ f\left ( \xi -i\delta \right ) =f\left ( \xi +i\delta \right ) \]

for any integer \(i\) in the above Riemann sum. This causes terms to cancel in equation (1) above.

For example, the term \(-\delta f\left ( \xi -\delta \right ) \) on the left of \(\xi f_{\xi }\) will cancel with the term \(+\delta f\left ( \xi +\delta \right ) =+\delta f\left ( \xi -\delta \right ) \) on the right of \(\xi f_{\xi }\), and so on. Then we obtain the following sum

\[ E\left ( X\right ) =\delta \left [ \cdots +\xi f\left ( \xi -2\delta \right ) \ \ +\ \xi f\left ( \xi -\delta \right ) \ +\ \ \xi f_{\xi }\ +\ \xi f\left ( \xi +\delta \right ) +\ \xi f\ \left ( \xi +2\delta \right ) \ +\cdots \right ] \]

Taking \(\xi \) out as a common factor gives

\begin{equation} E\left ( X\right ) =\xi \delta \left [ \cdots +f\left ( \xi -2\delta \right ) +f\left ( \xi -\delta \right ) +f_{\xi }+f\left ( \xi +\delta \right ) +f\left ( \xi +2\delta \right ) +\cdots \right ] \tag {2}\end{equation}

But

\[ \delta \left [ \cdots +f\left ( \xi -2\delta \right ) +f\left ( \xi -\delta \right ) +f_{\xi }+f\left ( \xi +\delta \right ) +f\left ( \xi +2\delta \right ) +\cdots \right ] \]

is just the total area under \(f\left ( x\right ) \) in the Riemann sum sense, i.e. \(\int _{-\infty }^{\infty }f\left ( x\right ) dx\).

Hence (2) becomes

\[ E\left ( X\right ) =\xi \int _{-\infty }^{\infty }f\left ( x\right ) dx \]

But since \(f\left ( x\right ) \) is a density, this area is one. Hence

\[ \fbox {$E\left ( X\right ) =\xi $}\]
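As a quick numerical sanity check (not part of the original argument), a Riemann sum for a density symmetric about an assumed point \(\xi \), here a normal density with \(\xi =2\), recovers \(E\left ( X\right ) =\xi \):

```python
import math

# Hypothetical example: a normal density with mean xi = 2.0, sigma = 1.0,
# which is symmetric about xi. We approximate E(X) = integral of x f(x) dx
# by a Riemann sum of strips of width delta over a wide grid.
xi, sigma = 2.0, 1.0
delta = 1e-3

def f(x):
    # normal pdf, symmetric about xi
    return math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Riemann sum over [xi - 10 sigma, xi + 10 sigma]
n = int(20 * sigma / delta)
expectation = sum((xi - 10 * sigma + k * delta) * f(xi - 10 * sigma + k * delta) * delta
                  for k in range(n))
print(expectation)  # close to xi = 2.0
```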

The density function of an exponential distribution with parameter \(\lambda \) is given by

\[ f\left ( x\right ) =\left \{ \begin {array} [c]{ccc}\lambda e^{-\lambda x} & & x\geq 0\\ 0 & & x<0 \end {array} \right . \]

First we find the expected value of an exponential random variable \(X\). From the definition of the expected value:

\begin{align*} E\left ( X\right ) & =\int _{0}^{\infty }xf\left ( x\right ) dx\\ & =\lambda \int _{0}^{\infty }xe^{-\lambda x}dx \end{align*}

Integrating by parts gives

\begin{align*} E\left ( X\right ) & =\lambda \left ( \left [ \frac {xe^{-\lambda x}}{-\lambda }\right ] _{0}^{\infty }+\frac {1}{\lambda }\int _{0}^{\infty }e^{-\lambda x}dx\right ) \\ & =-\left [ xe^{-\lambda x}\right ] _{0}^{\infty }+\int _{0}^{\infty }e^{-\lambda x}dx\\ & =0-\frac {1}{\lambda }\left [ e^{-\lambda x}\right ] _{0}^{\infty }\\ & =\frac {1}{\lambda }\end{align*}
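A small Monte Carlo sketch (the rate \(\lambda =2\) is an assumed value for illustration) agrees with \(E\left ( X\right ) =\frac {1}{\lambda }\):

```python
import random

# Monte Carlo sanity check of E(X) = 1/lambda for the exponential
# distribution; lam = 2.0 is an arbitrary assumed rate.
random.seed(0)
lam = 2.0
n = 200_000
samples = [random.expovariate(lam) for _ in range(n)]
mean = sum(samples) / n
print(mean)  # close to 1/lam = 0.5
```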

Hence \(E\left ( X\right ) =\frac {1}{\lambda }\). We need to find \(\Delta =P\left ( \left \vert X-\frac {1}{\lambda }\right \vert >\frac {2}{\lambda }\right ) \). Since \(X\geq 0\), this is the same as finding

\begin{align*} \Delta & =1-P\left ( \left \vert X-\frac {1}{\lambda }\right \vert \leq \frac {2}{\lambda }\right ) \\ & =1-P\left ( X\leq \frac {3}{\lambda }\right ) \\ & =1-\int _{0}^{\frac {3}{\lambda }}f\left ( x\right ) dx\\ & =1-\int _{0}^{\frac {3}{\lambda }}\lambda e^{-\lambda x}dx\\ & =1-\left ( 1-\frac {1}{e^{3}}\right ) =\frac {1}{e^{3}}=\fbox {$0.049787$}\end{align*}
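The exact tail probability can be verified from the exponential CDF \(F\left ( x\right ) =1-e^{-\lambda x}\); the assumed rate below is arbitrary, since the answer \(e^{-3}\) does not depend on \(\lambda \):

```python
import math

# Check that P(|X - 1/lam| > 2/lam) = e^{-3}, using the exponential
# CDF F(x) = 1 - e^{-lam x}.
lam = 1.7  # arbitrary assumed rate; the answer is independent of lam

def cdf(x):
    return 1 - math.exp(-lam * x) if x >= 0 else 0.0

# Since X >= 0, the event |X - 1/lam| > 2/lam reduces to X > 3/lam.
tail = 1 - cdf(3 / lam)
print(tail)  # e^{-3}, about 0.049787
```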

Now compare to the Chebyshev bound, which says that

\begin{equation} P\left ( \left \vert X-E\left ( X\right ) \right \vert \geq t\right ) \leq \frac {Var\left ( X\right ) }{t^{2}}\tag {1}\end{equation}

Hence the upper bound by Chebyshev is \(\frac {Var\left ( X\right ) }{\left ( \frac {2}{\lambda }\right ) ^{2}}\). We now need to find \(Var\left ( X\right ) \) and this is given by

\[ Var\left ( X\right ) =E\left ( X^{2}\right ) -\left [ E\left ( X\right ) \right ] ^{2}\]

But

\begin{align*} E\left ( X^{2}\right ) & =\int _{0}^{\infty }x^{2}f\left ( x\right ) dx=\int _{0}^{\infty }x^{2}\lambda e^{-\lambda x}dx\\ & =\lambda \left [ \frac {-1}{\lambda }\left [ x^{2}e^{-\lambda x}\right ] _{0}^{\infty }+\frac {2}{\lambda }\int _{0}^{\infty }xe^{-\lambda x}dx\right ] \\ & =\left [ -1\left [ 0\right ] +2\int _{0}^{\infty }xe^{-\lambda x}dx\right ] \\ & =2\int _{0}^{\infty }xe^{-\lambda x}dx\\ & =2\left [ \frac {-1}{\lambda }\left [ xe^{-\lambda x}\right ] _{0}^{\infty }+\frac {1}{\lambda }\int _{0}^{\infty }e^{-\lambda x}dx\right ] \\ & =2\left [ 0+\frac {1}{\lambda }\left [ \frac {e^{-\lambda x}}{-\lambda }\right ] _{0}^{\infty }\right ] \\ & =2\left [ -\frac {1}{\lambda ^{2}}\left [ e^{-\lambda \infty }-e^{0}\right ] \right ] \\ & =-\frac {2}{\lambda ^{2}}\left [ 0-1\right ] \\ & =\frac {2}{\lambda ^{2}}\end{align*}

so

\begin{align*} Var\left ( X\right ) & =\frac {2}{\lambda ^{2}}-\left [ \frac {1}{\lambda }\right ] ^{2}\\ & =\fbox {$\frac {1}{\lambda ^{2}}$}\end{align*}
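A Monte Carlo sketch (again with an assumed rate \(\lambda =2\)) agrees with \(Var\left ( X\right ) =\frac {1}{\lambda ^{2}}\):

```python
import random

# Monte Carlo sanity check of Var(X) = 1/lam^2 for the exponential
# distribution; lam = 2.0 is an arbitrary assumed rate.
random.seed(4)
lam = 2.0
n = 200_000
samples = [random.expovariate(lam) for _ in range(n)]
mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
print(var)  # close to 1/lam^2 = 0.25
```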

Hence (1) becomes

\[ P\left ( \left \vert X-E\left ( X\right ) \right \vert \geq \frac {2}{\lambda }\right ) \leq \frac {\frac {1}{\lambda ^{2}}}{\frac {4}{\lambda ^{2}}}=\fbox {$0.25$}\]

Hence an upper bound for the probability by Chebyshev is \(0.25\), and the actual probability found was \(0.049787\), which is well within this bound.
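The comparison can be sketched numerically; the simulation below (with an assumed rate \(\lambda =1\), though both quantities are independent of \(\lambda \)) estimates the actual tail probability and checks it against the Chebyshev bound of \(0.25\):

```python
import math
import random

# Compare the exact tail probability with the Chebyshev upper bound
# Var(X)/t^2 = (1/lam^2)/(2/lam)^2 = 1/4, via a small Monte Carlo run.
random.seed(1)
lam = 1.0  # assumed rate; both quantities are independent of lam
n = 100_000
hits = sum(1 for _ in range(n)
           if abs(random.expovariate(lam) - 1 / lam) > 2 / lam)
estimate = hits / n
print(estimate)          # close to e^{-3}, about 0.0498
print(estimate <= 0.25)  # the Chebyshev bound indeed holds: prints True
```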

Let \(\Delta ={\displaystyle \sum \limits _{k=1}^{\infty }} P\left ( X\geq k\right ) \), where \(X\) takes nonnegative integer values. We need to show that this equals \(E\left ( X\right ) \).

\begin{align*} \Delta & ={\displaystyle \sum \limits _{k=1}^{\infty }} P\left ( X\geq k\right ) \\ & =P\left ( X\geq 1\right ) +P\left ( X\geq 2\right ) +P\left ( X\geq 3\right ) +\cdots \end{align*}

But

\[ P\left ( X\geq 1\right ) =P\left ( X=1\right ) +P\left ( X=2\right ) +P\left ( X=3\right ) +\cdots \]

and

\[ P\left ( X\geq 2\right ) =P\left ( X=2\right ) +P\left ( X=3\right ) +P\left ( X=4\right ) +\cdots \]

and

\[ P\left ( X\geq 3\right ) =P\left ( X=3\right ) +P\left ( X=4\right ) +P\left ( X=5\right ) +\cdots \]

and so on. Hence, adding all of the above, the term \(P\left ( X=k\right ) \) appears \(k\) times, which comes out as follows

\begin{align*} \Delta & =P\left ( X\geq 1\right ) +P\left ( X\geq 2\right ) +P\left ( X\geq 3\right ) +\cdots \\ & =\left [ P\left ( X=1\right ) +P\left ( X=2\right ) +P\left ( X=3\right ) +\cdots \right ] \\ & +\left [ P\left ( X=2\right ) +P\left ( X=3\right ) +P\left ( X=4\right ) +\cdots \right ] \\ & +\left [ P\left ( X=3\right ) +P\left ( X=4\right ) +P\left ( X=5\right ) +\cdots \right ] \\ & +\cdots \\ & =P\left ( X=1\right ) +2P\left ( X=2\right ) +3P\left ( X=3\right ) +4P\left ( X=4\right ) +\cdots \\ & ={\displaystyle \sum \limits _{k=1}^{\infty }} k\ P\left ( X=k\right ) \end{align*}

But this is the definition of \(E\left ( X\right ) \); hence \(\Delta =E\left ( X\right ) \).
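The tail-sum identity can be checked on a small hypothetical integer-valued distribution, here a fair six-sided die (an assumed example, not from the quiz):

```python
# Check the tail-sum identity E(X) = sum_{k>=1} P(X >= k) on a small
# hypothetical integer-valued distribution: a fair six-sided die.
pmf = {k: 1 / 6 for k in range(1, 7)}

expectation = sum(k * p for k, p in pmf.items())
tail_sum = sum(sum(p for j, p in pmf.items() if j >= k) for k in range(1, 7))
print(expectation, tail_sum)  # both equal 3.5 up to floating point
```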

\(X\) is the number of trials needed to obtain \(r\) successes, where each trial has probability \(p\) of success.

Let \(Y_{1}\) be a random variable representing the number of trials needed to obtain the first success (counting the success trial).

Let \(Y_{2}\) be the number of additional trials, after the first success, needed to obtain the second success.

Let \(Y_{3}\) be the number of additional trials, after the second success, needed to obtain the third success.

and so on. Hence

Let \(Y_{i}\) be the number of trials needed to obtain the \(i^{th}\ \)success after the \(\left ( i-1\right ) ^{th}\) success.

Therefore

\begin{align*} X & =Y_{1}+Y_{2}+\cdots +Y_{r}\\ & ={\displaystyle \sum \limits _{i=1}^{r}} Y_{i}\end{align*}

Hence

\begin{align} E\left ( X\right ) & =E\left ( {\displaystyle \sum \limits _{i=1}^{r}} Y_{i}\right ) \tag {1}\\ & ={\displaystyle \sum \limits _{i=1}^{r}} E\left ( Y_{i}\right ) \nonumber \end{align}

But a Geometric r.v. represents the number of trials needed to obtain a success (counting the success trial), with each trial having probability \(p\) of success. So each \(Y_{i}\) is a Geometric r.v., and we need to find \(E\left ( Y\right ) \) where \(Y\) is a Geometric r.v. with parameter \(p\)

\[ E\left ( Y\right ) ={\displaystyle \sum \limits _{k=1}^{\infty }} kP\left ( Y=k\right ) \]

But since the first success occurs on trial \(k\) exactly when it is preceded by \(k-1\) failures,

\[ P\left ( Y=k\right ) =p\left ( 1-p\right ) ^{k-1}\]

Hence

\begin{align} E\left ( Y\right ) & ={\displaystyle \sum \limits _{k=1}^{\infty }} kp\left ( 1-p\right ) ^{k-1}=p{\displaystyle \sum \limits _{k=1}^{\infty }} k\left ( 1-p\right ) ^{k-1}\nonumber \\ & =p\left ( \frac {1}{p^{2}}\right ) \nonumber \\ & =\fbox {$\frac {1}{p}$}\tag {2}\end{align}

where we used \({\displaystyle \sum \limits _{k=1}^{\infty }} kx^{k-1}=\frac {1}{\left ( 1-x\right ) ^{2}}\) with \(x=1-p\). Substitute (2) into (1)

\begin{align*} E\left ( X\right ) & ={\displaystyle \sum \limits _{i=1}^{r}} \frac {1}{p}\\ & =\frac {1}{p}{\displaystyle \sum \limits _{i=1}^{r}} 1\\ & =\fbox {$\frac {r}{p}$}\end{align*}
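A Monte Carlo sketch (with assumed values \(r=3\), \(p=0.4\)) of the number of trials needed to reach \(r\) successes agrees with \(E\left ( X\right ) =\frac {r}{p}\):

```python
import random

# Monte Carlo sketch of X = number of trials needed to obtain r successes,
# with assumed values r = 3, p = 0.4 (each trial succeeds with prob. p).
random.seed(2)
r, p = 3, 0.4
n = 100_000

def trials_until_r_successes():
    trials = successes = 0
    while successes < r:
        trials += 1
        if random.random() < p:
            successes += 1
    return trials

mean = sum(trials_until_r_successes() for _ in range(n)) / n
print(mean)  # close to r/p = 7.5
```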

Let \(U=a+bX\) and \(V=c+dY\), where \(b\neq 0\) and \(d\neq 0\). The correlation coefficient is defined as

\begin{equation} \rho _{U,V}=\frac {Cov\left ( U,V\right ) }{\sqrt {Var\left ( U\right ) Var\left ( V\right ) }}\tag {1}\end{equation}

But

\[ Cov\left ( U,V\right ) =E\left ( UV\right ) -E\left ( U\right ) E\left ( V\right ) \]

and

\begin{align*} E\left ( U\right ) & =E\left ( a+bX\right ) =E\left ( a\right ) +E\left ( bX\right ) \\ & =a+bE\left ( X\right ) \end{align*}

and

\begin{align*} E\left ( V\right ) & =E\left ( c+dY\right ) =E\left ( c\right ) +E\left ( dY\right ) \\ & =c+dE\left ( Y\right ) \end{align*}

so

\begin{equation} Cov\left ( U,V\right ) =E\left [ \left ( a+bX\right ) \left ( c+dY\right ) \right ] -\left [ a+bE\left ( X\right ) \right ] \left [ c+dE\left ( Y\right ) \right ] \tag {2}\end{equation}

and

\begin{equation} Var\left ( U\right ) =Var\left ( a+bX\right ) =b^{2}Var\left ( X\right ) \tag {3}\end{equation}

and

\begin{equation} Var\left ( V\right ) =Var\left ( c+dY\right ) =d^{2}Var\left ( Y\right ) \tag {4}\end{equation}

Substituting (2),(3),(4) into (1) we obtain

\begin{align*} \rho _{U,V} & =\frac {E\left [ \left ( a+bX\right ) \left ( c+dY\right ) \right ] -\left [ a+bE\left ( X\right ) \right ] \left [ c+dE\left ( Y\right ) \right ] }{\sqrt {b^{2}Var\left ( X\right ) d^{2}Var\left ( Y\right ) }}\\ & \\ & =\frac {E\left [ ac+adY+cbX+bXdY\right ] -\left ( ac+adE\left ( Y\right ) +cbE\left ( X\right ) +bdE\left ( X\right ) E\left ( Y\right ) \right ) }{\left \vert bd\right \vert \sqrt {Var\left ( X\right ) Var\left ( Y\right ) }}\\ & \\ & =\frac {ac+adE\left ( Y\right ) +cbE\left ( X\right ) +bdE\left ( XY\right ) -ac-adE\left ( Y\right ) -cbE\left ( X\right ) -bdE\left ( X\right ) E\left ( Y\right ) }{\left \vert bd\right \vert \sqrt {Var\left ( X\right ) Var\left ( Y\right ) }}\\ & \\ & =\frac {bdE\left ( XY\right ) -bdE\left ( X\right ) E\left ( Y\right ) }{\left \vert bd\right \vert \sqrt {Var\left ( X\right ) Var\left ( Y\right ) }}\\ & =\frac {bd\left [ E\left ( XY\right ) -E\left ( X\right ) E\left ( Y\right ) \right ] }{\left \vert bd\right \vert \sqrt {Var\left ( X\right ) Var\left ( Y\right ) }}\end{align*}

Now cancel the \(bd\) term, using \(\frac {bd}{\left \vert bd\right \vert }=\pm 1\). So depending on whether \(bd<0\) or \(bd>0\), we obtain \(-\rho _{X,Y}\) or \(+\rho _{X,Y}\).

Hence, taking the absolute value to remove the dependence on the sign of \(bd\), we write

\[ \left \vert \rho _{U,V}\right \vert =\left \vert \rho _{X,Y}\right \vert \]
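A Monte Carlo sketch (the coefficients and the joint distribution below are assumed for illustration) confirms that the affine maps change \(\rho \) only by the sign of \(bd\):

```python
import math
import random

# Monte Carlo sketch: correlation is invariant (up to sign) under the
# affine maps U = a + bX, V = c + dY; a, b, c, d below are assumed values.
random.seed(3)
n = 50_000
a, b, c, d = 1.0, -2.0, 3.0, 4.0  # note bd < 0, so the sign flips

xs = [random.gauss(0, 1) for _ in range(n)]
ys = [0.6 * x + 0.8 * random.gauss(0, 1) for x in xs]  # correlated pair

def corr(u, v):
    # sample correlation coefficient
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v)) / n
    su = math.sqrt(sum((ui - mu) ** 2 for ui in u) / n)
    sv = math.sqrt(sum((vi - mv) ** 2 for vi in v) / n)
    return cov / (su * sv)

rho_xy = corr(xs, ys)
rho_uv = corr([a + b * x for x in xs], [c + d * y for y in ys])
print(rho_xy, rho_uv)  # equal magnitudes, opposite signs (bd < 0)
```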

4.4.1 Graded

19/20
