4.3 Quiz 3

4.3.1 short version

(a) Problem review:

\(T_{1}\) and \(T_{2}\) are random variables with densities \(f_{T_{1}}\left ( t_{1}\right ) =\alpha e^{-\alpha t_{1}}\) and \(f_{T_{2}}\left ( t_{2}\right ) =\beta e^{-\beta t_{2}}\)

\(\alpha \) and \(\beta \) can be thought of as the failure rates of the respective components, and \(T_{i}\) is the lifetime of component \(i\). Since \(T_{1}\) is continuous, \(P\left ( T_{1}=t_{1}\right ) =0\) exactly; what \(f_{T_{1}}\left ( t_{1}\right ) \,dt\) gives is, approximately, the probability that the first component has a lifetime near \(t_{1}\), given that the failure rate for this kind of component is \(\alpha .\)

solution:

Now we know that

\[ P\left ( T_{1}>T_{2}\right ) =\int \int _{t_{1}>t_{2}}f_{T_{1},T_{2}}\left ( t_{1},t_{2}\right ) dt_{2}dt_{1}\]

The following diagram helps determine the region of integration, namely the part of the first quadrant where \(t_{2}<t_{1}\):

Hence

\[ P\left ( T_{1}>T_{2}\right ) =\int _{t_{1}=0}^{t_{1}=\infty }\int _{t_{2}=0}^{t_{2}=t_{1}}f_{T_{1},T_{2}}\left ( t_{1},t_{2}\right ) dt_{2}\ dt_{1}\]

But since \(T_{1}\perp T_{2},\) then the joint density is the product of the marginal densities.

Hence

\begin{align*} f_{T_{1},T_{2}}\left ( t_{1},t_{2}\right ) & =f_{T_{1}}\left ( t_{1}\right ) f_{T_{2}}\left ( t_{2}\right ) \\ & =\alpha e^{-\alpha t_{1}}\beta e^{-\beta t_{2}}\end{align*}

Therefore

\begin{align*} P\left ( T_{1}>T_{2}\right ) & =\int _{0}^{\infty }\int _{0}^{t_{1}}\alpha e^{-\alpha t_{1}}\beta e^{-\beta t_{2}}\ dt_{2}\ dt_{1}\\ & =\beta \alpha \int _{0}^{\infty }e^{-\alpha t_{1}}\left ( \int _{0}^{t_{1}}e^{-\beta t_{2}}\ dt_{2}\right ) \ dt_{1}\\ & =\beta \alpha \int _{0}^{\infty }e^{-\alpha t_{1}}\left ( -\frac {1}{\beta }\left [ e^{-\beta t_{2}}\right ] _{t_{2}=0}^{t_{2}=t_{1}}\right ) \ dt_{1}\\ & =-\alpha \int _{0}^{\infty }e^{-\alpha t_{1}}\left [ e^{-\beta t_{1}}-1\right ] \ dt_{1}\\ & =-\alpha \int _{0}^{\infty }e^{t_{1}\left ( -\alpha -\beta \right ) }-e^{-\alpha t_{1}}\ dt_{1}\\ & =-\alpha \left ( \left [ \frac {1}{\left ( -\alpha -\beta \right ) }e^{t_{1}\left ( -\alpha -\beta \right ) }\right ] _{0}^{\infty }-\frac {1}{-\alpha }\left [ e^{-\alpha t_{1}}\right ] _{0}^{\infty }\right ) \end{align*}

We take \(\alpha ,\beta >0\) since failure rates are positive and we expect the densities to decay to zero. This is also a requirement for the integrals not to diverge.

Hence the above becomes

\begin{align*} P\left ( T_{1}>T_{2}\right ) & =-\alpha \left ( \frac {1}{\left ( -\alpha -\beta \right ) }\left [ e^{t_{1}\left ( -\alpha -\beta \right ) }\right ] _{0}^{\infty }+\frac {1}{\alpha }\left [ e^{-\alpha t_{1}}\right ] _{0}^{\infty }\right ) \\ & =-\alpha \left ( \frac {1}{\left ( -\alpha -\beta \right ) }\left [ e^{-\infty }-1\right ] +\frac {1}{\alpha }\left [ e^{-\infty }-1\right ] \right ) \\ & =-\alpha \left ( \frac {1}{\left ( -\alpha -\beta \right ) }\left [ 0-1\right ] +\frac {1}{\alpha }\left [ 0-1\right ] \right ) \\ & =-\alpha \left ( \frac {1}{\left ( \alpha +\beta \right ) }-\frac {1}{\alpha }\right ) \\ & =-\alpha \left ( \frac {\alpha -\left ( \alpha +\beta \right ) }{\alpha \left ( \alpha +\beta \right ) }\right ) \\ & =-\left ( \frac {\alpha -\alpha -\beta }{\left ( \alpha +\beta \right ) }\right ) \end{align*}

Hence

\[ \fbox {$P\left ( T_{1}>T_{2}\right ) =\frac {\beta }{\left ( \alpha +\beta \right ) }$}\]
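As a quick numerical sanity check, here is a minimal Monte Carlo sketch of this result (the rates \(\alpha =1.3\), \(\beta =0.7\) are arbitrary test values, not from the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 1.3, 0.7  # arbitrary test rates, not from the problem
n = 1_000_000

t1 = rng.exponential(scale=1 / alpha, size=n)  # T1 ~ Exp(alpha)
t2 = rng.exponential(scale=1 / beta, size=n)   # T2 ~ Exp(beta)

print("simulated   P(T1 > T2):", (t1 > t2).mean())
print("closed form beta/(alpha+beta):", beta / (alpha + beta))
```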

(b) Let \(W=2T_{2}\). Then

\begin{align*} F_{W}\left ( w\right ) & =P\left ( W\leq w\right ) \\ & =P\left ( 2T_{2}\leq w\right ) \\ & =P\left ( T_{2}\leq \frac {w}{2}\right ) \\ & =F_{T_{2}}\left ( \frac {w}{2}\right ) \end{align*}

Hence

\[ f_{W}\left ( w\right ) =f_{T_{2}}\left ( \frac {w}{2}\right ) \times \frac {d}{dw}\left ( \frac {w}{2}\right ) \]

Hence

\[ \fbox {$f_{W}\left ( w\right ) =\frac {1}{2}f_{T_{2}}\left ( \frac {w}{2}\right ) $}\]
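Substituting \(f_{T_{2}}\left ( \frac {w}{2}\right ) =\beta e^{-\beta w/2}\) into the box shows that \(f_{W}\left ( w\right ) =\frac {\beta }{2}e^{-\beta w/2}\), i.e. \(W\) is itself exponential with rate \(\frac {\beta }{2}\). A minimal sketch checking this (with an arbitrary test value \(\beta =0.7\)):

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 0.7  # arbitrary test rate
w = 2 * rng.exponential(scale=1 / beta, size=1_000_000)  # W = 2*T2

# compare the empirical CDF of W with the CDF of Exp(beta/2) at a few points
for point in (0.5, 1.0, 3.0):
    print(point, (w <= point).mean(), 1 - np.exp(-beta * point / 2))
```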

(c) We need to find \(P\left ( T_{1}>2T_{2}\right ) \), which is the same as \(P\left ( T_{1}>W\right ) \); hence this is the same as part (a) with \(T_{2}\) replaced by \(W\), as shown in the following diagram

Hence

\begin{align*} P\left ( T_{1}>W\right ) & =\int _{0}^{\infty }\int _{0}^{t_{1}}f_{T_{1}}\left ( t_{1}\right ) f_{W}\left ( w\right ) \ dw\ dt_{1}\\ & =\int _{0}^{\infty }\int _{0}^{t_{1}}f_{T_{1}}\left ( t_{1}\right ) \left [ \frac {1}{2}f_{T_{2}}\left ( \frac {w}{2}\right ) \right ] \ dw\ dt_{1}\\ & =\int _{0}^{\infty }\int _{0}^{t_{1}}\alpha e^{-\alpha t_{1}}\left [ \frac {1}{2}\beta e^{-\beta \left ( \frac {w}{2}\right ) }\right ] \ dw\ dt_{1}\\ & =\frac {1}{2}\beta \alpha \int _{0}^{\infty }e^{-\alpha t_{1}}\left ( \int _{0}^{t_{1}}e^{-\beta \left ( \frac {w}{2}\right ) }\ dw\right ) \ dt_{1}\\ & =\frac {1}{2}\beta \alpha \int _{0}^{\infty }e^{-\alpha t_{1}}\left ( -\frac {2}{\beta }\left [ e^{-\beta \left ( \frac {w}{2}\right ) }\right ] _{w=0}^{w=t_{1}}\right ) \ dt_{1}\\ & =-\alpha \int _{0}^{\infty }e^{-\alpha t_{1}}\left [ e^{-\beta \left ( \frac {t_{1}}{2}\right ) }-1\right ] \ dt_{1}\\ & =-\alpha \int _{0}^{\infty }e^{t_{1}\left ( -\alpha -\frac {\beta }{2}\right ) }-e^{-\alpha t_{1}}\ dt_{1}\\ & =-\alpha \int _{0}^{\infty }e^{t_{1}\left ( \frac {-2\alpha -\beta }{2}\right ) }-e^{-\alpha t_{1}}\ dt_{1}\\ & =-\alpha \left ( \left [ \frac {2}{\left ( -2\alpha -\beta \right ) }e^{t_{1}\left ( \frac {-2\alpha -\beta }{2}\right ) }\right ] _{0}^{\infty }-\frac {1}{-\alpha }\left [ e^{-\alpha t_{1}}\right ] _{0}^{\infty }\right ) \\ & =-\alpha \left ( \frac {2}{\left ( -2\alpha -\beta \right ) }\left [ 0-1\right ] +\frac {1}{\alpha }\left [ 0-1\right ] \right ) \\ & =-\alpha \left ( \frac {2}{\left ( 2\alpha +\beta \right ) }-\frac {1}{\alpha }\right ) \end{align*}

Hence

\begin{align*} P\left ( T_{1}>W\right ) & =-\left ( \frac {2\alpha -\left ( 2\alpha +\beta \right ) }{\left ( 2\alpha +\beta \right ) }\right ) \\ & =-\left ( \frac {2\alpha -2\alpha -\beta }{\left ( 2\alpha +\beta \right ) }\right ) \end{align*}

Then

\[ \fbox {$P\left ( T_{1}>W\right ) =\frac {\beta }{\left ( 2\alpha +\beta \right ) }$}\]
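As in part (a), a minimal Monte Carlo sketch of this result (the rates are arbitrary test values):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta = 1.3, 0.7  # arbitrary test rates
n = 1_000_000

t1 = rng.exponential(scale=1 / alpha, size=n)
t2 = rng.exponential(scale=1 / beta, size=n)

print("simulated   P(T1 > 2*T2):", (t1 > 2 * t2).mean())
print("closed form beta/(2*alpha+beta):", beta / (2 * alpha + beta))
```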

Problem review: The Poisson distribution is a discrete probability distribution (its probability function is normally called the probability mass function, \(pmf\)). This means the random variable is a discrete random variable.

The random variable \(X\) in this case is the number of successes in \(n\) trials, where the probability of success in each trial is \(p\) and the trials are independent of each other. The difference between Poisson and Binomial is that in Poisson we look at the problem as \(n\) becomes very large and \(p\) becomes very small, in such a way that the product \(np\) goes to a fixed value called \(\lambda \), the Poisson parameter. Then we write \(P\left ( X=k\right ) =\frac {\lambda ^{k}}{k!}e^{-\lambda }\) where \(k=0,1,2,\cdots \). The following diagram illustrates this problem, showing the three random variables we need to analyze and the time line.

But what is "trials" in this problem?  If we divide the time line itself into very small time intervals \(\delta t\) then the number of time intervals is the number of trials, and we assume that at most one event will occur in this time interval (since it is too small). The probability \(p\) of event occurring in this \(\delta t\) is the same in the interval \(\left [ t_{0},t_{1}\right ] \) and in the interval \(\left [ t_{1},t_{2}\right ] \). Now let us find \(\lambda \) for \(X\) and \(Y\) and \(Z\) based on this. Since \(\lambda =np\) where \(n\) is the number of trials, then for \(X\) we have \(\lambda _{x}=n_{x}p=\frac {\left ( t_{1}-t_{0}\right ) }{\delta t}p\) where we divided the time interval by the time width \(\delta t\) to obtain the number of time slots for \(X\). We do the same for \(Y\) and obtain that \(\lambda _{y}=\frac {\left ( t_{2}-t_{1}\right ) }{\delta t}p\)

Similarly, \(\lambda _{z}=\frac {\left ( t_{2}-t_{0}\right ) }{\delta t}p=\frac {\left ( t_{2}-t_{1}\right ) +\left ( t_{1}-t_{0}\right ) }{\delta t}p=\frac {\left ( t_{2}-t_{1}\right ) }{\delta t}p+\frac {\left ( t_{1}-t_{0}\right ) }{\delta t}p\), hence \(\lambda _{z}=\lambda _{x}+\lambda _{y}\)

Let us refer to the random variable \(N\left ( t_{1},t_{2}\right ) \) as \(Y\) and the r.v. \(N\left ( t_{0},t_{1}\right ) \) as \(X\) and the r.v. \(N\left ( t_{0},t_{2}\right ) \) as \(Z\)

The problem is then asking to find \(P\left ( X=x|Z=n\right ) \) and to identify \(pmf\left ( X|Z\right ) \)

To help in the solution, we first draw a diagram to make it more clear.

We take the event probability \(p\) per time slot to be the same for the \(3\) random variables \(X,Y,Z\); their \(\lambda \)'s differ only through the lengths of the intervals.

\[ P\left ( X=x|Z=n\right ) =\frac {P\left ( X=x,Z=n\right ) }{P\left ( Z=n\right ) }\]

But \(Z=n\) is the same as \(X+Y=n\) hence

\begin{align*} P\left ( X=x|Z=n\right ) & =\frac {P\left ( X=x,\left ( X+Y\right ) =n\right ) }{P\left ( Z=n\right ) }\\ & =\frac {P\left ( X=x,Y=n-x\right ) }{P\left ( Z=n\right ) }\end{align*}

Now \(X\perp Y\), since the number of events in \(\left [ t_{0},t_{1}\right ] \) is independent of the number of events that could occur in \([t_{1},t_{2}]\).

Given this, we can now write the joint probability of \(X,Y\) as the product of the marginal probabilities. Hence the numerator in the above can be rewritten and we obtain

\begin{equation} P\left ( X=x|Z=n\right ) =\frac {P\left ( X=x\right ) P\left ( Y=n-x\right ) }{P\left ( Z=n\right ) } \tag {1}\end{equation}

Now since each of the above is a Poisson random variable, then

\begin{align*} P\left ( X=x\right ) & =\frac {\left ( \lambda _{x}\right ) ^{x}}{x!}e^{-\lambda _{x}}\\ P\left ( Y=n-x\right ) & =\frac {\left ( \lambda _{y}\right ) ^{n-x}}{\left ( n-x\right ) !}e^{-\lambda _{y}}\\ P\left ( Z=n\right ) & =\frac {\left ( \lambda _{z}\right ) ^{n}}{n!}e^{-\lambda _{z}}\end{align*}

Hence (1) becomes

\begin{equation} P\left ( X=x|Z=n\right ) =\left ( \frac {\left ( \lambda _{x}\right ) ^{x}}{x!}e^{-\lambda _{x}}\right ) \left ( \frac {\left ( \lambda _{y}\right ) ^{n-x}}{\left ( n-x\right ) !}e^{-\lambda _{y}}\right ) \frac {1}{\frac {\left ( \lambda _{z}\right ) ^{n}}{n!}e^{-\lambda _{z}}} \tag {2}\end{equation}

Hence

\[ P\left ( X=x|Z=n\right ) =\frac {n!}{x!\left ( n-x\right ) !}\left ( \left ( \lambda _{x}\right ) ^{x}e^{-\lambda _{x}}\right ) \left ( \left ( \lambda _{y}\right ) ^{n-x}e^{-\lambda _{y}}\right ) \frac {e^{\lambda _{z}}}{\left ( \lambda _{z}\right ) ^{n}}\]

But we found that \(\lambda _{z}=\lambda _{x}+\lambda _{y}\), hence the exponential terms above cancel and we get

\begin{align*} P\left ( X=x|Z=n\right ) & =\frac {n!}{x!\left ( n-x\right ) !}\frac {\left ( \lambda _{x}\right ) ^{x}\left ( \lambda _{y}\right ) ^{n-x}}{\left ( \lambda _{z}\right ) ^{n}}\\ & =\begin {pmatrix} n\\ x \end {pmatrix} \frac {\left ( \lambda _{x}\right ) ^{x}\left ( \lambda _{y}\right ) ^{n-x}}{\left ( \lambda _{z}\right ) ^{n}}\\ & =\begin {pmatrix} n\\ x \end {pmatrix} \frac {\left ( \lambda _{x}\right ) ^{x}\left ( \lambda _{y}\right ) ^{n-x}}{\left ( \lambda _{x}+\lambda _{y}\right ) ^{n}}\\ & =\begin {pmatrix} n\\ x \end {pmatrix} \frac {\left ( \lambda _{x}\right ) ^{x}\left ( \lambda _{y}\right ) ^{n-x}}{\left ( \lambda _{x}+\lambda _{y}\right ) ^{x}\left ( \lambda _{x}+\lambda _{y}\right ) ^{n-x}}\\ & =\begin {pmatrix} n\\ x \end {pmatrix} \frac {\left ( \lambda _{x}\right ) ^{x}}{\left ( \lambda _{x}+\lambda _{y}\right ) ^{x}}\frac {\left ( \lambda _{y}\right ) ^{n-x}}{\left ( \lambda _{x}+\lambda _{y}\right ) ^{n-x}}\\ & =\begin {pmatrix} n\\ x \end {pmatrix} \left ( \frac {\lambda _{x}}{\lambda _{x}+\lambda _{y}}\right ) ^{x}\left ( \frac {\lambda _{y}}{\lambda _{x}+\lambda _{y}}\right ) ^{n-x}\end{align*}

Let \(k=\frac {\lambda _{x}}{\lambda _{x}+\lambda _{y}}\), then \(1-k=1-\frac {\lambda _{x}}{\lambda _{x}+\lambda _{y}}=\frac {\lambda _{x}+\lambda _{y}-\lambda _{x}}{\lambda _{x}+\lambda _{y}}=\frac {\lambda _{y}}{\lambda _{x}+\lambda _{y}}\) hence the last line above can be written as

\begin{align*} P\left ( X=x|Z=n\right ) & =\begin {pmatrix} n\\ x \end {pmatrix} \left ( \frac {\lambda _{x}}{\lambda _{x}+\lambda _{y}}\right ) ^{x}\left ( 1-\frac {\lambda _{x}}{\lambda _{x}+\lambda _{y}}\right ) ^{n-x}\\ & =\begin {pmatrix} n\\ x \end {pmatrix} \left ( k\right ) ^{x}\left ( 1-k\right ) ^{n-x}\end{align*}

But this is a Binomial with parameters \(n,k\), hence

\[ \fbox {$P\left ( X=x|Z=n\right ) = Binomial\left ( n,\frac {\lambda _{x}}{\lambda _{x}+\lambda _{y}}\right ) $}\]
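A small simulation sketch of this splitting result (\(\lambda _{x}=2\), \(\lambda _{y}=3\) and \(n=5\) are arbitrary test values): draw the two independent Poisson counts, keep the samples where the total is \(n\), and compare the conditional frequencies of \(X\) against the Binomial pmf.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(3)
lam_x, lam_y, n = 2.0, 3.0, 5  # arbitrary test values
trials = 2_000_000

x = rng.poisson(lam_x, size=trials)
y = rng.poisson(lam_y, size=trials)
cond = x[x + y == n]  # samples of X given Z = X + Y = n

k = lam_x / (lam_x + lam_y)
for j in range(n + 1):
    print(j, round((cond == j).mean(), 4),
          round(comb(n, j) * k**j * (1 - k) ** (n - j), 4))
```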

part (a)

Let \(\theta ,\) the probability of getting heads, be the specific value that the random variable \(\Theta \) can take.

Let \(g\left ( \theta \right ) \) be the probability density of \(\Theta \), which we are told is \(U\left [ 0,1\right ] \), and let \(pmf_{X}\left ( x\right ) \) be the probability mass function of the random variable \(X\), where \(X\) is the number of flips until a head first comes up. \(X\) is then a geometric random variable with parameter \(\theta \), hence

\[ pmf_{X}\left ( N\right ) =P\left ( X=N\right ) =\left ( 1-\theta \right ) ^{N-1}\theta \ \ \ \ \ \ \ \ N=1,2,3,\cdots \]

The posterior density of \(\Theta \) given \(X=N\) is then

\[ \fbox {$h\left ( \Theta =\theta |X=N\right ) =\frac {pmf_{X}\left ( N|\Theta =\theta \right ) g\left ( \theta \right ) }{\int _{0}^{1}pmf_{X}\left ( N|\Theta =\theta \right ) g\left ( \theta \right ) d\theta }$}\]

But

\[ pmf_{X}\left ( N|\Theta =\theta \right ) =\left ( 1-\theta \right ) ^{N-1}\theta \]

and \(g\left ( \theta \right ) =1\) since \(\Theta \sim U\left [ 0,1\right ] \)

Hence

\begin{equation} h\left ( \Theta =\theta |X=N\right ) =\frac {\left ( 1-\theta \right ) ^{N-1}\theta }{\int _{0}^{1}\left ( 1-\theta \right ) ^{N-1}\theta \ d\theta } \tag {1}\end{equation}

But \(\Theta \) is a continuous random variable on \([0,1]\), so how do we evaluate the above? We can evaluate it at different values of \(\theta \) in \([0,1]\); the more values we take between \(0\) and \(1\), the more accurate our picture of \(h\left ( \Theta =\theta |X=N\right ) \) becomes.

Part (b)

First let me evaluate eq (1) for \(N=1,N=2,N=6\)

For \(N=1\)

\[ h\left ( \Theta =\theta |X=1\right ) =\frac {\theta }{\int _{0}^{1}\theta \ d\theta }=\frac {\theta }{\left [ \frac {\theta ^{2}}{2}\right ] _{0}^{1}\ }=\fbox {$2\theta $}\]

For \(N=2\)

\begin{align*} h\left ( \Theta =\theta |X=2\right ) & =\frac {\left ( 1-\theta \right ) \theta }{\int _{0}^{1}\left ( 1-\theta \right ) \theta \ d\theta }=\frac {\left ( 1-\theta \right ) \theta }{\int _{0}^{1}\left ( \theta -\theta ^{2}\right ) \ d\theta \ }=\frac {\left ( 1-\theta \right ) \theta }{\left [ \frac {\theta ^{2}}{2}\right ] _{0}^{1}-\left [ \frac {\theta ^{3}}{3}\right ] _{0}^{1}\ }\\ & =\frac {\left ( 1-\theta \right ) \theta }{\frac {1}{2}-\frac {1}{3}\ }=\fbox {$6\left ( 1-\theta \right ) \theta $}\end{align*}

For \(N=6\)

\[ h\left ( \Theta =\theta |X=6\right ) =\frac {\left ( 1-\theta \right ) ^{6-1}\theta }{\int _{0}^{1}\left ( 1-\theta \right ) ^{6-1}\theta \ d\theta }=\frac {\left ( 1-\theta \right ) ^{5}\theta }{\int _{0}^{1}\left ( 1-\theta \right ) ^{5}\theta \ d\theta }\]

We can use integration by parts for the denominator, with \(u=\theta \) and \(dv=\left ( 1-\theta \right ) ^{5}d\theta \); doing this we obtain

\[ h\left ( \Theta =\theta |X=6\right ) =\fbox {$42\left ( 1-\theta \right ) ^{5}\theta $}\]

Now we plot the above 3 cases on the same plot:
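(The plot itself is not reproduced here; the following sketch regenerates the three posterior curves, assuming matplotlib is available.)

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 1, 400)
posteriors = {
    "N=1": 2 * theta,
    "N=2": 6 * (1 - theta) * theta,
    "N=6": 42 * (1 - theta) ** 5 * theta,
}
for label, h in posteriors.items():
    plt.plot(theta, h, label=label)
plt.xlabel(r"$\theta$")
plt.ylabel(r"posterior $h(\theta \mid X=N)$")
plt.legend()
plt.show()
```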

What the above plot is saying is the following:

If it takes longer for a head to come up (\(N=6\)), then the coin is taken as biased towards tails, and the probability of getting a head becomes smaller; this is why the most likely value in this case is around \(0.15\) (looking at the \(N=6\) curve; the exact mode of \(42\left ( 1-\theta \right ) ^{5}\theta \) is \(\theta =\frac {1}{6}\)). We say that, based on the observation \(N=6\), the coin's probability of getting a head is more likely to be about \(0.15\) than any other value. (The area around \(\theta =0.15\) is larger than around any other point for the same \(\delta \theta \).)

Now, when \(N=2\), i.e. we flipped the coin twice and got a head on the second flip, we see from the \(N=2\) curve that the most likely value of the coin's probability of getting a head is \(0.5\).

This is what we would expect: for an unbiased coin the probability of getting a head is \(\frac {1}{2}\), so with a fair coin we expect to see a head half of the times it is flipped. Since we flipped twice and saw a head on the second flip, this posterior also has its most likely value around \(0.5\).

When \(N=1\), we got a head the first time we flipped the coin. We see that the posterior density now has its maximum around \(1\), which means the posterior is saying this coin is biased towards heads.

The above is a method to estimate the probability distribution of the head probability itself, based on the observed events and on the prior distribution of that probability. Hence the observations allow us to update our estimate of the probability of getting a head; the posterior is conditioned on the observed event, as in this problem.

4.3.2 Graded

18/20


4.3.3 long version

(a)

Problem review:

\(T_{1}\) and \(T_{2}\) are random variables with densities \(f_{T_{1}}\left ( t_{1}\right ) =\alpha e^{-\alpha t_{1}}\) and \(f_{T_{2}}\left ( t_{2}\right ) =\beta e^{-\beta t_{2}}\)

\(\alpha \) and \(\beta \) can be thought of as the failure rates of the respective components, and \(T_{i}\) is the lifetime of component \(i\). Since \(T_{1}\) is continuous, \(P\left ( T_{1}=t_{1}\right ) =0\) exactly; what \(f_{T_{1}}\left ( t_{1}\right ) \,dt\) gives is, approximately, the probability that the first component has a lifetime near \(t_{1}\), given that the failure rate for this kind of component is \(\alpha .\)

solution:

Now we know that

\[ P\left ( T_{1}>T_{2}\right ) =\int \int _{t_{1}>t_{2}}f_{T_{1},T_{2}}\left ( t_{1},t_{2}\right ) dt_{2}dt_{1}\]

The following diagram helps determine the region of integration, namely the part of the first quadrant where \(t_{2}<t_{1}\):

Hence

\[ P\left ( T_{1}>T_{2}\right ) =\int _{t_{1}=0}^{t_{1}=\infty }\int _{t_{2}=0}^{t_{2}=t_{1}}f_{T_{1},T_{2}}\left ( t_{1},t_{2}\right ) dt_{2}\ dt_{1}\]

But since \(T_{1}\perp T_{2},\) then the joint density is the product of the marginal densities.

Hence

\begin{align*} f_{T_{1},T_{2}}\left ( t_{1},t_{2}\right ) & =f_{T_{1}}\left ( t_{1}\right ) f_{T_{2}}\left ( t_{2}\right ) \\ & =\alpha e^{-\alpha t_{1}}\beta e^{-\beta t_{2}}\end{align*}

Therefore

\begin{align*} P\left ( T_{1}>T_{2}\right ) & =\int _{0}^{\infty }\int _{0}^{t_{1}}\alpha e^{-\alpha t_{1}}\beta e^{-\beta t_{2}}\ dt_{2}\ dt_{1}\\ & =\beta \alpha \int _{0}^{\infty }e^{-\alpha t_{1}}\left ( \int _{0}^{t_{1}}e^{-\beta t_{2}}\ dt_{2}\right ) \ dt_{1}\\ & =\beta \alpha \int _{0}^{\infty }e^{-\alpha t_{1}}\left ( -\frac {1}{\beta }\left [ e^{-\beta t_{2}}\right ] _{t_{2}=0}^{t_{2}=t_{1}}\right ) \ dt_{1}\\ & =-\alpha \int _{0}^{\infty }e^{-\alpha t_{1}}\left [ e^{-\beta t_{1}}-1\right ] \ dt_{1}\\ & =-\alpha \int _{0}^{\infty }e^{t_{1}\left ( -\alpha -\beta \right ) }-e^{-\alpha t_{1}}\ dt_{1}\\ & =-\alpha \left ( \left [ \frac {1}{\left ( -\alpha -\beta \right ) }e^{t_{1}\left ( -\alpha -\beta \right ) }\right ] _{0}^{\infty }-\frac {1}{-\alpha }\left [ e^{-\alpha t_{1}}\right ] _{0}^{\infty }\right ) \end{align*}

We take \(\alpha ,\beta >0\) since failure rates are positive and we expect the densities to decay to zero. This is also a requirement for the integrals not to diverge.

Hence the above becomes

\begin{align*} P\left ( T_{1}>T_{2}\right ) & =-\alpha \left ( \frac {1}{\left ( -\alpha -\beta \right ) }\left [ e^{t_{1}\left ( -\alpha -\beta \right ) }\right ] _{0}^{\infty }+\frac {1}{\alpha }\left [ e^{-\alpha t_{1}}\right ] _{0}^{\infty }\right ) \\ & =-\alpha \left ( \frac {1}{\left ( -\alpha -\beta \right ) }\left [ e^{-\infty }-1\right ] +\frac {1}{\alpha }\left [ e^{-\infty }-1\right ] \right ) \\ & =-\alpha \left ( \frac {1}{\left ( -\alpha -\beta \right ) }\left [ 0-1\right ] +\frac {1}{\alpha }\left [ 0-1\right ] \right ) \\ & =-\alpha \left ( \frac {1}{\left ( \alpha +\beta \right ) }-\frac {1}{\alpha }\right ) \\ & =-\alpha \left ( \frac {\alpha -\left ( \alpha +\beta \right ) }{\alpha \left ( \alpha +\beta \right ) }\right ) \\ & =-\left ( \frac {\alpha -\alpha -\beta }{\left ( \alpha +\beta \right ) }\right ) \end{align*}

Hence

\[ \fbox {$P\left ( T_{1}>T_{2}\right ) =\frac {\beta }{\left ( \alpha +\beta \right ) }$}\]

(b) Let \(W=2T_{2}\). Then

\begin{align*} F_{W}\left ( w\right ) & =P\left ( W\leq w\right ) \\ & =P\left ( 2T_{2}\leq w\right ) \\ & =P\left ( T_{2}\leq \frac {w}{2}\right ) \\ & =F_{T_{2}}\left ( \frac {w}{2}\right ) \end{align*}

Hence

\[ f_{W}\left ( w\right ) =f_{T_{2}}\left ( \frac {w}{2}\right ) \times \frac {d}{dw}\left ( \frac {w}{2}\right ) \]

Hence

\[ \fbox {$f_{W}\left ( w\right ) =\frac {1}{2}f_{T_{2}}\left ( \frac {w}{2}\right ) $}\]

(c) We need to find \(P\left ( T_{1}>2T_{2}\right ) \), which is the same as \(P\left ( T_{1}>W\right ) \); hence this is the same as part (a) with \(T_{2}\) replaced by \(W\), as shown in the following diagram

Hence

\begin{align*} P\left ( T_{1}>W\right ) & =\int _{0}^{\infty }\int _{0}^{t_{1}}f_{T_{1}}\left ( t_{1}\right ) f_{W}\left ( w\right ) \ dw\ dt_{1}\\ & =\int _{0}^{\infty }\int _{0}^{t_{1}}f_{T_{1}}\left ( t_{1}\right ) \left [ \frac {1}{2}f_{T_{2}}\left ( \frac {w}{2}\right ) \right ] \ dw\ dt_{1}\\ & =\int _{0}^{\infty }\int _{0}^{t_{1}}\alpha e^{-\alpha t_{1}}\left [ \frac {1}{2}\beta e^{-\beta \left ( \frac {w}{2}\right ) }\right ] \ dw\ dt_{1}\\ & =\frac {1}{2}\beta \alpha \int _{0}^{\infty }e^{-\alpha t_{1}}\left ( \int _{0}^{t_{1}}e^{-\beta \left ( \frac {w}{2}\right ) }\ dw\right ) \ dt_{1}\\ & =\frac {1}{2}\beta \alpha \int _{0}^{\infty }e^{-\alpha t_{1}}\left ( -\frac {2}{\beta }\left [ e^{-\beta \left ( \frac {w}{2}\right ) }\right ] _{w=0}^{w=t_{1}}\right ) \ dt_{1}\\ & =-\alpha \int _{0}^{\infty }e^{-\alpha t_{1}}\left [ e^{-\beta \left ( \frac {t_{1}}{2}\right ) }-1\right ] \ dt_{1}\\ & =-\alpha \int _{0}^{\infty }e^{t_{1}\left ( -\alpha -\frac {\beta }{2}\right ) }-e^{-\alpha t_{1}}\ dt_{1}\\ & =-\alpha \int _{0}^{\infty }e^{t_{1}\left ( \frac {-2\alpha -\beta }{2}\right ) }-e^{-\alpha t_{1}}\ dt_{1}\\ & =-\alpha \left ( \left [ \frac {2}{\left ( -2\alpha -\beta \right ) }e^{t_{1}\left ( \frac {-2\alpha -\beta }{2}\right ) }\right ] _{0}^{\infty }-\frac {1}{-\alpha }\left [ e^{-\alpha t_{1}}\right ] _{0}^{\infty }\right ) \\ & =-\alpha \left ( \frac {2}{\left ( -2\alpha -\beta \right ) }\left [ 0-1\right ] +\frac {1}{\alpha }\left [ 0-1\right ] \right ) \\ & =-\alpha \left ( \frac {2}{\left ( 2\alpha +\beta \right ) }-\frac {1}{\alpha }\right ) \end{align*}

Hence

\begin{align*} P\left ( T_{1}>W\right ) & =-\left ( \frac {2\alpha -\left ( 2\alpha +\beta \right ) }{\left ( 2\alpha +\beta \right ) }\right ) \\ & =-\left ( \frac {2\alpha -2\alpha -\beta }{\left ( 2\alpha +\beta \right ) }\right ) \end{align*}

Then

\[ \fbox {$P\left ( T_{1}>W\right ) =\frac {\beta }{\left ( 2\alpha +\beta \right ) }$}\]

Problem review: The Poisson distribution is a discrete probability distribution (its probability function is normally called the probability mass function, \(pmf\)). This means the random variable is a discrete random variable.

The random variable \(X\) in this case is the number of successes in \(n\) trials, where the probability of success in each trial is \(p\) and the trials are independent of each other. The difference between Poisson and Binomial is that in Poisson we look at the problem as \(n\) becomes very large and \(p\) becomes very small, in such a way that the product \(np\) goes to a fixed value called \(\lambda \), the Poisson parameter. Then we write \(P\left ( X=k\right ) =\frac {\lambda ^{k}}{k!}e^{-\lambda }\) where \(k=0,1,2,\cdots \). The following diagram illustrates this problem, showing the three random variables we need to analyze and the time line.

But what is "trials" in this problem?  If we divide the time line itself into very small time intervals \(\delta t\) then the number of time intervals is the number of trials, and we assume that at most one event will occure in this time interval (since it is too small). The probability \(p\) of event occuring in this \(\delta t\) is the same in the interval \(\left [ t_{0},t_{1}\right ] \) and in the interval \(\left [ t_{1},t_{2}\right ] \). Now let us find \(\lambda \) for \(X\) and \(Y\) and \(Z\) based on this. Since \(\lambda =np\) where \(n\) is the number of trials, then for \(X\) we have \(\lambda _{x}=n_{x}p=\frac {\left ( t_{1}-t_{0}\right ) }{\delta t}p\)\(\,\) where we divided the time interval by the time width \(\delta t\) to obtain the number of time slots for \(X\). We do the same for \(Y\) and obtain that \(\lambda _{y}=\frac {\left ( t_{2}-t_{1}\right ) }{\delta t}p\)

Similarly, \(\lambda _{z}=\frac {\left ( t_{2}-t_{0}\right ) }{\delta t}p=\frac {\left ( t_{2}-t_{1}\right ) +\left ( t_{1}-t_{0}\right ) }{\delta t}p=\frac {\left ( t_{2}-t_{1}\right ) }{\delta t}p+\frac {\left ( t_{1}-t_{0}\right ) }{\delta t}p\), hence \(\lambda _{z}=\lambda _{x}+\lambda _{y}\)

\begin{align*} \lambda _{x} & =\frac {\left ( t_{1}-t_{0}\right ) }{\delta t}p\\ \lambda _{y} & =\frac {\left ( t_{2}-t_{1}\right ) }{\delta t}p\\ \lambda _{z} & =\frac {\left ( t_{2}-t_{0}\right ) }{\delta t}p \end{align*}

Let us refer to the random variable \(N\left ( t_{1},t_{2}\right ) \) as \(Y\) and the r.v. \(N\left ( t_{0},t_{1}\right ) \) as \(X\) and the r.v. \(N\left ( t_{0},t_{2}\right ) \) as \(Z\)

The problem is then asking to find \(P\left ( X=x|Z=n\right ) \) and to identify \(pmf\left ( X|Z\right ) \)

To help in the solution, we first draw a diagram to make it more clear.

We take the event probability \(p\) per time slot to be the same for the \(3\) random variables \(X,Y,Z\); their \(\lambda \)'s differ only through the lengths of the intervals.

\[ P\left ( X=x|Z=n\right ) =\frac {P\left ( X=x,Z=n\right ) }{P\left ( Z=n\right ) }\]

But \(Z=n\) is the same as \(X+Y=n\) hence

\begin{align*} P\left ( X=x|Z=n\right ) & =\frac {P\left ( X=x,\left ( X+Y\right ) =n\right ) }{P\left ( Z=n\right ) }\\ & =\frac {P\left ( X=x,Y=n-x\right ) }{P\left ( Z=n\right ) }\end{align*}

Now \(X\perp Y\), since the number of events in \(\left [ t_{0},t_{1}\right ] \) is independent of the number of events that could occur in \([t_{1},t_{2}]\).

Given this, we can now write the joint probability of \(X,Y\) as the product of the marginal probabilities. Hence the numerator in the above can be rewritten and we obtain

\begin{equation} P\left ( X=x|Z=n\right ) =\frac {P\left ( X=x\right ) P\left ( Y=n-x\right ) }{P\left ( Z=n\right ) }\tag {1}\end{equation}

Now since each of the above is a Poisson random variable, then

\begin{align*} P\left ( X=x\right ) & =\frac {\left ( \lambda _{x}\right ) ^{x}}{x!}e^{-\lambda _{x}}\\ P\left ( Y=n-x\right ) & =\frac {\left ( \lambda _{y}\right ) ^{n-x}}{\left ( n-x\right ) !}e^{-\lambda _{y}}\\ P\left ( Z=n\right ) & =\frac {\left ( \lambda _{z}\right ) ^{n}}{n!}e^{-\lambda _{z}}\end{align*}

Hence (1) becomes

\begin{equation} P\left ( X=x|Z=n\right ) =\left ( \frac {\left ( \lambda _{x}\right ) ^{x}}{x!}e^{-\lambda _{x}}\right ) \left ( \frac {\left ( \lambda _{y}\right ) ^{n-x}}{\left ( n-x\right ) !}e^{-\lambda _{y}}\right ) \frac {1}{\frac {\left ( \lambda _{z}\right ) ^{n}}{n!}e^{-\lambda _{z}}}\tag {2}\end{equation}

Now we simplify this further and try to identify the resulting distribution, using the expressions for \(\lambda _{x},\lambda _{y},\lambda _{z}\) found above.

Hence (2) becomes

\[ P\left ( X=x|Z=n\right ) =\left ( \frac {\left ( \frac {\left ( t_{1}-t_{0}\right ) }{\delta t}p\right ) ^{x}}{x!}e^{-\left ( \frac {\left ( t_{1}-t_{0}\right ) }{\delta t}p\right ) }\right ) \left ( \frac {\left ( \frac {\left ( t_{2}-t_{1}\right ) }{\delta t}p\right ) ^{n-x}}{\left ( n-x\right ) !}e^{-\left ( \frac {\left ( t_{2}-t_{1}\right ) }{\delta t}p\right ) }\right ) \frac {1}{\frac {\left ( \frac {\left ( t_{2}-t_{0}\right ) }{\delta t}p\right ) ^{n}}{n!}e^{-\left ( \frac {\left ( t_{2}-t_{0}\right ) }{\delta t}p\right ) }}\]

Let \(\frac {p}{\delta t}=\varphi \) then the above becomes

\begin{align*} P\left ( X=x|Z=n\right ) & =\left ( \frac {\left ( \left ( t_{1}-t_{0}\right ) \varphi \right ) ^{x}}{x!}e^{-\left ( \left ( t_{1}-t_{0}\right ) \varphi \right ) }\right ) \left ( \frac {\left ( \left ( t_{2}-t_{1}\right ) \varphi \right ) ^{n-x}}{\left ( n-x\right ) !}e^{-\left ( \left ( t_{2}-t_{1}\right ) \varphi \right ) }\right ) \frac {n!}{\left ( \left ( t_{2}-t_{0}\right ) \varphi \right ) ^{n}e^{-\left ( \left ( t_{2}-t_{0}\right ) \varphi \right ) }}\\ & =\left ( \frac {\left ( t_{1}\varphi -t_{0}\varphi \right ) ^{x}}{x!}e^{-t_{1}\varphi +t_{0}\varphi }\right ) \left ( \frac {\left ( t_{2}\varphi -t_{1}\varphi \right ) ^{n-x}}{\left ( n-x\right ) !}e^{-t_{2}\varphi +t_{1}\varphi }\right ) \frac {n!}{\left ( t_{2}\varphi -t_{0}\varphi \right ) ^{n}e^{-t_{2}\varphi +t_{0}\varphi }}\\ & =\left ( \frac {\left ( t_{1}\varphi -t_{0}\varphi \right ) ^{x}}{x!}e^{-t_{1}\varphi +t_{0}\varphi }\right ) \left ( \frac {\left ( t_{2}\varphi -t_{1}\varphi \right ) ^{n-x}}{\left ( n-x\right ) !}e^{-t_{2}\varphi +t_{1}\varphi }\right ) \frac {n!}{\left ( t_{2}\varphi -t_{0}\varphi \right ) ^{n}}e^{t_{2}\varphi -t_{0}\varphi }\\ & =\left ( \frac {\left ( t_{1}\varphi -t_{0}\varphi \right ) ^{x}}{x!}\right ) \left ( \frac {\left ( t_{2}\varphi -t_{1}\varphi \right ) ^{n-x}}{\left ( n-x\right ) !}\right ) \frac {n!}{\left ( t_{2}\varphi -t_{0}\varphi \right ) ^{n}}e^{\left ( t_{2}\varphi -t_{0}\varphi -t_{1}\varphi +t_{0}\varphi -t_{2}\varphi +t_{1}\varphi \right ) }\\ & =\left ( \frac {\left ( t_{1}\varphi -t_{0}\varphi \right ) ^{x}}{x!}\right ) \left ( \frac {\left ( t_{2}\varphi -t_{1}\varphi \right ) ^{n-x}}{\left ( n-x\right ) !}\right ) \frac {n!}{\left ( t_{2}\varphi -t_{0}\varphi \right ) ^{n}}e^{\left ( 0\right ) }\\ & =\left ( \frac {\left ( t_{1}\varphi -t_{0}\varphi \right ) ^{x}}{x!}\right ) \left ( \frac {\left ( t_{2}\varphi -t_{1}\varphi \right ) ^{n-x}}{\left ( n-x\right ) !}\right ) \frac {n!}{\left ( t_{2}\varphi -t_{0}\varphi \right ) ^{n}}\\ & =\frac {n!}{x!\left ( n-x\right ) !}\frac {\left ( t_{1}\varphi -t_{0}\varphi \right ) ^{x}\left ( t_{2}\varphi -t_{1}\varphi \right ) ^{n-x}}{\left ( t_{2}\varphi -t_{0}\varphi \right ) ^{n}}\end{align*}

We see that the parameter \(\varphi \) occurs in the numerator and in the denominator with the same total power \(n\), hence we can factor it out and cancel it. Hence we obtain

\[ P\left ( X=x|Z=n\right ) =\frac {n!}{x!\left ( n-x\right ) !}\frac {\left ( t_{1}-t_{0}\right ) ^{x}\left ( t_{2}-t_{1}\right ) ^{n-x}}{\left ( t_{2}-t_{0}\right ) ^{n}}\]

Hence

\[ \fbox {$P\left ( X=x|Z=n\right ) =\begin {pmatrix} n\\ x \end {pmatrix} \frac {\left ( t_{1}-t_{0}\right ) ^{x}\left ( t_{2}-t_{1}\right ) ^{n-x}}{\left ( t_{2}-t_{0}\right ) ^{n}}$}\]
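As a check of the cancellation step above, a short sympy sketch (assuming sympy is available; \(n=5\) and \(x=2\) are arbitrary test values) confirms that \(\varphi \) drops out of the ratio:

```python
import sympy as sp

t0, t1, t2, phi = sp.symbols("t0 t1 t2 phi", positive=True)
n, x = 5, 2  # arbitrary test values

expr = (sp.binomial(n, x)
        * (t1 * phi - t0 * phi) ** x * (t2 * phi - t1 * phi) ** (n - x)
        / (t2 * phi - t0 * phi) ** n)

# phi cancels, leaving the boxed time-ratio form
print(sp.simplify(expr))
```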
The same result can also be written in terms of the \(\lambda \)'s. Simplifying (2) directly gives

\[ P\left ( X=x|Z=n\right ) =\frac {n!}{x!\left ( n-x\right ) !}\left ( \left ( \lambda _{x}\right ) ^{x}e^{-\lambda _{x}}\right ) \left ( \left ( \lambda _{y}\right ) ^{n-x}e^{-\lambda _{y}}\right ) \frac {e^{\lambda _{z}}}{\left ( \lambda _{z}\right ) ^{n}}\]

But we found that \(\lambda _{z}=\lambda _{x}+\lambda _{y}\), hence the exponential terms above cancel and we get

\begin{align*} P\left ( X=x|Z=n\right ) & =\frac {n!}{x!\left ( n-x\right ) !}\frac {\left ( \lambda _{x}\right ) ^{x}\left ( \lambda _{y}\right ) ^{n-x}}{\left ( \lambda _{z}\right ) ^{n}}\\ & =\begin {pmatrix} n\\ x \end {pmatrix} \frac {\left ( \lambda _{x}\right ) ^{x}\left ( \lambda _{y}\right ) ^{n-x}}{\left ( \lambda _{z}\right ) ^{n}}\\ & =\begin {pmatrix} n\\ x \end {pmatrix} \frac {\left ( \lambda _{x}\right ) ^{x}\left ( \lambda _{y}\right ) ^{n-x}}{\left ( \lambda _{x}+\lambda _{y}\right ) ^{n}}\\ & =\begin {pmatrix} n\\ x \end {pmatrix} \frac {\left ( \lambda _{x}\right ) ^{x}\left ( \lambda _{y}\right ) ^{n-x}}{\left ( \lambda _{x}+\lambda _{y}\right ) ^{x}\left ( \lambda _{x}+\lambda _{y}\right ) ^{n-x}}\\ & =\begin {pmatrix} n\\ x \end {pmatrix} \frac {\left ( \lambda _{x}\right ) ^{x}}{\left ( \lambda _{x}+\lambda _{y}\right ) ^{x}}\frac {\left ( \lambda _{y}\right ) ^{n-x}}{\left ( \lambda _{x}+\lambda _{y}\right ) ^{n-x}}\\ & =\begin {pmatrix} n\\ x \end {pmatrix} \left ( \frac {\lambda _{x}}{\lambda _{x}+\lambda _{y}}\right ) ^{x}\left ( \frac {\lambda _{y}}{\lambda _{x}+\lambda _{y}}\right ) ^{n-x}\end{align*}

Let \(k=\frac {\lambda _{x}}{\lambda _{x}+\lambda _{y}}\), then \(1-k=1-\frac {\lambda _{x}}{\lambda _{x}+\lambda _{y}}=\frac {\lambda _{x}+\lambda _{y}-\lambda _{x}}{\lambda _{x}+\lambda _{y}}=\frac {\lambda _{y}}{\lambda _{x}+\lambda _{y}}\) hence the last line above can be written as

\begin{align*} P\left ( X=x|Z=n\right ) & =\begin {pmatrix} n\\ x \end {pmatrix} \left ( \frac {\lambda _{x}}{\lambda _{x}+\lambda _{y}}\right ) ^{x}\left ( 1-\frac {\lambda _{x}}{\lambda _{x}+\lambda _{y}}\right ) ^{n-x}\\ & =\begin {pmatrix} n\\ x \end {pmatrix} \left ( k\right ) ^{x}\left ( 1-k\right ) ^{n-x}\end{align*}

But this is a Binomial with parameters \(n,k\), hence

\[ \fbox {$P\left ( X=x|Z=n\right ) = Binomial\left ( n,\frac {\lambda _{x}}{\lambda _{x}+\lambda _{y}}\right ) $}\]