\(T_{1}\) and \(T_{2}\) are random variables whose probability densities are \(f_{T_{1}}\left ( t_{1}\right ) =\alpha e^{-\alpha t_{1}}\) and \(f_{T_{2}}\left ( t_{2}\right ) =\beta e^{-\beta t_{2}}\)
\(\alpha \) and \(\beta \) can be thought of as the failure rates of the respective components, and \(T_{i}\) is the lifetime of component \(i\). Hence \(P\left ( T_{1}=t_{1}\right ) \) asks for the probability that the first component has a lifetime of \(t_{1}\), given that the failure rate of this kind of component is \(\alpha .\)
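Strictly speaking, since \(T_{1}\) is continuous, \(P\left ( T_{1}=t_{1}\right ) \) is zero at any single point; the statement is best read as shorthand for the density over a small width \(\delta t\):
\[ P\left ( t_{1}\leq T_{1}\leq t_{1}+\delta t\right ) \approx f_{T_{1}}\left ( t_{1}\right ) \,\delta t=\alpha e^{-\alpha t_{1}}\,\delta t \]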
(c) We need to find \(P\left ( T_{1}>2T_{2}\right ) \). Letting \(W=2T_{2}\), this is the same as \(P\left ( T_{1}>W\right ) \), hence this is the same as part (a) but with \(T_{2}\) replaced by \(W\), as shown in the following diagram.
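As a sketch of the computation the diagram sets up (the steps below are reconstructed in the same spirit as part (a), not copied from the original): since \(W=2T_{2}\) is itself exponential with rate \(\beta /2\),
\[ P\left ( T_{1}>2T_{2}\right ) =\int _{0}^{\infty }\frac {\beta }{2}e^{-\beta w/2}\,P\left ( T_{1}>w\right ) \,dw=\int _{0}^{\infty }\frac {\beta }{2}e^{-\beta w/2}e^{-\alpha w}\,dw=\frac {\beta }{2\alpha +\beta } \]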
Problem review: the Poisson distribution is a discrete probability distribution (its probability function is normally called the probability mass function, \(pmf\)). This means the random variable is a discrete random variable.
The random variable \(X\) in this case is the number of successes in \(n\) trials, where the probability of success in each trial is \(p\) and the trials are independent of each other. The difference between the Poisson and the Binomial is that in the Poisson we look at the problem as \(n\) becomes very large and \(p\) becomes very small, in such a way that the product \(np\) goes to a fixed value called \(\lambda \), the Poisson parameter. We then write \(P\left ( X=k\right ) =\frac {\lambda ^{k}}{k!}e^{-\lambda }\) where \(k=0,1,2,\cdots \). The following diagram illustrates this problem, showing the three r.v. we need to analyze and the time line.
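To see how the Poisson pmf arises from this limit, set \(p=\lambda /n\) in the Binomial pmf and let \(n\rightarrow \infty \) with \(k\) fixed:
\[ \binom {n}{k}\left ( \frac {\lambda }{n}\right ) ^{k}\left ( 1-\frac {\lambda }{n}\right ) ^{n-k}=\frac {\lambda ^{k}}{k!}\cdot \frac {n\left ( n-1\right ) \cdots \left ( n-k+1\right ) }{n^{k}}\left ( 1-\frac {\lambda }{n}\right ) ^{n}\left ( 1-\frac {\lambda }{n}\right ) ^{-k}\rightarrow \frac {\lambda ^{k}}{k!}e^{-\lambda } \]
since the middle ratio goes to \(1\), \(\left ( 1-\lambda /n\right ) ^{n}\rightarrow e^{-\lambda }\), and \(\left ( 1-\lambda /n\right ) ^{-k}\rightarrow 1\).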
But what are the "trials" in this problem? If we divide the time line itself into very small time intervals \(\delta t\), then the number of time intervals is the number of trials, and we assume that at most one event occurs in each such interval (since it is so small). The probability \(p\) of an event occurring in a given \(\delta t\) is the same in the interval \(\left [ t_{0},t_{1}\right ] \) as in the interval \(\left [ t_{1},t_{2}\right ] \). Now let us find \(\lambda \) for \(X\), \(Y\) and \(Z\) on this basis. Since \(\lambda =np\) where \(n\) is the number of trials, for \(X\) we have \(\lambda _{x}=n_{x}p=\frac {\left ( t_{1}-t_{0}\right ) }{\delta t}p\), where we divided the time interval by the width \(\delta t\) to obtain the number of time slots for \(X\). Doing the same for \(Y\) gives \(\lambda _{y}=\frac {\left ( t_{2}-t_{1}\right ) }{\delta t}p\)
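The same computation applies to \(Z\), which counts the events over the whole interval \(\left [ t_{0},t_{2}\right ] \):
\[ \lambda _{z}=\frac {\left ( t_{2}-t_{0}\right ) }{\delta t}p=\frac {\left ( t_{1}-t_{0}\right ) +\left ( t_{2}-t_{1}\right ) }{\delta t}p=\lambda _{x}+\lambda _{y} \]
so the rate for \(Z\) is simply the sum of the other two; this additivity is what is used below when writing \(P\left ( Z=n\right ) \).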
Let us refer to the random variable \(N\left ( t_{1},t_{2}\right ) \) as \(Y\), to the r.v. \(N\left ( t_{0},t_{1}\right ) \) as \(X\), and to the r.v. \(N\left ( t_{0},t_{2}\right ) \) as \(Z\). The problem is then asking us to find \(P\left ( X=x|Z=n\right ) \) and to identify \(pmf\left ( X|Z\right ) \). To help in the solution, we first draw a diagram to make this clearer.
We take the rate per unit time to be the same for the \(3\) random variables \(X,Y,Z\).
Now the r.v. \(X\perp Y\), since the number of events in \(\left [ t_{0},t_{1}\right ] \) is independent of the number of events that could occur in \([t_{1},t_{2}]\). Given this, we can write the joint probability of \(X,Y\) as the product of the marginal probabilities.
Hence the numerator in the above can be rewritten and we obtain
We see that the parameter \(\varphi \) occurs in the numerator and the denominator with the same powers, hence we can factor it out and cancel it. Hence we obtain
Let \(k=\frac {\lambda _{x}}{\lambda _{x}+\lambda _{y}}\), then \(1-k=1-\frac {\lambda _{x}}{\lambda _{x}+\lambda _{y}}=\frac {\lambda _{x}+\lambda _{y}-\lambda _{x}}{\lambda _{x}+\lambda _{y}}=\frac {\lambda _{y}}{\lambda _{x}+\lambda _{y}}\), hence the last line above can be written as
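For reference, here is a minimal sketch of the full chain of equations the text describes (the displayed steps are reconstructed from the standard derivation, not copied from the original):
\[ P\left ( X=x|Z=n\right ) =\frac {P\left ( X=x,Y=n-x\right ) }{P\left ( Z=n\right ) }=\frac {\frac {\lambda _{x}^{x}}{x!}e^{-\lambda _{x}}\,\frac {\lambda _{y}^{n-x}}{\left ( n-x\right ) !}e^{-\lambda _{y}}}{\frac {\left ( \lambda _{x}+\lambda _{y}\right ) ^{n}}{n!}e^{-\left ( \lambda _{x}+\lambda _{y}\right ) }}=\binom {n}{x}k^{x}\left ( 1-k\right ) ^{n-x} \]
so \(pmf\left ( X|Z=n\right ) \) is Binomial with parameters \(n\) and \(k\). If the rates are written as \(\lambda _{x}=\varphi \left ( t_{1}-t_{0}\right ) \) and \(\lambda _{y}=\varphi \left ( t_{2}-t_{1}\right ) \), with \(\varphi =p/\delta t\) the rate per unit time (which appears to be the \(\varphi \) referred to above), then the \(\varphi \) powers indeed cancel and \(k=\frac {t_{1}-t_{0}}{t_{2}-t_{0}}\).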
Let \(\theta ,\) the probability of getting heads, be the specific value that the random variable \(\Theta \) can take.
Let \(g\left ( \theta \right ) \) be the probability density of \(\Theta \), which we are told is \(U\left [ 0,1\right ] \), and let \(pmf_{X}\left ( x\right ) \) be the probability mass function of the random variable \(X\), where \(X\) is the number of flips until a head first comes up. \(X\) is then a geometric random variable with parameter \(\theta \), hence \(P\left ( X=k\right ) =\left ( 1-\theta \right ) ^{k-1}\theta \) for \(k=1,2,\cdots \)
But \(\Theta \) is a continuous random variable on \([0,1]\), so how do we evaluate the above? I can evaluate it for different values of \(\Theta \) on the real line from \([0,1]\); the more values I take between \(0\) and \(1\), the more accurate \(h\left ( \Theta =\theta |X=N\right ) \) becomes.
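Since the prior is uniform, Bayes' rule in fact gives the posterior in closed form, which is a useful check on the graphical approach (a sketch; the normalizing constant is the Beta integral \(\int _{0}^{1}u\left ( 1-u\right ) ^{N-1}\,du=\frac {1}{N\left ( N+1\right ) }\)):
\[ h\left ( \theta |X=N\right ) =\frac {\left ( 1-\theta \right ) ^{N-1}\theta \,g\left ( \theta \right ) }{\int _{0}^{1}\left ( 1-u\right ) ^{N-1}u\,g\left ( u\right ) \,du}=N\left ( N+1\right ) \,\theta \left ( 1-\theta \right ) ^{N-1} \]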
If it takes 'longer' for a head to come up (\(N=6\)), then the coin is taken as biased towards tails, and the probability of getting a head becomes smaller; this is why the most likely probability in this case is around \(0.15\) (looking at the \(N=6\) curve). We say that, based on the observation \(N=6\), the coin's probability of getting a head is more likely to be about \(0.15\) than any other value. (The area around \(\theta =0.15\) is larger than the area around any other value for the same \(\delta \theta \).)
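This reading of the curve can be checked by locating the posterior's mode directly:
\[ \frac {d}{d\theta }\,\theta \left ( 1-\theta \right ) ^{N-1}=\left ( 1-\theta \right ) ^{N-2}\left ( 1-N\theta \right ) =0\quad \Rightarrow \quad \theta ^{\ast }=\frac {1}{N} \]
For \(N=6\) this gives \(\theta ^{\ast }=1/6\approx 0.17\), consistent with the peak near \(0.15\) read off the curve.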
Now, when \(N=2\), i.e. we flipped the coin twice and got a head on the second flip, we see from the \(N=2\) curve that the most likely value for the coin's probability of getting a head is \(0.5\).
This is what we would expect: with an unbiased coin the probability of getting a head is \(\frac {1}{2}\), so we expect to see a head half of the times it is flipped. Since we flipped twice and saw a head the second time, this posterior probability has its most likely value around \(0.5\) as well.
When \(N=1\), we got a head the first time we flipped the coin. We see that the posterior probability of getting a head now has its maximum around \(1\). This means the posterior is saying this coin is biased towards heads.
The above is a method for estimating the probability distribution of the probability of getting a head itself, based on the observed events and on the prior probability of getting a head. The observed events thus allow us to update our estimate of the probability of getting a head, and the posterior probability is conditioned on each observation, as in this problem.