
HW9, Math 307. CSUF. Spring 2007.

Nasser M. Abbasi

Compiled on November 5, 2018 at 8:49am

Contents

1 Section 5.1, problem 1
2 Section 5.1, problem 2
3 Section 5.1, problem 4
4 Section 5.1, problem 11
5 Section 5.1, problem 24
6 Section 5.1, problem 25
7 Section 5.2, problem 1
8 Section 5.2, problem 2
9 Section 5.2, problem 8
10 Section 5.2, problem 19
11 Section 5.2, problem 22
12 Section 5.2, problem 33
13 Section 5.3, problem 1
14 Section 5.3, problem 2
15 Section 5.3, problem 3
16 Section 5.3, problem 8
17 Section 5.3, problem 10
18 Section 5.3, problem 17

1 Section 5.1, problem 1

\(A=\) \(\begin{pmatrix} 1 & -1\\ 2 & 4 \end{pmatrix} \)

Find the eigenvalues: \(\begin{vmatrix} 1-\lambda & -1\\ 2 & 4-\lambda \end{vmatrix} =0\rightarrow \left ( \lambda -3\right ) \left ( \lambda -2\right ) =0\rightarrow \)\(\lambda _{1}=2,\lambda _{2}=3\)

For eigenvectors, solve \(A\vec{x}=\lambda \vec{x}\Rightarrow \left ( A-\lambda I\right ) \vec{x}=\vec{0}\)

when \(\lambda _{1}=2\rightarrow \begin{pmatrix} 1-\lambda _{1} & -1\\ 2 & 4-\lambda _{1}\end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow \begin{pmatrix} -1 & -1\\ 2 & 2 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \)

Hence \(\begin{pmatrix} -1 & -1\\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \), pivot is \(x_{1}\), free is \(x_{2}\rightarrow -x_{1}-x_{2}=0\), hence \(x_{1}=-x_{2}\), so \(\vec{v}_{1}=\begin{pmatrix} -x_{2}\\ x_{2}\end{pmatrix} =\)\(\begin{pmatrix} -1\\ 1 \end{pmatrix} \)

when \(\lambda _{2}=3\rightarrow \begin{pmatrix} 1-\lambda _{2} & -1\\ 2 & 4-\lambda _{2}\end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow \begin{pmatrix} -2 & -1\\ 2 & 1 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \)

Hence \(\begin{pmatrix} -2 & -1\\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \), pivot is \(x_{1}\), free is \(x_{2}\rightarrow -2x_{1}-x_{2}=0\), hence \(x_{1}=-0.5x_{2}\), so \(\vec{v}_{2}=\begin{pmatrix} -0.5x_{2}\\ x_{2}\end{pmatrix} =\)\(\begin{pmatrix} -0.5\\ 1 \end{pmatrix} \)

Trace of matrix \(A\) is the sum of its diagonal elements, which is 5. But \(\lambda _{1}=2,\lambda _{2}=3\), hence this is the same as the sum of the eigenvalues.

The determinant of \(A\) is \(\begin{vmatrix} 1 & -1\\ 2 & 4 \end{vmatrix} =4+2=6\), which equals the product of the eigenvalues \(2\times 3=6\). Hence verified.
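As a quick numerical check (my addition, using Python with numpy, not part of the assigned solution), the eigenvalues and the trace/determinant identities can be verified directly:

\begin{verbatim}
# Check problem 1 numerically: eigenvalues of A, trace = sum of eigenvalues,
# det = product of eigenvalues.
import numpy as np

A = np.array([[1.0, -1.0],
              [2.0,  4.0]])

lam, V = np.linalg.eig(A)
print(np.sort(lam))                               # [2. 3.]
print(np.isclose(np.trace(A), lam.sum()))         # True
print(np.isclose(np.linalg.det(A), lam.prod()))   # True
\end{verbatim}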

2 Section 5.1, problem 2

Solving \(\frac{d\vec{u}}{dt}=A\vec{u}\), \(\vec{u}\left ( 0\right ) =\begin{pmatrix} 0\\ 6 \end{pmatrix} \)

The general solution is a linear combination of every solution corresponding to each eigenvalue.  Each solution corresponding to each eigenvalue is of the form \(\vec{v}_{i}e^{\lambda _{i}t}\), where \(\vec{v}_{i}\) is the eigenvector corresponding to eigenvalue \(\lambda _{i}\), hence the solution is \(\vec{u}\left ( t\right ) =c_{1}\vec{v}_{1}e^{\lambda _{1}t}+c_{2}\vec{v}_{2}e^{\lambda _{2}t}\)

But from problem 1, we found that \(\lambda _{1}=2,\lambda _{2}=3,v_{1}=\begin{pmatrix} -1\\ 1 \end{pmatrix} ,v_{2}=\begin{pmatrix} -0.5\\ 1 \end{pmatrix} \), hence the general solution is

\begin{equation} \vec{u}\left ( t\right ) =c_{1}\begin{pmatrix} -1\\ 1 \end{pmatrix} e^{2t}+c_{2}\begin{pmatrix} -0.5\\ 1 \end{pmatrix} e^{3t} \tag{1} \end{equation}

\(c_{1},c_{2}\) can be found from IC. Hence at \(t=0\) we have

\[\begin{pmatrix} 0\\ 6 \end{pmatrix} =c_{1}\begin{pmatrix} -1\\ 1 \end{pmatrix} +c_{2}\begin{pmatrix} -0.5\\ 1 \end{pmatrix} \]

Hence \begin{align*} \begin{pmatrix} -1 & -0.5\\ 1 & 1 \end{pmatrix}\begin{pmatrix} c_{1}\\ c_{2}\end{pmatrix} & =\begin{pmatrix} 0\\ 6 \end{pmatrix} \\\begin{pmatrix} -1 & -0.5\\ 0 & 0.5 \end{pmatrix}\begin{pmatrix} c_{1}\\ c_{2}\end{pmatrix} & =\begin{pmatrix} 0\\ 6 \end{pmatrix} \end{align*}

Hence \(c_{2}=12\)\(,\) and \(-c_{1}-6=0\rightarrow \)\(c_{1}=-6\)

Hence eq(1) becomes

\[ \fbox{$\vec{u}\left ( t\right ) =-6\begin{pmatrix} -1\\ 1 \end{pmatrix} e^{2t}+12\begin{pmatrix} -0.5\\ 1 \end{pmatrix} e^{3t}$}\]

or, adding the two components, \begin{align*} u_{1}\left ( t\right ) +u_{2}\left ( t\right ) & =6e^{2t}-6e^{3t}-6e^{2t}+12e^{3t}\\ & =6e^{3t} \end{align*}

Hence the pure exponential solutions are \(\left \{ -6e^{2t},12e^{3t}\right \} \)
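A small numpy sketch (my addition, not part of the assigned solution) confirms that the boxed solution satisfies both the initial condition and \(\frac{d\vec{u}}{dt}=A\vec{u}\):

\begin{verbatim}
# Verify u(t) = -6*v1*exp(2t) + 12*v2*exp(3t) against the ODE and the IC.
import numpy as np

A  = np.array([[1.0, -1.0], [2.0, 4.0]])
v1 = np.array([-1.0, 1.0])
v2 = np.array([-0.5, 1.0])

def u(t):
    return -6*v1*np.exp(2*t) + 12*v2*np.exp(3*t)

print(u(0.0))                                   # [0. 6.]  matches u(0)
t, h = 0.7, 1e-6
dudt = (u(t + h) - u(t - h)) / (2*h)            # central-difference derivative
print(np.allclose(dudt, A @ u(t), rtol=1e-5))   # True: du/dt = A u
\end{verbatim}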

3 Section 5.1, problem 4

\(\frac{du}{dt}=\begin{pmatrix} \frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2}\end{pmatrix} u\), \(u\left ( 0\right ) =\begin{pmatrix} 5\\ 3 \end{pmatrix} \)

First find the eigenvalues and eigenvectors for \(A\)

\(A=\begin{pmatrix} \frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2}\end{pmatrix} \rightarrow \begin{vmatrix} \frac{1}{2}-\lambda & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2}-\lambda \end{vmatrix} =0\rightarrow \)\begin{align*} \left ( \frac{1}{2}-\lambda \right ) ^{2}-\frac{1}{4} & =0\\ \left ( \frac{1}{4}+\lambda ^{2}-\lambda \right ) -\frac{1}{4} & =0\\ \lambda ^{2}-\lambda & =0\\ \lambda \left ( \lambda -1\right ) & =0 \end{align*}

Hence \(\lambda _{1}=0,\lambda _{2}=1\)

for \(\lambda _{1}=0\,\ \)find eigenvector \(\vec{v}_{1}\), \(\begin{pmatrix} \frac{1}{2}-\lambda & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2}-\lambda \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow \begin{pmatrix} \frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2}\end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \)

\(\begin{pmatrix} \frac{1}{2} & \frac{1}{2}\\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow x_{1}+x_{2}=0\), \(v_{1}=\begin{pmatrix} -x_{2}\\ x_{2}\end{pmatrix} =\)\(\begin{pmatrix} -1\\ 1 \end{pmatrix} \)

for \(\lambda _{2}=1\,\ \)find eigenvector \(\vec{v}_{2}\), \(\begin{pmatrix} \frac{1}{2}-\lambda & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2}-\lambda \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow \begin{pmatrix} -\frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & -\frac{1}{2}\end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \)

\(\begin{pmatrix} -\frac{1}{2} & \frac{1}{2}\\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow -x_{1}+x_{2}=0\), \(v_{2}=\begin{pmatrix} x_{2}\\ x_{2}\end{pmatrix} =\)\(\begin{pmatrix} 1\\ 1 \end{pmatrix} \)

Hence the solution is

\begin{align*} \vec{u}\left ( t\right ) & =c_{1}\vec{v}_{1}e^{\lambda _{1}t}+c_{2}\vec{v}_{2}e^{\lambda _{2}t}\\ & =c_{1}\begin{pmatrix} -1\\ 1 \end{pmatrix} e^{0t}+c_{2}\begin{pmatrix} 1\\ 1 \end{pmatrix} e^{t}\\ & =c_{1}\begin{pmatrix} -1\\ 1 \end{pmatrix} +c_{2}\begin{pmatrix} 1\\ 1 \end{pmatrix} e^{t} \end{align*}

Apply IC to find \(c_{1},c_{2}\), hence

\[\begin{pmatrix} 5\\ 3 \end{pmatrix} =c_{1}\begin{pmatrix} -1\\ 1 \end{pmatrix} +c_{2}\begin{pmatrix} 1\\ 1 \end{pmatrix} \]

or

\begin{align*} \begin{pmatrix} -1 & 1\\ 1 & 1 \end{pmatrix}\begin{pmatrix} c_{1}\\ c_{2}\end{pmatrix} & =\begin{pmatrix} 5\\ 3 \end{pmatrix} \\\begin{pmatrix} -1 & 1\\ 0 & 2 \end{pmatrix}\begin{pmatrix} c_{1}\\ c_{2}\end{pmatrix} & =\begin{pmatrix} 5\\ 8 \end{pmatrix} \end{align*}

Hence \(c_{2}=4\), and \(-c_{1}+4=5\rightarrow c_{1}=-1\)

Hence \begin{align*} \vec{u}\left ( t\right ) & =c_{1}\begin{pmatrix} -1\\ 1 \end{pmatrix} +c_{2}\begin{pmatrix} 1\\ 1 \end{pmatrix} e^{t}\\\begin{pmatrix} u_{1}\left ( t\right ) \\ u_{2}\left ( t\right ) \end{pmatrix} & =\overset{\text{{\small does\ not\ depend\ on\ time}}}{\overbrace{\begin{pmatrix} 1\\ -1 \end{pmatrix} }}+4\begin{pmatrix} 1\\ 1 \end{pmatrix} e^{t} \end{align*}

Hence the sum of the components is \(u_{1}\left ( t\right ) +u_{2}\left ( t\right ) =\left ( 1+4e^{t}\right ) +\left ( -1+4e^{t}\right ) =8e^{t}\)
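The constants can also be obtained numerically (a sketch of my own, using numpy's linear solver, not part of the assigned solution):

\begin{verbatim}
# Solve [v1 v2] [c1; c2] = u(0) for the constants in problem 4.
import numpy as np

S  = np.array([[-1.0, 1.0],      # columns are the eigenvectors v1, v2
               [ 1.0, 1.0]])
u0 = np.array([5.0, 3.0])

c = np.linalg.solve(S, u0)
print(c)                          # [-1.  4.]  ->  c1 = -1, c2 = 4
\end{verbatim}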

4 Section 5.1, problem 11

The eigenvalues of \(A\) equal the eigenvalues of \(A^{T}\), because \(\det \left ( A-\lambda I\right ) =\det \left ( A^{T}-\lambda I\right ) \). This holds since a matrix and its transpose have the same determinant and \(\left ( A-\lambda I\right ) ^{T}=A^{T}-\lambda I\). Note also that the diagonal elements do not change when taking the transpose, hence the terms \(a_{ii}-\lambda \) remain the same in both cases.

Now, show by example that the eigenvectors of \(A\) and \(A^{T}\) are not the same.

Let \(A=\begin{pmatrix} 2 & 3\\ 4 & 1 \end{pmatrix} \), then \(\begin{vmatrix} 2-\lambda & 3\\ 4 & 1-\lambda \end{vmatrix} =0\rightarrow \left ( 2-\lambda \right ) \left ( 1-\lambda \right ) -12=0\rightarrow 2-3\lambda +\lambda ^{2}-12=0\)

\(\lambda ^{2}-3\lambda -10=0\), Solution is: \(5,-2\) hence \(\lambda _{1}=5,\lambda _{2}=-2\)

to find eigenvectors: \(\lambda _{1}=5:\ \begin{pmatrix} 2-5 & 3\\ 4 & 1-5 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow \ \begin{pmatrix} -3 & 3\\ 4 & -4 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \)

\(\ \begin{pmatrix} -3 & 3\\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow -3x_{1}+3x_{2}=0\rightarrow \)\(\vec{v}_{1}=\begin{pmatrix} 1\\ 1 \end{pmatrix} \)

to find eigenvectors: \(\lambda _{2}=-2:\ \begin{pmatrix} 2+2 & 3\\ 4 & 1+2 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow \ \begin{pmatrix} 4 & 3\\ 4 & 3 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \)

\(\ \begin{pmatrix} 4 & 3\\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow 4x_{1}+3x_{2}=0\rightarrow \)\(\vec{v}_{2}=\begin{pmatrix} -\frac{3}{4}\\ 1 \end{pmatrix} \)

Now \(B=A^{T}=\begin{pmatrix} 2 & 4\\ 3 & 1 \end{pmatrix} \rightarrow \)same eigenvalues which are \(\lambda _{1}=5,\lambda _{2}=-2\)\(,\)but now find eigenvectors

to find eigenvectors: \(\lambda _{1}=5:\ \begin{pmatrix} 2-5 & 4\\ 3 & 1-5 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow \ \begin{pmatrix} -3 & 4\\ 3 & -4 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \)

\(\ \begin{pmatrix} -3 & 4\\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow -3x_{1}+4x_{2}=0\rightarrow \)\(\vec{v}_{1}=\begin{pmatrix} \frac{4}{3}\\ 1 \end{pmatrix} \)

to find eigenvectors: \(\lambda _{2}=-2:\ \begin{pmatrix} 2+2 & 4\\ 3 & 1+2 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow \ \begin{pmatrix} 4 & 4\\ 3 & 3 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \)

\(\ \begin{pmatrix} 4 & 4\\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow 4x_{1}+4x_{2}=0\rightarrow \)\(\vec{v}_{2}=\begin{pmatrix} -1\\ 1 \end{pmatrix} \)

Summary:

\(A=\begin{pmatrix} 2 & 3\\ 4 & 1 \end{pmatrix} \Rightarrow \)eigenvalues are \(\left \{ -2,5\right \} ,\)eigenvectors\(\left \{ \begin{pmatrix} -\frac{3}{4}\\ 1 \end{pmatrix} ,\begin{pmatrix} 1\\ 1 \end{pmatrix} \right \} \)

\(A^{T}=\begin{pmatrix} 2 & 4\\ 3 & 1 \end{pmatrix} \Rightarrow \)eigenvalues are \(\left \{ -2,5\right \} ,\)eigenvectors\(\left \{ \begin{pmatrix} -1\\ 1 \end{pmatrix} ,\begin{pmatrix} \frac{4}{3}\\ 1 \end{pmatrix} \right \} \)

Hence eigenvalues are the same, but eigenvectors are different
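The same conclusion can be checked with numpy (my addition, not part of the assigned solution): the eigenvalues of \(A\) and \(A^{T}\) agree, while the eigenvectors do not.

\begin{verbatim}
# Eigenvalues of A and A^T agree; the eigenvectors (columns below) differ.
import numpy as np

A = np.array([[2.0, 3.0], [4.0, 1.0]])
lamA, VA = np.linalg.eig(A)
lamT, VT = np.linalg.eig(A.T)

print(np.sort(lamA), np.sort(lamT))   # both [-2. 5.]
print(VA)                             # eigenvectors of A   (scaled to unit length)
print(VT)                             # eigenvectors of A^T (different directions)
\end{verbatim}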

5 Section 5.1, problem 24

(a) \(A\vec{x}=\lambda \vec{x}\), pre-multiply both sides by \(A\rightarrow \)\begin{align*} A^{2}\vec{x} & =A\left ( \lambda \vec{x}\right ) \\ & =\lambda A\vec{x}\\ & =\lambda \left ( \lambda \vec{x}\right ) \\ & =\lambda ^{2}\vec{x} \end{align*}

Hence eigenvalue of \(A^{2}\) is \(\lambda ^{2}\)

(b) \(A\vec{x}=\lambda \vec{x}\), pre-multiply both sides by \(A^{-1}\rightarrow \vec{x}=\lambda A^{-1}\vec{x}\). Since \(A\) is invertible, \(\lambda \neq 0\) (otherwise \(A\vec{x}=\vec{0}\) with \(\vec{x}\neq \vec{0}\) would make \(A\) singular), so we may divide both sides by \(\lambda \):

\[ A^{-1}\vec{x}=\frac{1}{\lambda }\vec{x}=\lambda ^{-1}\vec{x}\]

Hence \(\lambda ^{-1}\) is an eigenvalue of \(A^{-1}\)

(c)\(\left ( A+I\right ) \vec{x}=A\vec{x}+\vec{x}\)

But \(A\vec{x}=\lambda \vec{x}\), hence the above becomes \(\left ( A+I\right ) \vec{x}=\lambda \vec{x}+\vec{x}=\left ( \lambda +1\right ) \vec{x}\)

Hence \(\left ( A+I\right ) \) has eigenvalue \(\left ( \lambda +1\right ) \)
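A numerical illustration of parts (a)-(c) (my addition; the matrix from problem 11 is reused here purely as an example):

\begin{verbatim}
# If Ax = lam*x, then A^2, A^-1 and A+I have eigenvalues lam^2, 1/lam, lam+1.
import numpy as np

A   = np.array([[2.0, 3.0], [4.0, 1.0]])     # eigenvalues -2 and 5
lam = np.sort(np.linalg.eigvals(A))

print(np.sort(np.linalg.eigvals(A @ A)),            np.sort(lam**2))   # [4, 25]
print(np.sort(np.linalg.eigvals(np.linalg.inv(A))), np.sort(1/lam))    # [-0.5, 0.2]
print(np.sort(np.linalg.eigvals(A + np.eye(2))),    np.sort(lam + 1))  # [-1, 6]
\end{verbatim}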

6 Section 5.1, problem 25

\(u=\frac{1}{6}\begin{pmatrix} 1\\ 1\\ 3\\ 5 \end{pmatrix} \), \(P=uu^{T}=\frac{1}{36}\begin{pmatrix} 1\\ 1\\ 3\\ 5 \end{pmatrix}\begin{pmatrix} 1 & 1 & 3 & 5 \end{pmatrix} \)\[ P=\frac{1}{36}\begin{pmatrix} 1 & 1 & 3 & 5\\ 1 & 1 & 3 & 5\\ 3 & 3 & 9 & 15\\ 5 & 5 & 15 & 25 \end{pmatrix} \]

(a) \(Pu=\frac{1}{36}\begin{pmatrix} 1 & 1 & 3 & 5\\ 1 & 1 & 3 & 5\\ 3 & 3 & 9 & 15\\ 5 & 5 & 15 & 25 \end{pmatrix}\begin{pmatrix} \frac{1}{6}\\ \frac{1}{6}\\ \frac{3}{6}\\ \frac{5}{6}\end{pmatrix} \) \(=\frac{1}{36}\begin{pmatrix} 6\\ 6\\ 18\\ 30 \end{pmatrix} =\allowbreak \begin{pmatrix} \frac{1}{6}\\ \frac{1}{6}\\ \frac{3}{6}\\ \frac{5}{6}\end{pmatrix} =u\)

Hence \(u\) is an eigenvector with \(\lambda =1\)

(b) Since \(P=\vec{u}\vec{u}^{T}\), for any \(\vec{v}\) with \(\left \langle \vec{u},\vec{v}\right \rangle =\vec{u}^{T}\vec{v}=0\) we have

\[ P\vec{v}=\vec{u}\vec{u}^{T}\vec{v}=\vec{u}\left ( \vec{u}^{T}\vec{v}\right ) =\vec{u}\left ( 0\right ) =\vec{0}\]

Hence \(\vec{v}\) is an eigenvector of \(P\) with eigenvalue \(0\).

(c)Find the basis for the subspace which is perpendicular to \(\vec{u}\)

i.e \(\begin{pmatrix} v_{1} & v_{2} & v_{3} & v_{4}\end{pmatrix} \allowbreak \begin{pmatrix} \frac{1}{6}\\ \frac{1}{6}\\ \frac{3}{6}\\ \frac{5}{6}\end{pmatrix} =0\), i.e. \(\frac{1}{6}v_{1}+\frac{1}{6}v_{2}+\frac{3}{6}v_{3}+\frac{5}{6}v_{4}=0\)

Hence \(v_{1}=6\left ( -\frac{1}{6}v_{2}-\frac{3}{6}v_{3}-\frac{5}{6}v_{4}\right ) =-v_{2}-3v_{3}-5v_{4}\Rightarrow \)\begin{align*} \vec{v} & =\allowbreak \begin{pmatrix} -v_{2}-3v_{3}-5v_{4}\\ v_{2}\\ v_{3}\\ v_{4}\end{pmatrix} \\ & =v_{2}\begin{pmatrix} -1\\ 1\\ 0\\ 0 \end{pmatrix} +v_{3}\allowbreak \begin{pmatrix} -3\\ 0\\ 1\\ 0 \end{pmatrix} +v_{4}\allowbreak \begin{pmatrix} -5\\ 0\\ 0\\ 1 \end{pmatrix} \end{align*}

Hence the 3 indep. vectors needed are

\[ \left \{ \begin{pmatrix} -1\\ 1\\ 0\\ 0 \end{pmatrix} ,\allowbreak \begin{pmatrix} -3\\ 0\\ 1\\ 0 \end{pmatrix} \allowbreak ,\begin{pmatrix} -5\\ 0\\ 0\\ 1 \end{pmatrix} \right \} \]
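A short numpy check (my addition, not part of the assigned solution) of parts (a) and (b): \(P\vec{u}=\vec{u}\), and \(P\vec{v}=\vec{0}\) for each basis vector of the perpendicular subspace found in part (c).

\begin{verbatim}
# P = u u^T projects onto u: Pu = u, and Pv = 0 whenever v is perpendicular to u.
import numpy as np

u = np.array([1.0, 1.0, 3.0, 5.0]) / 6.0
P = np.outer(u, u)

print(np.allclose(P @ u, u))     # True: eigenvalue 1

basis = [np.array([-1.0, 1.0, 0.0, 0.0]),
         np.array([-3.0, 0.0, 1.0, 0.0]),
         np.array([-5.0, 0.0, 0.0, 1.0])]
print(all(np.allclose(P @ v, 0.0) for v in basis))   # True: eigenvalue 0, three times
\end{verbatim}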

7 Section 5.2, problem 1

Factor \(A=\begin{pmatrix} 1 & 1\\ 1 & 1 \end{pmatrix} \) and \(B=\begin{pmatrix} 2 & 1\\ 0 & 0 \end{pmatrix} \) into \(S\Lambda S^{-1}\)

For A

Start by finding the eigenvalue and then the eigenvectors. For \(A\), \(\begin{vmatrix} 1-\lambda & 1\\ 1 & 1-\lambda \end{vmatrix} =0\rightarrow \left ( 1-\lambda \right ) ^{2}-1=0\)

\(1+\lambda ^{2}-2\lambda -1=0\rightarrow \lambda \left ( \lambda -2\right ) =0\rightarrow \lambda _{1}=0,\lambda _{2}=2\)

For \(\lambda _{1}=0:\ \left ( A-\lambda _{1}I\right ) \vec{x}=0\rightarrow \begin{pmatrix} 1-0 & 1\\ 1 & 1-0 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 1\\ 1 & 1 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \)

\(\rightarrow \begin{pmatrix} 1 & 1\\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow x_{1}+x_{2}=0\rightarrow x_{1}=-x_{2}\rightarrow \)\(\vec{v}_{1}=\begin{pmatrix} -1\\ 1 \end{pmatrix} \)

For \(\lambda _{2}=2:\ \left ( A-\lambda _{2}I\right ) \vec{x}=0\rightarrow \begin{pmatrix} 1-2 & 1\\ 1 & 1-2 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow \begin{pmatrix} -1 & 1\\ 1 & -1 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \)

\(\begin{pmatrix} -1 & 1\\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow -x_{1}+x_{2}=0\rightarrow x_{1}=x_{2}\rightarrow \)\(\vec{v}_{2}=\begin{pmatrix} 1\\ 1 \end{pmatrix} \)

Hence \(S=\begin{pmatrix} -1 & 1\\ 1 & 1 \end{pmatrix} ,S^{-1}=\frac{1}{-2}\begin{pmatrix} 1 & -1\\ -1 & -1 \end{pmatrix} ,\Lambda =\begin{pmatrix} 0 & 0\\ 0 & 2 \end{pmatrix} \)

Hence \[ \fbox{$\begin{pmatrix} 1 & 1\\ 1 & 1 \end{pmatrix} $=$\begin{pmatrix} -1 & 1\\ 1 & 1 \end{pmatrix}\begin{pmatrix} 0 & 0\\ 0 & 2 \end{pmatrix}\begin{pmatrix} \frac{-1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2}\end{pmatrix} $}\]

For B \(\begin{pmatrix} 2 & 1\\ 0 & 0 \end{pmatrix} \)

Start by finding the eigenvalues and then the eigenvectors. For \(B\), \(\begin{vmatrix} 2-\lambda & 1\\ 0 & -\lambda \end{vmatrix} =0\rightarrow \left ( 2-\lambda \right ) \left ( -\lambda \right ) =0\rightarrow \lambda _{1}=0,\lambda _{2}=2\)

For \(\lambda _{1}=0:\ \left ( B-\lambda _{1}I\right ) \vec{x}=0\rightarrow \begin{pmatrix} 2-0 & 1\\ 0 & 0-0 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow \begin{pmatrix} 2 & 1\\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \)

\(\rightarrow 2x_{1}+x_{2}=0\rightarrow x_{1}=-\frac{x_{2}}{2}\), take \(x_{2}=-2\rightarrow \vec{v}_{1}=\begin{pmatrix} 1\\ -2 \end{pmatrix} \) or, scaled, \(\begin{pmatrix} 0.5\\ -1 \end{pmatrix} \)

For \(\lambda _{2}=2:\ \left ( B-\lambda _{2}I\right ) \vec{x}=0\rightarrow \begin{pmatrix} 2-2 & 1\\ 0 & 0-2 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow \begin{pmatrix} 0 & 1\\ 0 & -2 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \)

\(\rightarrow 0x_{1}+x_{2}=0\rightarrow x_{1}=any,x_{2}=0\rightarrow \)\(\vec{v}_{2}=\begin{pmatrix} 1\\ 0 \end{pmatrix} \)

Hence \(S=\begin{pmatrix} 0.5 & 1\\ -1 & 0 \end{pmatrix} ,S^{-1}=\begin{pmatrix} 0 & -1\\ 1 & 0.5 \end{pmatrix} ,\Lambda =\begin{pmatrix} 0 & 0\\ 0 & 2 \end{pmatrix} \)

Hence \[ \fbox{$\begin{pmatrix} 2 & 1\\ 0 & 0 \end{pmatrix} $=$\begin{pmatrix} 0.5 & 1\\ -1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 0\\ 0 & 2 \end{pmatrix}\begin{pmatrix} 0 & -1\\ 1 & 0.5 \end{pmatrix} $}\]
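Both factorizations can be verified numerically (a sketch of my own, not part of the assigned solution):

\begin{verbatim}
# Check that S * Lambda * S^-1 rebuilds A and B.
import numpy as np

S_A, Lam = np.array([[-1.0, 1.0], [1.0, 1.0]]), np.diag([0.0, 2.0])
print(S_A @ Lam @ np.linalg.inv(S_A))     # [[1. 1.] [1. 1.]]

S_B = np.array([[0.5, 1.0], [-1.0, 0.0]])
print(S_B @ Lam @ np.linalg.inv(S_B))     # [[2. 1.] [0. 0.]]
\end{verbatim}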

8 Section 5.2, problem 2

problem: Find the matrix  A whose eigenvalues are 1 and 4 and whose eigenvectors are  \(\begin{pmatrix} 3\\ 1 \end{pmatrix} \) and  \(\begin{pmatrix} 2\\ 1 \end{pmatrix} \)

solution: Let this matrix \(A=S\Lambda S^{-1}\), where \(S=\begin{pmatrix} 3 & 2\\ 1 & 1 \end{pmatrix} ,S^{-1}=\begin{pmatrix} 1 & -2\\ -1 & 3 \end{pmatrix} ,\Lambda =\begin{pmatrix} 1 & 0\\ 0 & 4 \end{pmatrix} ,\) hence

\begin{align*} A & =\begin{pmatrix} 3 & 2\\ 1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0\\ 0 & 4 \end{pmatrix}\begin{pmatrix} 1 & -2\\ -1 & 3 \end{pmatrix} \\ & =\begin{pmatrix} -5 & 18\\ -3 & 10 \end{pmatrix} \end{align*}

9 Section 5.2, problem 8

problem: Suppose \(A=\vec{u}\vec{v}^{T}\), is a column times a row (rank 1 matrix). (a) by multiplying \(A\) times \(\vec{u}\), show that \(\vec{u}\) is an eigenvector. What is \(\lambda ?\)

(b)What are the other eigenvalues of A and why? (c)Compare trace(A) from the sum on the diagonal and the sum of \(\lambda ^{\prime }s\)

answer:

(a) Given \(A=\vec{u}\vec{v}^{T}\), post multiply both sides by \(\vec{u}\), hence

\begin{align*} A\vec{u} & =\vec{u}\vec{v}^{T}\vec{u}\\ & =\vec{u}\left ( \vec{v}^{T}\vec{u}\right ) \end{align*}

But \(\vec{v}^{T}\vec{u}\) is a number, since this is the dot product of 2 vectors, call this number \(\lambda \), hence the above becomes

\[ A\vec{u}=\vec{u}\lambda \]

Since \(\lambda \) is a number, it can be moved to the left of \(\vec{u}\)

\[ A\vec{u}=\lambda \vec{u}\]

Hence \(\vec{u}\) is an eigenvector of \(A\) and \(\lambda =\left \langle \vec{v},\vec{u}\right \rangle \)

(b)Let \(\vec{u}=\left ( u_{1},u_{2},\cdots ,u_{n}\right ) ,\vec{v}=\left ( v_{1},v_{2},\cdots ,v_{n}\right ) \), hence

\begin{align*} A & =\vec{u}\vec{v}^{T}\\ & =\begin{pmatrix} u_{1}\\ u_{2}\\ \vdots \\ u_{n}\end{pmatrix} \left ( v_{1},v_{2},\cdots ,v_{n}\right ) \\ & =\begin{pmatrix} u_{1}v_{1} & u_{1}v_{2} & \cdots & u_{1}v_{n}\\ u_{2}v_{1} & u_{2}v_{2} & \cdots & u_{2}v_{n}\\ \vdots & \vdots & \ddots & \vdots \\ u_{n}v_{1} & u_{n}v_{2} & \cdots & u_{n}v_{n}\end{pmatrix} \end{align*}

We see that the diagonal elements of \(A\) sum to \(u_{1}v_{1}+u_{2}v_{2}+\cdots +u_{n}v_{n}=\left \langle \vec{v},\vec{u}\right \rangle \), which is exactly the eigenvalue \(\lambda _{1}\) found in part (a). Since \(A=\vec{u}\vec{v}^{T}\) has rank 1, its nullspace has dimension \(n-1\), so \(\lambda =0\) is an eigenvalue with \(n-1\) independent eigenvectors; these are all the other eigenvalues. This is consistent with the trace, since the sum of the eigenvalues is then \(\lambda _{1}+0+\cdots +0=\lambda _{1}\).

(c)trace of A is the sum of all eigenvalues, hence trace(A)=\(\lambda _{1}\) as explained in part (b) above.
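The statement is easy to check numerically; the vectors below are an arbitrary example of my own choosing, not from the textbook.

\begin{verbatim}
# A = u v^T (rank 1): one eigenvalue equals v.u = trace(A), the rest are 0.
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 0.0, 1.0])
A = np.outer(u, v)

lam = np.linalg.eigvals(A)
print(np.round(lam.real, 8))   # one eigenvalue 7 (= v.u), the others 0 up to rounding
print(v @ u, np.trace(A))      # 7.0 7.0
\end{verbatim}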

10 Section 5.2, problem 19

Problem: true of false If the n columns of  S (eigenvectors of A) are independent, then (a) A invertible, (b) A is diagonalizable (c)S is invertible (d) S is diagonalizable

answer:

(a) FALSE. counter example \(A=\begin{pmatrix} 1 & 2\\ 1 & 2 \end{pmatrix} \), this is singular (2 rows are the same, hence \(\det \left ( A\right ) =0\)), but it has different eigenvalues \(\lambda _{1}=0\), \(\lambda _{2}=3\), so its eigenvectors are linearly independent, and they are \(\left \{ \begin{pmatrix} -1\\ 0.5 \end{pmatrix} ,\begin{pmatrix} -1\\ -1 \end{pmatrix} \right \} .\) Invertibility depends on nonzero eigenvalues, while diagonalization depends on having enough independent eigenvectors.

(b) TRUE. Since the \(n\) columns of \(S\) are independent eigenvectors of \(A\), we have enough eigenvectors and \(S^{-1}\) exists, so \(A\) can be written as \(A=S\Lambda S^{-1}\) with \(\Lambda \) the diagonal matrix of eigenvalues (even though we are not told what the eigenvalues are).

(c) TRUE. We are told the \(n\) columns are independent. An \(n\times n\) matrix with \(n\) linearly independent columns has full rank and is therefore invertible.

(d) FALSE. Knowing that \(S\) is invertible tells us nothing about whether \(S\) itself has \(n\) independent eigenvectors. For example \(S=\begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} \) is invertible but has only one independent eigenvector, so it is not diagonalizable.

11 Section 5.2, problem 22

problem: Write the most general matrix that has eigenvectors \(\begin{pmatrix} 1\\ 1 \end{pmatrix} \) and \(\begin{pmatrix} 1\\ -1 \end{pmatrix} \)

answer: Since \(A\) has the above eigenvectors, then \(S=\begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix} ,S^{-1}=\frac{-1}{2}\begin{pmatrix} -1 & -1\\ -1 & 1 \end{pmatrix} =\begin{pmatrix} \frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{-1}{2}\end{pmatrix} \)

Hence \begin{align*} A & =S\Lambda S^{-1}\\ & =\begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix}\begin{pmatrix} \lambda _{1} & 0\\ 0 & \lambda _{2}\end{pmatrix}\begin{pmatrix} \frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{-1}{2}\end{pmatrix} \\ & =\frac{1}{2}\begin{pmatrix} \lambda _{1}+\lambda _{2} & \lambda _{1}-\lambda _{2}\\ \lambda _{1}-\lambda _{2} & \lambda _{1}+\lambda _{2}\end{pmatrix} \end{align*}

This is the most general expression of \(A\), it is in terms of its eigenvalues.

12 Section 5.2, problem 33

problem: diagonalize \(B\) and compute \(S\Lambda ^{k}S^{-1}\) to prove this formula for \(B^{k}\)

\(B=\begin{pmatrix} 3 & 1\\ 0 & 2 \end{pmatrix} ,B^{k}=\begin{pmatrix} 3^{k} & 3^{k}-2^{k}\\ 0 & 2^{k}\end{pmatrix} \)

answer:

find eigenvalues of \(B\) \(\rightarrow \begin{vmatrix} 3-\lambda & 1\\ 0 & 2-\lambda \end{vmatrix} =0\rightarrow \left ( 3-\lambda \right ) \left ( 2-\lambda \right ) =0\rightarrow \)\(\lambda =3,\lambda =2\)

Find eigenvectors, \(B\vec{x}=\lambda \vec{x}\)

for \(\lambda =3:\) \(\begin{pmatrix} 3-3 & 1\\ 0 & 2-3 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow \) \(\begin{pmatrix} 0 & 1\\ 0 & -1 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow 0x_{1}+x_{2}=0\rightarrow x_{1}=any,x_{2}=0\)

hence \(\vec{v}_{1}=\begin{pmatrix} 1\\ 0 \end{pmatrix} \)

for \(\lambda =2:\) \(\begin{pmatrix} 3-2 & 1\\ 0 & 2-2 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow \) \(\begin{pmatrix} 1 & 1\\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \rightarrow x_{1}+x_{2}=0\rightarrow x_{1}=-x_{2}\)

hence \(\vec{v}_{2}=\begin{pmatrix} -1\\ 1 \end{pmatrix} \)

Hence \(B=S\Lambda S^{-1}\rightarrow S=\begin{pmatrix} 1 & -1\\ 0 & 1 \end{pmatrix} ,S^{-1}=\begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} ,\Lambda =\begin{pmatrix} 3 & 0\\ 0 & 2 \end{pmatrix} \)

so \[ \fbox{$B=\begin{pmatrix} 1 & -1\\ 0 & 1 \end{pmatrix}\begin{pmatrix} 3 & 0\\ 0 & 2 \end{pmatrix}\begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} $ }\]

Hence \(B^{k}=\left ( S\Lambda S^{-1}\right ) ^{k}=S\Lambda ^{k}\left ( S^{-1}\right ) \) (we proved this formula in class)

Hence \begin{align*} B^{k} & =\begin{pmatrix} 1 & -1\\ 0 & 1 \end{pmatrix} \Lambda ^{k}\begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} \\ & =\begin{pmatrix} 1 & -1\\ 0 & 1 \end{pmatrix}\begin{pmatrix} 3 & 0\\ 0 & 2 \end{pmatrix} ^{k}\begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} \\ & =\begin{pmatrix} 1 & -1\\ 0 & 1 \end{pmatrix}\begin{pmatrix} 3^{k} & 0\\ 0 & 2^{k}\end{pmatrix}\begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} \\ & =\begin{pmatrix} 3^{k} & -2^{k}\\ 0 & 2^{k}\end{pmatrix}\begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} \\ & =\begin{pmatrix} 3^{k} & 3^{k}-2^{k}\\ 0 & 2^{k}\end{pmatrix} \end{align*}
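A quick numpy check of the formula for a sample power (my addition, not part of the assigned solution):

\begin{verbatim}
# Compare B^k computed directly with the closed form, for k = 5.
import numpy as np

B, k = np.array([[3.0, 1.0], [0.0, 2.0]]), 5
formula = np.array([[3.0**k, 3.0**k - 2.0**k],
                    [0.0,    2.0**k]])
print(np.allclose(np.linalg.matrix_power(B, k), formula))   # True
\end{verbatim}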

13 Section 5.3, problem 1

problem: prove that every third Fibonacci number is even.

Using the fact that odd+odd=even, and that odd+even=odd, and that even+odd=odd.

Starting the count from \(1,1,2\): the third number, \(2\), is the sum of two odd numbers \(\left ( 1,1\right ) \), hence it is even. The next number is odd + even, hence odd, and the one after that is even + odd, hence odd. We are now back to two consecutive odd numbers, so the next number is again even. Thus the pattern odd, odd, even repeats forever, and every third Fibonacci number is even.
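A tiny Python check of the parity pattern (my addition), generating the first 30 Fibonacci numbers starting from \(F_{1}=F_{2}=1\):

\begin{verbatim}
# Every third Fibonacci number (F3, F6, F9, ...) is even; the rest are odd.
a, b = 1, 1
fibs = []
for _ in range(30):
    fibs.append(a)
    a, b = b, a + b

print(all(f % 2 == 0 for f in fibs[2::3]))                         # True
print(all(f % 2 == 1 for i, f in enumerate(fibs) if (i + 1) % 3))  # True
\end{verbatim}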

14 Section 5.3, problem 2

Let \(N_{1}\) be the population at the start of the first year; then the population at the end of the third year is \(N_{3}=6\left ( \frac{1}{3}\left ( \frac{1}{2}N_{1}\right ) \right ) \)

Now, we use this population again to run it for 3 more years: \(N_{3\times 2}=6\left ( \frac{1}{3}\left ( \frac{1}{2}\left ( N_{3}\right ) \right ) \right ) =6\left ( \frac{1}{3}\left ( \frac{1}{2}\left ( \overset{N_{3\times 1}}{\overbrace{6\left ( \frac{1}{3}\left ( \frac{1}{2}N_{1}\right ) \right ) }}\right ) \right ) \right ) \)

Hence we see that after \(k\) number of 3 years periods, we have \(N_{3k}=6\left ( \frac{1}{3}\left ( \frac{1}{2}N_{3\left ( k-1\right ) }\right ) \right ) =6\left ( \frac{1}{3}\left ( \frac{1}{2}\left ( 6\left ( \frac{1}{3}\left ( \frac{1}{2}N_{3\left ( k-2\right ) }\right ) \right ) \right ) \right ) \right ) =\cdots =6\left ( \frac{1}{3}\left ( \frac{1}{2}\left ( 6\left ( \frac{1}{3}\left ( \frac{1}{2}\left ( \cdots \left ( 6\left ( \frac{1}{3}\left ( \frac{1}{2}N_{1}\right ) \right ) \right ) \right ) \right ) \right ) \right ) \right ) \right ) \)

Hence \begin{align*} N_{3k} & =6^{k}\frac{1}{3^{k}}\frac{1}{2^{k}}N_{1}\\ & =N_{1} \end{align*}

Hence the population remains the same at the end of each 3-year interval. So for \(k=2\), i.e. after 6 years, the population is still \(3000\) beetles.

We also see the above, since \begin{align*} A^{3} & =\begin{pmatrix} 0 & 0 & 6\\ \frac{1}{2} & 0 & 0\\ 0 & \frac{1}{3} & 0 \end{pmatrix} ^{3}=\begin{pmatrix} 0 & 0 & 6\\ \frac{1}{2} & 0 & 0\\ 0 & \frac{1}{3} & 0 \end{pmatrix}\begin{pmatrix} 0 & 0 & 6\\ \frac{1}{2} & 0 & 0\\ 0 & \frac{1}{3} & 0 \end{pmatrix}\begin{pmatrix} 0 & 0 & 6\\ \frac{1}{2} & 0 & 0\\ 0 & \frac{1}{3} & 0 \end{pmatrix} \\ & =\begin{pmatrix} 0 & 2 & 0\\ 0 & 0 & 3\\ \frac{1}{6} & 0 & 0 \end{pmatrix}\begin{pmatrix} 0 & 0 & 6\\ \frac{1}{2} & 0 & 0\\ 0 & \frac{1}{3} & 0 \end{pmatrix} \\ & =\begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix} \end{align*}

Hence \(A^{3n}=I^{n}=I\), so the system does not change.
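A numpy check (my addition) that \(A^{3}=I\), so any age distribution, here an arbitrary one used only for illustration, returns to itself after each 3-year cycle:

\begin{verbatim}
# The beetle matrix cubes to the identity, so populations repeat every 3 years.
import numpy as np

A = np.array([[0.0, 0.0,     6.0],
              [0.5, 0.0,     0.0],
              [0.0, 1.0/3.0, 0.0]])

print(np.allclose(np.linalg.matrix_power(A, 3), np.eye(3)))   # True
p0 = np.array([1000.0, 1500.0, 500.0])        # illustrative age distribution
print(np.allclose(np.linalg.matrix_power(A, 6) @ p0, p0))     # True: unchanged after 6 years
\end{verbatim}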

15 Section 5.3, problem 3

problem: For Fibonacci matrix \(A=\begin{pmatrix} 1 & 1\\ 1 & 0 \end{pmatrix} \) compute \(A^{2},A^{3},A^{4}\) then calculate \(F_{20}\)

answer: \(A^{2}=\begin{pmatrix} 1 & 1\\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 & 1\\ 1 & 0 \end{pmatrix} =\allowbreak \begin{pmatrix} 2 & 1\\ 1 & 1 \end{pmatrix} \)

\(A^{3}=\allowbreak \begin{pmatrix} 2 & 1\\ 1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 1\\ 1 & 0 \end{pmatrix} =\allowbreak \begin{pmatrix} 3 & 2\\ 2 & 1 \end{pmatrix} \)

\(A^{4}=\allowbreak \begin{pmatrix} 3 & 2\\ 2 & 1 \end{pmatrix}\begin{pmatrix} 1 & 1\\ 1 & 0 \end{pmatrix} =\allowbreak \begin{pmatrix} 5 & 3\\ 3 & 2 \end{pmatrix} \)

To find \(F_{20}\) then use the formula derived in class and find \(F_{20}=\frac{1}{\sqrt{5}}\lambda _{1}^{20}=\frac{1}{\sqrt{5}}\left ( \frac{1+\sqrt{5}}{2}\right ) ^{20}=\allowbreak 6765.0000295639\)

Hence, rounding to the nearest integer, \(F_{20}=6765\)
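The value can be reproduced in a couple of lines (my addition, not part of the assigned solution):

\begin{verbatim}
# F20 from the dominant-eigenvalue formula.
import numpy as np

phi = (1 + np.sqrt(5)) / 2
print(phi**20 / np.sqrt(5))     # 6765.0000295...  ->  F20 = 6765
\end{verbatim}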

16 Section 5.3, problem 8

Markov transition is \(\begin{pmatrix} d_{k+1}\\ s_{k+1}\\ w_{k+1}\end{pmatrix} =\begin{pmatrix} 1 & \frac{1}{4} & 0\\ 0 & \frac{3}{4} & \frac{1}{2}\\ 0 & 0 & \frac{1}{2}\end{pmatrix}\begin{pmatrix} d_{k}\\ s_{k}\\ w_{k}\end{pmatrix} \)

The eigenvalues for \(A\) are \(1,\frac{3}{4},\frac{1}{2}\), and the eigenvectors are

eigenvectors: \(\left \{ \begin{pmatrix} 1\\ 0\\ 0 \end{pmatrix} \right \} \leftrightarrow 1,\left \{ \begin{pmatrix} 1\\ -2\\ 1 \end{pmatrix} \right \} \leftrightarrow \frac{1}{2},\left \{ \begin{pmatrix} -1\\ 1\\ 0 \end{pmatrix} \right \} \leftrightarrow \frac{3}{4}\),

Since the eigenvalues \(\frac{3}{4}\) and \(\frac{1}{2}\) are less than 1, their terms decay (stable); the eigenvalue \(1\) gives a neutrally stable, steady term. The solution is \(u_{k}=c_{1}\lambda _{1}^{k}\vec{x}_{1}+c_{2}\lambda _{2}^{k}\vec{x}_{2}+c_{3}\lambda _{3}^{k}\vec{x}_{3}\); as \(k\rightarrow \infty \) all terms with \(\lambda <1\) vanish and we are left with \(u_{\infty }=c_{1}1^{k}\vec{x}_{1}=c_{1}\vec{x}_{1}=c_{1}\begin{pmatrix} 1\\ 0\\ 0 \end{pmatrix} \)

Hence in the long run the entire population ends up in the dead state.
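The long-run behaviour can be confirmed numerically; the starting fractions below are an arbitrary illustration of my own, not from the problem.

\begin{verbatim}
# Powers of the transition matrix push any probability vector toward (1, 0, 0).
import numpy as np

A = np.array([[1.0, 0.25, 0.0],
              [0.0, 0.75, 0.5],
              [0.0, 0.0,  0.5]])

u0 = np.array([0.0, 0.3, 0.7])                 # fractions dead, sick, well
print(np.linalg.matrix_power(A, 200) @ u0)     # ~ [1. 0. 0.]
print(np.sort(np.linalg.eigvals(A)))           # [0.5  0.75 1.  ]
\end{verbatim}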

17 Section 5.3, problem 10

Find limit values of \(y_{k}\) and \(z_{k}\left ( k\rightarrow \infty \right ) \) if

\(y_{k+1}=.8y_{k}+.3z_{k}\)

\(z_{k+1}=.2y_{k}+.7z_{k}\)

\(y_{0}=0,z_{0}=5\)

Set up Markov system

\[\begin{pmatrix} y_{k+1}\\ z_{k+1}\end{pmatrix} =\begin{pmatrix} .8 & .3\\ .2 & .7 \end{pmatrix}\begin{pmatrix} y_{k}\\ z_{k}\end{pmatrix} \] For A we have eigenvalues/eigenvectors: \(\left \{ \begin{pmatrix} 0.832\,05\\ 0.554\,7 \end{pmatrix} \right \} \leftrightarrow 1.0,\allowbreak \left \{ \begin{pmatrix} 0.707\,11\\ -0.707\,11 \end{pmatrix} \right \} \leftrightarrow 0.5\allowbreak \)

Hence the solution is \(\begin{pmatrix} y_{k}\\ z_{k}\end{pmatrix} =c_{1}\lambda _{1}^{k}\vec{v}_{1}+c_{2}\lambda _{2}^{k}\vec{v}_{2}=c_{1}1^{k}\begin{pmatrix} 0.832\,05\\ 0.554\,7 \end{pmatrix} +c_{2}\left ( 0.5\right ) ^{k}\begin{pmatrix} 0.707\,11\\ -0.707\,11 \end{pmatrix} \)

As \(k\rightarrow \infty \) we have

\[\begin{pmatrix} y_{\infty }\\ z_{\infty }\end{pmatrix} =c_{1}\begin{pmatrix} 0.832\,05\\ 0.554\,7 \end{pmatrix} \]

To find \(c_{1}\) use initial conditions. \(y_{0}=0,z_{0}=5\)

At \(k=0,\begin{pmatrix} 0\\ 5 \end{pmatrix} =c_{1}\begin{pmatrix} 0.832\,05\\ 0.554\,7 \end{pmatrix} +c_{2}\begin{pmatrix} 0.707\,11\\ -0.707\,11 \end{pmatrix} \)

Hence \(0=0.832\,05\ c_{1}+0.707\,11\ c_{2}\) and \(5=0.554\,7\ c_{1}-0.707\,11\ c_{2}\)

Solving for \(c_{1},c_{2}\) we get \(c_{1}=3.6056,c_{2}=-4.2426\), hence the steady state solution is \begin{align*} \begin{pmatrix} y_{\infty }\\ z_{\infty }\end{pmatrix} & =3.6056\begin{pmatrix} 0.832\,05\\ 0.554\,7 \end{pmatrix} \\ & =\begin{pmatrix} 3.0\\ 2.0 \end{pmatrix} \end{align*}

The same result follows from computing \(A^{k}u_{0}\) directly: \begin{align*} \begin{pmatrix} y_{k}\\ z_{k}\end{pmatrix} & =A^{k}u_{0}\\ & =\left ( S\Lambda S^{-1}\right ) ^{k}u_{0}\\ & =\begin{pmatrix} 0.832\,05 & 0.707\,11\\ 0.554\,7 & -0.707\,11 \end{pmatrix}\begin{pmatrix} 1 & 0\\ 0 & 0.5 \end{pmatrix} ^{k}\begin{pmatrix} 0.832\,05 & 0.707\,11\\ 0.554\,7 & -0.707\,11 \end{pmatrix} ^{-1}\begin{pmatrix} 0\\ 5 \end{pmatrix} \\ & =\begin{pmatrix} 0.832\,05 & 0.707\,11\\ 0.554\,7 & -0.707\,11 \end{pmatrix}\begin{pmatrix} 1 & 0\\ 0 & 0.5^{k}\end{pmatrix}\begin{pmatrix} 0.721\,11 & 0.721\,11\\ 0.565\,68 & -0.848\,52 \end{pmatrix}\begin{pmatrix} 0\\ 5 \end{pmatrix} \\ & =\begin{pmatrix} 0.832\,05 & 0.707\,11\times 0.5^{k}\\ 0.554\,7 & -0.707\,11\times 0.5^{k}\end{pmatrix}\begin{pmatrix} 3.605\,6\\ -4.242\,6 \end{pmatrix} \\ & =\begin{pmatrix} 3-3\times 0.5^{k}\\ 3\times 0.5^{k}+2.0 \end{pmatrix} \end{align*}
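A quick numpy check (my addition, not part of the assigned solution) of the closed form \(y_{k}=3-3\left ( 0.5\right ) ^{k}\), \(z_{k}=2+3\left ( 0.5\right ) ^{k}\) and of the limit:

\begin{verbatim}
# Iterate the system from (y0, z0) = (0, 5) and compare with the closed form.
import numpy as np

A = np.array([[0.8, 0.3], [0.2, 0.7]])
u = np.array([0.0, 5.0])
for k in range(1, 11):
    u = A @ u
    assert np.allclose(u, [3 - 3*0.5**k, 2 + 3*0.5**k])

print(u)                                                       # already close to the limit
print(np.linalg.matrix_power(A, 200) @ np.array([0.0, 5.0]))   # [3. 2.]
\end{verbatim}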

18 Section 5.3, problem 17

What values produce instability in \(v_{n+1}=\alpha \left ( v_{n}+\omega _{n}\right ) ,\omega _{n+1}=\alpha \left ( v_{n}+\omega _{n}\right ) \)

Solution:

Set up the system in matrix form

\[\begin{pmatrix} v_{n+1}\\ \omega _{n+1}\end{pmatrix} =\begin{pmatrix} \alpha & \alpha \\ \alpha & \alpha \end{pmatrix}\begin{pmatrix} v_{n}\\ \omega _{n}\end{pmatrix} \] The eigenvalues are \(2\alpha \) and \(0\), and an eigenvalue with \(\left \vert \lambda \right \vert >1\) produces instability. Hence the system is unstable when \(\left \vert 2\alpha \right \vert >1\), i.e. \(\left \vert \alpha \right \vert >\frac{1}{2}\)
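A small numerical illustration of the threshold (my addition, not part of the assigned solution):

\begin{verbatim}
# Powers of the matrix decay when |2*alpha| < 1 and blow up when |2*alpha| > 1.
import numpy as np

for alpha in (0.4, 0.6):
    A = alpha * np.ones((2, 2))
    print(alpha, np.linalg.matrix_power(A, 50).max())   # 0.4: tiny, 0.6: large
\end{verbatim}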