2.8 HW 8

  2.8.1 Problems listing
  2.8.2 Problem 7, section 6.2
  2.8.3 Problem 15 section 6.2
  2.8.4 Problem 19 section 6.2
  2.8.5 Problem 7 section 6.3
  2.8.6 Problem 13 section 6.3
  2.8.7 Problem 25 section 6.3
  2.8.8 Additional problem 1
  2.8.9 Additional problem 2
  2.8.10 key solution for HW8

2.8.1 Problems listing


2.8.2 Problem 7, section 6.2

In Problems 1 through 28, determine whether or not the given matrix \(A\) is diagonalizable. If it is, find a diagonalizing matrix \(P\) and a diagonal matrix \(D\) such that \(P^{-1}AP=D\)

\[ \left [ \begin {array} [c]{cc}6 & -10\\ 2 & -3 \end {array} \right ] \]

Solution

The first step is to determine the characteristic polynomial of the matrix in order to find the eigenvalues of the matrix \(A\). This is given by \begin {align*} \det (A-\lambda I) & =0\\ \det \left ( \left [ \begin {array} [c]{cc}6 & -10\\ 2 & -3 \end {array} \right ] -\lambda \left [ \begin {array} [c]{cc}1 & 0\\ 0 & 1 \end {array} \right ] \right ) & =0\\ \det \left [ \begin {array} [c]{cc}6-\lambda & -10\\ 2 & -3-\lambda \end {array} \right ] & =0\\ \left ( 6-\lambda \right ) \left ( -3-\lambda \right ) +20 & =0\\ \lambda ^{2}-3\lambda +2 & =0\\ \left ( \lambda -2\right ) \left ( \lambda -1\right ) & =0 \end {align*}

The eigenvalues are the roots of the above characteristic polynomial. From the above, these are \begin {align*} \lambda _{1} & =2\\ \lambda _{2} & =1 \end {align*}
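As a quick numerical cross-check (not part of the hand computation, and assuming numpy is available), a few lines reproduce these eigenvalues:

```python
import numpy as np

# Matrix A from Problem 7, section 6.2
A = np.array([[6.0, -10.0],
              [2.0, -3.0]])

# Eigenvalues computed numerically; sorted for a stable comparison
eigvals = np.sort(np.linalg.eigvals(A).real)
print(eigvals)  # expect values close to [1. 2.]
```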

This table summarizes the result:

eigenvalue | algebraic multiplicity | type of eigenvalue
\(1\) | \(1\) | real eigenvalue
\(2\) | \(1\) | real eigenvalue



For each eigenvalue \(\lambda \) found above, we now find the corresponding eigenvector.

\(\lambda = 1\)

We need now to determine the eigenvector \(\boldsymbol {v}\) where \begin {align*} A\boldsymbol {v} & =\lambda \boldsymbol {v}\\ A\boldsymbol {v}-\lambda \boldsymbol {v} & =\boldsymbol {0}\\ (A-\lambda I)\boldsymbol {v} & =\boldsymbol {0}\\ \left ( \left [ \begin {array} [c]{cc}6 & -10\\ 2 & -3 \end {array} \right ] -(1)\left [ \begin {array} [c]{cc}1 & 0\\ 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \\ \left ( \left [ \begin {array} [c]{cc}6 & -10\\ 2 & -3 \end {array} \right ] -\left [ \begin {array} [c]{cc}1 & 0\\ 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \\ \left [ \begin {array} [c]{cc}5 & -10\\ 2 & -4 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \end {align*}

We will now do Gaussian elimination in order to solve for the eigenvector. The augmented matrix is \[ \left [ \begin {array} [c]{@{}cc|c}5 & -10 & 0\\ 2 & -4 & 0 \end {array} \right ] \]\[ R_{2}=R_{2}-\frac {2R_{1}}{5}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}cc|c}5 & -10 & 0\\ 0 & 0 & 0 \end {array} \right ] \] Therefore the system in echelon form is \[ \left [ \begin {array} [c]{cc}5 & -10\\ 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \] The free variables are \(\{v_{2}\}\) and the leading variables are \(\{v_{1}\}\). Let \(v_{2}=t\). We now do back substitution, solving the above equation for the leading variables in terms of the free variables. The first row gives \(5v_{1}=10t\), or \(v_{1}=2t\). Hence the eigenvector for this eigenvalue is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}2t\\ t \end {array} \right ] \] Since there is one free variable, we have found one eigenvector associated with this eigenvalue. The above can be written as \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =t\left [ \begin {array} [c]{c}2\\ 1 \end {array} \right ] \] Or, by letting \(t=1\), the eigenvector is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}2\\ 1 \end {array} \right ] \]

\(\lambda =2\)

We need now to determine the eigenvector \(\boldsymbol {v}\) where \begin {align*} A\boldsymbol {v} & =\lambda \boldsymbol {v}\\ A\boldsymbol {v}-\lambda \boldsymbol {v} & =\boldsymbol {0}\\ (A-\lambda I)\boldsymbol {v} & =\boldsymbol {0}\\ \left ( \left [ \begin {array} [c]{cc}6 & -10\\ 2 & -3 \end {array} \right ] -(2)\left [ \begin {array} [c]{cc}1 & 0\\ 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \\ \left ( \left [ \begin {array} [c]{cc}6 & -10\\ 2 & -3 \end {array} \right ] -\left [ \begin {array} [c]{cc}2 & 0\\ 0 & 2 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \\ \left [ \begin {array} [c]{cc}4 & -10\\ 2 & -5 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \end {align*}

We will now do Gaussian elimination in order to solve for the eigenvector. The augmented matrix is \[ \left [ \begin {array} [c]{@{}cc|c}4 & -10 & 0\\ 2 & -5 & 0 \end {array} \right ] \]\[ R_{2}=R_{2}-\frac {R_{1}}{2}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}cc|c}4 & -10 & 0\\ 0 & 0 & 0 \end {array} \right ] \] Therefore the system in echelon form is \[ \left [ \begin {array} [c]{cc}4 & -10\\ 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \] The free variables are \(\{v_{2}\}\) and the leading variables are \(\{v_{1}\}\). Let \(v_{2}=t\). We now do back substitution, solving the above equation for the leading variables in terms of the free variables. The first row gives \(4v_{1}=10v_{2}\), or \(v_{1}=\frac {5t}{2}\). Hence the solution is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}\frac {5t}{2}\\ t \end {array} \right ] \] Since there is one free variable, we have found one eigenvector associated with this eigenvalue. The above can be written as \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =t\left [ \begin {array} [c]{c}\frac {5}{2}\\ 1 \end {array} \right ] \] Or, by letting \(t=1\), the eigenvector is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}\frac {5}{2}\\ 1 \end {array} \right ] \] Which can be scaled to have integer entries, giving \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}5\\ 2 \end {array} \right ] \] The following table summarizes the result found above.






\(\lambda \) | algebraic multiplicity | geometric multiplicity | defective eigenvalue? | associated eigenvectors
\(1\) | \(1\) | \(1\) | No | \(\left [ \begin {array} [c]{c}2\\ 1 \end {array} \right ] \)
\(2\) | \(1\) | \(1\) | No | \(\left [ \begin {array} [c]{c}5\\ 2 \end {array} \right ] \)





Since the matrix is not defective, it is diagonalizable. Let \(P\) be the matrix whose columns are the eigenvectors found, and let \(D\) be the diagonal matrix with the eigenvalues on its diagonal. Then we can write \[ A=PDP^{-1}\] where \begin {align*} D & =\left [ \begin {array} [c]{cc}\lambda _{1} & 0\\ 0 & \lambda _{2}\end {array} \right ] =\left [ \begin {array} [c]{cc}1 & 0\\ 0 & 2 \end {array} \right ] \\ P & =\left [ \begin {array} [c]{cc}2 & 5\\ 1 & 2 \end {array} \right ] \end {align*}

Therefore \begin {align*} A & =PDP^{-1}\\ \left [ \begin {array} [c]{cc}6 & -10\\ 2 & -3 \end {array} \right ] & =\left [ \begin {array} [c]{cc}2 & 5\\ 1 & 2 \end {array} \right ] \left [ \begin {array} [c]{cc}1 & 0\\ 0 & 2 \end {array} \right ] \left [ \begin {array} [c]{cc}2 & 5\\ 1 & 2 \end {array} \right ] ^{-1} \end {align*}
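The factorization can be verified numerically; this sketch (assuming numpy is available) rebuilds \(A\) from \(P\) and \(D\):

```python
import numpy as np

A = np.array([[6.0, -10.0],
              [2.0, -3.0]])
P = np.array([[2.0, 5.0],
              [1.0, 2.0]])   # columns are the eigenvectors found above
D = np.diag([1.0, 2.0])      # eigenvalues on the diagonal, in matching order

# P D P^{-1} should reproduce A up to floating-point error
reconstructed = P @ D @ np.linalg.inv(P)
print(np.allclose(reconstructed, A))  # prints True
```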

2.8.3 Problem 15 section 6.2

In Problems 1 through 28, determine whether or not the given matrix \(A\) is diagonalizable. If it is, find a diagonalizing matrix \(P\) and a diagonal matrix \(D\) such that \(P^{-1}AP=D\) \[ \left [ \begin {array} [c]{ccc}3 & -3 & 1\\ 2 & -2 & 1\\ 0 & 0 & 1 \end {array} \right ] \] Solution

The first step is to determine the characteristic polynomial of the matrix in order to find the eigenvalues of the matrix \(A\). This is given by \begin {align*} \det (A-\lambda I) & =0\\ \det \left ( \left [ \begin {array} [c]{ccc}3 & -3 & 1\\ 2 & -2 & 1\\ 0 & 0 & 1 \end {array} \right ] -\lambda \left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) & =0\\ \det \left [ \begin {array} [c]{ccc}3-\lambda & -3 & 1\\ 2 & -2-\lambda & 1\\ 0 & 0 & 1-\lambda \end {array} \right ] & =0 \end {align*}

Expanding along the last row gives\begin {align*} \left ( -1\right ) ^{3+3}\left ( 1-\lambda \right ) \begin {vmatrix} 3-\lambda & -3\\ 2 & -2-\lambda \end {vmatrix} & =0\\ \left ( 1-\lambda \right ) \left ( \left ( 3-\lambda \right ) \left ( -2-\lambda \right ) +6\right ) & =0\\ \left ( 1-\lambda \right ) \left ( \lambda ^{2}-\lambda \right ) & =0\\ \left ( 1-\lambda \right ) \lambda \left ( \lambda -1\right ) & =0 \end {align*}

The eigenvalues are the roots of the above characteristic polynomial. These are seen to be \begin {align*} \lambda _{1} & =0\\ \lambda _{2} & =1\\ \lambda _{3} & =1 \end {align*}
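The repeated eigenvalue can be confirmed numerically (a cross-check outside the hand solution, assuming numpy is available):

```python
import numpy as np

A = np.array([[3.0, -3.0, 1.0],
              [2.0, -2.0, 1.0],
              [0.0,  0.0, 1.0]])

# Sorted numerical eigenvalues; expect values close to 0, 1, 1
eigvals = np.sort(np.linalg.eigvals(A).real)
print(eigvals)
```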

This table summarizes the result




eigenvalue | algebraic multiplicity | type of eigenvalue
\(0\) | \(1\) | real eigenvalue
\(1\) | \(2\) | real eigenvalue



For each eigenvalue \(\lambda \) found above, we now find the corresponding eigenvector.

\(\lambda =0\)

We need now to determine the eigenvector \(\boldsymbol {v}\) where \begin {align*} A\boldsymbol {v} & =\lambda \boldsymbol {v}\\ A\boldsymbol {v}-\lambda \boldsymbol {v} & =\boldsymbol {0}\\ (A-\lambda I)\boldsymbol {v} & =\boldsymbol {0}\\ \left ( \left [ \begin {array} [c]{ccc}3 & -3 & 1\\ 2 & -2 & 1\\ 0 & 0 & 1 \end {array} \right ] -(0)\left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left ( \left [ \begin {array} [c]{ccc}3 & -3 & 1\\ 2 & -2 & 1\\ 0 & 0 & 1 \end {array} \right ] -\left [ \begin {array} [c]{ccc}0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left [ \begin {array} [c]{ccc}3 & -3 & 1\\ 2 & -2 & 1\\ 0 & 0 & 1 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \end {align*}

We will now do Gaussian elimination in order to solve for the eigenvector. The augmented matrix is \[ \left [ \begin {array} [c]{@{}ccc|c}3 & -3 & 1 & 0\\ 2 & -2 & 1 & 0\\ 0 & 0 & 1 & 0 \end {array} \right ] \]\[ R_{2}=R_{2}-\frac {2R_{1}}{3}\Longrightarrow \left [ \begin {array} [c]{@{}ccc|c}3 & -3 & 1 & 0\\ 0 & 0 & {\frac {1}{3}} & 0\\ 0 & 0 & 1 & 0 \end {array} \right ] \]\[ R_{3}=R_{3}-3R_{2}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}ccc|c}3 & -3 & 1 & 0\\ 0 & 0 & {\frac {1}{3}} & 0\\ 0 & 0 & 0 & 0 \end {array} \right ] \] Therefore the system in echelon form is \[ \left [ \begin {array} [c]{ccc}3 & -3 & 1\\ 0 & 0 & \frac {1}{3}\\ 0 & 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \] The free variables are \(\{v_{2}\}\) and the leading variables are \(\{v_{1},v_{3}\}\). Let \(v_{2}=t\). We now do back substitution, solving the above equation for the leading variables in terms of the free variables. The second row gives \(v_{3}=0\). The first row gives \(3v_{1}-3v_{2}=0\), or \(v_{1}=t\). Hence the solution is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}t\\ t\\ 0 \end {array} \right ] \] Since there is one free variable, we have found one eigenvector associated with this eigenvalue. The above can be written as \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =t\left [ \begin {array} [c]{c}1\\ 1\\ 0 \end {array} \right ] \] Or, by letting \(t=1\), the eigenvector is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}1\\ 1\\ 0 \end {array} \right ] \]

\(\lambda =1\)

We need now to determine the eigenvector \(\boldsymbol {v}\) where \begin {align*} A\boldsymbol {v} & =\lambda \boldsymbol {v}\\ A\boldsymbol {v}-\lambda \boldsymbol {v} & =\boldsymbol {0}\\ (A-\lambda I)\boldsymbol {v} & =\boldsymbol {0}\\ \left ( \left [ \begin {array} [c]{ccc}3 & -3 & 1\\ 2 & -2 & 1\\ 0 & 0 & 1 \end {array} \right ] -(1)\left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left ( \left [ \begin {array} [c]{ccc}3 & -3 & 1\\ 2 & -2 & 1\\ 0 & 0 & 1 \end {array} \right ] -\left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left [ \begin {array} [c]{ccc}2 & -3 & 1\\ 2 & -3 & 1\\ 0 & 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \end {align*}

We will now do Gaussian elimination in order to solve for the eigenvector. The augmented matrix is \[ \left [ \begin {array} [c]{@{}ccc|c}2 & -3 & 1 & 0\\ 2 & -3 & 1 & 0\\ 0 & 0 & 0 & 0 \end {array} \right ] \]\[ R_{2}=R_{2}-R_{1}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}ccc|c}2 & -3 & 1 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end {array} \right ] \] Therefore the system in echelon form is \[ \left [ \begin {array} [c]{ccc}2 & -3 & 1\\ 0 & 0 & 0\\ 0 & 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \] The free variables are \(\{v_{2},v_{3}\}\) and the leading variables are \(\{v_{1}\}\). Let \(v_{2}=t\) and \(v_{3}=s\). We now do back substitution, solving the above equation for the leading variables in terms of the free variables. The first row gives \(2v_{1}-3v_{2}+v_{3}=0\), or \(v_{1}=\frac {3t}{2}-\frac {s}{2}\). Hence the solution is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}\frac {3t}{2}-\frac {s}{2}\\ t\\ s \end {array} \right ] \] Since there are two free variables, we have found two eigenvectors associated with this eigenvalue. The above can be written as \begin {align*} \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}\frac {3t}{2}\\ t\\ 0 \end {array} \right ] +\left [ \begin {array} [c]{c}-\frac {s}{2}\\ 0\\ s \end {array} \right ] \\ & =t\left [ \begin {array} [c]{c}\frac {3}{2}\\ 1\\ 0 \end {array} \right ] +s\left [ \begin {array} [c]{c}-\frac {1}{2}\\ 0\\ 1 \end {array} \right ] \end {align*}

By letting \(t=1\) and \(s=1\), the above becomes \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}\frac {3}{2}\\ 1\\ 0 \end {array} \right ] +\left [ \begin {array} [c]{c}-\frac {1}{2}\\ 0\\ 1 \end {array} \right ] \] Hence the two eigenvectors associated with this eigenvalue are \[ \left ( \left [ \begin {array} [c]{c}\frac {3}{2}\\ 1\\ 0 \end {array} \right ] ,\left [ \begin {array} [c]{c}-\frac {1}{2}\\ 0\\ 1 \end {array} \right ] \right ) \] Which can be scaled to have integer entries, giving \[ \left ( \left [ \begin {array} [c]{c}3\\ 2\\ 0 \end {array} \right ] ,\left [ \begin {array} [c]{c}-1\\ 0\\ 2 \end {array} \right ] \right ) \] The following table summarizes the result found above.






\(\lambda \) | algebraic multiplicity | geometric multiplicity | defective eigenvalue? | associated eigenvectors
\(0\) | \(1\) | \(1\) | No | \(\left [ \begin {array} [c]{c}1\\ 1\\ 0 \end {array} \right ] \)
\(1\) | \(2\) | \(2\) | No | \(\left [ \begin {array} [c]{cc}3 & -1\\ 2 & 0\\ 0 & 2 \end {array} \right ] \)





Since the matrix is not defective, it is diagonalizable. Let \(P\) be the matrix whose columns are the eigenvectors found, and let \(D\) be the diagonal matrix with the eigenvalues on its diagonal. Then we can write \[ A=PDP^{-1}\] where \begin {align*} D & =\left [ \begin {array} [c]{ccc}0 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \\ P & =\left [ \begin {array} [c]{ccc}1 & 3 & -1\\ 1 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] \end {align*}

Therefore \[ \left [ \begin {array} [c]{ccc}3 & -3 & 1\\ 2 & -2 & 1\\ 0 & 0 & 1 \end {array} \right ] =\left [ \begin {array} [c]{ccc}1 & 3 & -1\\ 1 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] \left [ \begin {array} [c]{ccc}0 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & 3 & -1\\ 1 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] ^{-1}\]
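As with the previous problem, the factorization can be checked numerically (a sketch assuming numpy is available):

```python
import numpy as np

A = np.array([[3.0, -3.0, 1.0],
              [2.0, -2.0, 1.0],
              [0.0,  0.0, 1.0]])
P = np.array([[1.0, 3.0, -1.0],
              [1.0, 2.0,  0.0],
              [0.0, 0.0,  2.0]])   # eigenvector columns
D = np.diag([0.0, 1.0, 1.0])       # matching eigenvalues

reconstructed = P @ D @ np.linalg.inv(P)
print(np.allclose(reconstructed, A))  # prints True
```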

2.8.4 Problem 19 section 6.2

In Problems 1 through 28, determine whether or not the given matrix \(A\) is diagonalizable. If it is, find a diagonalizing matrix \(P\) and a diagonal matrix \(D\) such that \(P^{-1}AP=D\) \[ \left [ \begin {array} [c]{ccc}1 & 1 & -1\\ -2 & 4 & -1\\ -4 & 4 & 1 \end {array} \right ] \] Solution

The first step is to determine the characteristic polynomial of the matrix in order to find the eigenvalues of the matrix \(A\). This is given by \begin {align*} \det (A-\lambda I) & =0\\ \det \left ( \left [ \begin {array} [c]{ccc}1 & 1 & -1\\ -2 & 4 & -1\\ -4 & 4 & 1 \end {array} \right ] -\lambda \left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) & =0\\ \det \left [ \begin {array} [c]{ccc}1-\lambda & 1 & -1\\ -2 & 4-\lambda & -1\\ -4 & 4 & 1-\lambda \end {array} \right ] & =0\\ -\lambda ^{3}+6\lambda ^{2}-11\lambda +6 & =0 \end {align*}

Expanding along the first row gives

\begin {align*} \left ( 1-\lambda \right ) \begin {vmatrix} 4-\lambda & -1\\ 4 & 1-\lambda \end {vmatrix} -\begin {vmatrix} -2 & -1\\ -4 & 1-\lambda \end {vmatrix} -\begin {vmatrix} -2 & 4-\lambda \\ -4 & 4 \end {vmatrix} & =0\\ \left ( 1-\lambda \right ) \left ( \left ( 4-\lambda \right ) \left ( 1-\lambda \right ) +4\right ) -\left ( -2\left ( 1-\lambda \right ) -4\right ) -\left ( -8+4\left ( 4-\lambda \right ) \right ) & =0\\ -\lambda ^{3}+6\lambda ^{2}-13\lambda +8-\left ( 2\lambda -6\right ) -\left ( 8-4\lambda \right ) & =0\\ -\lambda ^{3}+6\lambda ^{2}-11\lambda +6 & =0\\ \lambda ^{3}-6\lambda ^{2}+11\lambda -6 & =0 \end {align*}

Trying \(\lambda =1\)\begin {align*} 1^{3}-6+11-6 & =0\\ 0 & =0 \end {align*}

Hence \(\left ( \lambda -1\right ) \) is a factor. Doing long division \(\frac {\lambda ^{3}-6\lambda ^{2}+11\lambda -6}{\left ( \lambda -1\right ) }=\lambda ^{2}-5\lambda +6\). This can be factored as \(\left ( \lambda -2\right ) \left ( \lambda -3\right ) \). Therefore\[ \lambda ^{3}-6\lambda ^{2}+11\lambda -6=\left ( \lambda -1\right ) \left ( \lambda -2\right ) \left ( \lambda -3\right ) \] Hence the eigenvalues are\begin {align*} \lambda _{1} & =1\\ \lambda _{2} & =2\\ \lambda _{3} & =3 \end {align*}
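The factorization of the characteristic polynomial can be double-checked by computing the roots of \(\lambda ^{3}-6\lambda ^{2}+11\lambda -6\) numerically (assuming numpy is available):

```python
import numpy as np

# Coefficients of lambda^3 - 6 lambda^2 + 11 lambda - 6, highest degree first
coeffs = [1.0, -6.0, 11.0, -6.0]

roots = np.sort(np.roots(coeffs).real)
print(roots)  # expect values close to [1. 2. 3.]
```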

This table summarizes the result




eigenvalue | algebraic multiplicity | type of eigenvalue
\(1\) | \(1\) | real eigenvalue
\(2\) | \(1\) | real eigenvalue
\(3\) | \(1\) | real eigenvalue



For each eigenvalue \(\lambda \) found above, we now find the corresponding eigenvector.

\(\lambda =1\)

We need now to determine the eigenvector \(\boldsymbol {v}\) where \begin {align*} A\boldsymbol {v} & =\lambda \boldsymbol {v}\\ A\boldsymbol {v}-\lambda \boldsymbol {v} & =\boldsymbol {0}\\ (A-\lambda I)\boldsymbol {v} & =\boldsymbol {0}\\ \left ( \left [ \begin {array} [c]{ccc}1 & 1 & -1\\ -2 & 4 & -1\\ -4 & 4 & 1 \end {array} \right ] -(1)\left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left ( \left [ \begin {array} [c]{ccc}1 & 1 & -1\\ -2 & 4 & -1\\ -4 & 4 & 1 \end {array} \right ] -\left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left [ \begin {array} [c]{ccc}0 & 1 & -1\\ -2 & 3 & -1\\ -4 & 4 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \end {align*}

We will now do Gaussian elimination in order to solve for the eigenvector. The augmented matrix is \[ \left [ \begin {array} [c]{@{}ccc|c}0 & 1 & -1 & 0\\ -2 & 3 & -1 & 0\\ -4 & 4 & 0 & 0 \end {array} \right ] \] The current pivot \(A(1,1)\) is zero, so we swap the pivot row with a row that has a nonzero entry in the pivot column. Swapping row \(1\) and row \(2\) gives \[ \left [ \begin {array} [c]{@{}ccc|c}-2 & 3 & -1 & 0\\ 0 & 1 & -1 & 0\\ -4 & 4 & 0 & 0 \end {array} \right ] \]\[ R_{3}=R_{3}-2R_{1}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}ccc|c}-2 & 3 & -1 & 0\\ 0 & 1 & -1 & 0\\ 0 & -2 & 2 & 0 \end {array} \right ] \]\[ R_{3}=R_{3}+2R_{2}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}ccc|c}-2 & 3 & -1 & 0\\ 0 & 1 & -1 & 0\\ 0 & 0 & 0 & 0 \end {array} \right ] \] Therefore the system in echelon form is \[ \left [ \begin {array} [c]{ccc}-2 & 3 & -1\\ 0 & 1 & -1\\ 0 & 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \] The free variables are \(\{v_{3}\}\) and the leading variables are \(\{v_{1},v_{2}\}\). Let \(v_{3}=t\). We now do back substitution, solving the above equation for the leading variables in terms of the free variables. The second row gives \(v_{2}=v_{3}=t\). The first row gives \(-2v_{1}+3v_{2}-v_{3}=0\), or \(-2v_{1}=-3t+t=-2t\), hence \(v_{1}=t\). Therefore the solution is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}t\\ t\\ t \end {array} \right ] \] Since there is one free variable, we have found one eigenvector associated with this eigenvalue.
The above can be written as \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =t\left [ \begin {array} [c]{c}1\\ 1\\ 1 \end {array} \right ] \] Or, by letting \(t=1\), the eigenvector is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}1\\ 1\\ 1 \end {array} \right ] \]

\(\lambda =2\)

We need now to determine the eigenvector \(\boldsymbol {v}\) where \begin {align*} A\boldsymbol {v} & =\lambda \boldsymbol {v}\\ A\boldsymbol {v}-\lambda \boldsymbol {v} & =\boldsymbol {0}\\ (A-\lambda I)\boldsymbol {v} & =\boldsymbol {0}\\ \left ( \left [ \begin {array} [c]{ccc}1 & 1 & -1\\ -2 & 4 & -1\\ -4 & 4 & 1 \end {array} \right ] -(2)\left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left ( \left [ \begin {array} [c]{ccc}1 & 1 & -1\\ -2 & 4 & -1\\ -4 & 4 & 1 \end {array} \right ] -\left [ \begin {array} [c]{ccc}2 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left [ \begin {array} [c]{ccc}-1 & 1 & -1\\ -2 & 2 & -1\\ -4 & 4 & -1 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \end {align*}

We will now do Gaussian elimination in order to solve for the eigenvector. The augmented matrix is \[ \left [ \begin {array} [c]{@{}ccc|c}-1 & 1 & -1 & 0\\ -2 & 2 & -1 & 0\\ -4 & 4 & -1 & 0 \end {array} \right ] \]\[ R_{2}=R_{2}-2R_{1}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}ccc|c}-1 & 1 & -1 & 0\\ 0 & 0 & 1 & 0\\ -4 & 4 & -1 & 0 \end {array} \right ] \]\[ R_{3}=R_{3}-4R_{1}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}ccc|c}-1 & 1 & -1 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 3 & 0 \end {array} \right ] \]\[ R_{3}=R_{3}-3R_{2}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}ccc|c}-1 & 1 & -1 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 \end {array} \right ] \] Therefore the system in echelon form is \[ \left [ \begin {array} [c]{ccc}-1 & 1 & -1\\ 0 & 0 & 1\\ 0 & 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \] The free variables are \(\{v_{2}\}\) and the leading variables are \(\{v_{1},v_{3}\}\). Let \(v_{2}=t\). The second row gives \(v_{3}=0\). The first row gives \(-v_{1}+v_{2}=0\), or \(v_{1}=t\). Hence the solution is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}t\\ t\\ 0 \end {array} \right ] \] Since there is one free variable, we have found one eigenvector associated with this eigenvalue. The above can be written as \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =t\left [ \begin {array} [c]{c}1\\ 1\\ 0 \end {array} \right ] \] Or, by letting \(t=1\), the eigenvector is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}1\\ 1\\ 0 \end {array} \right ] \]

\(\lambda =3\)

We need now to determine the eigenvector \(\boldsymbol {v}\) where \begin {align*} A\boldsymbol {v} & =\lambda \boldsymbol {v}\\ A\boldsymbol {v}-\lambda \boldsymbol {v} & =\boldsymbol {0}\\ (A-\lambda I)\boldsymbol {v} & =\boldsymbol {0}\\ \left ( \left [ \begin {array} [c]{ccc}1 & 1 & -1\\ -2 & 4 & -1\\ -4 & 4 & 1 \end {array} \right ] -(3)\left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left ( \left [ \begin {array} [c]{ccc}1 & 1 & -1\\ -2 & 4 & -1\\ -4 & 4 & 1 \end {array} \right ] -\left [ \begin {array} [c]{ccc}3 & 0 & 0\\ 0 & 3 & 0\\ 0 & 0 & 3 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left [ \begin {array} [c]{ccc}-2 & 1 & -1\\ -2 & 1 & -1\\ -4 & 4 & -2 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \end {align*}

We will now do Gaussian elimination in order to solve for the eigenvector. The augmented matrix is \[ \left [ \begin {array} [c]{@{}ccc|c}-2 & 1 & -1 & 0\\ -2 & 1 & -1 & 0\\ -4 & 4 & -2 & 0 \end {array} \right ] \]\[ R_{2}=R_{2}-R_{1}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}ccc|c}-2 & 1 & -1 & 0\\ 0 & 0 & 0 & 0\\ -4 & 4 & -2 & 0 \end {array} \right ] \]\[ R_{3}=R_{3}-2R_{1}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}ccc|c}-2 & 1 & -1 & 0\\ 0 & 0 & 0 & 0\\ 0 & 2 & 0 & 0 \end {array} \right ] \] The current pivot \(A(2,2)\) is zero, so we swap the pivot row with a row that has a nonzero entry in the pivot column. Swapping row \(2\) and row \(3\) gives \[ \left [ \begin {array} [c]{@{}ccc|c}-2 & 1 & -1 & 0\\ 0 & 2 & 0 & 0\\ 0 & 0 & 0 & 0 \end {array} \right ] \] Therefore the system in echelon form is \[ \left [ \begin {array} [c]{ccc}-2 & 1 & -1\\ 0 & 2 & 0\\ 0 & 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \] The free variables are \(\{v_{3}\}\) and the leading variables are \(\{v_{1},v_{2}\}\). Let \(v_{3}=t\). We now do back substitution, solving the above equation for the leading variables in terms of the free variables. The second row gives \(v_{2}=0\). The first row gives \(-2v_{1}=v_{3}=t\), hence \(v_{1}=-\frac {t}{2}\). Therefore \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}-\frac {t}{2}\\ 0\\ t \end {array} \right ] \] Since there is one free variable, we have found one eigenvector associated with this eigenvalue.
The above can be written as \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =t\left [ \begin {array} [c]{c}-\frac {1}{2}\\ 0\\ 1 \end {array} \right ] \] Or, by letting \(t=1\), the eigenvector is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}-\frac {1}{2}\\ 0\\ 1 \end {array} \right ] \] Which can be scaled to have integer entries, giving \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}-1\\ 0\\ 2 \end {array} \right ] \] The following table summarizes the result found above.






\(\lambda \) | algebraic multiplicity | geometric multiplicity | defective eigenvalue? | associated eigenvectors
\(1\) | \(1\) | \(1\) | No | \(\left [ \begin {array} [c]{c}1\\ 1\\ 1 \end {array} \right ] \)
\(2\) | \(1\) | \(1\) | No | \(\left [ \begin {array} [c]{c}1\\ 1\\ 0 \end {array} \right ] \)
\(3\) | \(1\) | \(1\) | No | \(\left [ \begin {array} [c]{c}-1\\ 0\\ 2 \end {array} \right ] \)





Since the matrix is not defective, it is diagonalizable. Let \(P\) be the matrix whose columns are the eigenvectors found, and let \(D\) be the diagonal matrix with the eigenvalues on its diagonal. Then we can write \[ A=PDP^{-1}\] where \begin {align*} D & =\left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 3 \end {array} \right ] \\ P & =\left [ \begin {array} [c]{ccc}1 & 1 & -1\\ 1 & 1 & 0\\ 1 & 0 & 2 \end {array} \right ] \end {align*}

Therefore \[ \left [ \begin {array} [c]{ccc}1 & 1 & -1\\ -2 & 4 & -1\\ -4 & 4 & 1 \end {array} \right ] =\left [ \begin {array} [c]{ccc}1 & 1 & -1\\ 1 & 1 & 0\\ 1 & 0 & 2 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 3 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & 1 & -1\\ 1 & 1 & 0\\ 1 & 0 & 2 \end {array} \right ] ^{-1}\]
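Again the factorization can be checked numerically (a sketch assuming numpy is available):

```python
import numpy as np

A = np.array([[ 1.0, 1.0, -1.0],
              [-2.0, 4.0, -1.0],
              [-4.0, 4.0,  1.0]])
P = np.array([[1.0, 1.0, -1.0],
              [1.0, 1.0,  0.0],
              [1.0, 0.0,  2.0]])   # eigenvector columns
D = np.diag([1.0, 2.0, 3.0])       # matching eigenvalues

reconstructed = P @ D @ np.linalg.inv(P)
print(np.allclose(reconstructed, A))  # prints True
```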

2.8.5 Problem 7 section 6.3

In Problems 1 through 10, a matrix \(A\) is given. Use the method of Example 1 to compute \(A^{5}\). \[ \left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] \] Solution

If \(A\) is diagonalizable, then by first writing \(A=PDP^{-1}\) we obtain \(A^{5}=PD^{5}P^{-1}\), and since \(D\) is a diagonal matrix, it is easy to raise to a power: \(D^{5}\) is obtained by raising each diagonal entry to the fifth power. So the first step is to diagonalize \(A\) as we did in the problems above.
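The identity \(A^{5}=PD^{5}P^{-1}\) can be sanity-checked numerically before doing the hand computation; this sketch (assuming numpy is available) compares it against repeated matrix multiplication:

```python
import numpy as np

A = np.array([[1.0, 3.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])

# Diagonalize numerically: columns of P are eigenvectors, w the eigenvalues
w, P = np.linalg.eig(A)
D5 = np.diag(w ** 5)          # a diagonal matrix is powered entrywise

A5 = P @ D5 @ np.linalg.inv(P)
same = np.allclose(A5, np.linalg.matrix_power(A, 5))
print(same)  # prints True
```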

The first step is to determine the characteristic polynomial of the matrix in order to find the eigenvalues of the matrix \(A\). This is given by \begin {align*} \det (A-\lambda I) & =0\\ \det \left ( \left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] -\lambda \left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) & =0\\ \det \left [ \begin {array} [c]{ccc}1-\lambda & 3 & 0\\ 0 & 2-\lambda & 0\\ 0 & 0 & 2-\lambda \end {array} \right ] & =0 \end {align*}

Expansion along the first column gives\begin {align*} \left ( 1-\lambda \right ) \begin {vmatrix} 2-\lambda & 0\\ 0 & 2-\lambda \end {vmatrix} & =0\\ \left ( 1-\lambda \right ) \left ( 2-\lambda \right ) \left ( 2-\lambda \right ) & =0 \end {align*}

Therefore the eigenvalues are\begin {align*} \lambda _{1} & =1\\ \lambda _{2} & =2\\ \lambda _{3} & =2 \end {align*}
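Since this matrix is upper triangular, its eigenvalues can also be read directly off the diagonal, which agrees with the factored characteristic polynomial; a quick numerical check (assuming numpy is available):

```python
import numpy as np

A = np.array([[1.0, 3.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])

# A is upper triangular, so its eigenvalues are its diagonal entries
diag_entries = np.sort(np.diag(A))
eigvals = np.sort(np.linalg.eigvals(A).real)
print(diag_entries)  # prints [1. 2. 2.]
print(eigvals)       # expect values close to [1. 2. 2.]
```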

This table summarizes the result




eigenvalue | algebraic multiplicity | type of eigenvalue
\(1\) | \(1\) | real eigenvalue
\(2\) | \(2\) | real eigenvalue



For each eigenvalue \(\lambda \) found above, we now find the corresponding eigenvector.

\(\lambda =1\)

We need now to determine the eigenvector \(\boldsymbol {v}\) where

\begin {align*} A\boldsymbol {v} & =\lambda \boldsymbol {v}\\ A\boldsymbol {v}-\lambda \boldsymbol {v} & =\boldsymbol {0}\\ (A-\lambda I)\boldsymbol {v} & =\boldsymbol {0}\\ \left ( \left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] -(1)\left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left ( \left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] -\left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left [ \begin {array} [c]{ccc}0 & 3 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \end {align*}

We will now do Gaussian elimination in order to solve for the eigenvector. The augmented matrix is \[ \left [ \begin {array} [c]{@{}ccc|c}0 & 3 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \end {array} \right ] \]\[ R_{2}=R_{2}-\frac {R_{1}}{3}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}ccc|c}0 & 3 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 \end {array} \right ] \] The current pivot \(A(2,3)\) is zero, so we swap row \(2\) with row \(3\), giving \[ \left [ \begin {array} [c]{@{}ccc|c}0 & 3 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 \end {array} \right ] \] Therefore the system in echelon form is \[ \left [ \begin {array} [c]{ccc}0 & 3 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \] The free variables are \(\{v_{1}\}\) and the leading variables are \(\{v_{2},v_{3}\}\). Let \(v_{1}=t\). Now we start back substitution. Second row gives \(v_{3}=0\). First row gives \(3v_{2}=0\), so \(v_{2}=0\). Hence the solution is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}t\\ 0\\ 0 \end {array} \right ] \] Since there is one free variable, we have found one eigenvector associated with this eigenvalue. The above can be written as \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =t\left [ \begin {array} [c]{c}1\\ 0\\ 0 \end {array} \right ] \] Or, letting \(t=1\), the eigenvector is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}1\\ 0\\ 0 \end {array} \right ] \]

\(\lambda =2\)

We need now to determine the eigenvector \(\boldsymbol {v}\) where \begin {align*} A\boldsymbol {v} & =\lambda \boldsymbol {v}\\ A\boldsymbol {v}-\lambda \boldsymbol {v} & =\boldsymbol {0}\\ (A-\lambda I)\boldsymbol {v} & =\boldsymbol {0}\\ \left ( \left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] -(2)\left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left ( \left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] -\left [ \begin {array} [c]{ccc}2 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left [ \begin {array} [c]{ccc}-1 & 3 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \end {align*}

We will now do Gaussian elimination in order to solve for the eigenvector. The augmented matrix is \[ \left [ \begin {array} [c]{@{}ccc|c}-1 & 3 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end {array} \right ] \] Therefore the system in echelon form is \[ \left [ \begin {array} [c]{ccc}-1 & 3 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \] The free variables are \(\{v_{2},v_{3}\}\) and the leading variables are \(\{v_{1}\}\). Let \(v_{2}=t\) and \(v_{3}=s\). Now we start back substitution. First row gives \(-v_{1}+3v_{2}=0\), or \(v_{1}=3v_{2}=3t\). Hence the solution is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}3t\\ t\\ s \end {array} \right ] \] Since there are two free variables, we have found two eigenvectors associated with this eigenvalue. The above can be written as \begin {align*} \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}3t\\ t\\ 0 \end {array} \right ] +\left [ \begin {array} [c]{c}0\\ 0\\ s \end {array} \right ] \\ & =t\left [ \begin {array} [c]{c}3\\ 1\\ 0 \end {array} \right ] +s\left [ \begin {array} [c]{c}0\\ 0\\ 1 \end {array} \right ] \end {align*}

Letting \(t=1\) and \(s=1\), the above becomes \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}3\\ 1\\ 0 \end {array} \right ] +\left [ \begin {array} [c]{c}0\\ 0\\ 1 \end {array} \right ] \] Hence the two eigenvectors associated with this eigenvalue are \[ \left ( \left [ \begin {array} [c]{c}3\\ 1\\ 0 \end {array} \right ] ,\left [ \begin {array} [c]{c}0\\ 0\\ 1 \end {array} \right ] \right ) \] The following table summarizes the result found above.






\(\lambda \) algebraic geometric defective associated
multiplicity multiplicity eigenvalue? eigenvectors





\(1\) \(1\) \(1\) No \(\left [ \begin {array} [c]{c}1\\ 0\\ 0 \end {array} \right ] \)





\(2\) \(2\) \(2\) No \(\left [ \begin {array} [c]{c}3\\ 1\\ 0 \end {array} \right ] ,\left [ \begin {array} [c]{c}0\\ 0\\ 1 \end {array} \right ] \)





Since the matrix is not defective, it is diagonalizable. Let \(P\) be the matrix whose columns are the eigenvectors found, and let \(D\) be the diagonal matrix with the eigenvalues on its diagonal. Then we can write \[ A=PDP^{-1}\] Where \begin {align*} D & =\left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] \\ P & =\left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \end {align*}

Therefore \[ \left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] =\left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] ^{-1}\] Now that we have diagonalized \(A\), we can finally answer the question.\begin {align*} \left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] ^{5} & =\left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] ^{5}\left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] ^{-1}\\ & =\left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 2^{5} & 0\\ 0 & 0 & 2^{5}\end {array} \right ] \left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] ^{-1}\\ & =\left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 32 & 0\\ 0 & 0 & 32 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] ^{-1} \end {align*}

But\[ \left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 32 & 0\\ 0 & 0 & 32 \end {array} \right ] =\left [ \begin {array} [c]{ccc}1 & 96 & 0\\ 0 & 32 & 0\\ 0 & 0 & 32 \end {array} \right ] \] Therefore\begin {equation} \left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] ^{5}=\left [ \begin {array} [c]{ccc}1 & 96 & 0\\ 0 & 32 & 0\\ 0 & 0 & 32 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] ^{-1} \tag {1} \end {equation} We now need to find \(\left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] ^{-1}\). The augmented matrix is\[ \left [ \begin {array} [c]{cccccc}1 & 3 & 0 & 1 & 0 & 0\\ 0 & 1 & 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 0 & 0 & 1 \end {array} \right ] \] \(R_{1}\rightarrow R_{1}-3R_{2}\)\[ \left [ \begin {array} [c]{cccccc}1 & 0 & 0 & 1 & -3 & 0\\ 0 & 1 & 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 0 & 0 & 1 \end {array} \right ] \] Since the left half is now \(I\), the right half is the inverse. Therefore \(\left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] ^{-1}=\left [ \begin {array} [c]{ccc}1 & -3 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \). Hence (1) becomes\begin {align*} \left [ \begin {array} [c]{ccc}1 & 3 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end {array} \right ] ^{5} & =\left [ \begin {array} [c]{ccc}1 & 96 & 0\\ 0 & 32 & 0\\ 0 & 0 & 32 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & -3 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \\ & =\left [ \begin {array} [c]{ccc}1 & 93 & 0\\ 0 & 32 & 0\\ 0 & 0 & 32 \end {array} \right ] \end {align*}
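This result can be double-checked directly (a NumPy sketch; `matrix_power` multiplies the matrix out repeatedly instead of diagonalizing):

```python
import numpy as np

A = np.array([[1, 3, 0],
              [0, 2, 0],
              [0, 0, 2]])

# Integer arithmetic here, so the result is exact.
A5 = np.linalg.matrix_power(A, 5)
print(A5)  # rows: [1, 93, 0], [0, 32, 0], [0, 0, 32]
```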

2.8.6 Problem 13 section 6.3

Find \(A^{10}\). \[ \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 2 & -2 & 1\\ 4 & -4 & 1 \end {array} \right ] \] Solution

If \(A\) is diagonalizable, then writing \(A=PDP^{-1}\) gives \(A^{10}=PD^{10}P^{-1}\). And since \(D\) is a diagonal matrix, it is easy to raise it to a power. So the first step is to diagonalize \(A\) as we did in the problems above.

Find the eigenvalues and associated eigenvectors of the matrix \[ \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 2 & -2 & 1\\ 4 & -4 & 1 \end {array} \right ] \] The first step is to determine the characteristic polynomial of the matrix in order to find the eigenvalues of the matrix \(A\). This is given by \begin {align*} \det (A-\lambda I) & =0\\ \det \left ( \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 2 & -2 & 1\\ 4 & -4 & 1 \end {array} \right ] -\lambda \left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) & =0\\ \det \left [ \begin {array} [c]{ccc}1-\lambda & -1 & 1\\ 2 & -2-\lambda & 1\\ 4 & -4 & 1-\lambda \end {array} \right ] & =0\\ \left ( 1-\lambda \right ) \begin {vmatrix} -2-\lambda & 1\\ -4 & 1-\lambda \end {vmatrix} +\begin {vmatrix} 2 & 1\\ 4 & 1-\lambda \end {vmatrix} +\begin {vmatrix} 2 & -2-\lambda \\ 4 & -4 \end {vmatrix} & =0\\ \left ( 1-\lambda \right ) \left ( \left ( -2-\lambda \right ) \left ( 1-\lambda \right ) +4\right ) +2\left ( 1-\lambda \right ) -4+\left ( -8\right ) -4\left ( -2-\lambda \right ) & =0\\ -\lambda ^{3}-\lambda +2-2\lambda -2+4\lambda & =0\\ \lambda -\lambda ^{3} & =0\\ \lambda \left ( 1-\lambda ^{2}\right ) & =0 \end {align*}

Therefore the eigenvalues are\begin {align*} \lambda _{1} & =0\\ \lambda _{2} & =1\\ \lambda _{3} & =-1 \end {align*}

This table summarizes the result




eigenvalue algebraic multiplicity type of eigenvalue



\(-1\) \(1\) real eigenvalue



\(0\) \(1\) real eigenvalue



\(1\) \(1\) real eigenvalue



For each eigenvalue \(\lambda \) found above, we now find the corresponding eigenvector.

\(\lambda =-1\)

We need now to determine the eigenvector \(\boldsymbol {v}\) where \begin {align*} A\boldsymbol {v} & =\lambda \boldsymbol {v}\\ A\boldsymbol {v}-\lambda \boldsymbol {v} & =\boldsymbol {0}\\ (A-\lambda I)\boldsymbol {v} & =\boldsymbol {0}\\ \left ( \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 2 & -2 & 1\\ 4 & -4 & 1 \end {array} \right ] -(-1)\left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left ( \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 2 & -2 & 1\\ 4 & -4 & 1 \end {array} \right ] -\left [ \begin {array} [c]{ccc}-1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & -1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left [ \begin {array} [c]{ccc}2 & -1 & 1\\ 2 & -1 & 1\\ 4 & -4 & 2 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \end {align*}

We will now do Gaussian elimination in order to solve for the eigenvector. The augmented matrix is \[ \left [ \begin {array} [c]{@{}ccc|c}2 & -1 & 1 & 0\\ 2 & -1 & 1 & 0\\ 4 & -4 & 2 & 0 \end {array} \right ] \]\[ R_{2}=R_{2}-R_{1}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}ccc|c}2 & -1 & 1 & 0\\ 0 & 0 & 0 & 0\\ 4 & -4 & 2 & 0 \end {array} \right ] \]\[ R_{3}=R_{3}-2R_{1}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}ccc|c}2 & -1 & 1 & 0\\ 0 & 0 & 0 & 0\\ 0 & -2 & 0 & 0 \end {array} \right ] \] The current pivot \(A(2,2)\) is zero, so we swap row \(2\) with row \(3\), giving \[ \left [ \begin {array} [c]{@{}ccc|c}2 & -1 & 1 & 0\\ 0 & -2 & 0 & 0\\ 0 & 0 & 0 & 0 \end {array} \right ] \] Therefore the system in echelon form is \[ \left [ \begin {array} [c]{ccc}2 & -1 & 1\\ 0 & -2 & 0\\ 0 & 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \] The free variables are \(\{v_{3}\}\) and the leading variables are \(\{v_{1},v_{2}\}\). Let \(v_{3}=t\). Now we start back substitution. Second row gives \(v_{2}=0\). First row gives \(2v_{1}+t=0\) or \(v_{1}=-\frac {t}{2}\). Hence the solution is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}-\frac {t}{2}\\ 0\\ t \end {array} \right ] \] Since there is one free variable, we have found one eigenvector associated with this eigenvalue.
The above can be written as \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =t\left [ \begin {array} [c]{c}-\frac {1}{2}\\ 0\\ 1 \end {array} \right ] \] Or, letting \(t=1\), the eigenvector is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}-\frac {1}{2}\\ 0\\ 1 \end {array} \right ] \] Multiplying by \(2\) to clear the fraction (i.e. taking \(t=2\)) gives the eigenvector \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}-1\\ 0\\ 2 \end {array} \right ] \]

\(\lambda =0\)

We need now to determine the eigenvector \(\boldsymbol {v}\) where \begin {align*} A\boldsymbol {v} & =\lambda \boldsymbol {v}\\ A\boldsymbol {v}-\lambda \boldsymbol {v} & =\boldsymbol {0}\\ (A-\lambda I)\boldsymbol {v} & =\boldsymbol {0}\\ \left ( \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 2 & -2 & 1\\ 4 & -4 & 1 \end {array} \right ] -(0)\left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left ( \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 2 & -2 & 1\\ 4 & -4 & 1 \end {array} \right ] -\left [ \begin {array} [c]{ccc}0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 2 & -2 & 1\\ 4 & -4 & 1 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \end {align*}

We will now do Gaussian elimination in order to solve for the eigenvector. The augmented matrix is \[ \left [ \begin {array} [c]{@{}ccc|c}1 & -1 & 1 & 0\\ 2 & -2 & 1 & 0\\ 4 & -4 & 1 & 0 \end {array} \right ] \]\[ R_{2}=R_{2}-2R_{1}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}ccc|c}1 & -1 & 1 & 0\\ 0 & 0 & -1 & 0\\ 4 & -4 & 1 & 0 \end {array} \right ] \]\[ R_{3}=R_{3}-4R_{1}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}ccc|c}1 & -1 & 1 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & -3 & 0 \end {array} \right ] \]\[ R_{3}=R_{3}-3R_{2}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}ccc|c}1 & -1 & 1 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & 0 \end {array} \right ] \] Therefore the system in echelon form is \[ \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 0 & 0 & -1\\ 0 & 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \] The free variables are \(\{v_{2}\}\) and the leading variables are \(\{v_{1},v_{3}\}\). Let \(v_{2}=t\). Now we start back substitution. Second row gives \(v_{3}=0\). First row gives \(v_{1}-t=0\) or \(v_{1}=t\). Hence the solution is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}t\\ t\\ 0 \end {array} \right ] \] Since there is one free variable, we have found one eigenvector associated with this eigenvalue. The above can be written as \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =t\left [ \begin {array} [c]{c}1\\ 1\\ 0 \end {array} \right ] \] Or, letting \(t=1\), the eigenvector is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}1\\ 1\\ 0 \end {array} \right ] \]

\(\lambda =1\)

We need now to determine the eigenvector \(\boldsymbol {v}\) where \begin {align*} A\boldsymbol {v} & =\lambda \boldsymbol {v}\\ A\boldsymbol {v}-\lambda \boldsymbol {v} & =\boldsymbol {0}\\ (A-\lambda I)\boldsymbol {v} & =\boldsymbol {0}\\ \left ( \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 2 & -2 & 1\\ 4 & -4 & 1 \end {array} \right ] -(1)\left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left ( \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 2 & -2 & 1\\ 4 & -4 & 1 \end {array} \right ] -\left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \\ \left [ \begin {array} [c]{ccc}0 & -1 & 1\\ 2 & -3 & 1\\ 4 & -4 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \end {align*}

We will now do Gaussian elimination in order to solve for the eigenvector. The augmented matrix is \[ \left [ \begin {array} [c]{@{}ccc|c}0 & -1 & 1 & 0\\ 2 & -3 & 1 & 0\\ 4 & -4 & 0 & 0 \end {array} \right ] \] The current pivot \(A(1,1)\) is zero, so we swap row \(1\) with row \(2\), giving \[ \left [ \begin {array} [c]{@{}ccc|c}2 & -3 & 1 & 0\\ 0 & -1 & 1 & 0\\ 4 & -4 & 0 & 0 \end {array} \right ] \]\[ R_{3}=R_{3}-2R_{1}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}ccc|c}2 & -3 & 1 & 0\\ 0 & -1 & 1 & 0\\ 0 & 2 & -2 & 0 \end {array} \right ] \]\[ R_{3}=R_{3}+2R_{2}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}ccc|c}2 & -3 & 1 & 0\\ 0 & -1 & 1 & 0\\ 0 & 0 & 0 & 0 \end {array} \right ] \] Therefore the system in echelon form is \[ \left [ \begin {array} [c]{ccc}2 & -3 & 1\\ 0 & -1 & 1\\ 0 & 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}0\\ 0\\ 0 \end {array} \right ] \] The free variables are \(\{v_{3}\}\) and the leading variables are \(\{v_{1},v_{2}\}\). Let \(v_{3}=t\). Now we start back substitution. The second row gives \(-v_{2}+t=0\), or \(v_{2}=t\). First row gives \(2v_{1}-3v_{2}+t=0\) or \(2v_{1}=3v_{2}-t\) or \(v_{1}=\frac {3t-t}{2}=t\). Hence the solution is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}t\\ t\\ t \end {array} \right ] \] Since there is one free variable, we have found one eigenvector associated with this eigenvalue. The above can be written as \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =t\left [ \begin {array} [c]{c}1\\ 1\\ 1 \end {array} \right ] \] Or, letting \(t=1\), the eigenvector is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\\ v_{3}\end {array} \right ] =\left [ \begin {array} [c]{c}1\\ 1\\ 1 \end {array} \right ] \] The following table summarizes the result found above.






\(\lambda \) algebraic geometric defective associated
multiplicity multiplicity eigenvalue? eigenvectors





\(-1\) \(1\) \(1\) No \(\left [ \begin {array} [c]{c}-1\\ 0\\ 2 \end {array} \right ] \)





\(0\) \(1\) \(1\) No \(\left [ \begin {array} [c]{c}1\\ 1\\ 0 \end {array} \right ] \)





\(1\) \(1\) \(1\) No \(\left [ \begin {array} [c]{c}1\\ 1\\ 1 \end {array} \right ] \)





Since the matrix is not defective, it is diagonalizable. Let \(P\) be the matrix whose columns are the eigenvectors found, and let \(D\) be the diagonal matrix with the eigenvalues on its diagonal. Then we can write \[ A=PDP^{-1}\] Where \begin {align*} D & =\left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 0 \end {array} \right ] \\ P & =\left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 1 & 0 & 1\\ 1 & 2 & 0 \end {array} \right ] \end {align*}

Therefore \[ \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 2 & -2 & 1\\ 4 & -4 & 1 \end {array} \right ] =\left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 1 & 0 & 1\\ 1 & 2 & 0 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 1 & 0 & 1\\ 1 & 2 & 0 \end {array} \right ] ^{-1}\] Now that we have diagonalized \(A\), we can finally answer the question.\begin {align*} \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 2 & -2 & 1\\ 4 & -4 & 1 \end {array} \right ] ^{10} & =\left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 1 & 0 & 1\\ 1 & 2 & 0 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 0 \end {array} \right ] ^{10}\left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 1 & 0 & 1\\ 1 & 2 & 0 \end {array} \right ] ^{-1}\\ & =\left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 1 & 0 & 1\\ 1 & 2 & 0 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 1 & 0 & 1\\ 1 & 2 & 0 \end {array} \right ] ^{-1} \end {align*}

But \(\left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 1 & 0 & 1\\ 1 & 2 & 0 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0 \end {array} \right ] =\allowbreak \left [ \begin {array} [c]{ccc}1 & -1 & 0\\ 1 & 0 & 0\\ 1 & 2 & 0 \end {array} \right ] \). The above becomes\begin {equation} \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 2 & -2 & 1\\ 4 & -4 & 1 \end {array} \right ] ^{10}=\left [ \begin {array} [c]{ccc}1 & -1 & 0\\ 1 & 0 & 0\\ 1 & 2 & 0 \end {array} \right ] \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 1 & 0 & 1\\ 1 & 2 & 0 \end {array} \right ] ^{-1} \tag {1} \end {equation} We now just need to find \(P^{-1}\). Augmented matrix is \[ \left [ \begin {array} [c]{cccccc}1 & -1 & 1 & 1 & 0 & 0\\ 1 & 0 & 1 & 0 & 1 & 0\\ 1 & 2 & 0 & 0 & 0 & 1 \end {array} \right ] \] \(R_{2}\rightarrow R_{2}-R_{1}\)\[ \left [ \begin {array} [c]{cccccc}1 & -1 & 1 & 1 & 0 & 0\\ 0 & 1 & 0 & -1 & 1 & 0\\ 1 & 2 & 0 & 0 & 0 & 1 \end {array} \right ] \] \(R_{3}\rightarrow R_{3}-R_{1}\)\[ \left [ \begin {array} [c]{cccccc}1 & -1 & 1 & 1 & 0 & 0\\ 0 & 1 & 0 & -1 & 1 & 0\\ 0 & 3 & -1 & -1 & 0 & 1 \end {array} \right ] \] \(R_{3}\rightarrow R_{3}-3R_{2}\)\[ \left [ \begin {array} [c]{cccccc}1 & -1 & 1 & 1 & 0 & 0\\ 0 & 1 & 0 & -1 & 1 & 0\\ 0 & 0 & -1 & 2 & -3 & 1 \end {array} \right ] \] Now we start the reduced Echelon phase.

\(R_{3}\rightarrow -R_{3}\)\[ \left [ \begin {array} [c]{cccccc}1 & -1 & 1 & 1 & 0 & 0\\ 0 & 1 & 0 & -1 & 1 & 0\\ 0 & 0 & 1 & -2 & 3 & -1 \end {array} \right ] \] \(R_{1}\rightarrow R_{1}-R_{3}\)\[ \left [ \begin {array} [c]{cccccc}1 & -1 & 0 & 3 & -3 & 1\\ 0 & 1 & 0 & -1 & 1 & 0\\ 0 & 0 & 1 & -2 & 3 & -1 \end {array} \right ] \] \(R_{1}\rightarrow R_{1}+R_{2}\)\[ \left [ \begin {array} [c]{cccccc}1 & 0 & 0 & 2 & -2 & 1\\ 0 & 1 & 0 & -1 & 1 & 0\\ 0 & 0 & 1 & -2 & 3 & -1 \end {array} \right ] \] Since the left half is now \(I\), the right half is the inverse. Hence \[ P^{-1}=\left [ \begin {array} [c]{ccc}2 & -2 & 1\\ -1 & 1 & 0\\ -2 & 3 & -1 \end {array} \right ] \] Substituting the above in (1) gives\begin {align*} \left [ \begin {array} [c]{ccc}1 & -1 & 1\\ 2 & -2 & 1\\ 4 & -4 & 1 \end {array} \right ] ^{10} & =\left [ \begin {array} [c]{ccc}1 & -1 & 0\\ 1 & 0 & 0\\ 1 & 2 & 0 \end {array} \right ] \left [ \begin {array} [c]{ccc}2 & -2 & 1\\ -1 & 1 & 0\\ -2 & 3 & -1 \end {array} \right ] \\ & =\left [ \begin {array} [c]{ccc}3 & -3 & 1\\ 2 & -2 & 1\\ 0 & 0 & 1 \end {array} \right ] \end {align*}
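As a numerical check (a NumPy sketch, not part of the hand method), the integer matrix power can be computed exactly:

```python
import numpy as np

A = np.array([[1, -1, 1],
              [2, -2, 1],
              [4, -4, 1]])

A10 = np.linalg.matrix_power(A, 10)
print(A10)  # rows: [3, -3, 1], [2, -2, 1], [0, 0, 1]

# Because the eigenvalues are 0, 1, -1, every even power of A is the
# same matrix, so A^10 equals A^2.
print((A10 == np.linalg.matrix_power(A, 2)).all())  # True
```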

2.8.7 Problem 25 section 6.3

In Problems 25 through 30, a city-suburban population transition matrix \(A\) (as in Example 2) is given. Find the resulting long-term distribution of a constant total population between the city and its suburbs. \[ A=\left [ \begin {array} [c]{cc}0.9 & 0.1\\ 0.1 & 0.9 \end {array} \right ] \] Solution

The first step is to diagonalize \(A=PDP^{-1}\) and then evaluate \(A^{k}\) in the limit as \(k\rightarrow \infty \). Writing \(A\) as\[ A=\left [ \begin {array} [c]{cc}\frac {9}{10} & \frac {1}{10}\\ \frac {1}{10} & \frac {9}{10}\end {array} \right ] \] we determine the characteristic polynomial of the matrix in order to find the eigenvalues of \(A\). This is given by \begin {align*} \det (A-\lambda I) & =0\\ \det \left ( \left [ \begin {array} [c]{cc}\frac {9}{10} & \frac {1}{10}\\ \frac {1}{10} & \frac {9}{10}\end {array} \right ] -\lambda \left [ \begin {array} [c]{cc}1 & 0\\ 0 & 1 \end {array} \right ] \right ) & =0\\ \det \left [ \begin {array} [c]{cc}\frac {9}{10}-\lambda & \frac {1}{10}\\ \frac {1}{10} & \frac {9}{10}-\lambda \end {array} \right ] & =0\\ \left ( \frac {9}{10}-\lambda \right ) \left ( \frac {9}{10}-\lambda \right ) -\frac {1}{100} & =0\\ \frac {1}{100}\left ( 10\lambda -9\right ) ^{2}-\frac {1}{100} & =0\\ \frac {1}{100}\left ( \left ( 10\lambda -9\right ) ^{2}-1\right ) & =0\\ \left ( 10\lambda -9\right ) ^{2}-1 & =0\\ 100\lambda ^{2}-180\lambda +80 & =0\\ \lambda ^{2}-\frac {18}{10}\lambda +\frac {8}{10} & =0\\ \left ( \lambda -1\right ) \left ( \lambda -\frac {8}{10}\right ) & =0 \end {align*}

Hence the eigenvalues are\begin {align*} \lambda _{1} & =1\\ \lambda _{2} & ={\frac {4}{5}} \end {align*}

This table summarizes the result




eigenvalue algebraic multiplicity type of eigenvalue



\(1\) \(1\) real eigenvalue



\(\frac {4}{5}\) \(1\) real eigenvalue



For each eigenvalue \(\lambda \) found above, we now find the corresponding eigenvector.

\(\lambda = 1\)

We need now to determine the eigenvector \(\boldsymbol {v}\) where \begin {align*} A\boldsymbol {v} & =\lambda \boldsymbol {v}\\ A\boldsymbol {v}-\lambda \boldsymbol {v} & =\boldsymbol {0}\\ (A-\lambda I)\boldsymbol {v} & =\boldsymbol {0}\\ \left ( \left [ \begin {array} [c]{cc}\frac {9}{10} & \frac {1}{10}\\ \frac {1}{10} & \frac {9}{10}\end {array} \right ] -(1)\left [ \begin {array} [c]{cc}1 & 0\\ 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \\ \left ( \left [ \begin {array} [c]{cc}\frac {9}{10} & \frac {1}{10}\\ \frac {1}{10} & \frac {9}{10}\end {array} \right ] -\left [ \begin {array} [c]{cc}1 & 0\\ 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \\ \left [ \begin {array} [c]{cc}-\frac {1}{10} & \frac {1}{10}\\ \frac {1}{10} & -\frac {1}{10}\end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \end {align*}

We will now do Gaussian elimination in order to solve for the eigenvector. The augmented matrix is \[ \left [ \begin {array} [c]{@{}cc|c}-{\frac {1}{10}} & {\frac {1}{10}} & 0\\ {\frac {1}{10}} & -{\frac {1}{10}} & 0 \end {array} \right ] \]\[ R_{2}=R_{2}+R_{1}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}cc|c}-{\frac {1}{10}} & {\frac {1}{10}} & 0\\ 0 & 0 & 0 \end {array} \right ] \] Therefore the system in echelon form is \[ \left [ \begin {array} [c]{cc}-\frac {1}{10} & \frac {1}{10}\\ 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \] The free variables are \(\{v_{2}\}\) and the leading variables are \(\{v_{1}\}\). Let \(v_{2}=t\). Now we start back substitution. First row gives \(v_{1}=t\). Hence the solution is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}t\\ t \end {array} \right ] \] Since there is one free variable, we have found one eigenvector associated with this eigenvalue. The above can be written as \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =t\left [ \begin {array} [c]{c}1\\ 1 \end {array} \right ] \] Or, letting \(t=1\), the eigenvector is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}1\\ 1 \end {array} \right ] \]

\(\lambda ={\frac {4}{5}}\)

We need now to determine the eigenvector \(\boldsymbol {v}\) where \begin {align*} A\boldsymbol {v} & =\lambda \boldsymbol {v}\\ A\boldsymbol {v}-\lambda \boldsymbol {v} & =\boldsymbol {0}\\ (A-\lambda I)\boldsymbol {v} & =\boldsymbol {0}\\ \left ( \left [ \begin {array} [c]{cc}\frac {9}{10} & \frac {1}{10}\\ \frac {1}{10} & \frac {9}{10}\end {array} \right ] -\left ( {\frac {4}{5}}\right ) \left [ \begin {array} [c]{cc}1 & 0\\ 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \\ \left ( \left [ \begin {array} [c]{cc}\frac {9}{10} & \frac {1}{10}\\ \frac {1}{10} & \frac {9}{10}\end {array} \right ] -\left [ \begin {array} [c]{cc}\frac {4}{5} & 0\\ 0 & \frac {4}{5}\end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \\ \left [ \begin {array} [c]{cc}\frac {1}{10} & \frac {1}{10}\\ \frac {1}{10} & \frac {1}{10}\end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \end {align*}

We will now do Gaussian elimination in order to solve for the eigenvector. The augmented matrix is \[ \left [ \begin {array} [c]{@{}cc|c}{\frac {1}{10}} & {\frac {1}{10}} & 0\\ {\frac {1}{10}} & {\frac {1}{10}} & 0 \end {array} \right ] \]\[ R_{2}=R_{2}-R_{1}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}cc|c}{\frac {1}{10}} & {\frac {1}{10}} & 0\\ 0 & 0 & 0 \end {array} \right ] \] Therefore the system in echelon form is \[ \left [ \begin {array} [c]{cc}\frac {1}{10} & \frac {1}{10}\\ 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \] The free variables are \(\{v_{2}\}\) and the leading variables are \(\{v_{1}\}\). Let \(v_{2}=t\). Now we start back substitution. First row gives \(v_{1}=-t\). Hence the solution is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}-t\\ t \end {array} \right ] \] Since there is one free variable, we have found one eigenvector associated with this eigenvalue. The above can be written as \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =t\left [ \begin {array} [c]{c}-1\\ 1 \end {array} \right ] \] Or, letting \(t=1\), the eigenvector is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}-1\\ 1 \end {array} \right ] \] The following table summarizes the result found above.






\(\lambda \) algebraic multiplicity geometric multiplicity defective eigenvalue? associated eigenvectors





\(1\) \(1\) \(1\) No \(\left [ \begin {array} [c]{c}1\\ 1 \end {array} \right ] \)





\(\frac {4}{5}\) \(1\) \(1\) No \(\left [ \begin {array} [c]{c}-1\\ 1 \end {array} \right ] \)





Since the matrix is not defective, it is diagonalizable. Let \(P\) be the matrix whose columns are the eigenvectors found, and let \(D\) be the diagonal matrix with the eigenvalues on its diagonal. Then we can write \[ A=PDP^{-1}\] Where \begin {align*} D & =\left [ \begin {array} [c]{cc}1 & 0\\ 0 & \frac {4}{5}\end {array} \right ] \\ P & =\left [ \begin {array} [c]{cc}1 & -1\\ 1 & 1 \end {array} \right ] \end {align*}

Therefore \[ \left [ \begin {array} [c]{cc}\frac {9}{10} & \frac {1}{10}\\ \frac {1}{10} & \frac {9}{10}\end {array} \right ] =\left [ \begin {array} [c]{cc}1 & -1\\ 1 & 1 \end {array} \right ] \left [ \begin {array} [c]{cc}1 & 0\\ 0 & \frac {4}{5}\end {array} \right ] \left [ \begin {array} [c]{cc}1 & -1\\ 1 & 1 \end {array} \right ] ^{-1}\] And \begin {align*} \left [ \begin {array} [c]{cc}\frac {9}{10} & \frac {1}{10}\\ \frac {1}{10} & \frac {9}{10}\end {array} \right ] ^{k} & =\left [ \begin {array} [c]{cc}1 & -1\\ 1 & 1 \end {array} \right ] \left [ \begin {array} [c]{cc}1 & 0\\ 0 & \frac {4}{5}\end {array} \right ] ^{k}\left [ \begin {array} [c]{cc}1 & -1\\ 1 & 1 \end {array} \right ] ^{-1}\\ & =\left [ \begin {array} [c]{cc}1 & -1\\ 1 & 1 \end {array} \right ] \left [ \begin {array} [c]{cc}1 & 0\\ 0 & \left ( \frac {4}{5}\right ) ^{k}\end {array} \right ] \left [ \begin {array} [c]{cc}1 & -1\\ 1 & 1 \end {array} \right ] ^{-1} \end {align*}
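As a quick numerical sanity check (not part of the assigned solution), the factorization and the formula for \(A^{k}\) can be verified with NumPy; all matrix entries below are taken from the text.

```python
import numpy as np

# Matrix A, eigenvector matrix P, and diagonal D from the text
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
P = np.array([[1.0, -1.0],
              [1.0,  1.0]])
D = np.diag([1.0, 0.8])

# Check A = P D P^{-1}
assert np.allclose(A, P @ D @ np.linalg.inv(P))

# Check A^k = P D^k P^{-1} for a sample power
k = 5
assert np.allclose(np.linalg.matrix_power(A, k),
                   P @ np.linalg.matrix_power(D, k) @ np.linalg.inv(P))
```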

As \(k\rightarrow \infty \) the term \(\left ( \frac {4}{5}\right ) ^{k}\rightarrow 0\). Hence in the limit the above becomes\begin {align*} \lim _{k\rightarrow \infty }\left [ \begin {array} [c]{cc}\frac {9}{10} & \frac {1}{10}\\ \frac {1}{10} & \frac {9}{10}\end {array} \right ] ^{k} & =\left [ \begin {array} [c]{cc}1 & -1\\ 1 & 1 \end {array} \right ] \left [ \begin {array} [c]{cc}1 & 0\\ 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{cc}1 & -1\\ 1 & 1 \end {array} \right ] ^{-1}\\ & =\left [ \begin {array} [c]{cc}1 & 0\\ 1 & 0 \end {array} \right ] \left [ \begin {array} [c]{cc}1 & -1\\ 1 & 1 \end {array} \right ] ^{-1} \end {align*}

But, using the adjugate formula for the inverse of a \(2\times 2\) matrix, \(\left [ \begin {array} [c]{cc}1 & -1\\ 1 & 1 \end {array} \right ] ^{-1}=\frac {1}{\det \left [ \begin {array} [c]{cc}1 & -1\\ 1 & 1 \end {array} \right ] }\operatorname {adj}\left [ \begin {array} [c]{cc}1 & -1\\ 1 & 1 \end {array} \right ] =\frac {1}{2}\left [ \begin {array} [c]{cc}1 & 1\\ -1 & 1 \end {array} \right ] =\left [ \begin {array} [c]{cc}\frac {1}{2} & \frac {1}{2}\\ -\frac {1}{2} & \frac {1}{2}\end {array} \right ] \). The above becomes\begin {align*} \lim _{k\rightarrow \infty }\left [ \begin {array} [c]{cc}\frac {9}{10} & \frac {1}{10}\\ \frac {1}{10} & \frac {9}{10}\end {array} \right ] ^{k} & =\left [ \begin {array} [c]{cc}1 & 0\\ 1 & 0 \end {array} \right ] \left [ \begin {array} [c]{cc}\frac {1}{2} & \frac {1}{2}\\ -\frac {1}{2} & \frac {1}{2}\end {array} \right ] \\ & =\left [ \begin {array} [c]{cc}\frac {1}{2} & \frac {1}{2}\\ \frac {1}{2} & \frac {1}{2}\end {array} \right ] \end {align*}

Therefore\begin {align*} \boldsymbol {x}_{k} & =A^{k}\boldsymbol {x}_{0}\\ \lim _{k\rightarrow \infty }\boldsymbol {x}_{k} & =\lim _{k\rightarrow \infty }A^{k}\boldsymbol {x}_{0}\\ & =\left [ \begin {array} [c]{cc}\frac {1}{2} & \frac {1}{2}\\ \frac {1}{2} & \frac {1}{2}\end {array} \right ] \left [ \begin {array} [c]{c}C_{0}\\ S_{0}\end {array} \right ] \\ & =\left [ \begin {array} [c]{c}\frac {1}{2}C_{0}+\frac {1}{2}S_{0}\\ \frac {1}{2}C_{0}+\frac {1}{2}S_{0}\end {array} \right ] \\ & =\left ( C_{0}+S_{0}\right ) \left [ \begin {array} [c]{c}\frac {1}{2}\\ \frac {1}{2}\end {array} \right ] \end {align*}

This means that in the long term each city will have half of the initial total population.
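The limiting behavior can also be confirmed numerically; the sketch below uses NumPy, with an arbitrary initial population split of 700 and 300 chosen purely for illustration.

```python
import numpy as np

A = np.array([[0.9, 0.1],
              [0.1, 0.9]])

# For large k the (4/5)^k mode has decayed, leaving the rank-one averaging matrix
Ak = np.linalg.matrix_power(A, 100)
assert np.allclose(Ak, [[0.5, 0.5], [0.5, 0.5]])

# Any initial populations C0, S0 are driven to an even split of the total
x0 = np.array([700.0, 300.0])   # hypothetical C0, S0 for illustration
print(Ak @ x0)                  # approximately [500. 500.]
```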

2.8.8 Additional problem 1

   2.8.8.1 Part (a)
   2.8.8.2 Part (b)
   2.8.8.3 Part (c)

Solution

2.8.8.1 Part (a)

To show that \(A\) is similar to itself, we must exhibit an invertible matrix \(P\) such that \(A=PAP^{-1}\). Let \(P=I\), the identity matrix of the same size as \(A\). Then \(I\) is invertible (its columns are the standard basis vectors, which are linearly independent) and \(I^{-1}=I\). Hence \(PAP^{-1}=IAI^{-1}=IAI=A\), which shows that \(A\) is similar to itself.

2.8.8.2 Part (b)

We are given that \begin {equation} A=PBP^{-1} \tag {1} \end {equation} We need to show that \(B\) is similar to \(A\), that is, \(B=QAQ^{-1}\) for some invertible matrix \(Q\). Starting with the given relation (1) and post multiplying both sides by \(P\) gives\begin {align*} AP & =PBP^{-1}P\\ AP & =PB \end {align*}

since \(P^{-1}P=I\). Pre multiplying both sides by \(P^{-1}\) gives\[ P^{-1}AP=B \]

Let \(P^{-1}=Q\). Then the above can also be written as\[ B=QAQ^{-1}\] Hence \(B\) is similar to \(A\).

2.8.8.3 Part (c)

We are given that \begin {equation} A=PBP^{-1} \tag {1} \end {equation} And that \begin {equation} B=QCQ^{-1} \tag {2} \end {equation} We need to show that \(A=VCV^{-1}\) for some invertible matrix \(V\).  Substituting (2) into (1) gives\begin {align*} A & =P\left ( QCQ^{-1}\right ) P^{-1}\\ & =\left ( PQ\right ) C\left ( Q^{-1}P^{-1}\right ) \end {align*}

But \(Q^{-1}P^{-1}=\left ( PQ\right ) ^{-1}\). The above becomes\[ A=\left ( PQ\right ) C\left ( PQ\right ) ^{-1}\] Let \(PQ=V\). The above becomes\[ A=VCV^{-1}\] Hence \(A\) is similar to \(C\).
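Together, the three parts say that similarity is an equivalence relation. The sketch below illustrates each part numerically; the matrices \(P\), \(Q\), \(C\) are arbitrary invertible examples chosen for the illustration, not taken from the problem.

```python
import numpy as np

# Arbitrary invertible matrices for illustration
P = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = np.array([[2.0, 0.0], [1.0, 1.0]])
C = np.array([[3.0, 0.0], [0.0, 5.0]])

B = Q @ C @ np.linalg.inv(Q)   # B is similar to C
A = P @ B @ np.linalg.inv(P)   # A is similar to B

# Part (a): reflexive, with P = I
I = np.eye(2)
assert np.allclose(A, I @ A @ np.linalg.inv(I))

# Part (b): symmetric, B = Q' A Q'^{-1} where Q' = P^{-1}
Pinv = np.linalg.inv(P)
assert np.allclose(B, Pinv @ A @ np.linalg.inv(Pinv))

# Part (c): transitive, A = V C V^{-1} where V = PQ
V = P @ Q
assert np.allclose(A, V @ C @ np.linalg.inv(V))
```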

2.8.9 Additional problem 2

   2.8.9.1 Part (a)
   2.8.9.2 Part(b)
   2.8.9.3 Part (c)

Solution

\begin {align*} \boldsymbol {x}_{n} & =A^{n}\boldsymbol {x}_{0}\\ & =\begin {bmatrix} 1 & 1\\ 1 & 0 \end {bmatrix} ^{n}\begin {bmatrix} 1\\ 0 \end {bmatrix} \end {align*}

2.8.9.1 Part (a)

To find eigenvalues and eigenvectors of \(A\).\[\begin {bmatrix} 1 & 1\\ 1 & 0 \end {bmatrix} \] The first step is to determine the characteristic polynomial of the matrix in order to find the eigenvalues of the matrix \(A\). This is given by \begin {align*} \det (A-\lambda I) & =0\\ \det \left ( \left [ \begin {array} [c]{cc}1 & 1\\ 1 & 0 \end {array} \right ] -\lambda \left [ \begin {array} [c]{cc}1 & 0\\ 0 & 1 \end {array} \right ] \right ) & =0\\ \det \left [ \begin {array} [c]{cc}1-\lambda & 1\\ 1 & -\lambda \end {array} \right ] & =0\\ \lambda ^{2}-\lambda -1 & =0 \end {align*}

The eigenvalues are the roots of the above characteristic polynomial. Using the quadratic formula \(\lambda =\frac {-b}{2a}\pm \frac {1}{2a}\sqrt {b^{2}-4ac}=\frac {1}{2}\pm \frac {1}{2}\sqrt {\left ( -1\right ) ^{2}-4\left ( -1\right ) }=\frac {1}{2}\pm \frac {1}{2}\sqrt {1+4}=\frac {1}{2}\pm \frac {1}{2}\sqrt {5}\). Hence \begin {align*} \lambda _{1} & =\frac {1}{2}+\frac {\sqrt {5}}{2}\\ \lambda _{2} & =\frac {1}{2}-\frac {\sqrt {5}}{2} \end {align*}
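As a quick numerical check of these roots (a sketch; NumPy returns eigenvalues in no particular order, so they are sorted first):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

eigvals = np.sort(np.linalg.eigvals(A))
phi = (1 + np.sqrt(5)) / 2   # (1 + sqrt(5))/2, about 1.618

# Eigenvalues should be (1 - sqrt(5))/2 and (1 + sqrt(5))/2
assert np.allclose(eigvals, [1 - phi, phi])
```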

This table summarizes the result




eigenvalue algebraic multiplicity type of eigenvalue



\(\frac {1}{2}+\frac {\sqrt {5}}{2}\) \(1\) real eigenvalue



\(\frac {1}{2}-\frac {\sqrt {5}}{2}\) \(1\) real eigenvalue



For each eigenvalue \(\lambda \) found above, we now find the corresponding eigenvector.

\(\lambda = \frac {1}{2}+\frac {\sqrt {5}}{2}\)

We need now to determine the eigenvector \(\boldsymbol {v}\) where \begin {align*} A\boldsymbol {v} & =\lambda \boldsymbol {v}\\ A\boldsymbol {v}-\lambda \boldsymbol {v} & =\boldsymbol {0}\\ (A-\lambda I)\boldsymbol {v} & =\boldsymbol {0}\\ \left ( \left [ \begin {array} [c]{cc}1 & 1\\ 1 & 0 \end {array} \right ] -\left ( \frac {1}{2}+\frac {\sqrt {5}}{2}\right ) \left [ \begin {array} [c]{cc}1 & 0\\ 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \\ \left ( \left [ \begin {array} [c]{cc}1 & 1\\ 1 & 0 \end {array} \right ] -\left [ \begin {array} [c]{cc}\frac {1}{2}+\frac {\sqrt {5}}{2} & 0\\ 0 & \frac {1}{2}+\frac {\sqrt {5}}{2}\end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \\ \left [ \begin {array} [c]{cc}\frac {1}{2}-\frac {\sqrt {5}}{2} & 1\\ 1 & -\frac {1}{2}-\frac {\sqrt {5}}{2}\end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \end {align*}

We will now do Gaussian elimination in order to solve for the eigenvector. The augmented matrix is \[ \left [ \begin {array} [c]{@{}cc|c}\frac {1}{2}-\frac {\sqrt {5}}{2} & 1 & 0\\ 1 & -\frac {1}{2}-\frac {\sqrt {5}}{2} & 0 \end {array} \right ] \]\[ R_{2}=R_{2}-\frac {R_{1}}{\frac {1}{2}-\frac {\sqrt {5}}{2}}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}cc|c}\frac {1}{2}-\frac {\sqrt {5}}{2} & 1 & 0\\ 0 & 0 & 0 \end {array} \right ] \] Therefore the system in Echelon form is \[ \left [ \begin {array} [c]{cc}\frac {1}{2}-\frac {\sqrt {5}}{2} & 1\\ 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \] The free variables are \(\{v_{2}\}\) and the leading variables are \(\{v_{1}\}\). Let \(v_{2}=t\). Now we start back substitution. The first row gives \(\left ( \frac {1}{2}-\frac {\sqrt {5}}{2}\right ) v_{1}=-t\), that is \(\frac {1-\sqrt {5}}{2}v_{1}=-t\), so \(v_{1}=\frac {-2}{1-\sqrt {5}}t=\frac {2}{\sqrt {5}-1}t\). Rationalizing gives \(v_{1}=\frac {2\left ( \sqrt {5}+1\right ) }{\left ( \sqrt {5}-1\right ) \left ( \sqrt {5}+1\right ) }t=\frac {2\left ( \sqrt {5}+1\right ) }{4}t=\frac {\sqrt {5}+1}{2}t\). Hence the solution is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}\frac {\sqrt {5}+1}{2}t\\ t \end {array} \right ] \] Since there is one free variable, we have found one eigenvector associated with this eigenvalue. 
The above can be written as \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =t\left [ \begin {array} [c]{c}\frac {\sqrt {5}+1}{2}\\ 1 \end {array} \right ] \] Or, by letting \(t=1\), the eigenvector is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}\frac {\sqrt {5}+1}{2}\\ 1 \end {array} \right ] \] Which can be scaled (multiplying by \(2\) to clear the fraction) to \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}\sqrt {5}+1\\ 2 \end {array} \right ] \]

\(\lambda =\frac {1}{2}-\frac {\sqrt {5}}{2}\)

We need now to determine the eigenvector \(\boldsymbol {v}\) where \begin {align*} A\boldsymbol {v} & =\lambda \boldsymbol {v}\\ A\boldsymbol {v}-\lambda \boldsymbol {v} & =\boldsymbol {0}\\ (A-\lambda I)\boldsymbol {v} & =\boldsymbol {0}\\ \left ( \left [ \begin {array} [c]{cc}1 & 1\\ 1 & 0 \end {array} \right ] -\left ( \frac {1}{2}-\frac {\sqrt {5}}{2}\right ) \left [ \begin {array} [c]{cc}1 & 0\\ 0 & 1 \end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \\ \left ( \left [ \begin {array} [c]{cc}1 & 1\\ 1 & 0 \end {array} \right ] -\left [ \begin {array} [c]{cc}\frac {1}{2}-\frac {\sqrt {5}}{2} & 0\\ 0 & \frac {1}{2}-\frac {\sqrt {5}}{2}\end {array} \right ] \right ) \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \\ \left [ \begin {array} [c]{cc}\frac {1}{2}+\frac {\sqrt {5}}{2} & 1\\ 1 & \frac {\sqrt {5}}{2}-\frac {1}{2}\end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] & =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \end {align*}

We will now do Gaussian elimination in order to solve for the eigenvector. The augmented matrix is \[ \left [ \begin {array} [c]{@{}cc|c}\frac {1}{2}+\frac {\sqrt {5}}{2} & 1 & 0\\ 1 & \frac {\sqrt {5}}{2}-\frac {1}{2} & 0 \end {array} \right ] \]\[ R_{2}=R_{2}-\frac {R_{1}}{\frac {1}{2}+\frac {\sqrt {5}}{2}}\Longrightarrow \hspace {5pt}\left [ \begin {array} [c]{@{}cc|c}\frac {1}{2}+\frac {\sqrt {5}}{2} & 1 & 0\\ 0 & 0 & 0 \end {array} \right ] \] Therefore the system in Echelon form is \[ \left [ \begin {array} [c]{cc}\frac {1}{2}+\frac {\sqrt {5}}{2} & 1\\ 0 & 0 \end {array} \right ] \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}0\\ 0 \end {array} \right ] \] The free variables are \(\{v_{2}\}\) and the leading variables are \(\{v_{1}\}\). Let \(v_{2}=t\). Now we start back substitution. The first row gives \(\left ( \frac {1+\sqrt {5}}{2}\right ) v_{1}=-t\), so \(v_{1}=\frac {-2}{1+\sqrt {5}}t=\frac {-2\left ( 1-\sqrt {5}\right ) }{\left ( 1+\sqrt {5}\right ) \left ( 1-\sqrt {5}\right ) }t\), which simplifies to \(v_{1}=\frac {-2\left ( 1-\sqrt {5}\right ) t}{-4}=\frac {\left ( 1-\sqrt {5}\right ) t}{2}\). Hence the solution is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}\frac {\left ( 1-\sqrt {5}\right ) t}{2}\\ t \end {array} \right ] \] Since there is one free variable, we have found one eigenvector associated with this eigenvalue. 
The above can be written as \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =t\left [ \begin {array} [c]{c}\frac {\left ( 1-\sqrt {5}\right ) }{2}\\ 1 \end {array} \right ] \] Or, by letting \(t=1\), the eigenvector is \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}\frac {\left ( 1-\sqrt {5}\right ) }{2}\\ 1 \end {array} \right ] \] Which can be scaled (multiplying by \(2\) to clear the fraction) to \[ \left [ \begin {array} [c]{c}v_{1}\\ v_{2}\end {array} \right ] =\left [ \begin {array} [c]{c}1-\sqrt {5}\\ 2 \end {array} \right ] \] The following table summarizes the result found above.






\(\lambda \) algebraic multiplicity geometric multiplicity defective eigenvalue? associated eigenvectors





\(\frac {1+\sqrt {5}}{2}\) \(1\) \(1\) No \(\left [ \begin {array} [c]{c}1+\sqrt {5}\\ 2 \end {array} \right ] \)





\(\frac {1-\sqrt {5}}{2}\) \(1\) \(1\) No \(\left [ \begin {array} [c]{c}1-\sqrt {5}\\ 2 \end {array} \right ] \)





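A quick numeric confirmation (a sketch, not part of the assigned work) that the two scaled vectors in the table are indeed eigenvectors of \(A\):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
s5 = np.sqrt(5)

v1 = np.array([1 + s5, 2.0])   # eigenvector for lambda = (1 + sqrt(5))/2
v2 = np.array([1 - s5, 2.0])   # eigenvector for lambda = (1 - sqrt(5))/2

assert np.allclose(A @ v1, (1 + s5) / 2 * v1)
assert np.allclose(A @ v2, (1 - s5) / 2 * v2)
```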
2.8.9.2 Part(b)

Since the matrix is not defective, it is diagonalizable. Let \(P\) be the matrix whose columns are the eigenvectors found, and let \(D\) be the diagonal matrix with the eigenvalues on its diagonal. Then we can write \[ A=PDP^{-1}\] Where \begin {align*} D & =\left [ \begin {array} [c]{cc}\frac {1+\sqrt {5}}{2} & 0\\ 0 & \frac {1-\sqrt {5}}{2}\end {array} \right ] \\ P & =\left [ \begin {array} [c]{cc}1+\sqrt {5} & 1-\sqrt {5}\\ 2 & 2 \end {array} \right ] \end {align*}

Therefore \[ \left [ \begin {array} [c]{cc}1 & 1\\ 1 & 0 \end {array} \right ] =\left [ \begin {array} [c]{cc}1+\sqrt {5} & 1-\sqrt {5}\\ 2 & 2 \end {array} \right ] \left [ \begin {array} [c]{cc}\frac {1+\sqrt {5}}{2} & 0\\ 0 & \frac {1-\sqrt {5}}{2}\end {array} \right ] \left [ \begin {array} [c]{cc}1+\sqrt {5} & 1-\sqrt {5}\\ 2 & 2 \end {array} \right ] ^{-1}\] And now we can write\begin {align*} \left [ \begin {array} [c]{cc}1 & 1\\ 1 & 0 \end {array} \right ] ^{n} & =\left [ \begin {array} [c]{cc}1+\sqrt {5} & 1-\sqrt {5}\\ 2 & 2 \end {array} \right ] \left [ \begin {array} [c]{cc}\frac {1+\sqrt {5}}{2} & 0\\ 0 & \frac {1-\sqrt {5}}{2}\end {array} \right ] ^{n}\left [ \begin {array} [c]{cc}1+\sqrt {5} & 1-\sqrt {5}\\ 2 & 2 \end {array} \right ] ^{-1}\\ & =\left [ \begin {array} [c]{cc}1+\sqrt {5} & 1-\sqrt {5}\\ 2 & 2 \end {array} \right ] \left [ \begin {array} [c]{cc}\left ( \frac {1+\sqrt {5}}{2}\right ) ^{n} & 0\\ 0 & \left ( \frac {1-\sqrt {5}}{2}\right ) ^{n}\end {array} \right ] \left [ \begin {array} [c]{cc}1+\sqrt {5} & 1-\sqrt {5}\\ 2 & 2 \end {array} \right ] ^{-1} \end {align*}

Using the hint, let \(\frac {1+\sqrt {5}}{2}=\varphi \approx 1.61803\) and \(\frac {1-\sqrt {5}}{2}=1-\varphi \approx -0.61803\). The above becomes\[ \left [ \begin {array} [c]{cc}1 & 1\\ 1 & 0 \end {array} \right ] ^{n}=\left [ \begin {array} [c]{cc}2\varphi & 2\left ( 1-\varphi \right ) \\ 2 & 2 \end {array} \right ] \left [ \begin {array} [c]{cc}\varphi ^{n} & 0\\ 0 & \left ( 1-\varphi \right ) ^{n}\end {array} \right ] \left [ \begin {array} [c]{cc}2\varphi & 2\left ( 1-\varphi \right ) \\ 2 & 2 \end {array} \right ] ^{-1}\]
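As a numerical check (a sketch, not part of the assigned work), the diagonalized expression can be compared against direct matrix powers; the entries of \(A^{n}\) are Fibonacci numbers, which is what part (c) exploits.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2

# P and D from part (b)
P = np.array([[2 * phi, 2 * (1 - phi)],
              [2.0,     2.0]])
D = np.diag([phi, 1 - phi])

n = 10
An_diag = P @ np.linalg.matrix_power(D, n) @ np.linalg.inv(P)
An_direct = np.linalg.matrix_power(np.array([[1, 1], [1, 0]]), n)

print(An_direct)   # entries are Fibonacci numbers: [[89 55] [55 34]]
assert np.allclose(An_diag, An_direct)
```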

2.8.9.3 Part (c)

Since \begin {align*} \boldsymbol {x}_{n} & =A^{n}\boldsymbol {x}_{0}\\ & =\begin {bmatrix} 1 & 1\\ 1 & 0 \end {bmatrix} ^{n}\begin {bmatrix} 1\\ 0 \end {bmatrix} \end {align*}

Then using result from part b, we can now write\begin {align} \boldsymbol {x}_{n} & =A^{n}\boldsymbol {x}_{0}\nonumber \\ & =\overset {A^{n}}{\overbrace {\left [ \begin {array} [c]{cc}2\varphi & 2\left ( 1-\varphi \right ) \\ 2 & 2 \end {array} \right ] \left [ \begin {array} [c]{cc}\varphi ^{n} & 0\\ 0 & \left ( 1-\varphi \right ) ^{n}\end {array} \right ] \left [ \begin {array} [c]{cc}2\varphi & 2\left ( 1-\varphi \right ) \\ 2 & 2 \end {array} \right ] ^{-1}}}\begin {bmatrix} 1\\ 0 \end {bmatrix} \nonumber \\ & =\left [ \begin {array} [c]{cc}2\varphi \varphi ^{n} & 2\left ( 1-\varphi \right ) \left ( 1-\varphi \right ) ^{n}\\ 2\varphi ^{n} & 2\left ( 1-\varphi \right ) ^{n}\end {array} \right ] \left [ \begin {array} [c]{cc}2\varphi & 2\left ( 1-\varphi \right ) \\ 2 & 2 \end {array} \right ] ^{-1}\begin {bmatrix} 1\\ 0 \end {bmatrix} \nonumber \\ & =\left [ \begin {array} [c]{cc}2\varphi ^{n+1} & 2\left ( 1-\varphi \right ) ^{n+1}\\ 2\varphi ^{n} & 2\left ( 1-\varphi \right ) ^{n}\end {array} \right ] \left [ \begin {array} [c]{cc}2\varphi & 2\left ( 1-\varphi \right ) \\ 2 & 2 \end {array} \right ] ^{-1}\begin {bmatrix} 1\\ 0 \end {bmatrix} \tag {1} \end {align}

But \begin {align*} \left [ \begin {array} [c]{cc}2\varphi & 2\left ( 1-\varphi \right ) \\ 2 & 2 \end {array} \right ] ^{-1} & =\frac {1}{\det \left [ \begin {array} [c]{cc}2\varphi & 2\left ( 1-\varphi \right ) \\ 2 & 2 \end {array} \right ] }\left [ \begin {array} [c]{cc}2 & -2\left ( 1-\varphi \right ) \\ -2 & 2\varphi \end {array} \right ] \\ & =\frac {1}{4\varphi -4\left ( 1-\varphi \right ) }\left [ \begin {array} [c]{cc}2 & -2\left ( 1-\varphi \right ) \\ -2 & 2\varphi \end {array} \right ] \\ & =\frac {1}{8\varphi -4}\left [ \begin {array} [c]{cc}2 & -2\left ( 1-\varphi \right ) \\ -2 & 2\varphi \end {array} \right ] \\ & =\frac {1}{4\varphi -2}\left [ \begin {array} [c]{cc}1 & \varphi -1\\ -1 & \varphi \end {array} \right ] \end {align*}

Hence (1) becomes\begin {align*} \boldsymbol {x}_{n} & =\frac {1}{4\varphi -2}\left [ \begin {array} [c]{cc}2\varphi ^{n+1} & 2\left ( 1-\varphi \right ) ^{n+1}\\ 2\varphi ^{n} & 2\left ( 1-\varphi \right ) ^{n}\end {array} \right ] \left [ \begin {array} [c]{cc}1 & \varphi -1\\ -1 & \varphi \end {array} \right ] \begin {bmatrix} 1\\ 0 \end {bmatrix} \\ & =\frac {1}{4\varphi -2}\left [ \begin {array} [c]{cc}2\varphi ^{n+1}-2\left ( 1-\varphi \right ) ^{n+1} & 2\varphi \left ( 1-\varphi \right ) ^{n+1}-2\varphi ^{n+1}+2\varphi ^{n+2}\\ 2\varphi ^{n}-2\left ( 1-\varphi \right ) ^{n} & 2\varphi \left ( 1-\varphi \right ) ^{n}-2\varphi ^{n}+2\varphi \varphi ^{n}\end {array} \right ] \begin {bmatrix} 1\\ 0 \end {bmatrix} \\ & =\frac {1}{4\varphi -2}\begin {bmatrix} 2\varphi ^{n+1}-2\left ( 1-\varphi \right ) ^{n+1}\\ 2\varphi ^{n}-2\left ( 1-\varphi \right ) ^{n}\end {bmatrix} \\ & =\frac {1}{2\varphi -1}\begin {bmatrix} \varphi ^{n+1}-\left ( 1-\varphi \right ) ^{n+1}\\ \varphi ^{n}-\left ( 1-\varphi \right ) ^{n}\end {bmatrix} \end {align*}

But \(\boldsymbol {x}_{n}=\begin {bmatrix} f_{n+1}\\ f_{n}\end {bmatrix} \), and \(2\varphi -1=\sqrt {5}\), hence\begin {align*} f_{n} & =\frac {\varphi ^{n}-\left ( 1-\varphi \right ) ^{n}}{2\varphi -1}\\ & \approx \frac {1.61803^{n}-\left ( -0.61803\right ) ^{n}}{2\left ( 1.61803\right ) -1}\\ & \approx \frac {1.61803^{n}-\left ( -0.61803\right ) ^{n}}{2.2361} \end {align*}

Check: we see from the problem statement that \(f_{0}=0,f_{1}=1,\cdots ,f_{12}=144\). Let us check the formula above for \(f_{12}\)\begin {align*} f_{12} & =\frac {\varphi ^{12}-\left ( 1-\varphi \right ) ^{12}}{2\varphi -1}\\ & =\frac {\left ( \frac {1+\sqrt {5}}{2}\right ) ^{12}-\left ( 1-\frac {1+\sqrt {5}}{2}\right ) ^{12}}{2\left ( \frac {1+\sqrt {5}}{2}\right ) -1}\\ & =\frac {144\sqrt {5}}{\sqrt {5}}\\ & =144 \end {align*}

Verified OK.
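The closed form can be exercised over a whole range of \(n\), not just \(n=12\); this sketch compares it against the defining recurrence \(f_{n}=f_{n-1}+f_{n-2}\) with \(f_{0}=0\), \(f_{1}=1\).

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2

def fib_closed(n):
    # Formula derived above; note 2*phi - 1 = sqrt(5). Rounding absorbs
    # floating point error, which is tiny for moderate n.
    return round((phi**n - (1 - phi)**n) / (2 * phi - 1))

# Build the sequence from the recurrence f_0 = 0, f_1 = 1
f = [0, 1]
for n in range(2, 31):
    f.append(f[-1] + f[-2])

assert fib_closed(12) == 144           # matches the check above
assert all(fib_closed(n) == f[n] for n in range(31))
```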

2.8.10 key solution for HW8

PDF