2.4 HW 4

  2.4.1 Problems listing
  2.4.2 Problem 1
  2.4.3 Problem 2
  2.4.4 Problem 3
  2.4.5 Problem 4
  2.4.6 Problem 5
  2.4.7 Problem 6
  2.4.8 Problem 7
  2.4.9 Problem 8
  2.4.10 Problem 9
  2.4.11 Problem 10
  2.4.12 Problem 11
  2.4.13 Problem 12

2.4.1 Problems listing


2.4.2 Problem 1

Determine the null space of \(A\) and verify the Rank-Nullity Theorem\[ A=\begin {bmatrix} 1 & 2 & 1 & 4\\ 3 & 8 & 7 & 20\\ 2 & 7 & 9 & 23 \end {bmatrix} \] Solution

The null space of \(A\) is the set of all solutions of \(A\vec {x}=\vec {0}\). Therefore\begin {equation} \begin {bmatrix} 1 & 2 & 1 & 4\\ 3 & 8 & 7 & 20\\ 2 & 7 & 9 & 23 \end {bmatrix}\begin {bmatrix} x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\end {bmatrix} =\begin {bmatrix} 0\\ 0\\ 0 \end {bmatrix} \tag {1} \end {equation} The augmented matrix is\[\begin {bmatrix} 1 & 2 & 1 & 4 & 0\\ 3 & 8 & 7 & 20 & 0\\ 2 & 7 & 9 & 23 & 0 \end {bmatrix} \] \(R_{2}=R_{2}-3R_{1}\) gives\[\begin {bmatrix} 1 & 2 & 1 & 4 & 0\\ 0 & 2 & 4 & 8 & 0\\ 2 & 7 & 9 & 23 & 0 \end {bmatrix} \] \(R_{3}=R_{3}-2R_{1}\) gives\[\begin {bmatrix} 1 & 2 & 1 & 4 & 0\\ 0 & 2 & 4 & 8 & 0\\ 0 & 3 & 7 & 15 & 0 \end {bmatrix} \] \(R_{2}=\frac {R_{2}}{2}\) gives\[\begin {bmatrix} 1 & 2 & 1 & 4 & 0\\ 0 & 1 & 2 & 4 & 0\\ 0 & 3 & 7 & 15 & 0 \end {bmatrix} \] \(R_{3}=R_{3}-3R_{2}\) gives\[\begin {bmatrix} 1 & 2 & 1 & 4 & 0\\ 0 & 1 & 2 & 4 & 0\\ 0 & 0 & 1 & 3 & 0 \end {bmatrix} \] Now the reduced echelon phase starts.

\(R_{2}=R_{2}-2R_{3}\)\[\begin {bmatrix} 1 & 2 & 1 & 4 & 0\\ 0 & 1 & 0 & -2 & 0\\ 0 & 0 & 1 & 3 & 0 \end {bmatrix} \] \(R_{1}=R_{1}-R_{3}\)\[\begin {bmatrix} 1 & 2 & 0 & 1 & 0\\ 0 & 1 & 0 & -2 & 0\\ 0 & 0 & 1 & 3 & 0 \end {bmatrix} \] \(R_{1}=R_{1}-2R_{2}\)\[\begin {bmatrix} 1 & 0 & 0 & 5 & 0\\ 0 & 1 & 0 & -2 & 0\\ 0 & 0 & 1 & 3 & 0 \end {bmatrix} \] The above is now in RREF. There are 3 pivots. They are \(A\left (1,1\right ) ,A\left (2,2\right ) ,A\left (3,3\right ) \). Hence the original system (1) becomes\[\begin {bmatrix} 1 & 0 & 0 & 5\\ 0 & 1 & 0 & -2\\ 0 & 0 & 1 & 3 \end {bmatrix}\begin {bmatrix} x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\end {bmatrix} =\begin {bmatrix} 0\\ 0\\ 0 \end {bmatrix} \] The base variables are \(x_{1},x_{2},x_{3}\) and the free variable is \(x_{4}=s\). The last row gives \(x_{3}+3x_{4}=0\) or \(x_{3}=-3s\). The second row gives \(x_{2}-2x_{4}=0\) or \(x_{2}=2s\). The first row gives \(x_{1}+5x_{4}=0\) or \(x_{1}=-5s\). Hence the solution is\begin {align*} \begin {bmatrix} x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\end {bmatrix} & =\begin {bmatrix} -5s\\ 2s\\ -3s\\ s \end {bmatrix} \\ & =s\begin {bmatrix} -5\\ 2\\ -3\\ 1 \end {bmatrix} \end {align*}

It is a one-parameter solution. Hence the dimension of the null space is \(1\) (it is a subspace of \(\mathbb {R} ^{n}\), or \(\mathbb {R} ^{4}\) in this case). Any nonzero scalar multiple of \(\begin {bmatrix} -5\\ 2\\ -3\\ 1 \end {bmatrix} \) is a basis for the null space. For verification, we use the Rank-Nullity Theorem (Theorem 4.9.1 in the textbook, page 325), which says that for a matrix \(A\) of dimensions \(m\times n\)\[ Rank\relax (A) +nullity\relax (A) =n \] Therefore, since \(n=4\) in this case (it is the number of columns), and the rank is \(3\) (since there are 3 pivots), then\[ 3+nullity\relax (A) =4 \] Hence\begin {align*} nullity\relax (A) & =4-3\\ & =1 \end {align*}

This means the dimension of the null space of \(A\) is 1. The \(nullity\left ( A\right ) \) is the dimension of null-space\(\relax (A) \), which is also the number of free variables at the end of the RREF phase. This verifies the result found above.
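
As an independent check (not part of the hand computation), the null space and rank can be recomputed with SymPy, assuming it is available:

```python
from sympy import Matrix

A = Matrix([[1, 2, 1, 4],
            [3, 8, 7, 20],
            [2, 7, 9, 23]])

null_basis = A.nullspace()   # basis vectors of the null space of A
assert null_basis == [Matrix([-5, 2, -3, 1])]

# Rank-Nullity Theorem: rank(A) + nullity(A) = number of columns
assert A.rank() + len(null_basis) == A.cols
```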

2.4.3 Problem 2

Using the definition of linear transformation, verify that the given transformation is linear. \(\ T:\mathbb {R} ^{2}\rightarrow \mathbb {R} ^{2}\) defined by \(T\left (x,y\right ) =\left (x+2y,2x-y\right ) \)

Solution

The mapping is linear if it satisfies the following two properties\begin {align*} T\left (\vec {u}+\vec {v}\right ) & =T\left (\vec {u}\right ) +T\left ( \vec {v}\right ) \qquad \text {for all }\vec {u},\vec {v}\in V\\ T\left (c\vec {u}\right ) & =cT\left (\vec {u}\right ) \qquad \text {for all }\vec {u}\in V\text { and all scalars }c \end {align*}

\(T\) above is the linear mapping that assigns each vector \(\vec {v}\in V\) one vector \(w\in W\), where \(V,W\) are vector spaces. \(V\) is called the domain of \(T\) and \(W\) is called the codomain of \(T\). The range of \(T\) is the subset of vectors in \(W\) which can be reached by the mapping \(T\) applied to all vectors in \(V\). i.e. \(Rng\relax (T) =\left \{ T\left (\vec {v}\right ) :\vec {v}\in V\right \} \,\). To find if \(T\) is linear, we need to check both properties above. Let \(\vec {u}=\begin {bmatrix} x_{1}\\ y_{1}\end {bmatrix} ,\vec {v}=\begin {bmatrix} x_{2}\\ y_{2}\end {bmatrix} \). Then (Please note that \(=\) below is used as a place holder since we do not know yet if LHS is equal to RHS. It should really be \(\overset {?}{=}\) but this gives a Latex issue when used)\begin {align*} T\left (\vec {u}+\vec {v}\right ) & =T\left (\vec {u}\right ) +T\left ( \vec {v}\right ) \\ T\left ( \begin {bmatrix} x_{1}\\ y_{1}\end {bmatrix} +\begin {bmatrix} x_{2}\\ y_{2}\end {bmatrix} \right ) & =T\left ( \begin {bmatrix} x_{1}\\ y_{1}\end {bmatrix} \right ) +T\left ( \begin {bmatrix} x_{2}\\ y_{2}\end {bmatrix} \right ) \\ T\left ( \begin {bmatrix} x_{1}+x_{2}\\ y_{1}+y_{2}\end {bmatrix} \right ) & =\begin {bmatrix} x_{1}+2y_{1}\\ 2x_{1}-y_{1}\end {bmatrix} +\begin {bmatrix} x_{2}+2y_{2}\\ 2x_{2}-y_{2}\end {bmatrix} \\\begin {bmatrix} \left (x_{1}+x_{2}\right ) +2\left (y_{1}+y_{2}\right ) \\ 2\left (x_{1}+x_{2}\right ) -\left (y_{1}+y_{2}\right ) \end {bmatrix} & =\begin {bmatrix} x_{1}+2y_{1}+x_{2}+2y_{2}\\ 2x_{1}-y_{1}+2x_{2}-y_{2}\end {bmatrix} \\\begin {bmatrix} x_{1}+x_{2}+2y_{1}+2y_{2}\\ 2x_{1}+2x_{2}-y_{1}-y_{2}\end {bmatrix} & =\begin {bmatrix} x_{1}+x_{2}+2y_{1}+2y_{2}\\ 2x_{1}+2x_{2}-y_{1}-y_{2}\end {bmatrix} \end {align*}

Comparing both sides shows they are indeed the same. Hence the first property is satisfied. Now the second property is checked. Let \(c\) be scalar and let \(\vec {u}=\begin {bmatrix} x\\ y \end {bmatrix} \) then\begin {align*} T\left (c\vec {u}\right ) & =cT\left (\vec {u}\right ) \\ T\left (c\begin {bmatrix} x\\ y \end {bmatrix} \right ) & =cT\left ( \begin {bmatrix} x\\ y \end {bmatrix} \right ) \\ T\left ( \begin {bmatrix} cx\\ cy \end {bmatrix} \right ) & =c\begin {bmatrix} x+2y\\ 2x-y \end {bmatrix} \\\begin {bmatrix} cx+2cy\\ 2cx-cy \end {bmatrix} & =\begin {bmatrix} cx+2cy\\ 2cx-cy \end {bmatrix} \end {align*}

Comparing both sides shows they are the same. Hence the second property is satisfied. This verifies that the given transformation \(T\) is linear.
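
The two properties can also be spot-checked numerically on random inputs. This is only a sanity check of the algebra above, not a proof of linearity:

```python
import random

def T(x, y):
    # the given transformation T(x, y) = (x + 2y, 2x - y)
    return (x + 2*y, 2*x - y)

for _ in range(100):
    x1, y1, x2, y2, c = (random.uniform(-10, 10) for _ in range(5))
    # additivity: T(u + v) = T(u) + T(v)
    assert all(abs(a - b) < 1e-9 for a, b in
               zip(T(x1 + x2, y1 + y2),
                   [s + t for s, t in zip(T(x1, y1), T(x2, y2))]))
    # homogeneity: T(c u) = c T(u)
    assert all(abs(a - c*b) < 1e-9 for a, b in zip(T(c*x1, c*y1), T(x1, y1)))
```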

2.4.4 Problem 3

Determine the matrix of the given linear transformation\[ T:\mathbb {R} ^{3}\rightarrow \mathbb {R} ^{2}\text {\qquad defined by }T\left (x,y,z\right ) =\left (x-y+z,z-x\right ) \] Solution

Let the matrix of the transformation be \(A=\begin {bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\end {bmatrix} \) and let \(\vec {v}=\begin {bmatrix} x\\ y\\ z \end {bmatrix} \) be some vector in the domain of \(T\). Then we need to solve\[\begin {bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\end {bmatrix}\begin {bmatrix} x\\ y\\ z \end {bmatrix} =\begin {bmatrix} x-y+z\\ z-x \end {bmatrix} \] for the unknowns \(a_{11},a_{12},a_{13},a_{21},a_{22},a_{23}\). The first row equation is\begin {equation} a_{11}x+a_{12}y+a_{13}z=x-y+z\tag {1} \end {equation} Comparing coefficients for each of the variables \(x,y,z\) gives \(a_{11}=1,a_{12}=-1,a_{13}=1\). The second row equation is\begin {equation} a_{21}x+a_{22}y+a_{23}z=z-x\tag {2} \end {equation} Comparing coefficients again gives \(a_{21}=-1,a_{22}=0,a_{23}=1\). Hence the matrix \(A\) is\[ A=\begin {bmatrix} 1 & -1 & 1\\ -1 & 0 & 1 \end {bmatrix} \]
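
To confirm the matrix found, one can multiply \(A\) by a symbolic vector and compare against the definition of \(T\); a short SymPy sketch:

```python
from sympy import Matrix, symbols, simplify

x, y, z = symbols('x y z')
A = Matrix([[1, -1, 1],
            [-1, 0, 1]])

result = A * Matrix([x, y, z])
assert simplify(result[0] - (x - y + z)) == 0   # first component of T
assert simplify(result[1] - (z - x)) == 0       # second component of T
```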

2.4.5 Problem 4

Let \(T:\mathbb {R} ^{2}\rightarrow \mathbb {R} ^{2}\) be a linear transformation that maps \(\vec {u}=\begin {bmatrix} 5\\ 2 \end {bmatrix} \) into \(\begin {bmatrix} 2\\ 1 \end {bmatrix} \) and\(\ \vec {v}=\begin {bmatrix} 1\\ 3 \end {bmatrix} \) into \(\begin {bmatrix} -1\\ 3 \end {bmatrix} \). Use the fact that \(T\) is linear to find the image under \(T\) of \(3\vec {u}+2\vec {v}\)

Solution

The mapping is linear if it satisfies the following two properties\begin {align*} T\left (\vec {u}+\vec {v}\right ) & =T\left (\vec {u}\right ) +T\left ( \vec {v}\right ) \qquad \text {for all }\vec {u},\vec {v}\in V\\ T\left (c\vec {u}\right ) & =cT\left (\vec {u}\right ) \qquad \text {for all }\vec {u}\in V\text { and all scalars }c \end {align*}

By using first property above we can then write\[ T\left (3\vec {u}+2\vec {v}\right ) =T\left (3\vec {u}\right ) +T\left ( 2\vec {v}\right ) \] And by using the second property the RHS above can be written as\[ T\left (3\vec {u}+2\vec {v}\right ) =3T\left (\vec {u}\right ) +2T\left ( \vec {v}\right ) \] But we are given that \(T\left (\vec {u}\right ) =\begin {bmatrix} 2\\ 1 \end {bmatrix} ,T\left (\vec {v}\right ) =\begin {bmatrix} -1\\ 3 \end {bmatrix} \). Substituting these in the above gives\begin {align*} T\left (3\vec {u}+2\vec {v}\right ) & =3\begin {bmatrix} 2\\ 1 \end {bmatrix} +2\begin {bmatrix} -1\\ 3 \end {bmatrix} \\ & =\begin {bmatrix} 6\\ 6 \end {bmatrix} +\begin {bmatrix} -2\\ 6 \end {bmatrix} \\ & =\begin {bmatrix} 6-2\\ 6+6 \end {bmatrix} \end {align*}

Hence the image under \(T\) of \(3\vec {u}+2\vec {v}\) is\[ T\left (3\vec {u}+2\vec {v}\right ) =\begin {bmatrix} 4\\ 12 \end {bmatrix} \]

2.4.6 Problem 5

Assume that \(T\) defines a linear transformation and use the given information to find the matrix of \(T\).\[ T:\mathbb {R} ^{2}\rightarrow \mathbb {R} ^{4}\] Such that \(T\left (0,1\right ) =\left (1,0,-2,2\right ) \) and \(T\left ( 1,2\right ) =\left (-3,1,1,1\right ) \)

Solution

Let \(A\) be the matrix representation of the linear transformation and let \(\vec {x}\) be a vector in the domain of \(T\). Hence \[ A\vec {x}=\vec {b}\] Where \(\vec {b}\in \mathbb {R} ^{4}\), hence it has dimensions \(4\times 1\), and since \(\vec {x}\in \mathbb {R} ^{2}\) it has dimensions \(2\times 1\). Therefore\[ \left (m\times n\right ) \left (2\times 1\right ) =\left (4\times 1\right ) \] Since the inner dimensions between \(A\) and \(\vec {x}\) must agree for the multiplication to be valid, \(n=2\), and since the result is in \(\mathbb {R} ^{4}\), \(m=4\). Hence \(A\) must have dimensions \(4\times 2\). Let \(A\) be\[ A=\begin {bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\ a_{31} & a_{32}\\ a_{41} & a_{42}\end {bmatrix} \] Using \(T\left (0,1\right ) =\left (1,0,-2,2\right ) \), we can write\[\begin {bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\ a_{31} & a_{32}\\ a_{41} & a_{42}\end {bmatrix}\begin {bmatrix} 0\\ 1 \end {bmatrix} =\begin {bmatrix} 1\\ 0\\ -2\\ 2 \end {bmatrix} \] or by carrying out the multiplication \begin {align} \begin {bmatrix} a_{11}\relax (0) +a_{12}\relax (1) \\ a_{21}\relax (0) +a_{22}\relax (1) \\ a_{31}\relax (0) +a_{32}\relax (1) \\ a_{41}\relax (0) +a_{42}\relax (1) \end {bmatrix} & =\begin {bmatrix} 1\\ 0\\ -2\\ 2 \end {bmatrix} \nonumber \\\begin {bmatrix} a_{12}\\ a_{22}\\ a_{32}\\ a_{42}\end {bmatrix} & =\begin {bmatrix} 1\\ 0\\ -2\\ 2 \end {bmatrix} \tag {1} \end {align}

And using the second relation \(T\left (1,2\right ) =\left (-3,1,1,1\right ) \) gives\begin {align*} \begin {bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\ a_{31} & a_{32}\\ a_{41} & a_{42}\end {bmatrix}\begin {bmatrix} 1\\ 2 \end {bmatrix} & =\begin {bmatrix} -3\\ 1\\ 1\\ 1 \end {bmatrix} \\\begin {bmatrix} a_{11}\relax (1) +a_{12}\relax (2) \\ a_{21}\relax (1) +a_{22}\relax (2) \\ a_{31}\relax (1) +a_{32}\relax (2) \\ a_{41}\relax (1) +a_{42}\relax (2) \end {bmatrix} & =\begin {bmatrix} -3\\ 1\\ 1\\ 1 \end {bmatrix} \\\begin {bmatrix} a_{11}+2a_{12}\\ a_{21}+2a_{22}\\ a_{31}+2a_{32}\\ a_{41}+2a_{42}\end {bmatrix} & =\begin {bmatrix} -3\\ 1\\ 1\\ 1 \end {bmatrix} \end {align*}

Substituting values found in (1) into the above gives\begin {align*} \begin {bmatrix} a_{11}+2\relax (1) \\ a_{21}+2\relax (0) \\ a_{31}+2\left (-2\right ) \\ a_{41}+2\relax (2) \end {bmatrix} & =\begin {bmatrix} -3\\ 1\\ 1\\ 1 \end {bmatrix} \\\begin {bmatrix} a_{11}+2\\ a_{21}\\ a_{31}-4\\ a_{41}+4 \end {bmatrix} & =\begin {bmatrix} -3\\ 1\\ 1\\ 1 \end {bmatrix} \\\begin {bmatrix} a_{11}\\ a_{21}\\ a_{31}\\ a_{41}\end {bmatrix} & =\begin {bmatrix} -3-2\\ 1\\ 1+4\\ 1-4 \end {bmatrix} \\ & =\begin {bmatrix} -5\\ 1\\ 5\\ -3 \end {bmatrix} \end {align*}

All entries of \(A\) are now found. Therefore the matrix representation of \(T\) is\[ A=\begin {bmatrix} -5 & 1\\ 1 & 0\\ 5 & -2\\ -3 & 2 \end {bmatrix} \]
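
As a check, the matrix found should reproduce both of the given images:

```python
from sympy import Matrix

A = Matrix([[-5, 1],
            [1, 0],
            [5, -2],
            [-3, 2]])

assert A * Matrix([0, 1]) == Matrix([1, 0, -2, 2])   # T(0, 1)
assert A * Matrix([1, 2]) == Matrix([-3, 1, 1, 1])   # T(1, 2)
```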

2.4.7 Problem 6

Find the \(\ker (T)\) and \(Rng(T)\) and their dimensions. \(T:\mathbb {R} ^{3}\rightarrow \mathbb {R} ^{2}\) defined by \(T\relax (x) =Ax\) where\[ A=\begin {bmatrix} 1 & -1 & 2\\ -3 & 3 & -6 \end {bmatrix} \] Solution

\(Rng(T)\) is the set of all vectors in \(\mathbb {R} ^{2}\) (a subspace of \(\mathbb {R} ^{m}\)) which can be reached by applying \(T\) to some vector in the domain of \(T\), which is \(\mathbb {R} ^{3}\). It is the same as the column space of \(A\).

\(Ker(T)\) is the set of all vectors in \(\mathbb {R} ^{3}\) which map to the zero vector in \(\mathbb {R} ^{2}\). They are the solutions of \(A\vec {x}=\vec {0}\). \(Ker(T)\) is the same as the null space of \(A\), where \(A\) is the matrix representation of the linear mapping \(T\). To find \(Ker(T)\), we then need to solve the system \(A\vec {x}=\vec {0}\). \begin {equation} \begin {bmatrix} 1 & -1 & 2\\ -3 & 3 & -6 \end {bmatrix}\begin {bmatrix} x_{1}\\ x_{2}\\ x_{3}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \tag {1} \end {equation} The augmented matrix is\[\begin {bmatrix} 1 & -1 & 2 & 0\\ -3 & 3 & -6 & 0 \end {bmatrix} \] \(R_{2}=R_{2}+3R_{1}\) gives\[\begin {bmatrix} 1 & -1 & 2 & 0\\ 0 & 0 & 0 & 0 \end {bmatrix} \] The base variable is \(x_{1}\) and the free variables are \(x_{2}=s,x_{3}=t\). The pivot column is the first column. Hence (1) becomes\[\begin {bmatrix} 1 & -1 & 2\\ 0 & 0 & 0 \end {bmatrix}\begin {bmatrix} x_{1}\\ x_{2}\\ x_{3}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] The first row gives \(x_{1}-s+2t=0\) or \(x_{1}=s-2t\). Hence the solution is\begin {align*} \begin {bmatrix} x_{1}\\ x_{2}\\ x_{3}\end {bmatrix} & =\begin {bmatrix} s-2t\\ s\\ t \end {bmatrix} \\ & =\begin {bmatrix} s\\ s\\ 0 \end {bmatrix} +\begin {bmatrix} -2t\\ 0\\ t \end {bmatrix} \\ & =s\begin {bmatrix} 1\\ 1\\ 0 \end {bmatrix} +t\begin {bmatrix} -2\\ 0\\ 1 \end {bmatrix} \end {align*}

It is a two-parameter solution. The dimension of the null space is therefore \(2\) (it is also the number of free variables). The null space is a subspace of \(\mathbb {R} ^{3}\). Hence\[ \ker \relax (T) =\left \{ \vec {v}\in \mathbb {R} ^{3}:\vec {v}=s\begin {bmatrix} 1\\ 1\\ 0 \end {bmatrix} +t\begin {bmatrix} -2\\ 0\\ 1 \end {bmatrix} ,s,t\in \mathbb {R} \right \} \] Now \(Rng(T)\) is the column space. From above we found that the first column was the pivot column. This corresponds to the first column in \(A\), given by \(\begin {bmatrix} 1\\ -3 \end {bmatrix} \). Therefore\[ Rng(T)=\left \{ \vec {v}\in \mathbb {R} ^{2}:\vec {v}=s\begin {bmatrix} 1\\ -3 \end {bmatrix} ,s\in \mathbb {R} \right \} \] It is a one-dimensional subspace of \(\mathbb {R} ^{2}\).
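
Both subspaces can be recomputed directly as a cross-check, assuming SymPy is available: `nullspace` gives \(Ker(T)\) and `columnspace` gives \(Rng(T)\):

```python
from sympy import Matrix

A = Matrix([[1, -1, 2],
            [-3, 3, -6]])

ker = A.nullspace()      # basis of Ker(T)
rng = A.columnspace()    # basis of Rng(T) (the column space)
assert len(ker) == 2 and len(rng) == 1

# the hand-derived kernel basis vectors are indeed mapped to zero
for v in (Matrix([1, 1, 0]), Matrix([-2, 0, 1])):
    assert A * v == Matrix([0, 0])
```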

2.4.8 Problem 7

Let \(T:\mathbb {R} ^{3}\rightarrow \mathbb {R} ^{3}\) be a linear transformation defined by \(Tx=Ax\) where\[ A=\begin {bmatrix} 3 & 5 & 1\\ 1 & 2 & 1\\ 2 & 6 & 7 \end {bmatrix} \] Show that \(T\) is both one-to-one and onto.

Solution

We use Theorem 6.4.8, which says that the linear transformation \(T:V\rightarrow W\) is

  1. one-to-one iff \(\ker \relax (T) =\left \{ \vec {0}\right \} \)

  2. onto iff \(Rng\relax (T) =W\)

To show one-to-one, we need to find \(\ker \relax (T) \) by solving the system \(A\vec {x}=\vec {0}\). \begin {equation} \begin {bmatrix} 3 & 5 & 1\\ 1 & 2 & 1\\ 2 & 6 & 7 \end {bmatrix}\begin {bmatrix} x_{1}\\ x_{2}\\ x_{3}\end {bmatrix} =\begin {bmatrix} 0\\ 0\\ 0 \end {bmatrix} \tag {1} \end {equation} Augmented matrix is\[\begin {bmatrix} 3 & 5 & 1 & 0\\ 1 & 2 & 1 & 0\\ 2 & 6 & 7 & 0 \end {bmatrix} \] Swapping \(R_{2},R_{1}\) gives (it is simpler to have the pivot be \(1\) to avoid fractions)\[\begin {bmatrix} 1 & 2 & 1 & 0\\ 3 & 5 & 1 & 0\\ 2 & 6 & 7 & 0 \end {bmatrix} \] \(R_{2}=R_{2}-3R_{1}\) gives\[\begin {bmatrix} 1 & 2 & 1 & 0\\ 0 & -1 & -2 & 0\\ 2 & 6 & 7 & 0 \end {bmatrix} \] \(R_{3}=R_{3}-2R_{1}\) gives\[\begin {bmatrix} 1 & 2 & 1 & 0\\ 0 & -1 & -2 & 0\\ 0 & 2 & 5 & 0 \end {bmatrix} \] \(R_{3}=R_{3}+2R_{2}\) gives\[\begin {bmatrix} 1 & 2 & 1 & 0\\ 0 & -1 & -2 & 0\\ 0 & 0 & 1 & 0 \end {bmatrix} \] \(R_{2}=-R_{2}\) gives\[\begin {bmatrix} 1 & 2 & 1 & 0\\ 0 & 1 & 2 & 0\\ 0 & 0 & 1 & 0 \end {bmatrix} \] \(R_{2}=R_{2}-2R_{3}\) gives\[\begin {bmatrix} 1 & 2 & 1 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \end {bmatrix} \] \(R_{1}=R_{1}-R_{3}\) gives\[\begin {bmatrix} 1 & 2 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \end {bmatrix} \] \(R_{1}=R_{1}-2R_{2}\) gives\[\begin {bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \end {bmatrix} \] There are no free variables. Number of pivots is \(3\). The system (1) becomes\[\begin {bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {bmatrix}\begin {bmatrix} x_{1}\\ x_{2}\\ x_{3}\end {bmatrix} =\begin {bmatrix} 0\\ 0\\ 0 \end {bmatrix} \] Which shows that the solution is \(x_{1}=0,x_{2}=x_{3}=0\). Hence \(\ker \left ( T\right ) =\left \{ \vec {0}\right \} \). Since number of free variables is zero, then we see that the dimension of the null space is zero. Therefore \(T\) is one-to-one.

Now we need to show that it is onto. The matrix \(A\) is \(3\times 3\), therefore the mapping is \(\mathbb {R} ^{3}\rightarrow \mathbb {R} ^{3}\), and hence \(W\) is \(\mathbb {R} ^{3}\). But \(Rng\relax (T) \) is the column space of \(A\). From above, we found that there are 3 pivots, so all 3 columns of \(A\) are pivot columns. Hence \[ Rng\relax (T) =\left \{ \vec {v}\in \mathbb {R} ^{3}:\vec {v}=c_{1}\begin {bmatrix} 3\\ 1\\ 2 \end {bmatrix} +c_{2}\begin {bmatrix} 5\\ 2\\ 6 \end {bmatrix} +c_{3}\begin {bmatrix} 1\\ 1\\ 7 \end {bmatrix} ,c_{1},c_{2},c_{3}\in \mathbb {R} \right \} \] Which is all of \(W\), since the \(3\) independent basis vectors span all of \(\mathbb {R} ^{3}\) and \(W\) is \(\mathbb {R} ^{3}\). Hence \(T\) is onto.
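
Both conclusions can be checked at once, since for a square matrix one-to-one, onto, and invertibility coincide; a SymPy sketch:

```python
from sympy import Matrix

A = Matrix([[3, 5, 1],
            [1, 2, 1],
            [2, 6, 7]])

assert A.nullspace() == []   # Ker(T) = {0}, so T is one-to-one
assert A.rank() == 3         # Rng(T) = R^3, so T is onto
assert A.det() != 0          # equivalent invertibility check
```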

2.4.9 Problem 8

   2.4.9.1 Part 1
   2.4.9.2 Part 2
   2.4.9.3 Part 3

Determine all eigenvalues and corresponding eigenvectors of the given matrix 1) \(\begin {bmatrix} 5 & -4\\ 8 & -7 \end {bmatrix} \), 2) \(\begin {bmatrix} 7 & 4\\ -1 & 3 \end {bmatrix} \,\), 3) \(\begin {bmatrix} 7 & 3\\ -6 & 1 \end {bmatrix} \)

Solution

2.4.9.1 Part 1

\[ A=\begin {bmatrix} 5 & -4\\ 8 & -7 \end {bmatrix} \] The eigenvalues are found by solving \begin {align*} \left \vert A-\lambda I\right \vert & =0\\ \det \left ( \begin {bmatrix} 5 & -4\\ 8 & -7 \end {bmatrix} -\begin {bmatrix} \lambda & 0\\ 0 & \lambda \end {bmatrix} \right ) & =0\\\begin {vmatrix} 5-\lambda & -4\\ 8 & -7-\lambda \end {vmatrix} & =0\\ \left (5-\lambda \right ) \left (-7-\lambda \right ) -\left (-4\right ) \relax (8) & =0\\ \left (5-\lambda \right ) \left (-7-\lambda \right ) +32 & =0\\ \lambda ^{2}+2\lambda -35+32 & =0\\ \lambda ^{2}+2\lambda -3 & =0\\ \left (\lambda -1\right ) \left (\lambda +3\right ) & =0 \end {align*}

Hence the eigenvalues are \(\lambda _{1}=1,\lambda _{2}=-3\). For each eigenvalue, we now find the corresponding eigenvector.

\(\lambda _{1}=1\)

We need to solve \(A\vec {v}=\lambda _{1}\vec {v}\) for vector \(\vec {v}\). This gives\begin {align*} \begin {bmatrix} 5 & -4\\ 8 & -7 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} & =\lambda _{1}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} \\\begin {bmatrix} 5 & -4\\ 8 & -7 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} -\lambda _{1}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} & =\begin {bmatrix} 0\\ 0 \end {bmatrix} \\\begin {bmatrix} 5-\lambda _{1} & -4\\ 8 & -7-\lambda _{1}\end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} & =\begin {bmatrix} 0\\ 0 \end {bmatrix} \end {align*}

But \(\lambda _{1}=1\). The above becomes\[\begin {bmatrix} 4 & -4\\ 8 & -8 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] \(R_{2}=R_{2}-2R_{1}\) gives \(\begin {bmatrix} 4 & -4\\ 0 & 0 \end {bmatrix} \). Hence \(v_{1}\) is the base variable and \(v_{2}=t\) is the free variable. Therefore the system becomes\[\begin {bmatrix} 4 & -4\\ 0 & 0 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] Using first row gives\begin {align*} 4v_{1}-4v_{2} & =0\\ v_{1} & =v_{2}\\ & =t \end {align*}

Then the eigenvector is \[ \vec {v}_{\lambda _{1}=1}=\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} t\\ t \end {bmatrix} =t\begin {bmatrix} 1\\ 1 \end {bmatrix} \] Choosing \(t=1\) (any nonzero value will work), the eigenvector is\[ \vec {v}_{\lambda _{1}}=\begin {bmatrix} 1\\ 1 \end {bmatrix} \] \(\lambda _{2}=-3\)

We need to solve \(A\vec {v}=\lambda _{2}\vec {v}\) for vector \(\vec {v}\). This gives (as was done above)\begin {align*} \begin {bmatrix} 5-\lambda _{2} & -4\\ 8 & -7-\lambda _{2}\end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} & =\begin {bmatrix} 0\\ 0 \end {bmatrix} \\\begin {bmatrix} 8 & -4\\ 8 & -4 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} & =\begin {bmatrix} 0\\ 0 \end {bmatrix} \end {align*}

\(R_{2}=R_{2}-R_{1}\) gives \(\begin {bmatrix} 8 & -4\\ 0 & 0 \end {bmatrix} \). Hence \(v_{1}\) is the base variable and \(v_{2}=t\) is the free variable. Therefore the system becomes

\[\begin {bmatrix} 8 & -4\\ 0 & 0 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] Using first row gives\begin {align*} 8v_{1}-4v_{2} & =0\\ v_{1} & =\frac {1}{2}v_{2}=\frac {1}{2}t \end {align*}

Therefore the eigenvector is \[ \vec {v}_{\lambda _{2}=-3}=\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} \frac {1}{2}t\\ t \end {bmatrix} =t\begin {bmatrix} \frac {1}{2}\\ 1 \end {bmatrix} =\frac {t}{2}\begin {bmatrix} 1\\ 2 \end {bmatrix} \] Choosing \(t=2\) (any nonzero value will work), the eigenvector is\[ \vec {v}_{\lambda _{2}}=\begin {bmatrix} 1\\ 2 \end {bmatrix} \] Summary table

eigenvalue | Algebraic multiplicity | Geometric multiplicity | defective? | eigenvector
\(\lambda _{1}=1\) | \(1\) | \(1\) | No | \(\begin {bmatrix} 1\\ 1 \end {bmatrix} \)
\(\lambda _{2}=-3\) | \(1\) | \(1\) | No | \(\begin {bmatrix} 1\\ 2 \end {bmatrix} \)

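The eigenpairs above can be verified directly from the definition \(A\vec {v}=\lambda \vec {v}\); a SymPy sketch:

```python
from sympy import Matrix

A = Matrix([[5, -4],
            [8, -7]])

evects = A.eigenvects()   # list of (eigenvalue, alg. multiplicity, basis)
assert {lam for lam, _, _ in evects} == {1, -3}

# A v = lambda v for the hand-computed eigenvectors
assert A * Matrix([1, 1]) == 1 * Matrix([1, 1])
assert A * Matrix([1, 2]) == -3 * Matrix([1, 2])
```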
2.4.9.2 Part 2

\[ A=\begin {bmatrix} 7 & 4\\ -1 & 3 \end {bmatrix} \] The eigenvalues are found by solving \begin {align*} \left \vert A-\lambda I\right \vert & =0\\ \det \left ( \begin {bmatrix} 7 & 4\\ -1 & 3 \end {bmatrix} -\begin {bmatrix} \lambda & 0\\ 0 & \lambda \end {bmatrix} \right ) & =0\\\begin {vmatrix} 7-\lambda & 4\\ -1 & 3-\lambda \end {vmatrix} & =0\\ \left (7-\lambda \right ) \left (3-\lambda \right ) +4 & =0\\ \lambda ^{2}-10\lambda +21+4 & =0\\ \lambda ^{2}-10\lambda +25 & =0\\ \left (\lambda -5\right ) \left (\lambda -5\right ) & =0 \end {align*}

Hence there is one root \(\lambda =5\), which is a repeated root (its algebraic multiplicity is \(2\)).

\(\lambda =5\)

We need to solve \(A\vec {v}=\lambda _{1}\vec {v}\) for vector \(\vec {v}\). This gives\begin {align*} \begin {bmatrix} 7-\lambda & 4\\ -1 & 3-\lambda \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} & =\begin {bmatrix} 0\\ 0 \end {bmatrix} \\\begin {bmatrix} 7-5 & 4\\ -1 & 3-5 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} & =\begin {bmatrix} 0\\ 0 \end {bmatrix} \\\begin {bmatrix} 2 & 4\\ -1 & -2 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} & =\begin {bmatrix} 0\\ 0 \end {bmatrix} \end {align*}

\(R_{2}=R_{2}+\frac {1}{2}R_{1}\) gives \(\begin {bmatrix} 2 & 4\\ 0 & 0 \end {bmatrix} \). Hence \(v_{1}\) is the base variable and \(v_{2}=t\) is the free variable. Therefore the system becomes

\[\begin {bmatrix} 2 & 4\\ 0 & 0 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] The first row gives\begin {align*} 2v_{1}+4v_{2} & =0\\ 2v_{1} & =-4v_{2}\\ v_{1} & =-2v_{2}\\ & =-2t \end {align*}

Therefore the eigenvector is \[ \vec {v}=\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} -2t\\ t \end {bmatrix} =t\begin {bmatrix} -2\\ 1 \end {bmatrix} \] Choosing \(t=1\) (any nonzero value will work), the eigenvector is\[ \vec {v}=\begin {bmatrix} -2\\ 1 \end {bmatrix} \] Since we are able to obtain only one linearly independent eigenvector from \(\lambda =5\), this is a defective eigenvalue. It has an algebraic multiplicity of \(2\) but its geometric multiplicity is only \(1\). When the geometric multiplicity is less than the algebraic multiplicity, the eigenvalue is defective. Summary table

eigenvalue | Algebraic multiplicity | Geometric multiplicity | defective? | eigenvector
\(\lambda =5\) | \(2\) | \(1\) | Yes | \(\begin {bmatrix} -2\\ 1 \end {bmatrix} \)

The matrix is defective and hence not diagonalizable.
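
The defectiveness conclusion can be confirmed in SymPy, which reports the algebraic multiplicity and the eigenspace basis together:

```python
from sympy import Matrix

A = Matrix([[7, 4],
            [-1, 3]])

(lam, alg_mult, vects), = A.eigenvects()   # single repeated eigenvalue
assert lam == 5 and alg_mult == 2
assert len(vects) == 1                     # geometric multiplicity is 1
assert not A.is_diagonalizable()           # defective, as found above
assert A * Matrix([-2, 1]) == 5 * Matrix([-2, 1])
```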

2.4.9.3 Part 3

\[ A=\begin {bmatrix} 7 & 3\\ -6 & 1 \end {bmatrix} \] The eigenvalues are found by solving \begin {align*} \left \vert A-\lambda I\right \vert & =0\\ \det \left ( \begin {bmatrix} 7 & 3\\ -6 & 1 \end {bmatrix} -\begin {bmatrix} \lambda & 0\\ 0 & \lambda \end {bmatrix} \right ) & =0\\\begin {vmatrix} 7-\lambda & 3\\ -6 & 1-\lambda \end {vmatrix} & =0\\ \left (7-\lambda \right ) \left (1-\lambda \right ) +18 & =0\\ \lambda ^{2}-8\lambda +7+18 & =0\\ \lambda ^{2}-8\lambda +25 & =0 \end {align*}

Using quadratic formula \(\lambda =-\frac {b}{2a}\pm \frac {1}{2a}\sqrt {b^{2}-4ac}\) gives \begin {align*} \lambda & =\frac {8}{2}\pm \frac {1}{2}\sqrt {64-4\left (25\right ) }\\ & =4\pm \frac {1}{2}\sqrt {64-100}\\ & =4\pm \frac {1}{2}\sqrt {-36}\\ & =4\pm \frac {6}{2}i\\ & =4\pm 3i \end {align*}

Hence the eigenvalues are complex conjugates of each other. They are \(\lambda _{1}=4+3i,\lambda _{2}=4-3i\). For each eigenvalue, we now find the corresponding eigenvector.

\(\lambda _{1}=4+3i\)

We need to solve \(A\vec {v}=\lambda _{1}\vec {v}\) for vector \(\vec {v}\). This gives\[\begin {bmatrix} 7-\lambda _{1} & 3\\ -6 & 1-\lambda _{1}\end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] But \(\lambda _{1}=4+3i\). The above becomes\begin {align*} \begin {bmatrix} 7-\left (4+3i\right ) & 3\\ -6 & 1-\left (4+3i\right ) \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} & =\begin {bmatrix} 0\\ 0 \end {bmatrix} \\\begin {bmatrix} 3-3i & 3\\ -6 & -3-3i \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} & =\begin {bmatrix} 0\\ 0 \end {bmatrix} \end {align*}

\(R_{1}=R_{1}\left (\frac {1}{6}+\frac {1}{6}i\right ) \) gives \(\begin {bmatrix} \left (3-3i\right ) \left (\frac {1}{6}+\frac {1}{6}i\right ) & 3\left ( \frac {1}{6}+\frac {1}{6}i\right ) \\ -6 & -3-3i \end {bmatrix} =\begin {bmatrix} 1 & \frac {1}{2}+\frac {1}{2}i\\ -6 & -3-3i \end {bmatrix} \) and now \(R_{2}=R_{2}+6R_{1}\) gives\[\begin {bmatrix} 1 & \frac {1}{2}+\frac {1}{2}i\\ 0 & \left (-3-3i\right ) +6\left (\frac {1}{2}+\frac {1}{2}i\right ) \end {bmatrix} =\begin {bmatrix} 1 & \frac {1}{2}+\frac {1}{2}i\\ 0 & 0 \end {bmatrix} \] Hence the system using RREF becomes\[\begin {bmatrix} 1 & \frac {1}{2}+\frac {1}{2}i\\ 0 & 0 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] \(v_{1}\) is the base variable and \(v_{2}=t\) is the free variable. First row gives\begin {align*} v_{1}+\left (\frac {1}{2}+\frac {1}{2}i\right ) v_{2} & =0\\ v_{1} & =\left (-\frac {1}{2}-\frac {1}{2}i\right ) v_{2}\\ & =\left (-\frac {1}{2}-\frac {1}{2}i\right ) t \end {align*}

Therefore the eigenvector is \[ \vec {v}_{\lambda _{1}}=\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =t\begin {bmatrix} -\frac {1}{2}-\frac {1}{2}i\\ 1 \end {bmatrix} =\frac {t}{2}\begin {bmatrix} -1-i\\ 2 \end {bmatrix} \] Choosing \(t=2\) (any nonzero value will work), the eigenvector is\[ \vec {v}_{\lambda _{1}}=\begin {bmatrix} -1-i\\ 2 \end {bmatrix} \] \(\lambda _{2}=4-3i\)

We need to solve \(A\vec {v}=\lambda _{2}\vec {v}\) for vector \(\vec {v}\). We could follow the same steps as above to find the second eigenvector, but since \(A\) is real and the eigenvalues are complex conjugates, the eigenvectors must come as complex conjugate pairs. Hence \(\vec {v}_{\lambda _{2}}\) can be found directly by conjugating the first eigenvector\begin {align*} \vec {v}_{\lambda _{2}} & =\left (\vec {v}_{\lambda _{1}}\right ) ^{\ast }\\ & =\begin {bmatrix} -1+i\\ 2 \end {bmatrix} \end {align*}

Summary table

eigenvalue | Algebraic multiplicity | Geometric multiplicity | defective? | eigenvector
\(\lambda _{1}=4+3i\) | \(1\) | \(1\) | No | \(\begin {bmatrix} -1-i\\ 2 \end {bmatrix} \)
\(\lambda _{2}=4-3i\) | \(1\) | \(1\) | No | \(\begin {bmatrix} -1+i\\ 2 \end {bmatrix} \)

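For the complex pair, the cleanest check is again the definition \(A\vec {v}=\lambda \vec {v}\), applied to both conjugate eigenpairs:

```python
from sympy import Matrix, I

A = Matrix([[7, 3],
            [-6, 1]])
v1 = Matrix([-1 - I, 2])   # eigenvector for 4 + 3i
v2 = Matrix([-1 + I, 2])   # its complex conjugate, for 4 - 3i

assert (A * v1).expand() == ((4 + 3*I) * v1).expand()
assert (A * v2).expand() == ((4 - 3*I) * v2).expand()
```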
2.4.10 Problem 9

If \(v_{1}=\begin {bmatrix} 1\\ -1 \end {bmatrix} \) and \(v_{2}=\begin {bmatrix} 2\\ 1 \end {bmatrix} \) are eigenvectors of the matrix \(A\) corresponding to the eigenvalues \(\lambda _{1}=2,\lambda _{2}=-3\) respectively, find \(A\left (3v_{1}-v_{2}\right ) \)

Solution

By definition \[ Av=\lambda v \] Where \(\lambda \) is the eigenvalue and \(v\) is the corresponding eigenvector. Therefore by linearity of operator \(A\)\begin {align*} A\left (3v_{1}-v_{2}\right ) & =A\left (3v_{1}\right ) -Av_{2}\\ & =3Av_{1}-Av_{2}\\ & =3\left (\lambda _{1}v_{1}\right ) -\left (\lambda _{2}v_{2}\right ) \\ & =3\left (2\begin {bmatrix} 1\\ -1 \end {bmatrix} \right ) -\left (-3\begin {bmatrix} 2\\ 1 \end {bmatrix} \right ) \\ & =3\left ( \begin {bmatrix} 2\\ -2 \end {bmatrix} \right ) +\begin {bmatrix} 6\\ 3 \end {bmatrix} \\ & =\begin {bmatrix} 6\\ -6 \end {bmatrix} +\begin {bmatrix} 6\\ 3 \end {bmatrix} \\ & =\begin {bmatrix} 6+6\\ -6+3 \end {bmatrix} \\ & =\begin {bmatrix} 12\\ -3 \end {bmatrix} \end {align*}
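
The point of the problem is that \(A\) itself is never needed; the sketch below mirrors the computation, using only the eigenvalue relation and linearity:

```python
from sympy import Matrix

v1, v2 = Matrix([1, -1]), Matrix([2, 1])   # given eigenvectors
lam1, lam2 = 2, -3                         # corresponding eigenvalues

# A(3 v1 - v2) = 3 lam1 v1 - lam2 v2, by linearity and A v = lam v
result = 3*lam1*v1 - lam2*v2
assert result == Matrix([12, -3])
```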

2.4.11 Problem 10

Determine the multiplicity of each eigenvalue and a basis for each eigenspace of the given matrix \(A\). Determine the dimension of each eigenspace and state whether the matrix is defective or nondefective.\[ A=\begin {bmatrix} 1 & 4\\ 2 & 3 \end {bmatrix} \] Solution

The eigenvalues are found by solving \begin {align*} \left \vert A-\lambda I\right \vert & =0\\ \det \left ( \begin {bmatrix} 1 & 4\\ 2 & 3 \end {bmatrix} -\begin {bmatrix} \lambda & 0\\ 0 & \lambda \end {bmatrix} \right ) & =0\\\begin {vmatrix} 1-\lambda & 4\\ 2 & 3-\lambda \end {vmatrix} & =0\\ \left (1-\lambda \right ) \left (3-\lambda \right ) -8 & =0\\ \lambda ^{2}-4\lambda +3-8 & =0\\ \lambda ^{2}-4\lambda -5 & =0\\ \left (\lambda -5\right ) \left (\lambda +1\right ) & =0 \end {align*}

Hence the eigenvalues are \(\lambda _{1}=5\) with multiplicity \(1\) and \(\lambda _{2}=-1\) with multiplicity \(1\). For each eigenvalue, we now find the corresponding eigenvector.

\(\lambda _{1}=5\)

We need to solve \(A\vec {v}=\lambda _{1}\vec {v}\) for vector \(\vec {v}\). This gives\[\begin {bmatrix} 1-\lambda _{1} & 4\\ 2 & 3-\lambda _{1}\end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] But \(\lambda _{1}=5\). The above becomes\[\begin {bmatrix} -4 & 4\\ 2 & -2 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] \(R_{2}=R_{2}+\frac {1}{2}R_{1}\) gives \(\begin {bmatrix} -4 & 4\\ 0 & 0 \end {bmatrix} \). Hence \(v_{1}\) is base variable and \(v_{2}=t\) is free variable. Therefore the system becomes\[\begin {bmatrix} -4 & 4\\ 0 & 0 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] Using first row gives\begin {align*} -4v_{1}+4v_{2} & =0\\ v_{1} & =v_{2}\\ & =t \end {align*}

\[ \vec {v}_{\lambda _{1}}=\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} t\\ t \end {bmatrix} =t\begin {bmatrix} 1\\ 1 \end {bmatrix} \] By choosing \(t=1\)\[ \vec {v}_{\lambda _{1}}=\begin {bmatrix} 1\\ 1 \end {bmatrix} \] \(\lambda _{2}=-1\)

We need to solve \(A\vec {v}=\lambda _{2}\vec {v}\) for vector \(\vec {v}\). This gives (as was done above)\[\begin {bmatrix} 1-\lambda _{2} & 4\\ 2 & 3-\lambda _{2}\end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] But \(\lambda _{2}=-1\). The above becomes\[\begin {bmatrix} 2 & 4\\ 2 & 4 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] \(R_{2}=R_{2}-R_{1}\) gives \(\begin {bmatrix} 2 & 4\\ 0 & 0 \end {bmatrix} \). Hence \(v_{1}\) is the base variable and \(v_{2}=t\) is the free variable. Therefore the system becomes\[\begin {bmatrix} 2 & 4\\ 0 & 0 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] The first row gives\begin {align*} 2v_{1}+4v_{2} & =0\\ v_{1} & =-2v_{2}\\ & =-2t \end {align*}

Choosing \(t=1\), the eigenvector is \[ \vec {v}_{\lambda _{2}}=\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} -2\\ 1 \end {bmatrix} \] Summary table



eigenvalue eigenvector


\(\lambda _{1}=5\) \(\begin {bmatrix} 1\\ 1 \end {bmatrix} \)


\(\lambda _{2}=-1\) \(\begin {bmatrix} -2\\ 1 \end {bmatrix} \)


The matrix is not defective because we found two distinct eigenvalues for a \(2\times 2\) matrix, each with a corresponding eigenvector. The dimension of the eigenspace corresponding to each eigenvalue is given by the dimension of the null space of \(A-\lambda I\), where \(\lambda \) is the eigenvalue and \(I\) is the identity matrix. For \(\lambda _{1}=5\), since there was one free variable, the dimension of this eigenspace is one.

Similarly for \(\lambda _{2}=-1\), since there was one free variable, the dimension of this eigenspace is one.
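The eigenpairs above can be cross-checked numerically. The following is a small sketch (not part of the assigned solution) assuming NumPy is available; `np.linalg.eig` returns unit-norm eigenvectors, which are scalar multiples of the hand-computed ones, so the check below applies \(A\) to the hand-computed vectors directly:

```python
import numpy as np

# Matrix from this problem
A = np.array([[1.0, 4.0],
              [2.0, 3.0]])

# eig returns the eigenvalues and unit-norm eigenvectors (as columns)
eigvals, eigvecs = np.linalg.eig(A)
print(sorted(eigvals.round(6).tolist()))  # -> [-1.0, 5.0]

# Verify the hand-computed eigenvectors directly: A v = lambda v
v1 = np.array([1.0, 1.0])    # for lambda = 5
v2 = np.array([-2.0, 1.0])   # for lambda = -1
assert np.allclose(A @ v1, 5.0 * v1)
assert np.allclose(A @ v2, -1.0 * v2)
```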

2.4.12 Problem 11

Determine whether the given matrix A is diagonalizable\[ A=\begin {bmatrix} -1 & -2\\ -2 & 2 \end {bmatrix} \] Solution

A matrix is diagonalizable if it is not defective. The eigenvalues are found by solving \begin {align*} \left \vert A-\lambda I\right \vert & =0\\ \det \left ( \begin {bmatrix} -1 & -2\\ -2 & 2 \end {bmatrix} -\begin {bmatrix} \lambda & 0\\ 0 & \lambda \end {bmatrix} \right ) & =0\\\begin {vmatrix} -1-\lambda & -2\\ -2 & 2-\lambda \end {vmatrix} & =0\\ \left (-1-\lambda \right ) \left (2-\lambda \right ) -4 & =0\\ \lambda ^{2}-\lambda -2-4 & =0\\ \lambda ^{2}-\lambda -6 & =0\\ \left (\lambda -3\right ) \left (\lambda +2\right ) & =0 \end {align*}

Hence the eigenvalues are \(\lambda _{1}=3,\lambda _{2}=-2\). For each eigenvalue, we now find the corresponding eigenvector.

\(\lambda _{1}=3\)

We need to solve \(A\vec {v}=\lambda _{1}\vec {v}\) for vector \(\vec {v}\). This gives\[\begin {bmatrix} -1-\lambda _{1} & -2\\ -2 & 2-\lambda _{1}\end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] But \(\lambda _{1}=3\). The above becomes\[\begin {bmatrix} -4 & -2\\ -2 & -1 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] \(R_{2}=R_{2}-\frac {1}{2}R_{1}\) gives \(\begin {bmatrix} -4 & -2\\ 0 & 0 \end {bmatrix} \). Hence \(v_{1}\) is base variable and \(v_{2}=t\) is free variable. Therefore the system becomes\[\begin {bmatrix} -4 & -2\\ 0 & 0 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] First row gives\begin {align*} -4v_{1}-2v_{2} & =0\\ v_{1} & =-\frac {1}{2}v_{2}\\ & =-\frac {1}{2}t \end {align*}

Therefore the eigenvector is \[ \vec {v}_{\lambda _{1}}=\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =t\begin {bmatrix} -\frac {1}{2}\\ 1 \end {bmatrix} \] Choosing \(t=2\) to clear the fraction gives\[ \vec {v}_{\lambda _{1}}=\begin {bmatrix} -1\\ 2 \end {bmatrix} \] \(\lambda _{2}=-2\)

We need to solve \(A\vec {v}=\lambda _{2}\vec {v}\) for vector \(\vec {v}\). This gives\[\begin {bmatrix} -1-\lambda _{2} & -2\\ -2 & 2-\lambda _{2}\end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] But \(\lambda _{2}=-2\). The above becomes\[\begin {bmatrix} 1 & -2\\ -2 & 4 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] \(R_{2}=R_{2}+2R_{1}\) gives \(\begin {bmatrix} 1 & -2\\ 0 & 0 \end {bmatrix} \). Hence \(v_{1}\) is base variable and \(v_{2}=t\) is free variable. Therefore the system becomes\[\begin {bmatrix} 1 & -2\\ 0 & 0 \end {bmatrix}\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 0\\ 0 \end {bmatrix} \] First row gives\begin {align*} v_{1}-2v_{2} & =0\\ v_{1} & =2v_{2}\\ & =2t \end {align*}

Therefore the eigenvector is \[ \vec {v}_{\lambda _{2}}=\begin {bmatrix} v_{1}\\ v_{2}\end {bmatrix} =\begin {bmatrix} 2t\\ t \end {bmatrix} =t\begin {bmatrix} 2\\ 1 \end {bmatrix} \] Choosing \(t=1\) gives\[ \vec {v}_{\lambda _{2}}=\begin {bmatrix} 2\\ 1 \end {bmatrix} \] Summary table



eigenvalue eigenvector


\(\lambda _{1}=3\) \(\begin {bmatrix} -1\\ 2 \end {bmatrix} \)


\(\lambda _{2}=-2\) \(\begin {bmatrix} 2\\ 1 \end {bmatrix} \)


Since the matrix is not defective (because it has two distinct eigenvalues), it is diagonalizable. To show this, let \(P\) be the matrix whose columns are the eigenvectors found, and let \(D\) be the diagonal matrix with the eigenvalues on its diagonal. Then we can write\[ A=PDP^{-1}\] Where \(D=\begin {bmatrix} 3 & 0\\ 0 & -2 \end {bmatrix} \) and \(P=\begin {bmatrix} -1 & 2\\ 2 & 1 \end {bmatrix} \). Hence\begin {align*} A & =\begin {bmatrix} -1 & 2\\ 2 & 1 \end {bmatrix}\begin {bmatrix} 3 & 0\\ 0 & -2 \end {bmatrix}\begin {bmatrix} -1 & 2\\ 2 & 1 \end {bmatrix} ^{-1}\\ & =\begin {bmatrix} \left (-1\right ) \left (3\right ) & \left (2\right ) \left (-2\right ) \\ \left (2\right ) \left (3\right ) & \left (1\right ) \left (-2\right ) \end {bmatrix}\begin {bmatrix} -1 & 2\\ 2 & 1 \end {bmatrix} ^{-1}\\ & =\begin {bmatrix} -3 & -4\\ 6 & -2 \end {bmatrix}\begin {bmatrix} -1 & 2\\ 2 & 1 \end {bmatrix} ^{-1} \end {align*}

But \(\begin {bmatrix} -1 & 2\\ 2 & 1 \end {bmatrix} ^{-1}=\frac {1}{\left (-1\right ) \left (1\right ) -\left (2\right ) \left (2\right ) }\begin {bmatrix} 1 & -2\\ -2 & -1 \end {bmatrix} =-\frac {1}{5}\begin {bmatrix} 1 & -2\\ -2 & -1 \end {bmatrix} \). Hence the above becomes\begin {align*} A & =-\frac {1}{5}\begin {bmatrix} -3 & -4\\ 6 & -2 \end {bmatrix}\begin {bmatrix} 1 & -2\\ -2 & -1 \end {bmatrix} \\ & =-\frac {1}{5}\begin {bmatrix} \left (-3\right ) \left (1\right ) +\left (-4\right ) \left (-2\right ) & \left (-3\right ) \left (-2\right ) +\left (-4\right ) \left (-1\right ) \\ \left (6\right ) \left (1\right ) +\left (-2\right ) \left (-2\right ) & \left (6\right ) \left (-2\right ) +\left (-2\right ) \left (-1\right ) \end {bmatrix} \\ & =-\frac {1}{5}\begin {bmatrix} 5 & 10\\ 10 & -10 \end {bmatrix} \\ & =\begin {bmatrix} -1 & -2\\ -2 & 2 \end {bmatrix} \end {align*}

Verified.
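The factorization can also be confirmed numerically. A minimal NumPy sketch (assuming `numpy` is installed) rebuilds \(PDP^{-1}\) and compares it to \(A\):

```python
import numpy as np

# A, P (eigenvector columns) and D (eigenvalues) from the solution above
A = np.array([[-1.0, -2.0],
              [-2.0,  2.0]])
P = np.array([[-1.0, 2.0],
              [ 2.0, 1.0]])
D = np.diag([3.0, -2.0])

# P D P^{-1} should reproduce A (up to floating point)
assert np.allclose(P @ D @ np.linalg.inv(P), A)

# Each column of P is an eigenvector for the matching diagonal entry of D
assert np.allclose(A @ P[:, 0],  3.0 * P[:, 0])
assert np.allclose(A @ P[:, 1], -2.0 * P[:, 1])
print("diagonalization verified")
```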

2.4.13 Problem 12

   2.4.13.1 Part a
   2.4.13.2 Part b
   2.4.13.3 Part c

Determine the general solution to the given differential equations: a) \(y^{\prime \prime }-y^{\prime }-2y=0\). b) \(y^{\prime \prime }+10y^{\prime }+25y=0\). c) \(y^{\prime \prime }+6y^{\prime }+11y=0\).

Solution

2.4.13.1 Part a

This is a second order linear ODE with constant coefficients. Hence it is solved using the characteristic polynomial method. Assuming a solution of the form \(y=e^{\lambda x}\) and substituting it into the ODE gives\[ \lambda ^{2}e^{\lambda x}-\lambda e^{\lambda x}-2e^{\lambda x}=0 \] Since \(e^{\lambda x}\neq 0\), the above simplifies to \begin {align*} \lambda ^{2}-\lambda -2 & =0\\ \left (\lambda +1\right ) \left (\lambda -2\right ) & =0 \end {align*}

The roots are \(\lambda _{1}=-1,\lambda _{2}=2\). Therefore there are two basis solutions: \(y_{1}=e^{\lambda _{1}x}=e^{-x}\) and \(y_{2}=e^{\lambda _{2}x}=e^{2x}\). The general solution is a linear combination of these basis solutions\begin {align*} y\left (x\right ) & =c_{1}y_{1}\left (x\right ) +c_{2}y_{2}\left ( x\right ) \\ & =c_{1}e^{-x}+c_{2}e^{2x} \end {align*}

Where \(c_{1},c_{2}\) are the constants of integration.
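The same answer can be obtained symbolically. A short SymPy sketch (assuming `sympy` is installed) solves the ODE and confirms each claimed basis solution:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Solve y'' - y' - 2y = 0 symbolically
sol = sp.dsolve(y(x).diff(x, 2) - y(x).diff(x) - 2*y(x), y(x))
print(sol.rhs)  # a linear combination of exp(-x) and exp(2*x)

# Each claimed basis solution must satisfy the ODE
for yb in (sp.exp(-x), sp.exp(2*x)):
    assert sp.simplify(yb.diff(x, 2) - yb.diff(x) - 2*yb) == 0
```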

2.4.13.2 Part b

This is a second order linear ODE with constant coefficients. Hence it is solved using the characteristic polynomial method. Assuming a solution of the form \(y=e^{\lambda x}\) and substituting it into the ODE gives\[ \lambda ^{2}e^{\lambda x}+10\lambda e^{\lambda x}+25e^{\lambda x}=0 \] Since \(e^{\lambda x}\neq 0\), the above simplifies to \begin {align*} \lambda ^{2}+10\lambda +25 & =0\\ \left (\lambda +5\right ) \left (\lambda +5\right ) & =0 \end {align*}

Hence the roots are \(\lambda =-5\), which is a double root. Since the root is repeated, the first basis solution is \(y_{1}=e^{-5x}\) and the second is \(x\) times the first, which gives \(y_{2}=xe^{-5x}\).

The general solution is a linear combination of these basis solutions \begin {align*} y\left (x\right ) & =c_{1}y_{1}\left (x\right ) +c_{2}y_{2}\left ( x\right ) \\ & =c_{1}e^{-5x}+c_{2}xe^{-5x} \end {align*}
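The repeated-root case can be checked the same way; in particular, the extra factor of \(x\) in the second basis solution is what makes it independent of the first. A SymPy sketch (assuming `sympy` is available):

```python
import sympy as sp

x = sp.symbols('x')

# Both basis solutions for y'' + 10y' + 25y = 0 with the double root -5,
# including the x*exp(-5x) term required for the repeated root
for yb in (sp.exp(-5*x), x*sp.exp(-5*x)):
    residual = yb.diff(x, 2) + 10*yb.diff(x) + 25*yb
    assert sp.simplify(residual) == 0

# The Wronskian is nonzero everywhere, so the two solutions are independent
W = sp.simplify(sp.Matrix([[sp.exp(-5*x), x*sp.exp(-5*x)],
                           [sp.exp(-5*x).diff(x), (x*sp.exp(-5*x)).diff(x)]]).det())
print(W)  # exp(-10*x), which never vanishes
```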

2.4.13.3 Part c

This is a second order linear ODE with constant coefficients. Hence it is solved using the characteristic polynomial method. Assuming a solution of the form \(y=e^{\lambda x}\) and substituting it into the ODE gives\[ \lambda ^{2}e^{\lambda x}+6\lambda e^{\lambda x}+11e^{\lambda x}=0 \] Since \(e^{\lambda x}\neq 0\), the above simplifies to \[ \lambda ^{2}+6\lambda +11=0 \] Using the quadratic formula \(\lambda =-\frac {b}{2a}\pm \frac {1}{2a}\sqrt {b^{2}-4ac}\) gives \begin {align*} \lambda & =\frac {-6}{2}\pm \frac {1}{2}\sqrt {36-4\left (11\right ) }\\ & =-3\pm \frac {1}{2}\sqrt {36-44}\\ & =-3\pm \frac {1}{2}\sqrt {-8}\\ & =-3\pm \sqrt {-2}\\ & =-3\pm i\sqrt {2} \end {align*}

The roots are \(\lambda _{1}=-3+i\sqrt {2},\lambda _{2}=-3-i\sqrt {2}\). Therefore there are two basis solutions: \begin {align*} y_{1} & =e^{\lambda _{1}x}\\ & =e^{\left (-3+i\sqrt {2}\right ) x}\\ & =e^{-3x}e^{i\sqrt {2}x} \end {align*}

And \begin {align*} y_{2} & =e^{\lambda _{2}x}\\ & =e^{\left (-3-i\sqrt {2}\right ) x}\\ & =e^{-3x}e^{-i\sqrt {2}x} \end {align*}

The general solution is a linear combination of these basis solutions. Therefore\begin {align*} y\left (x\right ) & =c_{1}y_{1}\left (x\right ) +c_{2}y_{2}\left ( x\right ) \\ & =c_{1}e^{-3x}e^{i\sqrt {2}x}+c_{2}e^{-3x}e^{-i\sqrt {2}x}\\ & =e^{-3x}\left (c_{1}e^{i\sqrt {2}x}+c_{2}e^{-i\sqrt {2}x}\right ) \end {align*}

Using Euler's formula, \(e^{i\sqrt {2}x}=\cos \left (\sqrt {2}x\right ) +i\sin \left (\sqrt {2}x\right ) \) and \(e^{-i\sqrt {2}x}=\cos \left (\sqrt {2}x\right ) -i\sin \left (\sqrt {2}x\right ) \). The above becomes\begin {align*} y\left (x\right ) & =e^{-3x}\left (c_{1}\left (\cos \left (\sqrt {2}x\right ) +i\sin \left (\sqrt {2}x\right ) \right ) +c_{2}\left ( \cos \left (\sqrt {2}x\right ) -i\sin \left (\sqrt {2}x\right ) \right ) \right ) \\ & =e^{-3x}\left (\cos \left (\sqrt {2}x\right ) \left (c_{1}+c_{2}\right ) +\sin \left (\sqrt {2}x\right ) i\left (c_{1}-c_{2}\right ) \right ) \end {align*}

Let \(c_{1}+c_{2}=C_{1}\) and \(i\left (c_{1}-c_{2}\right ) =C_{2}\) be new constants. Hence the above becomes\[ y\left (x\right ) =e^{-3x}\left (C_{1}\cos \left (\sqrt {2}x\right ) +C_{2}\sin \left (\sqrt {2}x\right ) \right ) \]
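The real form of the solution can be verified symbolically. A SymPy sketch (assuming `sympy` is installed) checks that both real basis functions satisfy the ODE:

```python
import sympy as sp

x = sp.symbols('x')

# Real basis solutions for y'' + 6y' + 11y = 0,
# coming from the complex roots -3 +/- i*sqrt(2)
y1 = sp.exp(-3*x) * sp.cos(sp.sqrt(2)*x)
y2 = sp.exp(-3*x) * sp.sin(sp.sqrt(2)*x)

for yb in (y1, y2):
    residual = yb.diff(x, 2) + 6*yb.diff(x) + 11*yb
    assert sp.simplify(residual) == 0
print("both real basis solutions satisfy the ODE")
```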