2.8 HW 8

  2.8.1 HW 8 questions
  2.8.2 Problem 1
  2.8.3 Problem 2
  2.8.4 Problem 3
  2.8.5 Problem 4
  2.8.6 Key solution for HW 8

2.8.1 HW 8 questions

PDF (letter size)
PDF (legal size)

2.8.2 Problem 1

   2.8.2.1 Part 1 \(\left ( AB\right ) ^{T}=B^{T}A^{T}\)
   2.8.2.2 Part 2 \(\left ( AB\right ) ^{\dag }=B^{\dag }A^{\dag }\)
   2.8.2.3 Part 3 \(\operatorname{Tr}\left ( AB\right ) =\operatorname{Tr}\left ( BA\right ) \)
   2.8.2.4 Part 4 \(\det \left ( A^{T}\right ) =\det A\)
   2.8.2.5 Part 5 \(\det \left ( AB\right ) =\det \left ( A\right ) \det \left ( B\right ) \)

Figure 2.32: Problem statement
2.8.2.1 Part 1 \(\left ( AB\right ) ^{T}=B^{T}A^{T}\)

Let \(A\) be an \(n\times m\) matrix and \(B\) an \(m\times p\) matrix, so that \(C=AB\) is an \(n\times p\) matrix. By the definition of the matrix product (rows of \(A\) multiply columns of \(B\)), the \(ij\) element of \(C\) is\[ c_{ij}=\sum _{k=1}^{m}a_{ik}b_{kj}\] Then \(\left ( AB\right ) ^{T}=C^{T}\), whose \(ij\) element is\begin{equation} \left ( C^{T}\right ) _{ij}=c_{ji}=\sum _{k=1}^{m}a_{jk}b_{ki} \tag{1} \end{equation} Now let \(B^{T}A^{T}=Q\), where \(B^{T}\) has order \(p\times m\) and \(A^{T}\) has order \(m\times n\), so \(Q\) is \(p\times n\). Its \(ij\) element is\begin{align*} q_{ij} & =\sum _{k=1}^{m}\left ( B^{T}\right ) _{ik}\left ( A^{T}\right ) _{kj}\\ & =\sum _{k=1}^{m}b_{ki}a_{jk} \end{align*}

Since the entries \(b_{ki}\) and \(a_{jk}\) are scalars, each product can be reordered, so\begin{equation} q_{ij}=\sum _{k=1}^{m}a_{jk}b_{ki} \tag{2} \end{equation} Comparing (1) and (2) shows they are the same. Hence\[ C^{T}=Q \] Or \[ \left ( AB\right ) ^{T}=B^{T}A^{T}\]
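As a quick numerical sanity check (a NumPy sketch, not part of the assignment), the identity can be tested on random rectangular matrices of the same shapes used in the proof:

```python
import numpy as np

# Check (AB)^T = B^T A^T on random rectangular matrices:
# A is n x m and B is m x p, exactly as in the proof above.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # n x m
B = rng.standard_normal((4, 5))   # m x p

lhs = (A @ B).T                   # transpose of the product, p x n
rhs = B.T @ A.T                   # product of transposes, p x n
assert np.allclose(lhs, rhs)
```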

2.8.2.2 Part 2 \(\left ( AB\right ) ^{\dag }=B^{\dag }A^{\dag }\)

By definition \(A^{\dag }=\left ( A^{T}\right ) ^{\ast }\): take the transpose of \(A\), then apply the complex conjugate to its entries. The proof therefore follows Part 1, with a complex conjugate applied at each step.

Let \(A\) be an \(n\times m\) matrix and \(B\) an \(m\times p\) matrix, so that \(C=AB\) is an \(n\times p\) matrix with\[ c_{ij}=\sum _{k=1}^{m}a_{ik}b_{kj}\] Then \(\left ( AB\right ) _{ij}^{\dag }=\left ( C_{ij}^{T}\right ) ^{\ast }=c_{ji}^{\ast }\). Hence from above\[ c_{ji}^{\ast }=\sum _{k=1}^{m}\left ( a_{jk}b_{ki}\right ) ^{\ast }\] But the complex conjugate of a product is the product of the complex conjugates, so\begin{equation} c_{ji}^{\ast }=\sum _{k=1}^{m}a_{jk}^{\ast }b_{ki}^{\ast } \tag{1} \end{equation} Now let \(B^{\dag }A^{\dag }=Q\). Then\begin{align*} q_{ij} & =\sum _{k=1}^{m}\left ( B^{T}\right ) _{ik}^{\ast }\left ( A^{T}\right ) _{kj}^{\ast }\\ & =\sum _{k=1}^{m}b_{ki}^{\ast }a_{jk}^{\ast } \end{align*}

Since the entries are scalars, each product can be reordered, giving\begin{equation} q_{ij}=\sum _{k=1}^{m}a_{jk}^{\ast }b_{ki}^{\ast } \tag{2} \end{equation} Comparing (1) and (2) shows they are the same. Hence\[ \left ( C^{T}\right ) ^{\ast }=Q \] Or \[ \left ( AB\right ) ^{\dag }=B^{\dag }A^{\dag }\]
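The same kind of numerical sanity check works for the conjugate transpose (a NumPy sketch; \(\dag\) is `.conj().T` in NumPy):

```python
import numpy as np

# Check (AB)† = B† A† on random complex rectangular matrices,
# where † is the conjugate transpose.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))

lhs = (A @ B).conj().T
rhs = B.conj().T @ A.conj().T
assert np.allclose(lhs, rhs)
```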

2.8.2.3 Part 3 \(\operatorname{Tr}\left ( AB\right ) =\operatorname{Tr}\left ( BA\right ) \)

The trace \(\operatorname{Tr}\) of a square matrix is the sum of its diagonal elements. Let \(A\) be an \(n\times m\) matrix and \(B\) an \(m\times n\) matrix, so that \(AB\) is \(n\times n\) and \(BA\) is \(m\times m\), and both traces are defined.\begin{align*} \operatorname{Tr}\left ( AB\right ) & =\sum _{i=1}^{n}\left ( AB\right ) _{ii}\\ & =\sum _{i=1}^{n}\left ( \sum _{j=1}^{m}a_{ij}b_{ji}\right ) \\ & =\sum _{j=1}^{m}\left ( \sum _{i=1}^{n}b_{ji}a_{ij}\right ) \\ & =\sum _{j=1}^{m}\left ( BA\right ) _{jj}\\ & =\operatorname{Tr}\left ( BA\right ) \end{align*}
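A numerical check (NumPy sketch, not part of the assignment) makes the rectangular case concrete: \(AB\) and \(BA\) have different sizes, yet the traces agree.

```python
import numpy as np

# Check Tr(AB) = Tr(BA) with rectangular factors: A is n x m and
# B is m x n, so AB is n x n while BA is m x m.
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 3))
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```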

2.8.2.4 Part 4 \(\det \left ( A^{T}\right ) =\det A\)

Proof by induction. For the base case take \(n=1\), so \(A\) is \(1\times 1\); clearly \(\det \left ( A\right ) =\det \left ( A^{T}\right ) \) in this case. (We could equally well have taken \(n=2\) as the base case; any base case works in a proof by induction.)

We now assume the statement is true for the \(n-1\) case, i.e. that \(\det \left ( A_{\left ( n-1\right ) \times \left ( n-1\right ) }\right ) =\det \left ( A_{\left ( n-1\right ) \times \left ( n-1\right ) }^{T}\right ) \). This is the induction hypothesis.

We now need to show it is true for the case of \(n\), i.e. that \(\det \left ( A_{n\times n}\right ) =\det \left ( A_{n\times n}^{T}\right ) \). Let\[ A_{n\times n}=\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \cdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn}\end{pmatrix} \] Therefore\[ A_{n\times n}^{T}=\begin{pmatrix} a_{11} & a_{21} & \cdots & a_{n1}\\ a_{12} & a_{22} & \cdots & a_{n2}\\ \vdots & \cdots & \ddots & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{nn}\end{pmatrix} \] Now we expand \(\det \left ( A\right ) \) by cofactors along the first row, which gives\begin{equation} \det \left ( A\right ) =a_{11}\det \left ( A_{11}\right ) -a_{12}\det \left ( A_{12}\right ) +\cdots +\left ( -1\right ) ^{n+1}a_{1n}\det \left ( A_{1n}\right ) \tag{1} \end{equation} Where \(A_{ij}\) means the \(\left ( n-1\right ) \times \left ( n-1\right ) \) matrix obtained from \(A_{n\times n}\) by removing the \(i^{th}\) row and the \(j^{th}\) column. Now we do the same for \(A^{T}\), but instead of expanding along the first row, we expand along the first column of \(A^{T}\), since we can pick any row or any column to expand along in order to find the determinant. This gives\begin{equation} \det \left ( A^{T}\right ) =a_{11}\det \left ( A^{T}\right ) _{11}-a_{12}\det \left ( A^{T}\right ) _{21}+\cdots +\left ( -1\right ) ^{n+1}a_{1n}\det \left ( A^{T}\right ) _{n1} \tag{2} \end{equation} For (1) to equal (2) we need \(\det \left ( A_{11}\right ) =\det \left ( A^{T}\right ) _{11}\), \(\det \left ( A_{12}\right ) =\det \left ( A^{T}\right ) _{21}\), and so on up to \(\det \left ( A_{1n}\right ) =\det \left ( A^{T}\right ) _{n1}\). But the minor \(\left ( A^{T}\right ) _{k1}\) is exactly the transpose of \(A_{1k}\), and both are of order \(\left ( n-1\right ) \times \left ( n-1\right ) \), so these equalities hold by the induction hypothesis. Hence (1) is the same as (2). This completes the proof.
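A numerical spot check of the result (a NumPy sketch, not a substitute for the induction argument):

```python
import numpy as np

# Check det(A^T) = det(A) on a random square matrix.
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
assert np.isclose(np.linalg.det(A), np.linalg.det(A.T))
```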

2.8.2.5 Part 5 \(\det \left ( AB\right ) =\det \left ( A\right ) \det \left ( B\right ) \)

Since the matrices are diagonal they must be square, and since the product \(AB\) is defined they must both have the same dimension, say \(n\times n\).

Since \(A,B\) are diagonal, \begin{align*} \det \left ( A\right ) & =a_{11}a_{22}\cdots a_{nn}={\displaystyle \prod \limits _{i=1}^{n}} a_{ii}\\ \det \left ( B\right ) & =b_{11}b_{22}\cdots b_{nn}={\displaystyle \prod \limits _{i=1}^{n}} b_{ii} \end{align*}

Now since \(A,B\) are diagonal, the product is also diagonal. Using the definition that a row of \(A\) multiplies a column of \(B\), we get\[\begin{pmatrix} a_{11} & 0 & 0 & 0\\ 0 & a_{22} & 0 & 0\\ 0 & 0 & \ddots & 0\\ 0 & 0 & 0 & a_{nn}\end{pmatrix}\begin{pmatrix} b_{11} & 0 & 0 & 0\\ 0 & b_{22} & 0 & 0\\ 0 & 0 & \ddots & 0\\ 0 & 0 & 0 & b_{nn}\end{pmatrix} =\begin{pmatrix} a_{11}b_{11} & 0 & 0 & 0\\ 0 & a_{22}b_{22} & 0 & 0\\ 0 & 0 & \ddots & 0\\ 0 & 0 & 0 & a_{nn}b_{nn}\end{pmatrix} \] Then we see that \begin{align*} \det \left ( AB\right ) & =\left ( a_{11}b_{11}\right ) \left ( a_{22}b_{22}\right ) \cdots \left ( a_{nn}b_{nn}\right ) \\ & =\left ( a_{11}a_{22}\cdots a_{nn}\right ) \left ( b_{11}b_{22}\cdots b_{nn}\right ) \\ & ={\displaystyle \prod \limits _{i=1}^{n}} a_{ii}{\displaystyle \prod \limits _{i=1}^{n}} b_{ii}\\ & =\det \left ( A\right ) \det \left ( B\right ) \end{align*}
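The diagonal case can be checked numerically on a small example (NumPy sketch; the particular diagonal entries below are arbitrary):

```python
import numpy as np

# For diagonal A and B, AB is diagonal with entries a_ii * b_ii,
# so det(AB) = prod(a_ii) * prod(b_ii) = det(A) det(B).
a = np.array([2.0, -1.0, 3.0])
b = np.array([0.5, 4.0, -2.0])
A, B = np.diag(a), np.diag(b)

assert np.allclose(A @ B, np.diag(a * b))          # product is diagonal
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
```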

2.8.3 Problem 2

Figure 2.33: Problem statement

We first need to find the eigenvalues \(\lambda \) by solving\[ \det \left ( A-\lambda I\right ) =0 \] The above gives a polynomial of order 3. \begin{align*} \left \vert \begin{pmatrix} \frac{5}{2} & \sqrt{\frac{3}{2}} & \sqrt{\frac{3}{4}}\\ \sqrt{\frac{3}{2}} & \frac{7}{3} & \sqrt{\frac{1}{18}}\\ \sqrt{\frac{3}{4}} & \sqrt{\frac{1}{18}} & \frac{13}{6}\end{pmatrix} -\begin{pmatrix} \lambda & 0 & 0\\ 0 & \lambda & 0\\ 0 & 0 & \lambda \end{pmatrix} \right \vert & =0\\\begin{vmatrix} \frac{5}{2}-\lambda & \sqrt{\frac{3}{2}} & \sqrt{\frac{3}{4}}\\ \sqrt{\frac{3}{2}} & \frac{7}{3}-\lambda & \sqrt{\frac{1}{18}}\\ \sqrt{\frac{3}{4}} & \sqrt{\frac{1}{18}} & \frac{13}{6}-\lambda \end{vmatrix} & =0\\ \left ( \frac{5}{2}-\lambda \right ) \begin{vmatrix} \frac{7}{3}-\lambda & \sqrt{\frac{1}{18}}\\ \sqrt{\frac{1}{18}} & \frac{13}{6}-\lambda \end{vmatrix} -\sqrt{\frac{3}{2}}\begin{vmatrix} \sqrt{\frac{3}{2}} & \sqrt{\frac{1}{18}}\\ \sqrt{\frac{3}{4}} & \frac{13}{6}-\lambda \end{vmatrix} +\sqrt{\frac{3}{4}}\begin{vmatrix} \sqrt{\frac{3}{2}} & \frac{7}{3}-\lambda \\ \sqrt{\frac{3}{4}} & \sqrt{\frac{1}{18}}\end{vmatrix} & =0 \end{align*}

Hence\begin{multline*} \left ( \frac{5}{2}-\lambda \right ) \left ( \left ( \frac{7}{3}-\lambda \right ) \left ( \frac{13}{6}-\lambda \right ) -\sqrt{\frac{1}{18}}\sqrt{\frac{1}{18}}\right ) \\ -\sqrt{\frac{3}{2}}\left ( \sqrt{\frac{3}{2}}\left ( \frac{13}{6}-\lambda \right ) -\sqrt{\frac{1}{18}}\sqrt{\frac{3}{4}}\right ) \\ +\sqrt{\frac{3}{4}}\left ( \sqrt{\frac{3}{2}}\sqrt{\frac{1}{18}}-\left ( \frac{7}{3}-\lambda \right ) \sqrt{\frac{3}{4}}\right ) =0 \end{multline*} Or\begin{align*} \left ( \frac{5}{2}-\lambda \right ) \left ( \lambda ^{2}-\frac{9}{2}\lambda +\frac{90}{18}\right ) -\sqrt{\frac{3}{2}}\left ( \sqrt{6}-\frac{1}{2}\sqrt{2}\sqrt{3}\lambda \right ) +\sqrt{\frac{3}{4}}\left ( \sqrt{3}\left ( \frac{1}{2}\lambda -1\right ) \right ) & =0\\ \left ( \frac{5}{2}-\lambda \right ) \left ( \lambda ^{2}-\frac{9}{2}\lambda +\frac{90}{18}\right ) +\left ( \frac{3}{2}\lambda -3\right ) +\left ( \frac{3}{4}\lambda -\frac{3}{2}\right ) & =0\\ \left ( \frac{5}{2}-\lambda \right ) \left ( \lambda ^{2}-\frac{9}{2}\lambda +\frac{90}{18}\right ) +\frac{9}{4}\lambda -\frac{9}{2} & =0\\ -\lambda ^{3}+7\lambda ^{2}-14\lambda +8 & =0\\ \lambda ^{3}-7\lambda ^{2}+14\lambda -8 & =0 \end{align*}

By inspection we see that \(\lambda =2\) is a root. Then by long division \(\frac{\lambda ^{3}-7\lambda ^{2}+14\lambda -8}{\lambda -2}=\lambda ^{2}-5\lambda +4\). Therefore the above polynomial can be written as\begin{align*} \left ( \lambda ^{2}-5\lambda +4\right ) \left ( \lambda -2\right ) & =0\\ \left ( \lambda -1\right ) \left ( \lambda -4\right ) \left ( \lambda -2\right ) & =0 \end{align*}

Hence the eigenvalues are\begin{align*} \lambda _{1} & =1\\ \lambda _{2} & =2\\ \lambda _{3} & =4 \end{align*}
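The root-finding step can be double-checked numerically (a NumPy sketch; `np.roots` takes the polynomial coefficients in descending order):

```python
import numpy as np

# Roots of the characteristic polynomial λ^3 - 7λ^2 + 14λ - 8,
# which should match the factorization (λ-1)(λ-2)(λ-4).
roots = np.sort(np.roots([1.0, -7.0, 14.0, -8.0]).real)
assert np.allclose(roots, [1.0, 2.0, 4.0])
```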

For each eigenvalue there is one corresponding eigenvector (unless it is degenerate). The eigenvectors are found by solving the following\begin{align*} Av_{i} & =\lambda _{i}v_{i}\\ \left ( A-\lambda _{i}I\right ) v_{i} & =0\\\begin{pmatrix} \frac{5}{2}-\lambda _{i} & \sqrt{\frac{3}{2}} & \sqrt{\frac{3}{4}}\\ \sqrt{\frac{3}{2}} & \frac{7}{3}-\lambda _{i} & \sqrt{\frac{1}{18}}\\ \sqrt{\frac{3}{4}} & \sqrt{\frac{1}{18}} & \frac{13}{6}-\lambda _{i}\end{pmatrix}\begin{pmatrix} v_{1}\\ v_{2}\\ v_{3}\end{pmatrix} _{i} & =\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \end{align*}

For \(\lambda _{1}=1\)\begin{align*} \begin{pmatrix} \frac{5}{2}-1 & \sqrt{\frac{3}{2}} & \sqrt{\frac{3}{4}}\\ \sqrt{\frac{3}{2}} & \frac{7}{3}-1 & \sqrt{\frac{1}{18}}\\ \sqrt{\frac{3}{4}} & \sqrt{\frac{1}{18}} & \frac{13}{6}-1 \end{pmatrix}\begin{pmatrix} v_{1}\\ v_{2}\\ v_{3}\end{pmatrix} & =\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \\\begin{pmatrix} \frac{3}{2} & \sqrt{\frac{3}{2}} & \sqrt{\frac{3}{4}}\\ \sqrt{\frac{3}{2}} & \frac{4}{3} & \sqrt{\frac{1}{18}}\\ \sqrt{\frac{3}{4}} & \sqrt{\frac{1}{18}} & \frac{7}{6}\end{pmatrix}\begin{pmatrix} v_{1}\\ v_{2}\\ v_{3}\end{pmatrix} & =\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \end{align*}

Let \(v_{1}=1\) and the above becomes\[\begin{pmatrix} \frac{3}{2} & \sqrt{\frac{3}{2}} & \sqrt{\frac{3}{4}}\\ \sqrt{\frac{3}{2}} & \frac{4}{3} & \sqrt{\frac{1}{18}}\\ \sqrt{\frac{3}{4}} & \sqrt{\frac{1}{18}} & \frac{7}{6}\end{pmatrix}\begin{pmatrix} 1\\ v_{2}\\ v_{3}\end{pmatrix} =\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \] We only need the first 2 equations. This results in\begin{align*} \frac{3}{2}+\sqrt{\frac{3}{2}}v_{2}+\sqrt{\frac{3}{4}}v_{3} & =0\\ \sqrt{\frac{3}{2}}+\frac{4}{3}v_{2}+\sqrt{\frac{1}{18}}v_{3} & =0 \end{align*}

From the first equation above \begin{equation} v_{2}=\frac{-\frac{3}{2}-\sqrt{\frac{3}{4}}v_{3}}{\sqrt{\frac{3}{2}}} \tag{4} \end{equation} Substituting in the second equation gives\begin{align*} \sqrt{\frac{3}{2}}+\frac{4}{3}\left ( \frac{-\frac{3}{2}-\sqrt{\frac{3}{4}}v_{3}}{\sqrt{\frac{3}{2}}}\right ) +\sqrt{\frac{1}{18}}v_{3} & =0\\ -\frac{1}{2}\sqrt{2}v_{3}-\frac{1}{6}\sqrt{2}\sqrt{3} & =0\\ v_{3} & =-\frac{\frac{1}{6}\sqrt{2}\sqrt{3}}{\frac{1}{2}\sqrt{2}}\\ & =-\frac{2\sqrt{3}}{6}\\ & =-\frac{\sqrt{3}}{3}\\ & =-\frac{1}{\sqrt{3}} \end{align*}

Hence from (4)\begin{align*} v_{2} & =\frac{-\frac{3}{2}-\sqrt{\frac{3}{4}}\left ( -\frac{1}{\sqrt{3}}\right ) }{\sqrt{\frac{3}{2}}}\\ & =-\frac{\sqrt{2}}{\sqrt{3}} \end{align*}

Therefore the eigenvector associated with \(\lambda _{1}=1\) is \(\begin{pmatrix} 1\\ -\frac{\sqrt{2}}{\sqrt{3}}\\ -\frac{1}{\sqrt{3}}\end{pmatrix} \) or by scaling it all by \(-\sqrt{3}\) it becomes \[ \vec{v}_{1}=\begin{pmatrix} -\sqrt{3}\\ \sqrt{2}\\ 1 \end{pmatrix} \] We now do the same for the second eigenvalue.

For \(\lambda _{2}=2\)\begin{align*} \begin{pmatrix} \frac{5}{2}-2 & \sqrt{\frac{3}{2}} & \sqrt{\frac{3}{4}}\\ \sqrt{\frac{3}{2}} & \frac{7}{3}-2 & \sqrt{\frac{1}{18}}\\ \sqrt{\frac{3}{4}} & \sqrt{\frac{1}{18}} & \frac{13}{6}-2 \end{pmatrix}\begin{pmatrix} v_{1}\\ v_{2}\\ v_{3}\end{pmatrix} & =\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \\\begin{pmatrix} \frac{1}{2} & \sqrt{\frac{3}{2}} & \sqrt{\frac{3}{4}}\\ \sqrt{\frac{3}{2}} & \frac{1}{3} & \sqrt{\frac{1}{18}}\\ \sqrt{\frac{3}{4}} & \sqrt{\frac{1}{18}} & \frac{1}{6}\end{pmatrix}\begin{pmatrix} v_{1}\\ v_{2}\\ v_{3}\end{pmatrix} & =\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \end{align*}

Let \(v_{1}=1\) and the above becomes\[\begin{pmatrix} \frac{1}{2} & \sqrt{\frac{3}{2}} & \sqrt{\frac{3}{4}}\\ \sqrt{\frac{3}{2}} & \frac{1}{3} & \sqrt{\frac{1}{18}}\\ \sqrt{\frac{3}{4}} & \sqrt{\frac{1}{18}} & \frac{1}{6}\end{pmatrix}\begin{pmatrix} 1\\ v_{2}\\ v_{3}\end{pmatrix} =\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \] We only need the first 2 equations. This results in\begin{align*} \frac{1}{2}+\sqrt{\frac{3}{2}}v_{2}+\sqrt{\frac{3}{4}}v_{3} & =0\\ \sqrt{\frac{3}{2}}+\frac{1}{3}v_{2}+\sqrt{\frac{1}{18}}v_{3} & =0 \end{align*}

From the first equation above \[ v_{2}=\frac{-\frac{1}{2}-\sqrt{\frac{3}{4}}v_{3}}{\sqrt{\frac{3}{2}}}\] Substituting in the second equation gives\begin{align*} \sqrt{\frac{3}{2}}+\frac{1}{3}\left ( \frac{-\frac{1}{2}-\sqrt{\frac{3}{4}}v_{3}}{\sqrt{\frac{3}{2}}}\right ) +\sqrt{\frac{1}{18}}v_{3} & =0\\ \sqrt{\frac{3}{2}}-\frac{1}{18}\sqrt{2}\sqrt{3}-\sqrt{\frac{1}{18}}v_{3}+\sqrt{\frac{1}{18}}v_{3} & =0\\ \sqrt{\frac{3}{2}}-\frac{1}{18}\sqrt{2}\sqrt{3} & =0 \end{align*}

This is not possible, so our choice of setting \(v_{1}=1\) does not work (there is no eigenvector with \(v_{1}\neq 0\) for this eigenvalue). Let us instead set \(v_{2}=1\) and repeat the process\[\begin{pmatrix} \frac{1}{2} & \sqrt{\frac{3}{2}} & \sqrt{\frac{3}{4}}\\ \sqrt{\frac{3}{2}} & \frac{1}{3} & \sqrt{\frac{1}{18}}\\ \sqrt{\frac{3}{4}} & \sqrt{\frac{1}{18}} & \frac{1}{6}\end{pmatrix}\begin{pmatrix} v_{1}\\ 1\\ v_{3}\end{pmatrix} =\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \] Again, we only need the first two equations. This results in\begin{align*} \frac{1}{2}v_{1}+\sqrt{\frac{3}{2}}+\sqrt{\frac{3}{4}}v_{3} & =0\\ \sqrt{\frac{3}{2}}v_{1}+\frac{1}{3}+\sqrt{\frac{1}{18}}v_{3} & =0 \end{align*}

From the first equation above \begin{equation} v_{1}=\frac{-\sqrt{\frac{3}{2}}-\sqrt{\frac{3}{4}}v_{3}}{\frac{1}{2}} \tag{4A} \end{equation} Substituting in the second equation gives\begin{align*} \sqrt{\frac{3}{2}}\left ( \frac{-\sqrt{\frac{3}{2}}-\sqrt{\frac{3}{4}}v_{3}}{\frac{1}{2}}\right ) +\frac{1}{3}+\sqrt{\frac{1}{18}}v_{3} & =0\\ -3-\frac{3}{2}\sqrt{2}v_{3}+\frac{1}{3}+\frac{1}{6}\sqrt{2}v_{3} & =0\\ v_{3}\left ( \frac{1}{6}\sqrt{2}-\frac{3}{2}\sqrt{2}\right ) & =3-\frac{1}{3}\\ -\frac{4}{3}\sqrt{2}v_{3} & =\frac{8}{3}\\ v_{3} & =-\frac{2}{\sqrt{2}}\\ & =-\sqrt{2} \end{align*}

Hence from (4A) \(v_{1}=\frac{-\sqrt{\frac{3}{2}}-\sqrt{\frac{3}{4}}\left ( -\sqrt{2}\right ) }{\frac{1}{2}}=\frac{-\sqrt{\frac{3}{2}}+\sqrt{\frac{3}{2}}}{\frac{1}{2}}=0\). Therefore the eigenvector associated with \(\lambda _{2}=2\) is \(\begin{pmatrix} 0\\ 1\\ -\sqrt{2}\end{pmatrix} \) or, scaling by \(-\frac{1}{\sqrt{2}}\), it becomes \[ \vec{v}_{2}=\begin{pmatrix} 0\\ -\frac{1}{\sqrt{2}}\\ 1 \end{pmatrix} \] We now do the same for the final eigenvalue.

For \(\lambda _{3}=4\)\begin{align*} \begin{pmatrix} \frac{5}{2}-4 & \sqrt{\frac{3}{2}} & \sqrt{\frac{3}{4}}\\ \sqrt{\frac{3}{2}} & \frac{7}{3}-4 & \sqrt{\frac{1}{18}}\\ \sqrt{\frac{3}{4}} & \sqrt{\frac{1}{18}} & \frac{13}{6}-4 \end{pmatrix}\begin{pmatrix} v_{1}\\ v_{2}\\ v_{3}\end{pmatrix} & =\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \\\begin{pmatrix} -\frac{3}{2} & \sqrt{\frac{3}{2}} & \sqrt{\frac{3}{4}}\\ \sqrt{\frac{3}{2}} & -\frac{5}{3} & \sqrt{\frac{1}{18}}\\ \sqrt{\frac{3}{4}} & \sqrt{\frac{1}{18}} & -\frac{11}{6}\end{pmatrix}\begin{pmatrix} v_{1}\\ v_{2}\\ v_{3}\end{pmatrix} & =\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \end{align*}

Let \(v_{1}=1\) and the above becomes\[\begin{pmatrix} -\frac{3}{2} & \sqrt{\frac{3}{2}} & \sqrt{\frac{3}{4}}\\ \sqrt{\frac{3}{2}} & -\frac{5}{3} & \sqrt{\frac{1}{18}}\\ \sqrt{\frac{3}{4}} & \sqrt{\frac{1}{18}} & -\frac{11}{6}\end{pmatrix}\begin{pmatrix} 1\\ v_{2}\\ v_{3}\end{pmatrix} =\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \] We only need the first 2 equations. This results in\begin{align*} -\frac{3}{2}+\sqrt{\frac{3}{2}}v_{2}+\sqrt{\frac{3}{4}}v_{3} & =0\\ \sqrt{\frac{3}{2}}-\frac{5}{3}v_{2}+\sqrt{\frac{1}{18}}v_{3} & =0 \end{align*}

From the first equation above \begin{equation} v_{2}=\frac{\frac{3}{2}-\sqrt{\frac{3}{4}}v_{3}}{\sqrt{\frac{3}{2}}} \tag{4B} \end{equation} Substituting in the second equation gives\begin{align*} \sqrt{\frac{3}{2}}-\frac{5}{3}\left ( \frac{\frac{3}{2}-\sqrt{\frac{3}{4}}v_{3}}{\sqrt{\frac{3}{2}}}\right ) +\sqrt{\frac{1}{18}}v_{3} & =0\\ \frac{5}{6}\sqrt{2}v_{3}-\frac{1}{3}\sqrt{2}\sqrt{3}+\sqrt{\frac{1}{18}}v_{3} & =0\\ \sqrt{2}v_{3}-\frac{1}{3}\sqrt{2}\sqrt{3} & =0\\ v_{3} & =\frac{\frac{1}{3}\sqrt{2}\sqrt{3}}{\sqrt{2}}\\ & =\frac{1}{3}\sqrt{3}\\ & =\frac{1}{\sqrt{3}} \end{align*}

Hence from (4B) \(v_{2}=\frac{\frac{3}{2}-\sqrt{\frac{3}{4}}\left ( \frac{1}{\sqrt{3}}\right ) }{\sqrt{\frac{3}{2}}}=\frac{1}{3}\sqrt{2}\sqrt{3}=\frac{\sqrt{2}}{\sqrt{3}}\). Therefore the eigenvector associated with \(\lambda _{3}=4\) is \(\begin{pmatrix} 1\\ \frac{\sqrt{2}}{\sqrt{3}}\\ \frac{1}{\sqrt{3}}\end{pmatrix} \) or by scaling it all by \(\sqrt{3}\) it becomes \[ \vec{v}_{3}=\begin{pmatrix} \sqrt{3}\\ \sqrt{2}\\ 1 \end{pmatrix} \] Therefore the final solution is \begin{align*} \lambda _{1} & =1\\ \lambda _{2} & =2\\ \lambda _{3} & =4 \end{align*}

And\[ \vec{v}_{1}=\begin{pmatrix} -\sqrt{3}\\ \sqrt{2}\\ 1 \end{pmatrix} ,\vec{v}_{2}=\begin{pmatrix} 0\\ -\frac{1}{\sqrt{2}}\\ 1 \end{pmatrix} ,\vec{v}_{3}=\begin{pmatrix} \sqrt{3}\\ \sqrt{2}\\ 1 \end{pmatrix} \]
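The whole eigenproblem can be verified numerically (a NumPy sketch, not part of the assignment):

```python
import numpy as np

# The matrix of Problem 2; eigvalsh applies because it is real symmetric
# and returns the eigenvalues in ascending order.
A = np.array([
    [5/2,          np.sqrt(3/2),  np.sqrt(3/4)],
    [np.sqrt(3/2), 7/3,           np.sqrt(1/18)],
    [np.sqrt(3/4), np.sqrt(1/18), 13/6],
])
assert np.allclose(np.linalg.eigvalsh(A), [1.0, 2.0, 4.0])

# The (unnormalized) eigenvectors found above satisfy A v = λ v.
v1 = np.array([-np.sqrt(3), np.sqrt(2), 1.0])
v2 = np.array([0.0, -1/np.sqrt(2), 1.0])
v3 = np.array([np.sqrt(3), np.sqrt(2), 1.0])
for lam, v in [(1.0, v1), (2.0, v2), (4.0, v3)]:
    assert np.allclose(A @ v, lam * v)
```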

2.8.4 Problem 3

Figure 2.34: Problem statement

A unitary matrix \(U\) satisfies \(U^{-1}=U^{\dag }\). Let \(\lambda ,x\) be an eigenvalue and its associated eigenvector (an eigenvector is nonzero by definition). Hence\begin{equation} Ux=\lambda x \tag{1} \end{equation} Applying the \(\dag \) operation (i.e. transpose followed by complex conjugate) to the above gives\begin{align} \left ( Ux\right ) ^{\dag } & =\left ( \lambda x\right ) ^{\dag }\nonumber \\ x^{\dag }U^{\dag } & =x^{\dag }\lambda ^{\ast } \tag{2} \end{align}

Multiplying (2) by (1) gives\[ x^{\dag }U^{\dag }Ux=x^{\dag }\lambda ^{\ast }\lambda x \] But \(U\) is unitary, hence \(U^{\dag }=U^{-1}\) and the above becomes after replacing\(\ \lambda ^{\ast }\lambda \) by \(\left \vert \lambda \right \vert ^{2}\)\begin{align*} x^{\dag }U^{-1}Ux & =\left \vert \lambda \right \vert ^{2}\left ( x^{\dag }x\right ) \\ x^{\dag }x & =\left \vert \lambda \right \vert ^{2}\left ( x^{\dag }x\right ) \end{align*}

Since \(x\neq 0\), \(x^{\dag }x>0\), so dividing through gives \(\left \vert \lambda \right \vert ^{2}=1\), i.e. \(\left \vert \lambda \right \vert =1\) (a modulus cannot be negative). Since \(\lambda \) was an arbitrary eigenvalue, every eigenvalue of a unitary matrix has absolute value \(1\). Therefore \[ \left \vert \lambda _{1}\right \vert =\left \vert \lambda _{2}\right \vert =1 \]

Now we consider the specific case \(\lambda _{1}\neq \lambda _{2}\), where \(\left \vert \lambda _{1}\right \vert =\left \vert \lambda _{2}\right \vert =1\) as shown in the first part above (in particular, neither eigenvalue is zero).

Given that\begin{align} Ux_{1} & =\lambda _{1}x_{1}\tag{1}\\ Ux_{2} & =\lambda _{2}x_{2} \tag{2} \end{align}

From (1) we obtain \begin{align} \left ( Ux_{1}\right ) ^{\dag } & =\left ( \lambda _{1}x_{1}\right ) ^{\dag }\nonumber \\ x_{1}^{\dag }U^{\dag } & =x_{1}^{\dag }\lambda _{1}^{\ast } \tag{3} \end{align}

Multiplying (3) by (2) gives\begin{align*} x_{1}^{\dag }U^{\dag }Ux_{2} & =x_{1}^{\dag }\lambda _{1}^{\ast }\lambda _{2}x_{2}\\ x_{1}^{\dag }U^{-1}Ux_{2} & =\left ( \lambda _{1}^{\ast }\lambda _{2}\right ) \left ( x_{1}^{\dag }x_{2}\right ) \\ x_{1}^{\dag }x_{2} & =\left ( \lambda _{1}^{\ast }\lambda _{2}\right ) \left ( x_{1}^{\dag }x_{2}\right ) \end{align*}

Since \(\left \vert \lambda _{1}\right \vert =1\) we have \(\lambda _{1}^{\ast }=1/\lambda _{1}\), so \(\lambda _{1}^{\ast }\lambda _{2}=\lambda _{2}/\lambda _{1}\neq 1\) because \(\lambda _{1}\neq \lambda _{2}\). The last equation above then forces \(x_{1}^{\dag }x_{2}=0\), i.e. eigenvectors belonging to distinct eigenvalues are orthogonal.
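Both conclusions can be checked numerically on a random unitary matrix (a NumPy sketch; the QR construction of a unitary matrix is a standard trick and is assumed here, not part of the problem statement):

```python
import numpy as np

# Build a random unitary matrix via QR of a complex Gaussian matrix,
# then check |λ| = 1 for every eigenvalue and that eigenvectors of
# distinct eigenvalues come out (numerically) orthogonal.
rng = np.random.default_rng(4)
Z = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(Z)                  # Q factor is unitary
assert np.allclose(U.conj().T @ U, np.eye(4))

lams, vecs = np.linalg.eig(U)
assert np.allclose(np.abs(lams), 1.0)   # unit-modulus eigenvalues

# Gram matrix of the unit eigenvectors is (numerically) the identity,
# i.e. the eigenvectors are mutually orthogonal.
assert np.allclose(vecs.conj().T @ vecs, np.eye(4))
```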

2.8.5 Problem 4

Figure 2.35: Problem statement

\[ A=\begin{pmatrix} 0 & -i & 0 & 0 & 0\\ i & 0 & 0 & 0 & 0\\ 0 & 0 & 3 & 0 & 0\\ 0 & 0 & 0 & 1 & i\\ 0 & 0 & 0 & i & 1 \end{pmatrix} \] We want to expand along a row or column with as many zeros as possible, since each zero removes a term from the expansion. Expanding along the first row gives\begin{align*} \det \left ( A\right ) & =0+i\det \begin{pmatrix} i & 0 & 0 & 0\\ 0 & 3 & 0 & 0\\ 0 & 0 & 1 & i\\ 0 & 0 & i & 1 \end{pmatrix} +0+0+0\\ & =i\left ( i\det \begin{pmatrix} 3 & 0 & 0\\ 0 & 1 & i\\ 0 & i & 1 \end{pmatrix} \right ) \\ & =i\left ( i\left ( 3\det \begin{pmatrix} 1 & i\\ i & 1 \end{pmatrix} \right ) \right ) \\ & =i\left ( i\left ( 3\left ( 1-i^{2}\right ) \right ) \right ) \\ & =3i^{2}\left ( 1-i^{2}\right ) \\ & =-3\left ( 1+1\right ) \\ & =-6 \end{align*}

To verify this, we now expand along the second row. The sign of \(a_{21}\) is \(\left ( -1\right ) ^{2+1}=\left ( -1\right ) ^{3}=-1\). Hence \begin{align*} \det \left ( A\right ) & =-i\det \begin{pmatrix} -i & 0 & 0 & 0\\ 0 & 3 & 0 & 0\\ 0 & 0 & 1 & i\\ 0 & 0 & i & 1 \end{pmatrix} \\ & =-i\left ( -i\det \begin{pmatrix} 3 & 0 & 0\\ 0 & 1 & i\\ 0 & i & 1 \end{pmatrix} \right ) \\ & =-i\left ( -i\left ( 3\det \begin{pmatrix} 1 & i\\ i & 1 \end{pmatrix} \right ) \right ) \\ & =-i\left ( -i\left ( 3\left ( 1-i^{2}\right ) \right ) \right ) \\ & =3i^{2}\left ( 1-i^{2}\right ) \\ & =-3\left ( 1+1\right ) \\ & =-6 \end{align*}

This matches the expansion along the first row, so the result is verified.
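A third, independent check is numerical (a NumPy sketch; the matrix is block diagonal, so its determinant is also the product of the block determinants \((-1)\cdot 3\cdot 2=-6\)):

```python
import numpy as np

# The 5x5 matrix of Problem 4; its determinant should be -6,
# matching both cofactor expansions above.
A = np.array([
    [0, -1j, 0, 0, 0],
    [1j, 0,  0, 0, 0],
    [0,  0,  3, 0, 0],
    [0,  0,  0, 1, 1j],
    [0,  0,  0, 1j, 1],
])
assert np.isclose(np.linalg.det(A), -6)
```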

2.8.6 Key solution for HW 8

PDF