2.3 HW 3

  2.3.1 Problems listing
  2.3.2 Problem 1
  2.3.3 Problem 2
  2.3.4 Problem 3
  2.3.5 Problem 4
  2.3.6 Problem 5
  2.3.7 Problem 6
  2.3.8 Problem 7
  2.3.9 Problem 8
  2.3.10 Problem 9
  2.3.11 Problem 10
  2.3.12 Problem 11
  2.3.13 Problem 12
  2.3.14 Problem 13
  2.3.15 Problem 14

2.3.1 Problems listing


2.3.2 Problem 1

If \(\vec {x}=\left (-3,9,9\right ) \) and \(\vec {y}=\left (3,0,-5\right ) \) find a vector \(\vec {z}\) in \(\mathbb {R} ^{3}\) such that \(4\vec {x}-\vec {y}+2\vec {z}=\vec {0}\) and its additive inverse.

Solution\begin {align*} 4\begin {bmatrix} -3\\ 9\\ 9 \end {bmatrix} -\begin {bmatrix} 3\\ 0\\ -5 \end {bmatrix} +2\begin {bmatrix} z_{1}\\ z_{2}\\ z_{3}\end {bmatrix} & =\begin {bmatrix} 0\\ 0\\ 0 \end {bmatrix} \\\begin {bmatrix} -15\\ 36\\ 41 \end {bmatrix} +\begin {bmatrix} 2 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end {bmatrix}\begin {bmatrix} z_{1}\\ z_{2}\\ z_{3}\end {bmatrix} & =\begin {bmatrix} 0\\ 0\\ 0 \end {bmatrix} \\\begin {bmatrix} 2 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end {bmatrix}\begin {bmatrix} z_{1}\\ z_{2}\\ z_{3}\end {bmatrix} & =\begin {bmatrix} 15\\ -36\\ -41 \end {bmatrix} \end {align*}

Now it is in \(Az=b\) form. The coefficient matrix is diagonal, so each equation can be solved directly. The last row gives \(2z_{3}=-41\) or \(z_{3}=-\frac {41}{2}\). The second row gives \(2z_{2}=-36\) or \(z_{2}=-18\), and the first row gives \(2z_{1}=15\) or \(z_{1}=\frac {15}{2}\). Hence the vector \(\vec {z}\) is\[ \vec {z}=\begin {bmatrix} \frac {15}{2}\\ -18\\ -\frac {41}{2}\end {bmatrix} \] Therefore its additive inverse is \[\begin {bmatrix} -\frac {15}{2}\\ 18\\ \frac {41}{2}\end {bmatrix} \]
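As a quick check (illustrative only, assuming SymPy is available), the same vector can be computed by solving \(2\vec {z}=\vec {y}-4\vec {x}\):

from sympy import Matrix

x = Matrix([-3, 9, 9])
y = Matrix([3, 0, -5])
# 4x - y + 2z = 0  =>  z = (y - 4x)/2
z = (y - 4*x) / 2
print(z)   # Matrix([[15/2], [-18], [-41/2]])
print(-z)  # the additive inverse: Matrix([[-15/2], [18], [41/2]])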

2.3.3 Problem 2

   2.3.3.1 Part a
   2.3.3.2 Part b

Determine whether the given set \(S\) of vectors is closed under addition and is closed under scalar multiplication. The set of scalars is the set of all real numbers. Justify your answer.

a) The set \(S=\mathbb {Q} \), the set of all rational numbers.

b) The set \(S\) of all solutions to the differential equation \(y^{\prime }+3y=0\).

Solution

2.3.3.1 Part a

Let \(x_{1},x_{2}\,\) be any two rational numbers in \(S\). Then \(x_{1}+x_{2}\) is also a rational number, since the sum of two rational numbers is a rational number. Hence \(x_{1}+x_{2}\in \mathbb {Q} \) which means \(S\) is closed under addition.

Let \(a\) be any real scalar and \(x\) a rational number in \(S\). Whether the product \(ax\) is rational depends on whether \(a\) itself is rational. Since not every real number is rational, we can choose an irrational scalar \(a\) which makes \(ax\) irrational (for example \(a=\pi \) or \(a=\sqrt {2}\) with \(x=1\)). Therefore the set \(S\) is not closed under scalar multiplication by real numbers.

2.3.3.2 Part b

The general solution to the above first order ODE is \(y\left (x\right ) =Cf\left (x\right ) \), where \(f\left (x\right ) =e^{-3x}\) and \(C\) is an arbitrary constant. Let \(y_{1}\left (x\right ) =c_{1}f\left (x\right ) \) be one solution, where \(c_{1}\) is an arbitrary constant of integration, and let \(y_{2}\left (x\right ) =c_{2}f\left (x\right ) \) be another solution, where \(c_{2}\) is an arbitrary constant of integration. Hence\begin {align*} y_{1}\left (x\right ) +y_{2}\left (x\right ) & =c_{1}f\left (x\right ) +c_{2}f\left (x\right ) \\ & =\left (c_{1}+c_{2}\right ) f\left (x\right ) \end {align*}

Let \(c_{1}+c_{2}=C_{0}\) be a new constant. Then the above can be written as\[ y_{1}\left (x\right ) +y_{2}\left (x\right ) =C_{0}f\left (x\right ) \] This has the same form as a solution, so the set is closed under addition. Similarly, let \(a\) be any real scalar. Then\[ ay_{1}\left (x\right ) =a\left (c_{1}f\left (x\right ) \right ) \] Let \(ac_{1}=C_{0}\) be a new constant. The above becomes\[ ay_{1}\left (x\right ) =C_{0}f\left (x\right ) \] This has the same form as a solution, so the set is closed under scalar multiplication.
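A short symbolic check (a sketch, assuming SymPy; not part of the required argument) confirms that sums and real scalar multiples of solutions of \(y^{\prime }+3y=0\) are again solutions:

from sympy import symbols, exp, diff, simplify

x, c1, c2, a = symbols('x c1 c2 a')
f = exp(-3*x)            # every solution has the form C*f(x)
y1, y2 = c1*f, c2*f
# the sum and a scalar multiple still satisfy y' + 3y = 0
print(simplify(diff(y1 + y2, x) + 3*(y1 + y2)))  # 0
print(simplify(diff(a*y1, x) + 3*(a*y1)))        # 0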

2.3.4 Problem 3

Let \(S=\left \{ \left (x,y\right ) \in \mathbb {R} ^{2}:x\geq 0,y\geq 0\right \} \). Is \(S\) a subspace of \(\mathbb {R} ^{2}\)? Justify your answer.

Solution

The set \(S\) contains all vectors in the first quadrant of \(\mathbb {R} ^{2}\). First, we see that the zero vector is in \(S\), obtained when \(x=0,y=0\). Containing the zero vector is a requirement for any subspace. Now we check whether \(S\) is closed under addition and scalar multiplication.

Let \(v_{1},v_{2}\) be two arbitrary vectors selected from first quadrant. Hence\begin {align*} \vec {v}_{1}+\vec {v}_{2} & =\begin {bmatrix} x_{1}\\ y_{1}\end {bmatrix} +\begin {bmatrix} x_{2}\\ y_{2}\end {bmatrix} \\ & =\begin {bmatrix} x_{1}+x_{2}\\ y_{1}+y_{2}\end {bmatrix} \end {align*}

But since \(x_{1}\geq 0\) and \(x_{2}\geq 0\) then \(x_{1}+x_{2}\geq 0\). Similarly since \(y_{1}\geq 0\) and \(y_{2}\geq 0\) then \(y_{1}+y_{2}\geq 0\). Hence \(\vec {v}_{1}+\vec {v}_{2}\in S\) which means closed under addition. Now, let \(a\) be real scalar. Hence\begin {align*} a\vec {v} & =a\begin {bmatrix} x\\ y \end {bmatrix} \\ & =\begin {bmatrix} ax\\ ay \end {bmatrix} \end {align*}

But this is not in \(S\) for every \(a\). For example, if \(a=-1\) and \(x>0\), then \(ax<0\), so \(a\vec {v}\notin S\). Hence \(S\) is not closed under scalar multiplication.

This shows \(S\) is not a subspace, since it is not closed under scalar multiplication.

2.3.5 Problem 4

Let \(V=C^{2}\relax (I) \) and \(S\) is a subset of \(V\) consisting of those functions satisfying the differential equation \(y^{\prime \prime }+2y^{\prime }-y=0\) on \(I\). Determine if \(S\) is a subspace of \(V\)

Solution

The first step is to check for the zero solution. Since \(y=0\) satisfies the ODE, the zero function is in \(S\). Now we need to check whether \(S\) is closed under addition. The general solution to a second order ODE with constant coefficients (taking the independent variable to be \(t\)) can be written as\[ y\left (t\right ) =C_{1}e^{r_{1}t}+C_{2}e^{r_{2}t}\] Where \(C_{1},C_{2}\) are arbitrary constants, and \(r_{1},r_{2}\) are the roots of the auxiliary equation \(r^{2}+2r-1=0\). We do not have to solve the ODE explicitly; the roots \(r_{1,2}=-1\pm \sqrt {2}\) are real and distinct, hence the above is a valid form of the general solution.

Let \(y_{1}\left (t\right ) =A_{1}e^{r_{1}t}+A_{2}e^{r_{2}t}\) be one solution which satisfies the ODE on \(I\) and let \(y_{2}\left (t\right ) =B_{1}e^{r_{1}t}+B_{2}e^{r_{2}t}\) be another solution which satisfies the ODE on \(I\). Both are twice differentiable. Therefore\begin {align*} y_{1}\relax (t) +y_{2}\relax (t) & =\left (A_{1}e^{r_{1}t}+A_{2}e^{r_{2}t}\right ) +\left (B_{1}e^{r_{1}t}+B_{2}e^{r_{2}t}\right ) \\ & =\left (A_{1}+B_{1}\right ) e^{r_{1}t}+\left (A_{2}+B_{2}\right ) e^{r_{2}t}\\ & =C_{1}e^{r_{1}t}+C_{2}e^{r_{2}t} \end {align*}

Where \(C_{1}=A_{1}+B_{1}\) and \(C_{2}=A_{2}+B_{2}\) are new constants. This shows closure under addition, since the sum has the same form and is twice differentiable because exponential functions are.

Now we check whether \(S\) is closed under scalar multiplication. Let \(a\) be a scalar and let \(y\left (t\right ) =Ae^{r_{1}t}+Be^{r_{2}t}\) be a solution which satisfies the ODE on \(I\) (it is also twice differentiable). Hence\begin {align*} ay\relax (t) & =a\left (Ae^{r_{1}t}+Be^{r_{2}t}\right ) \\ & =aAe^{r_{1}t}+aBe^{r_{2}t} \end {align*}

Let \(aA=C_{1}\) and \(aB=C_{2}\) be new constants. The above becomes\[ ay=C_{1}e^{r_{1}t}+C_{2}e^{r_{2}t}\] This shows closure under scalar multiplication, since the result has the same form and is twice differentiable. Therefore \(S\) is a subspace of \(V\).
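The closure argument can also be checked symbolically (a sketch, assuming SymPy), with the roots taken from the auxiliary equation:

from sympy import symbols, solve, exp, diff, simplify

t, r, A1, A2, B1, B2, a = symbols('t r A1 A2 B1 B2 a')
r1, r2 = solve(r**2 + 2*r - 1, r)      # roots of the auxiliary equation
y1 = A1*exp(r1*t) + A2*exp(r2*t)
y2 = B1*exp(r1*t) + B2*exp(r2*t)
ode = lambda y: diff(y, t, 2) + 2*diff(y, t) - y
print(simplify(ode(y1 + y2)))   # 0, so the sum is again a solution
print(simplify(ode(a*y1)))      # 0, so a scalar multiple is again a solution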

2.3.6 Problem 5

   2.3.6.1 Part a
   2.3.6.2 Part b

a) Determine the null space of the given matrix \(A\), null-space(\(A\))  \[ A=\begin {bmatrix} 2 & 6 & 4\\ -3 & 2 & 5\\ -5 & -4 & 1 \end {bmatrix} \] b) Determine if \(w=\begin {bmatrix} 1\\ -1\\ 1 \end {bmatrix} \) is in the null-space\(\relax (A) \)

Solution

2.3.6.1 Part a

\(A\) is \(3\times 3\). The null-space of \(A\) is the set of all \(3\times 1\) vectors \(\vec {x}\) which satisfy \(A\vec {x}=\vec {0}\). To find this set, we need to solve\[\begin {bmatrix} 2 & 6 & 4\\ -3 & 2 & 5\\ -5 & -4 & 1 \end {bmatrix}\begin {bmatrix} x_{1}\\ x_{2}\\ x_{3}\end {bmatrix} =\begin {bmatrix} 0\\ 0\\ 0 \end {bmatrix} \] The augmented matrix is \[\begin {bmatrix} 2 & 6 & 4 & 0\\ -3 & 2 & 5 & 0\\ -5 & -4 & 1 & 0 \end {bmatrix} \] \(R_{1}=\frac {R_{1}}{2}\) gives\[\begin {bmatrix} 1 & 3 & 2 & 0\\ -3 & 2 & 5 & 0\\ -5 & -4 & 1 & 0 \end {bmatrix} \] \(R_{2}=R_{2}+3R_{1}\) gives\[\begin {bmatrix} 1 & 3 & 2 & 0\\ 0 & 11 & 11 & 0\\ -5 & -4 & 1 & 0 \end {bmatrix} \] \(R_{3}=R_{3}+5R_{1}\) gives\[\begin {bmatrix} 1 & 3 & 2 & 0\\ 0 & 11 & 11 & 0\\ 0 & 11 & 11 & 0 \end {bmatrix} \] \(R_{3}=R_{3}-R_{2}\) gives\[\begin {bmatrix} 1 & 3 & 2 & 0\\ 0 & 11 & 11 & 0\\ 0 & 0 & 0 & 0 \end {bmatrix} \] This shows that \(x_{1},x_{2}\) are basic variables and \(x_{3}\) is a free variable. There is no need to reduce all the way to rref; doing so gives the same solution. Let the free variable be \(x_{3}=s\).

Second row gives \(11x_{2}+11x_{3}=0\) or \(x_{2}=-s\). First row gives \(x_{1}+3x_{2}+2x_{3}=0\) or \(x_{1}=-3x_{2}-2x_{3}\) or \(x_{1}=3s-2s=s\). Hence the solution is\begin {align*} \begin {bmatrix} x_{1}\\ x_{2}\\ x_{3}\end {bmatrix} & =\begin {bmatrix} s\\ -s\\ s \end {bmatrix} \\ & =s\begin {bmatrix} 1\\ -1\\ 1 \end {bmatrix} \end {align*}

There are infinitely many solutions, one for each value of \(s\). Therefore null-space(\(A\)) is the set of all scalar multiples of \(\begin {bmatrix} 1\\ -1\\ 1 \end {bmatrix} \).
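As an illustrative check (assuming SymPy), the null space can be computed directly:

from sympy import Matrix

A = Matrix([[2, 6, 4],
            [-3, 2, 5],
            [-5, -4, 1]])
print(A.nullspace())  # [Matrix([[1], [-1], [1]])]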

2.3.6.2 Part b

Yes. Setting \(s=1\) gives \(\vec {w}=\begin {bmatrix} 1\\ -1\\ 1 \end {bmatrix} \), so \(\vec {w}\) is a scalar multiple of the vector found in part a, and hence \(\vec {w}\) is in null-space\(\relax (A) \).

2.3.7 Problem 6

Let \(\vec {v}_{1}=\begin {bmatrix} 2\\ -1 \end {bmatrix} ,\vec {v}_{2}=\begin {bmatrix} 3\\ 2 \end {bmatrix} \,\) be vectors in \(\mathbb {R} ^{2}\). Express the vector \(\vec {v}=\begin {bmatrix} 5\\ -7 \end {bmatrix} \) as a linear combination of \(\vec {v}_{1},\vec {v}_{2}\).

Solution

We want to find scalars \(c_{1},c_{2}\) such that\[ c_{1}\vec {v}_{1}+c_{2}\vec {v}_{2}=\vec {v}\] Therefore\begin {align} c_{1}\begin {bmatrix} 2\\ -1 \end {bmatrix} +c_{2}\begin {bmatrix} 3\\ 2 \end {bmatrix} & =\begin {bmatrix} 5\\ -7 \end {bmatrix} \nonumber \\\begin {bmatrix} 2 & 3\\ -1 & 2 \end {bmatrix}\begin {bmatrix} c_{1}\\ c_{2}\end {bmatrix} & =\begin {bmatrix} 5\\ -7 \end {bmatrix} \tag {1} \end {align}

The augmented matrix is\[\begin {bmatrix} 2 & 3 & 5\\ -1 & 2 & -7 \end {bmatrix} \] \(R_{2}=2R_{2}+R_{1}\) gives\[\begin {bmatrix} 2 & 3 & 5\\ 0 & 7 & -9 \end {bmatrix} \] \(R_{1}=\frac {R_{1}}{2},R_{2}=\frac {R_{2}}{7}\) gives\[\begin {bmatrix} 1 & \frac {3}{2} & \frac {5}{2}\\ 0 & 1 & -\frac {9}{7}\end {bmatrix} \] \(R_{1}=R_{1}-\frac {3}{2}R_{2}\) gives\[\begin {bmatrix} 1 & 0 & \frac {5}{2}-\frac {3}{2}\left (-\frac {9}{7}\right ) \\ 0 & 1 & -\frac {9}{7}\end {bmatrix} =\begin {bmatrix} 1 & 0 & \frac {31}{7}\\ 0 & 1 & -\frac {9}{7}\end {bmatrix} \] This is rref form. Hence the original system (1) now becomes\[\begin {bmatrix} 1 & 0\\ 0 & 1 \end {bmatrix}\begin {bmatrix} c_{1}\\ c_{2}\end {bmatrix} =\begin {bmatrix} \frac {31}{7}\\ -\frac {9}{7}\end {bmatrix} \] Last row gives \(c_{2}=-\frac {9}{7}\) and first row gives \(c_{1}=\frac {31}{7}\). Therefore the combination is\begin {align*} c_{1}\vec {v}_{1}+c_{2}\vec {v}_{2} & =\vec {v}\\ \frac {31}{7}\vec {v}_{1}-\frac {9}{7}\vec {v}_{2} & =\vec {v} \end {align*}

To verify\begin {align*} \frac {31}{7}\begin {bmatrix} 2\\ -1 \end {bmatrix} -\frac {9}{7}\begin {bmatrix} 3\\ 2 \end {bmatrix} & =\begin {bmatrix} 2\left (\frac {31}{7}\right ) \\ -1\left (\frac {31}{7}\right ) \end {bmatrix} -\begin {bmatrix} 3\left (\frac {9}{7}\right ) \\ 2\left (\frac {9}{7}\right ) \end {bmatrix} \\ & =\begin {bmatrix} \frac {62}{7}\\ -\frac {31}{7}\end {bmatrix} -\begin {bmatrix} \frac {27}{7}\\ \frac {18}{7}\end {bmatrix} \\ & =\begin {bmatrix} 5\\ -7 \end {bmatrix} \end {align*}

Which is \(\vec {v}\)
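The coefficients can also be obtained with SymPy's linear solver (illustrative sketch):

from sympy import Matrix, symbols, linsolve

c1, c2 = symbols('c1 c2')
v1, v2, v = Matrix([2, -1]), Matrix([3, 2]), Matrix([5, -7])
print(linsolve((Matrix.hstack(v1, v2), v), c1, c2))  # {(31/7, -9/7)}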

2.3.8 Problem 7

Let \(\vec {v}=\begin {bmatrix} 3\\ 3\\ 4 \end {bmatrix} ,\vec {v}_{1}=\begin {bmatrix} 1\\ -1\\ 2 \end {bmatrix} ,\vec {v}_{2}=\begin {bmatrix} 2\\ 1\\ 3 \end {bmatrix} \,\) be vectors in \(\mathbb {R} ^{3}\). Let \(W=span\left (\vec {v}_{1},\vec {v}_{2}\right ) \). Determine if \(\vec {v}\) is in \(W\).

Solution

To determine whether \(\vec {v}\) is in \(W\) we ask whether \(\vec {v}\) can be written as a linear combination of the vectors \(\vec {v}_{1},\vec {v}_{2}\). This means finding scalars \(c_{1},c_{2}\) such that \[ c_{1}\vec {v}_{1}+c_{2}\vec {v}_{2}=\vec {v}\] In this context, \(c_{1},c_{2}\) are called the coordinates of \(\vec {v}\) relative to \(\vec {v}_{1},\vec {v}_{2}\). Setting up the above gives\begin {align*} c_{1}\begin {bmatrix} 1\\ -1\\ 2 \end {bmatrix} +c_{2}\begin {bmatrix} 2\\ 1\\ 3 \end {bmatrix} & =\begin {bmatrix} 3\\ 3\\ 4 \end {bmatrix} \\\begin {bmatrix} 1 & 2\\ -1 & 1\\ 2 & 3 \end {bmatrix}\begin {bmatrix} c_{1}\\ c_{2}\end {bmatrix} & =\begin {bmatrix} 3\\ 3\\ 4 \end {bmatrix} \end {align*}

The augmented matrix becomes\[\begin {bmatrix} 1 & 2 & 3\\ -1 & 1 & 3\\ 2 & 3 & 4 \end {bmatrix} \] \(R_{2}=R_{2}+R_{1}\) gives\[\begin {bmatrix} 1 & 2 & 3\\ 0 & 3 & 6\\ 2 & 3 & 4 \end {bmatrix} \] \(R_{3}=R_{3}-2R_{1}\) gives\[\begin {bmatrix} 1 & 2 & 3\\ 0 & 3 & 6\\ 0 & -1 & -2 \end {bmatrix} \] \(R_{3}=3R_{3}+R_{2}\) gives\[\begin {bmatrix} 1 & 2 & 3\\ 0 & 3 & 6\\ 0 & 0 & 0 \end {bmatrix} \] \(R_{2}=\frac {R_{2}}{3}\) gives\[\begin {bmatrix} 1 & 2 & 3\\ 0 & 1 & 2\\ 0 & 0 & 0 \end {bmatrix} \] \(R_{1}=R_{1}-2R_{2}\,\) gives\[\begin {bmatrix} 1 & 0 & -1\\ 0 & 1 & 2\\ 0 & 0 & 0 \end {bmatrix} \] The above is the rref form. Hence the system becomes\[\begin {bmatrix} 1 & 0\\ 0 & 1\\ 0 & 0 \end {bmatrix}\begin {bmatrix} c_{1}\\ c_{2}\end {bmatrix} =\begin {bmatrix} -1\\ 2\\ 0 \end {bmatrix} \] The last row provides no information. The second row gives \(c_{2}=2\). First row gives \(c_{1}=-1\). Since solution is found, then \(\vec {v}\) is in \(W\). The vector \(\vec {v}\) can be expressed as a linear combination of the basis vectors given.\[ -\vec {v}_{1}+2\vec {v}_{2}=\vec {v}\]
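An illustrative SymPy check that the system is consistent (so that \(\vec {v}\in W\)):

from sympy import Matrix, symbols, linsolve

c1, c2 = symbols('c1 c2')
v1, v2, v = Matrix([1, -1, 2]), Matrix([2, 1, 3]), Matrix([3, 3, 4])
print(linsolve((Matrix.hstack(v1, v2), v), c1, c2))  # {(-1, 2)}, i.e. v = -v1 + 2*v2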

2.3.9 Problem 8

Determine whether the given set \(\left \{ \left (1,-1,0\right ) ,\left ( 0,1,-1\right ) ,\left (1,1,1\right ) \right \} \) in \(\mathbb {R} ^{3}\) is linearly independent or linearly dependent

Solution

We need to find \(c_{1},c_{2},c_{3}\) such that\[ c_{1}\begin {bmatrix} 1\\ -1\\ 0 \end {bmatrix} +c_{2}\begin {bmatrix} 0\\ 1\\ -1 \end {bmatrix} +c_{3}\begin {bmatrix} 1\\ 1\\ 1 \end {bmatrix} =\begin {bmatrix} 0\\ 0\\ 0 \end {bmatrix} \] If we can find \(c_{1},c_{2},c_{3}\) not all zero that solve the above, then the set is linearly dependent. If the only solution is \(c_{1}=c_{2}=c_{3}=0\) then the set is linearly independent. Writing the above in matrix form \(Ax=b\) gives\begin {equation} \begin {bmatrix} 1 & 0 & 1\\ -1 & 1 & 1\\ 0 & -1 & 1 \end {bmatrix}\begin {bmatrix} c_{1}\\ c_{2}\\ c_{3}\end {bmatrix} =\begin {bmatrix} 0\\ 0\\ 0 \end {bmatrix} \tag {1} \end {equation}

Therefore, the augmented matrix is\[\begin {bmatrix} 1 & 0 & 1 & 0\\ -1 & 1 & 1 & 0\\ 0 & -1 & 1 & 0 \end {bmatrix} \] \(R_{2}=R_{2}+R_{1}\) gives\[\begin {bmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & 2 & 0\\ 0 & -1 & 1 & 0 \end {bmatrix} \] \(R_{3}=R_{3}+R_{2}\) gives\[\begin {bmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & 2 & 0\\ 0 & 0 & 3 & 0 \end {bmatrix} \] \(R_{3}=\frac {R_{3}}{3}\) gives\[\begin {bmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & 2 & 0\\ 0 & 0 & 1 & 0 \end {bmatrix} \] \(R_{2}=R_{2}-2R_{3}\) gives\[\begin {bmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \end {bmatrix} \] \(R_{1}=R_{1}-R_{3}\) gives\[\begin {bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \end {bmatrix} \] The above is rref form. Hence the system (1) becomes\[\begin {bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {bmatrix}\begin {bmatrix} c_{1}\\ c_{2}\\ c_{3}\end {bmatrix} =\begin {bmatrix} 0\\ 0\\ 0 \end {bmatrix} \] This shows that \(c_{1}=0,c_{2}=0,c_{3}=0\). Since the only solution is \(c_{i}=0\), the set is linearly independent. Another way we could have solved this is by finding the determinant of \(A=\begin {bmatrix} 1 & 0 & 1\\ -1 & 1 & 1\\ 0 & -1 & 1 \end {bmatrix} \). If the determinant is not zero, then \(\vec {c}=\vec {0}\) is the only solution and hence the columns are linearly independent. In this example \(\det \left ( A\right ) =3\), which confirms the above result.
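A quick check of both approaches (a sketch, assuming SymPy):

from sympy import Matrix

A = Matrix([[1, 0, 1],
            [-1, 1, 1],
            [0, -1, 1]])
print(A.det())      # 3, nonzero, so the columns are linearly independent
print(A.rref()[0])  # the identity matrix, confirming only the trivial solution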

2.3.10 Problem 9

Use the Wronskian to show that the given functions are linearly independent on the given interval \(I=\left (-\infty ,\infty \right ) \)\[ f_{1}\relax (x) =1\qquad f_{2}\relax (x) =3x\qquad f_{3}\left ( x\right ) =x^{2}-1 \] Solution

The Wronskian is \begin {align*} W & =\begin {vmatrix} f_{1} & f_{2} & f_{3}\\ f_{1}^{\prime } & f_{2}^{\prime } & f_{3}^{\prime }\\ f_{1}^{\prime \prime } & f_{2}^{\prime \prime } & f_{3}^{\prime \prime }\end {vmatrix} \\ & =\begin {vmatrix} 1 & 3x & x^{2}-1\\ 0 & 3 & 2x\\ 0 & 0 & 2 \end {vmatrix} \end {align*}

To find the determinant, it is easiest to expand along the last row, as it has the most zeros (the first column would also work). Therefore the determinant is\begin {align*} W & =\left (-1\right ) ^{3+3}\relax (2) \begin {vmatrix} 1 & 3x\\ 0 & 3 \end {vmatrix} \\ & =2\relax (3) \\ & =6 \end {align*}

Since \(W\neq 0\) then the functions are linearly independent.
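SymPy's built-in Wronskian gives the same value (illustrative only):

from sympy import symbols, wronskian

x = symbols('x')
print(wronskian([1, 3*x, x**2 - 1], x))  # 6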

2.3.11 Problem 10

Determine whether the set of vectors \[ S=\left \{ \left (1,1,0,2\right ) ,\left (2,2,3,-1\right ) ,\left ( -1,1,1,-2\right ) ,\left (2,-1,1,2\right ) \right \} \] is a basis for \(\mathbb {R} ^{4}\).

Solution

Since there are four vectors given, they can be used as a basis for \(\mathbb {R} ^{4}\) if they are linearly independent of each other. To find this, we need to find \(c_{1},c_{2},c_{3},c_{4}\) which solve\[ c_{1}\begin {bmatrix} 1\\ 1\\ 0\\ 2 \end {bmatrix} +c_{2}\begin {bmatrix} 2\\ 2\\ 3\\ -1 \end {bmatrix} +c_{3}\begin {bmatrix} -1\\ 1\\ 1\\ -2 \end {bmatrix} +c_{4}\begin {bmatrix} 2\\ -1\\ 1\\ 2 \end {bmatrix} =\begin {bmatrix} 0\\ 0\\ 0\\ 0 \end {bmatrix} \] If we can find \(c_{1},c_{2},c_{3},c_{4}\) not all zero that solve the above, then the set is linearly dependent and cannot be used as a basis for \(\mathbb {R} ^{4}\). If the only solution is \(c_{1}=c_{2}=c_{3}=c_{4}=0\) then the set is a basis for \(\mathbb {R} ^{4}\). Writing the above in matrix form \(Ax=b\) gives\begin {equation} \begin {bmatrix} 1 & 2 & -1 & 2\\ 1 & 2 & 1 & -1\\ 0 & 3 & 1 & 1\\ 2 & -1 & -2 & 2 \end {bmatrix}\begin {bmatrix} c_{1}\\ c_{2}\\ c_{3}\\ c_{4}\end {bmatrix} =\begin {bmatrix} 0\\ 0\\ 0\\ 0 \end {bmatrix} \tag {1} \end {equation} The augmented matrix is\[\begin {bmatrix} 1 & 2 & -1 & 2 & 0\\ 1 & 2 & 1 & -1 & 0\\ 0 & 3 & 1 & 1 & 0\\ 2 & -1 & -2 & 2 & 0 \end {bmatrix} \] \(R_{2}=R_{2}-R_{1}\)\[\begin {bmatrix} 1 & 2 & -1 & 2 & 0\\ 0 & 0 & 2 & -3 & 0\\ 0 & 3 & 1 & 1 & 0\\ 2 & -1 & -2 & 2 & 0 \end {bmatrix} \] \(R_{4}=R_{4}-2R_{1}\)\[\begin {bmatrix} 1 & 2 & -1 & 2 & 0\\ 0 & 0 & 2 & -3 & 0\\ 0 & 3 & 1 & 1 & 0\\ 0 & -5 & 0 & -2 & 0 \end {bmatrix} \] Swapping \(R_{3},R_{2}\) so the pivot is non-zero\[\begin {bmatrix} 1 & 2 & -1 & 2 & 0\\ 0 & 3 & 1 & 1 & 0\\ 0 & 0 & 2 & -3 & 0\\ 0 & -5 & 0 & -2 & 0 \end {bmatrix} \] \(R_{4}=3R_{4}+5R_{2}\)\[\begin {bmatrix} 1 & 2 & -1 & 2 & 0\\ 0 & 3 & 1 & 1 & 0\\ 0 & 0 & 2 & -3 & 0\\ 0 & 0 & 5 & -1 & 0 \end {bmatrix} \] \(R_{4}=2R_{4}-5R_{3}\)\[\begin {bmatrix} 1 & 2 & -1 & 2 & 0\\ 0 & 3 & 1 & 1 & 0\\ 0 & 0 & 2 & -3 & 0\\ 0 & 0 & 0 & 13 & 0 \end {bmatrix} \] \(R_{2}=\frac {R_{2}}{3},R_{3}=\frac {R_{3}}{2},R_{4}=\frac {R_{4}}{13}\)\[\begin {bmatrix} 1 & 2 & -1 & 2 & 0\\ 0 & 1 & \frac {1}{3} & \frac {1}{3} & 0\\ 0 & 0 & 1 & -\frac {3}{2} & 0\\ 0 & 0 & 0 & 1 & 0 \end {bmatrix} \] \(R_{3}=R_{3}+\frac {3}{2}R_{4},R_{2}=R_{2}-\frac {1}{3}R_{4},R_{1}=R_{1}-2R_{4}\)\[\begin {bmatrix} 1 & 2 & -1 & 0 & 0\\ 0 & 1 & \frac {1}{3} & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 \end {bmatrix} \] \(R_{2}=R_{2}-\frac {1}{3}R_{3},R_{1}=R_{1}+R_{3}\)\[\begin {bmatrix} 1 & 2 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 \end {bmatrix} \] \(R_{1}=R_{1}-2R_{2}\)\[\begin {bmatrix} 1 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 \end {bmatrix} \] This is now rref.
Hence the original system (1) is\[\begin {bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end {bmatrix}\begin {bmatrix} c_{1}\\ c_{2}\\ c_{3}\\ c_{4}\end {bmatrix} =\begin {bmatrix} 0\\ 0\\ 0\\ 0 \end {bmatrix} \] Which implies \(c_{1}=0,c_{2}=0,c_{3}=0,c_{4}=0\). Therefore the set \(S\) is a basis for \(\mathbb {R} ^{4}\). Another way to solve this is to find the determinant of \(A\). If it is not zero, then the set \(S\) is a basis.
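As an illustrative check (assuming SymPy), the rank or the determinant settles the question immediately:

from sympy import Matrix

A = Matrix([[1, 2, -1, 2],
            [1, 2, 1, -1],
            [0, 3, 1, 1],
            [2, -1, -2, 2]])
print(A.rank())  # 4, so the four vectors are linearly independent
print(A.det())   # nonzero, confirming that S is a basis for R^4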

2.3.12 Problem 11

Determine whether the set \[ S=\left \{ 1-3x^{2},2x+5x^{2},1-x+3x^{2}\right \} \] is a basis for \(P_{2}\left (\mathbb {R} \right ) \).

Solution

Let \(p_{1}\relax (x) =1-3x^{2},p_{2}\relax (x) =2x+5x^{2},p_{3}\relax (x) =1-x+3x^{2}\), hence the Wronskian is\begin {align*} W & =\begin {vmatrix} p_{1} & p_{2} & p_{3}\\ p_{1}^{\prime } & p_{2}^{\prime } & p_{3}^{\prime }\\ p_{1}^{\prime \prime } & p_{2}^{\prime \prime } & p_{3}^{\prime \prime }\end {vmatrix} \\ & =\begin {vmatrix} 1-3x^{2} & 2x+5x^{2} & 1-x+3x^{2}\\ -6x & 2+10x & -1+6x\\ -6 & 10 & 6 \end {vmatrix} \end {align*}

Expanding along last row gives

\begin {align*} W & =\left (-1\right ) ^{3+1}\left (-6\right ) \begin {vmatrix} 2x+5x^{2} & 1-x+3x^{2}\\ 2+10x & -1+6x \end {vmatrix} +\left (-1\right ) ^{3+2}\left (10\right ) \begin {vmatrix} 1-3x^{2} & 1-x+3x^{2}\\ -6x & -1+6x \end {vmatrix} +\left (-1\right ) ^{3+3}\relax (6) \begin {vmatrix} 1-3x^{2} & 2x+5x^{2}\\ -6x & 2+10x \end {vmatrix} \\ & =-6\begin {vmatrix} 2x+5x^{2} & 1-x+3x^{2}\\ 2+10x & -1+6x \end {vmatrix} -10\begin {vmatrix} 1-3x^{2} & 1-x+3x^{2}\\ -6x & -1+6x \end {vmatrix} +6\begin {vmatrix} 1-3x^{2} & 2x+5x^{2}\\ -6x & 2+10x \end {vmatrix} \\ & =-6\left (\left (2x+5x^{2}\right ) \left (-1+6x\right ) -\left ( 1-x+3x^{2}\right ) \left (2+10x\right ) \right ) \\ & -10\left (\left (1-3x^{2}\right ) \left (-1+6x\right ) -\left ( 1-x+3x^{2}\right ) \left (-6x\right ) \right ) \\ & +6\left (\left (1-3x^{2}\right ) \left (2+10x\right ) -\left ( 2x+5x^{2}\right ) \left (-6x\right ) \right ) \end {align*}

or\begin {align*} W & =-6\left (\left (30x^{3}+7x^{2}-2x\right ) -\left (30x^{3}-4x^{2}+8x+2\right ) \right ) \\ & -10\left (\left (-18x^{3}+3x^{2}+6x-1\right ) -\left (-18x^{3}+6x^{2}-6x\right ) \right ) \\ & +6\left (\left (-30x^{3}-6x^{2}+10x+2\right ) -\left (-30x^{3}-12x^{2}\right ) \right ) \end {align*}

or\begin {align*} W & =-6\left (11x^{2}-10x-2\right ) -10\left (-3x^{2}+12x-1\right ) +6\left (6x^{2}+10x+2\right ) \\ & =-66x^{2}+60x+12+30x^{2}-120x+10+36x^{2}+60x+12\\ & =34 \end {align*}

Since the Wronskian is not zero, the three polynomials are linearly independent. Because \(\dim P_{2}\left (\mathbb {R} \right ) =3\) and \(S\) contains three linearly independent vectors, the set \(S\) is a basis for \(P_{2}\left (\mathbb {R} \right ) \).
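The same conclusion can be checked with SymPy (illustrative), either through the Wronskian or through the coefficient matrix of the polynomials relative to the standard basis \(\{1,x,x^{2}\}\):

from sympy import symbols, wronskian, expand, Matrix

x = symbols('x')
print(expand(wronskian([1 - 3*x**2, 2*x + 5*x**2, 1 - x + 3*x**2], x)))  # 34
# rows hold the coefficients of each polynomial in the basis {1, x, x^2}
C = Matrix([[1, 0, -3],
            [0, 2, 5],
            [1, -1, 3]])
print(C.det())  # nonzero, so the three polynomials are linearly independent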

2.3.13 Problem 12

Find the dimension of the null space of the given matrix \(A\) \[ A=\begin {bmatrix} 1 & -1 & 4\\ 2 & 3 & -2\\ 1 & 2 & -2 \end {bmatrix} \] Solution

\(R_{2}=R_{2}-2R_{1}\) gives\[\begin {bmatrix} 1 & -1 & 4\\ 0 & 5 & -10\\ 1 & 2 & -2 \end {bmatrix} \] \(R_{3}=R_{3}-R_{1}\) gives\[\begin {bmatrix} 1 & -1 & 4\\ 0 & 5 & -10\\ 0 & 3 & -6 \end {bmatrix} \] \(R_{2}=\frac {R_{2}}{5},R_{3}=\frac {R_{3}}{3}\) gives\[\begin {bmatrix} 1 & -1 & 4\\ 0 & 1 & -2\\ 0 & 1 & -2 \end {bmatrix} \] \(R_{3}=R_{3}-R_{2}\) gives\[\begin {bmatrix} 1 & -1 & 4\\ 0 & 1 & -2\\ 0 & 0 & 0 \end {bmatrix} \] The above shows there are \(2\) pivot columns, which means the rank of \(A\) is \(2\), which is also the dimension of the column space. Using the Rank–nullity theorem (4.9.1, in textbook at page 325) which says, for a matrix \(A\) of dimensions \(m\times n\)\[ Rank\relax (A) +nullity\relax (A) =n \] and since \(n=3\) in this case (the number of columns)\[ 2+nullity\relax (A) =3 \] Hence\begin {align*} nullity\relax (A) & =3-2\\ & =1 \end {align*}

This means the dimension of the null space of \(A\) is one. The \(nullity\left ( A\right ) \) is the dimension of null-space\(\relax (A) \).
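An illustrative check (assuming SymPy):

from sympy import Matrix

A = Matrix([[1, -1, 4],
            [2, 3, -2],
            [1, 2, -2]])
print(A.rank())       # 2
print(A.nullspace())  # one basis vector, so nullity(A) = 1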

2.3.14 Problem 13

Determine the component vector of the given vector in the vector space \(V\) relative to the given ordered basis \(B\).\[ V=\mathbb {R} ^{2}\qquad B=\left \{ \left (2,-2\right ) ,\left (1,4\right ) \right \} \qquad v=\left (5,-10\right ) \] Solution

Let \[ c_{1}\begin {bmatrix} 2\\ -2 \end {bmatrix} +c_{2}\begin {bmatrix} 1\\ 4 \end {bmatrix} =\begin {bmatrix} 5\\ -10 \end {bmatrix} \] In \(Ax=b\) form the above becomes\begin {equation} \begin {bmatrix} 2 & 1\\ -2 & 4 \end {bmatrix}\begin {bmatrix} c_{1}\\ c_{2}\end {bmatrix} =\begin {bmatrix} 5\\ -10 \end {bmatrix} \tag {1} \end {equation} The augmented matrix is \[\begin {bmatrix} 2 & 1 & 5\\ -2 & 4 & -10 \end {bmatrix} \] \(R_{2}=R_{2}+R_{1}\)\[\begin {bmatrix} 2 & 1 & 5\\ 0 & 5 & -5 \end {bmatrix} \] \(R_{2}=\frac {R_{2}}{5},R_{1}=\frac {R_{1}}{2}\)\[\begin {bmatrix} 1 & \frac {1}{2} & \frac {5}{2}\\ 0 & 1 & -1 \end {bmatrix} \] \(R_{1}=R_{1}-\frac {1}{2}R_{2}\)\[\begin {bmatrix} 1 & 0 & 3\\ 0 & 1 & -1 \end {bmatrix} \] This is rref form. Hence the original system (1) becomes\[\begin {bmatrix} 1 & 0\\ 0 & 1 \end {bmatrix}\begin {bmatrix} c_{1}\\ c_{2}\end {bmatrix} =\begin {bmatrix} 3\\ -1 \end {bmatrix} \] Which means \(c_{1}=3\) and \(c_{2}=-1\). Therefore the component vector is \(\begin {bmatrix} 3\\ -1 \end {bmatrix} \)
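An illustrative SymPy check of the component vector:

from sympy import Matrix, symbols, linsolve

c1, c2 = symbols('c1 c2')
B = Matrix([[2, 1], [-2, 4]])    # basis vectors as columns
v = Matrix([5, -10])
print(linsolve((B, v), c1, c2))  # {(3, -1)}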

2.3.15 Problem 14

   2.3.15.1 Part a
   2.3.15.2 Part b

a) Find \(n\) such that rowspace(\(A\)) is a subspace of \(\mathbb {R} ^{n}\) and determine a basis for rowspace(\(A\)).

b) Find \(m\) such that colspace(\(A\)) is a subspace of \(\mathbb {R} ^{m}\) and determine a basis for colspace(\(A\))\[ A=\begin {bmatrix} 1 & -1 & 2 & 3\\ 1 & 1 & -2 & 6\\ 3 & 1 & 4 & 2 \end {bmatrix} \] Solution

2.3.15.1 Part a

\(R_{2}=R_{2}-R_{1}\) gives\[\begin {bmatrix} 1 & -1 & 2 & 3\\ 0 & 2 & -4 & 3\\ 3 & 1 & 4 & 2 \end {bmatrix} \] \(R_{3}=R_{3}-3R_{1}\)\[\begin {bmatrix} 1 & -1 & 2 & 3\\ 0 & 2 & -4 & 3\\ 0 & 4 & -2 & -7 \end {bmatrix} \] \(R_{3}=R_{3}-2R_{2}\)\[\begin {bmatrix} 1 & -1 & 2 & 3\\ 0 & 2 & -4 & 3\\ 0 & 0 & 6 & -13 \end {bmatrix} \] Pivots are \(A\left (1,1\right ) ,A\left (2,2\right ) ,A\left (3,3\right ) \,\).

\(R_{3}=\frac {R_{3}}{6}\) gives\[\begin {bmatrix} 1 & -1 & 2 & 3\\ 0 & 2 & -4 & 3\\ 0 & 0 & 1 & -\frac {13}{6}\end {bmatrix} \] \(R_{2}=\frac {R_{2}}{2}\)\[\begin {bmatrix} 1 & -1 & 2 & 3\\ 0 & 1 & -2 & \frac {3}{2}\\ 0 & 0 & 1 & -\frac {13}{6}\end {bmatrix} \] \(R_{2}=R_{2}+2R_{3}\)\[\begin {bmatrix} 1 & -1 & 2 & 3\\ 0 & 1 & 0 & \frac {3}{2}+2\left (-\frac {13}{6}\right ) \\ 0 & 0 & 1 & -\frac {13}{6}\end {bmatrix} =\begin {bmatrix} 1 & -1 & 2 & 3\\ 0 & 1 & 0 & -\frac {17}{6}\\ 0 & 0 & 1 & -\frac {13}{6}\end {bmatrix} \] \(R_{1}=R_{1}-2R_{3}\)\[\begin {bmatrix} 1 & -1 & 0 & 3-2\left (-\frac {13}{6}\right ) \\ 0 & 1 & 0 & -\frac {17}{6}\\ 0 & 0 & 1 & -\frac {13}{6}\end {bmatrix} =\begin {bmatrix} 1 & -1 & 0 & \frac {22}{3}\\ 0 & 1 & 0 & -\frac {17}{6}\\ 0 & 0 & 1 & -\frac {13}{6}\end {bmatrix} \] \(R_{1}=R_{1}+R_{2}\)\[\begin {bmatrix} 1 & 0 & 0 & \frac {22}{3}-\frac {17}{6}\\ 0 & 1 & 0 & -\frac {17}{6}\\ 0 & 0 & 1 & -\frac {13}{6}\end {bmatrix} =\begin {bmatrix} 1 & 0 & 0 & \frac {9}{2}\\ 0 & 1 & 0 & -\frac {17}{6}\\ 0 & 0 & 1 & -\frac {13}{6}\end {bmatrix} \] The above is rref form. Pivot columns are \(1,2,3\). The set of nonzero row vectors in the rref form is a basis for rowspace\(\relax (A) \). Hence a basis for the rowspace is\[ \left \{ \left (1,0,0,\frac {9}{2}\right ) ,\left (0,1,0,-\frac {17}{6}\right ) ,\left (0,0,1,-\frac {13}{6}\right ) \right \} \] The rowspace is a \(3\) dimensional subspace of \(\mathbb {R} ^{4}\), hence \(n=4\).

2.3.15.2 Part b

From part (a) we found that the pivot columns are \(1,2,3\). Therefore a basis for the column space is given by the corresponding columns of the original matrix \(A\). Hence a basis for colspace(\(A\)) is \[ \left \{ \begin {bmatrix} 1\\ 1\\ 3 \end {bmatrix} ,\begin {bmatrix} -1\\ 1\\ 1 \end {bmatrix} ,\begin {bmatrix} 2\\ -2\\ 4 \end {bmatrix} \right \} \] The column space is a \(3\) dimensional subspace of \(\mathbb {R} ^{3}\), hence \(m=3\).
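Both bases can be read off with SymPy (illustrative only):

from sympy import Matrix

A = Matrix([[1, -1, 2, 3],
            [1, 1, -2, 6],
            [3, 1, 4, 2]])
print(A.rref()[0])      # its nonzero rows form a basis for rowspace(A), a subspace of R^4
print(A.columnspace())  # the pivot columns of A, a basis for colspace(A) in R^3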