2.6 HW6

  2.6.1 Questions
  2.6.2 Problem 1 Controllability
  2.6.3 Problem 2 P Transformation
  2.6.4 Problem 3 Solve
  2.6.5 Problem 4 Circuit
  2.6.6 Problem 5 Control effort
  2.6.7 Problem 6 Range
  2.6.8 key solution

2.6.1 Questions

(The problem statements are given as images in the original document and are not reproduced here.)

2.6.2 Problem 1 Controllability

\(n=2.\) Since \(A\left ( t\right ) ,b\left ( t\right ) \) are at least \(n-1=1\) times continuously differentiable, we can form \(M\left ( t\right ) =\begin{bmatrix} M_{0}\left ( t\right ) & M_{1}\left ( t\right ) \end{bmatrix} \) and check whether its rank is \(n\), using the theorem that \({\displaystyle \sum } \) is controllable at \(t_{0}\) if there exists \(t>t_{0}\) such that \(\rho \left ( M\left ( t\right ) \right ) =n\).\begin{align*} M_{0}\left ( t\right ) & =\begin{bmatrix} 0\\ 1 \end{bmatrix} \\ M_{1}\left ( t\right ) & =-A\left ( t\right ) M_{0}\left ( t\right ) +\frac{d}{dt}M_{0}\left ( t\right ) \\ & =-\begin{bmatrix} 2 & -e^{t}\\ e^{-t} & 1 \end{bmatrix}\begin{bmatrix} 0\\ 1 \end{bmatrix} +\begin{bmatrix} 0\\ 0 \end{bmatrix} \\ & =\begin{bmatrix} e^{t}\\ -1 \end{bmatrix} \end{align*}

Hence\[ M\left ( t\right ) =\begin{bmatrix} 0 & e^{t}\\ 1 & -1 \end{bmatrix} \] The determinant is \(\Delta =-e^{t}\), which is nonzero for every \(t\). Hence \(M\left ( t\right ) \) is nonsingular and has rank \(2\) for all \(t\), and therefore \({\displaystyle \sum } \) is controllable at \(t_{0}=0\). Note: this system is not stable.
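
As a quick cross-check (not part of the original solution), the construction of \(M\left ( t\right ) \) above can be reproduced symbolically. The sketch below assumes the \(A\left ( t\right ) ,b\left ( t\right ) \) read off from the computation of \(M_{1}\) and uses Python with sympy.

```python
# Symbolic check of M(t) for Problem 1 (sympy); A(t), b(t) as used above.
import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[2, -sp.exp(t)], [sp.exp(-t), 1]])
b = sp.Matrix([0, 1])

M0 = b
M1 = -A * M0 + sp.diff(M0, t)      # M1 = -A*M0 + dM0/dt
M = M0.row_join(M1)                # M(t) = [M0  M1]

print(sp.simplify(M))              # Matrix([[0, exp(t)], [1, -1]])
print(sp.simplify(M.det()))        # -exp(t), never zero, so rank M(t) = 2 for all t
```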

2.6.3 Problem 2 P Transformation

\begin{align*} z & =Px\\ z^{\prime } & =P^{\prime }x+Px^{\prime } \end{align*}

Hence\begin{align*} x^{\prime } & =P^{-1}\left ( z^{\prime }-P^{\prime }x\right ) \\ & =P^{-1}\left ( z^{\prime }-P^{\prime }P^{-1}z\right ) \end{align*}

Therefore, the state space \(x^{\prime }=Ax+Bu\) becomes\begin{align*} P^{-1}\left ( z^{\prime }-P^{\prime }P^{-1}z\right ) & =AP^{-1}z+Bu\\ z^{\prime }-P^{\prime }P^{-1}z & =PAP^{-1}z+PBu\\ z^{\prime } & =P^{\prime }P^{-1}z+PAP^{-1}z+PBu\\ & =\left ( P^{\prime }P^{-1}+PAP^{-1}\right ) z+PBu \end{align*}

Therefore \[ \tilde{A}=\left ( P^{\prime }P^{-1}+PAP^{-1}\right ) \] And\[ \tilde{B}\left ( t\right ) =P\left ( t\right ) B\left ( t\right ) \] Now the state equation solution for \({\displaystyle \sum } \) is given by\[ x\left ( t\right ) =\Phi \left ( t,0\right ) x\left ( 0\right ) +{\displaystyle \int \limits _{0}^{t}} \Phi \left ( t,\tau \right ) B\left ( \tau \right ) u\left ( \tau \right ) d\tau \] Applying the transformation \(x\left ( t\right ) =P^{-1}\left ( t\right ) z\left ( t\right ) \) (so that \(x\left ( 0\right ) =P^{-1}\left ( 0\right ) z\left ( 0\right ) \)) and \(B\left ( \tau \right ) =P^{-1}\left ( \tau \right ) \tilde{B}\left ( \tau \right ) \) to the above results in\begin{align*} P^{-1}\left ( t\right ) z\left ( t\right ) & =\Phi \left ( t,0\right ) P^{-1}\left ( 0\right ) z\left ( 0\right ) +{\displaystyle \int \limits _{0}^{t}} \Phi \left ( t,\tau \right ) P^{-1}\left ( \tau \right ) \tilde{B}\left ( \tau \right ) u\left ( \tau \right ) d\tau \\ z\left ( t\right ) & =\overset{\tilde{\Phi }\left ( t,0\right ) }{\overbrace{P\left ( t\right ) \Phi \left ( t,0\right ) P^{-1}\left ( 0\right ) }}z\left ( 0\right ) +{\displaystyle \int \limits _{0}^{t}} \overset{\tilde{\Phi }\left ( t,\tau \right ) }{\overbrace{P\left ( t\right ) \Phi \left ( t,\tau \right ) P^{-1}\left ( \tau \right ) }}\tilde{B}\left ( \tau \right ) u\left ( \tau \right ) d\tau \\ z\left ( t\right ) & =\tilde{\Phi }\left ( t,0\right ) z\left ( 0\right ) +{\displaystyle \int \limits _{0}^{t}} \tilde{\Phi }\left ( t,\tau \right ) \tilde{B}\left ( \tau \right ) u\left ( \tau \right ) d\tau \end{align*}

Hence \[ \tilde{\Phi }\left ( t,\tau \right ) =P\left ( t\right ) \Phi \left ( t,\tau \right ) P^{-1}\left ( \tau \right ) \] Now that we found \(\tilde{\Phi }\left ( t,\tau \right ) \) and \(\tilde{B}\left ( t\right ) ,\) we are now ready to do the proof.
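
Before the proof, here is a small symbolic sanity check (not part of the original solution) that the transformed transition matrix really satisfies \(\frac{d}{dt}\tilde{\Phi }\left ( t,\tau \right ) =\tilde{A}\left ( t\right ) \tilde{\Phi }\left ( t,\tau \right ) \). A constant \(A\) and a made-up invertible \(P\left ( t\right ) \) are used purely for illustration.

```python
# Check d/dt Phi_tilde = A_tilde * Phi_tilde for Phi_tilde = P(t) Phi(t,tau) P(tau)^{-1}
import sympy as sp

t, tau = sp.symbols('t tau', real=True)
A = sp.Matrix([[0, 1], [-2, -3]])          # constant A, so Phi(t,tau) = exp(A(t-tau))
P = sp.Matrix([[1, t], [0, 1]])            # invertible for all t (det = 1)

Phi = (A * (t - tau)).exp()                # state transition matrix
Phi_t = P * Phi * P.subs(t, tau).inv()     # transformed transition matrix
A_t = sp.diff(P, t) * P.inv() + P * A * P.inv()   # A_tilde = P'P^{-1} + P A P^{-1}

print(sp.simplify(sp.diff(Phi_t, t) - A_t * Phi_t))   # zero matrix
```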

Theorem: \(\left ( A,B\right ) \) is controllable at \(t_{0}\) iff \(\left ( \tilde{A},\tilde{B}\right ) \) is controllable at \(t_{0}.\)

Necessity \(\Longrightarrow \). We need to show: If \(\left ( A,B\right ) \) is controllable at \(t_{0}\) then \(\left ( \tilde{A},\tilde{B}\right ) \) is controllable at \(t_{0}\)

Sufficiency \(\Longleftarrow \). We need to show: If \(\left ( \tilde{A},\tilde{B}\right ) \) is controllable at \(t_{0}\) then \(\left ( A,B\right ) \) is controllable at \(t_{0}\)

Proof of Necessity: Given that \(\left ( A,B\right ) \) is controllable at \(t_{0}\), show that \(\left ( \tilde{A},\tilde{B}\right ) \) is controllable at \(t_{0}\).

Since \(\left ( A,B\right ) \) is controllable at \(t_{0}\), then the following controllability Gramian \(W\left ( t_{0},t\right ) \) is not singular

\begin{equation} W(t_0,t) = \int \limits _{t_0}^t \Phi (t_0,\tau ) B(\tau ) B^T(\tau ) \Phi ^T (t_0,\tau ) d\tau \tag{1} \end{equation} We want to show the above implies that

\begin{equation} \tilde{W}(t_0,t) =\int \limits _{t_0}^{t} \tilde{\Phi }(t_0,\tau ) \tilde{B}(\tau ) \tilde{B}^{T}(\tau ) \tilde{\Phi }^T(t_0,\tau ) d\tau \tag{2} \end{equation} is also not singular.

Applying the transformations found above to (2), using \(\tilde{\Phi }\left ( t_{0},\tau \right ) =P\left ( t_{0}\right ) \Phi \left ( t_{0},\tau \right ) P^{-1}\left ( \tau \right ) \) and \(\tilde{B}\left ( \tau \right ) =P\left ( \tau \right ) B\left ( \tau \right ) \), gives\begin{align*} \tilde{W}\left ( t_{0},t\right ) & ={\displaystyle \int \limits _{t_{0}}^{t}} \left [ P\left ( t_{0}\right ) \Phi \left ( t_{0},\tau \right ) P^{-1}\left ( \tau \right ) \right ] \left [ P\left ( \tau \right ) B\left ( \tau \right ) \right ] \left [ P\left ( \tau \right ) B\left ( \tau \right ) \right ] ^{T}\left [ P\left ( t_{0}\right ) \Phi \left ( t_{0},\tau \right ) P^{-1}\left ( \tau \right ) \right ] ^{T}d\tau \\ & ={\displaystyle \int \limits _{t_{0}}^{t}} P\left ( t_{0}\right ) \Phi \left ( t_{0},\tau \right ) B\left ( \tau \right ) B^{T}\left ( \tau \right ) P^{T}\left ( \tau \right ) \left ( P^{T}\left ( \tau \right ) \right ) ^{-1}\Phi ^{T}\left ( t_{0},\tau \right ) P^{T}\left ( t_{0}\right ) d\tau \end{align*}

Notice in the above we used \(\left ( P^{-1}\left ( \tau \right ) \right ) ^{T}=\left ( P^{T}\left ( \tau \right ) \right ) ^{-1}\) and \(P^{-1}\left ( \tau \right ) P\left ( \tau \right ) =I\). Therefore the above simplifies to\begin{align*} \tilde{W}\left ( t_{0},t\right ) & ={\displaystyle \int \limits _{t_{0}}^{t}} P\left ( t_{0}\right ) \Phi \left ( t_{0},\tau \right ) B\left ( \tau \right ) B^{T}\left ( \tau \right ) \Phi ^{T}\left ( t_{0},\tau \right ) P^{T}\left ( t_{0}\right ) d\tau \\ & =P\left ( t_{0}\right ) \left ({\displaystyle \int \limits _{t_{0}}^{t}} \Phi \left ( t_{0},\tau \right ) B\left ( \tau \right ) B^{T}\left ( \tau \right ) \Phi ^{T}\left ( t_{0},\tau \right ) d\tau \right ) P^{T}\left ( t_{0}\right ) \\ & =P\left ( t_{0}\right ) W\left ( t_{0},t\right ) P^{T}\left ( t_{0}\right ) \end{align*}

Since \(W\left ( t_{0},t\right ) \) is not singular, and \(P\left ( t_{0}\right ) \) is given as not singular, then \(P\left ( t_{0}\right ) W\left ( t_{0},t\right ) P^{T}\left ( t_{0}\right ) \) is not singular either, and this implies \(\tilde{W}\left ( t_{0},t\right ) \) is not singular.

Proof of sufficiency: \(\Longleftarrow \). We need to show: if \(\left ( \tilde{A},\tilde{B}\right ) \) is controllable at \(t_{0}\) then \(\left ( A,B\right ) \) is controllable at \(t_{0}.\) Since \(\left ( \tilde{A},\tilde{B}\right ) \) is controllable at \(t_{0}\), the controllability Gramian \(\tilde{W}\left ( t_{0},t\right ) \) is not singular\begin{equation} \tilde{W}\left ( t_{0},t\right ) ={\displaystyle \int \limits _{t_{0}}^{t}} \tilde{\Phi }\left ( t_{0},\tau \right ) \tilde{B}\left ( \tau \right ) \tilde{B}^{T}\left ( \tau \right ) \tilde{\Phi }^{T}\left ( t_{0},\tau \right ) d\tau \tag{3} \end{equation} We want to show this implies that \begin{equation} W\left ( t_{0},t\right ) ={\displaystyle \int \limits _{t_{0}}^{t}} \Phi \left ( t_{0},\tau \right ) B\left ( \tau \right ) B^{T}\left ( \tau \right ) \Phi ^{T}\left ( t_{0},\tau \right ) d\tau \tag{4} \end{equation} is also not singular. Using \(\Phi \left ( t_{0},\tau \right ) =P^{-1}\left ( t_{0}\right ) \tilde{\Phi }\left ( t_{0},\tau \right ) P\left ( \tau \right ) \) and \(B\left ( \tau \right ) =P^{-1}\left ( \tau \right ) \tilde{B}\left ( \tau \right ) \) in (4) gives\begin{align*} W\left ( t_{0},t\right ) & ={\displaystyle \int \limits _{t_{0}}^{t}} \left [ P^{-1}\left ( t_{0}\right ) \tilde{\Phi }\left ( t_{0},\tau \right ) P\left ( \tau \right ) \right ] \left [ P^{-1}\left ( \tau \right ) \tilde{B}\left ( \tau \right ) \right ] \left [ P^{-1}\left ( \tau \right ) \tilde{B}\left ( \tau \right ) \right ] ^{T}\left [ P^{-1}\left ( t_{0}\right ) \tilde{\Phi }\left ( t_{0},\tau \right ) P\left ( \tau \right ) \right ] ^{T}d\tau \\ & ={\displaystyle \int \limits _{t_{0}}^{t}} P^{-1}\left ( t_{0}\right ) \tilde{\Phi }\left ( t_{0},\tau \right ) \overbrace{P\left ( \tau \right ) P^{-1}\left ( \tau \right ) }^{I}\tilde{B}\left ( \tau \right ) \tilde{B}^{T}\left ( \tau \right ) \overbrace{\left ( P^{-1}\left ( \tau \right ) \right ) ^{T}P^{T}\left ( \tau \right ) }^{I}\tilde{\Phi }^{T}\left ( t_{0},\tau \right ) \left ( P^{-1}\left ( t_{0}\right ) \right ) ^{T}d\tau \\ & =P^{-1}\left ( t_{0}\right ) \left ({\displaystyle \int \limits _{t_{0}}^{t}} \tilde{\Phi }\left ( t_{0},\tau \right ) \tilde{B}\left ( \tau \right ) \tilde{B}^{T}\left ( \tau \right ) \tilde{\Phi }^{T}\left ( t_{0},\tau \right ) d\tau \right ) \left ( P^{-1}\left ( t_{0}\right ) \right ) ^{T}\\ & =P^{-1}\left ( t_{0}\right ) \tilde{W}\left ( t_{0},t\right ) \left ( P^{-1}\left ( t_{0}\right ) \right ) ^{T} \end{align*}

By the same argument used for necessity, since \(\tilde{W}\left ( t_{0},t\right ) \) is not singular, and \(P\left ( t_{0}\right ) \) is given as not singular, then \(P^{-1}\left ( t_{0}\right ) \tilde{W}\left ( t_{0},t\right ) \left ( P^{-1}\left ( t_{0}\right ) \right ) ^{T}\) is not singular, and this implies \(W\left ( t_{0},t\right ) \) is not singular.
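
A minimal numerical check of the congruence relation \(\tilde{W}\left ( t_{0},t\right ) =P\left ( t_{0}\right ) W\left ( t_{0},t\right ) P^{T}\left ( t_{0}\right ) \) is given below. It is not part of the original solution; it uses the special case of constant \(A,B\) and a constant invertible \(P\) (so \(P^{\prime }=0\) and \(\Phi \left ( t,\tau \right ) =e^{A\left ( t-\tau \right ) }\)), with made-up matrices.

```python
# Numerical check of W_tilde = P W P^T for constant A, B, P (numpy/scipy)
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
P = np.array([[1.0, 2.0], [0.0, 1.0]])          # nonsingular
t0, t1 = 0.0, 1.0

def gramian(Ag, Bg):
    # W(t0,t1) = int_{t0}^{t1} Phi(t0,tau) B B^T Phi^T(t0,tau) dtau
    f = lambda tau: expm(Ag * (t0 - tau)) @ Bg @ Bg.T @ expm(Ag * (t0 - tau)).T
    W, _ = quad_vec(f, t0, t1)
    return W

W  = gramian(A, B)                              # Gramian of (A, B)
Wt = gramian(P @ A @ np.linalg.inv(P), P @ B)   # Gramian of (A_tilde, B_tilde)

print(np.allclose(Wt, P @ W @ P.T))             # True
```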

2.6.4 Problem 3 Solve

Part (a)

\[ e^{At}=Y_{01}e^{\lambda _{1}t}+Y_{02}e^{\lambda _{2}t}+Y_{03}e^{\lambda _{3}t}\] Where\begin{align*} Y_{01} & =\frac{\left ( A-\lambda _{2}I\right ) \left ( A-\lambda _{3}I\right ) }{\left ( \lambda _{1}-\lambda _{2}\right ) \left ( \lambda _{1}-\lambda _{3}\right ) }\\ Y_{02} & =\frac{\left ( A-\lambda _{1}I\right ) \left ( A-\lambda _{3}I\right ) }{\left ( \lambda _{2}-\lambda _{1}\right ) \left ( \lambda _{2}-\lambda _{3}\right ) }\\ Y_{03} & =\frac{\left ( A-\lambda _{1}I\right ) \left ( A-\lambda _{2}I\right ) }{\left ( \lambda _{3}-\lambda _{1}\right ) \left ( \lambda _{3}-\lambda _{2}\right ) } \end{align*}

We know that \(\left . e^{At}\right \vert _{t=0}=I\) and \(\left . \frac{d}{dt}e^{At}\right \vert _{t=0}=A\) and \(\left . \frac{d^{2}}{dt^{2}}e^{At}\right \vert _{t=0}=A^{2}\). We now verify that the above expression satisfies these three conditions.\begin{align*} \left . e^{At}\right \vert _{t=0} & =I\\ Y_{01}+Y_{02}+Y_{03} & =\frac{\left ( A-\lambda _{2}I\right ) \left ( A-\lambda _{3}I\right ) }{\left ( \lambda _{1}-\lambda _{2}\right ) \left ( \lambda _{1}-\lambda _{3}\right ) }+\frac{\left ( A-\lambda _{1}I\right ) \left ( A-\lambda _{3}I\right ) }{\left ( \lambda _{2}-\lambda _{1}\right ) \left ( \lambda _{2}-\lambda _{3}\right ) }+\frac{\left ( A-\lambda _{1}I\right ) \left ( A-\lambda _{2}I\right ) }{\left ( \lambda _{3}-\lambda _{1}\right ) \left ( \lambda _{3}-\lambda _{2}\right ) } \end{align*}

Using common denominator \(\left ( \lambda _{1}-\lambda _{2}\right ) \left ( \lambda _{1}-\lambda _{3}\right ) \left ( \lambda _{3}-\lambda _{2}\right ) \) results in

\begin{align*} Y_{01}+Y_{02}+Y_{03} & =\frac{\left ( A-\lambda _{2}I\right ) \left ( A-\lambda _{3}I\right ) \left ( \lambda _{3}-\lambda _{2}\right ) +\left ( A-\lambda _{1}I\right ) \left ( A-\lambda _{3}I\right ) \left ( \lambda _{1}-\lambda _{3}\right ) -\left ( A-\lambda _{1}I\right ) \left ( A-\lambda _{2}I\right ) \left ( \lambda _{1}-\lambda _{2}\right ) }{\left ( \lambda _{1}-\lambda _{2}\right ) \left ( \lambda _{1}-\lambda _{3}\right ) \left ( \lambda _{3}-\lambda _{2}\right ) }\\ & =\frac{\left ( A^{2}-\lambda _{3}A-\lambda _{2}A+\lambda _{2}\lambda _{3}I\right ) \left ( \lambda _{3}-\lambda _{2}\right ) +\left ( A^{2}-\lambda _{3}A-\lambda _{1}A+\lambda _{1}\lambda _{3}I\right ) \left ( \lambda _{1}-\lambda _{3}\right ) -\left ( A^{2}-\lambda _{2}A-\lambda _{1}A+\lambda _{1}\lambda _{2}I\right ) \left ( \lambda _{1}-\lambda _{2}\right ) }{\left ( \lambda _{1}-\lambda _{2}\right ) \left ( \lambda _{1}-\lambda _{3}\right ) \left ( \lambda _{3}-\lambda _{2}\right ) } \end{align*}

Expanding the numerator and simplifying results in \(\left ( \lambda _{1}-\lambda _{2}\right ) \left ( \lambda _{1}-\lambda _{3}\right ) \left ( \lambda _{3}-\lambda _{2}\right ) I\,\), hence\begin{align*} Y_{01}+Y_{02}+Y_{03} & =\frac{\left ( \lambda _{1}-\lambda _{2}\right ) \left ( \lambda _{1}-\lambda _{3}\right ) \left ( \lambda _{3}-\lambda _{2}\right ) I}{\left ( \lambda _{1}-\lambda _{2}\right ) \left ( \lambda _{1}-\lambda _{3}\right ) \left ( \lambda _{3}-\lambda _{2}\right ) }\\ & =I \end{align*}

Now we need to verify the second equation\begin{align*} \left . \frac{d}{dt}e^{At}\right \vert _{t=0} & =A\\ \left . \frac{d}{dt}\left ( Y_{01}e^{\lambda _{1}t}+Y_{02}e^{\lambda _{2}t}+Y_{03}e^{\lambda _{3}t}\right ) \right \vert _{t=0} & =\lambda _{1}Y_{01}+\lambda _{2}Y_{02}+\lambda _{3}Y_{03} \end{align*}

But\begin{align*} \lambda _{1}Y_{01}+\lambda _{2}Y_{02}+\lambda _{3}Y_{03} & =\lambda _{1}\frac{\left ( A-\lambda _{2}I\right ) \left ( A-\lambda _{3}I\right ) }{\left ( \lambda _{1}-\lambda _{2}\right ) \left ( \lambda _{1}-\lambda _{3}\right ) }+\lambda _{2}\frac{\left ( A-\lambda _{1}I\right ) \left ( A-\lambda _{3}I\right ) }{\left ( \lambda _{2}-\lambda _{1}\right ) \left ( \lambda _{2}-\lambda _{3}\right ) }+\lambda _{3}\frac{\left ( A-\lambda _{1}I\right ) \left ( A-\lambda _{2}I\right ) }{\left ( \lambda _{3}-\lambda _{1}\right ) \left ( \lambda _{3}-\lambda _{2}\right ) }\\ & =\frac{-\left ( A-\lambda _{1}\right ) \left ( A\lambda _{1}-\lambda _{2}\lambda _{3}\right ) +\left ( A-\lambda _{2}\right ) \left ( A-\lambda _{3}\right ) \lambda _{1}}{\left ( \lambda _{1}-\lambda _{2}\right ) \left ( \lambda _{1}-\lambda _{3}\right ) }\\ & =\frac{A\lambda _{1}^{2}-A\lambda _{1}\lambda _{2}-A\lambda _{1}\lambda _{3}+A\lambda _{2}\lambda _{3}}{\lambda _{1}^{2}-\lambda _{1}\lambda _{2}-\lambda _{1}\lambda _{3}+\lambda _{2}\lambda _{3}}\\ & =\frac{A\left ( \lambda _{1}^{2}-\lambda _{1}\lambda _{2}-\lambda _{1}\lambda _{3}+\lambda _{2}\lambda _{3}\right ) }{\lambda _{1}^{2}-\lambda _{1}\lambda _{2}-\lambda _{1}\lambda _{3}+\lambda _{2}\lambda _{3}}\\ & =A \end{align*}

Now we need to verify the third equation\begin{align*} \left . \frac{d^{2}}{dt^{2}}e^{At}\right \vert _{t=0} & =A^{2}\\ \left . \frac{d^{2}}{dt^{2}}\left ( Y_{01}e^{\lambda _{1}t}+Y_{02}e^{\lambda _{2}t}+Y_{03}e^{\lambda _{3}t}\right ) \right \vert _{t=0} & =\lambda _{1}^{2}Y_{01}+\lambda _{2}^{2}Y_{02}+\lambda _{3}^{2}Y_{03} \end{align*}

But\begin{align*} \lambda _{1}^{2}Y_{01}+\lambda _{2}^{2}Y_{02}+\lambda _{3}^{2}Y_{03} & =\lambda _{1}^{2}\frac{\left ( A-\lambda _{2}I\right ) \left ( A-\lambda _{3}I\right ) }{\left ( \lambda _{1}-\lambda _{2}\right ) \left ( \lambda _{1}-\lambda _{3}\right ) }+\lambda _{2}^{2}\frac{\left ( A-\lambda _{1}I\right ) \left ( A-\lambda _{3}I\right ) }{\left ( \lambda _{2}-\lambda _{1}\right ) \left ( \lambda _{2}-\lambda _{3}\right ) }+\lambda _{3}^{2}\frac{\left ( A-\lambda _{1}I\right ) \left ( A-\lambda _{2}I\right ) }{\left ( \lambda _{3}-\lambda _{1}\right ) \left ( \lambda _{3}-\lambda _{2}\right ) }\\ & =\frac{-\left ( A-\lambda _{1}\right ) \left ( A\lambda _{1}\lambda _{2}+\left ( A\lambda _{1}-\left ( A+\lambda _{1}\right ) \lambda _{2}\right ) \lambda _{3}\right ) +\left ( A-\lambda _{2}\right ) \left ( A-\lambda _{3}\right ) \lambda _{1}^{2}}{\left ( \lambda _{1}-\lambda _{2}\right ) \left ( \lambda _{1}-\lambda _{3}\right ) }\\ & =\frac{A^{2}\left ( \lambda _{1}-\lambda _{2}\right ) \left ( \lambda _{1}-\lambda _{3}\right ) }{\left ( \lambda _{1}-\lambda _{2}\right ) \left ( \lambda _{1}-\lambda _{3}\right ) }\\ & =A^{2} \end{align*}

This verifies the formula for \(n=3\).
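
The same verification can be spot-checked symbolically (this is not part of the original solution); the example matrix below is made up, chosen to have the distinct eigenvalues \(-1,-2,-3\).

```python
# Check e^{At} = sum_k Y_{0k} e^{lambda_k t} for a 3x3 example with distinct eigenvalues
import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[0, 1, 0], [0, 0, 1], [-6, -11, -6]])   # eigenvalues -1, -2, -3
lam = [-1, -2, -3]
I3 = sp.eye(3)

E = sp.zeros(3, 3)
for i in range(3):
    Y = I3                                             # build Y_{0i} as a Lagrange-type product
    for j in range(3):
        if j != i:
            Y = Y * (A - lam[j] * I3) / (lam[i] - lam[j])
    E += Y * sp.exp(lam[i] * t)

print(sp.simplify(E - (A * t).exp()))                  # zero matrix
```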

Part (b)

\[ A=-\frac{1}{2}\begin{bmatrix} 3 & -1\\ -1 & 3 \end{bmatrix} \] The eigenvalues are \(\lambda _{1}=-1,\lambda _{2}=-2\). Hence\[ e^{At}=Y_{01}e^{\lambda _{1}t}+Y_{02}e^{\lambda _{2}t}\] Where \begin{align*} Y_{01} & =\frac{\left ( A-\lambda _{2}I\right ) }{\left ( \lambda _{1}-\lambda _{2}\right ) }=\frac{\left ( \begin{bmatrix} \frac{-3}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{-3}{2}\end{bmatrix} -\begin{bmatrix} -2 & 0\\ 0 & -2 \end{bmatrix} \right ) }{\left ( -1+2\right ) }=\begin{bmatrix} \frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2}\end{bmatrix} \\ Y_{02} & =\frac{\left ( A-\lambda _{1}I\right ) }{\left ( \lambda _{2}-\lambda _{1}\right ) }=\frac{\left ( \begin{bmatrix} \frac{-3}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{-3}{2}\end{bmatrix} -\begin{bmatrix} -1 & 0\\ 0 & -1 \end{bmatrix} \right ) }{\left ( -2+1\right ) }=\begin{bmatrix} \frac{1}{2} & -\frac{1}{2}\\ -\frac{1}{2} & \frac{1}{2}\end{bmatrix} \end{align*}

Hence\begin{align*} e^{At} & =\begin{bmatrix} \frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2}\end{bmatrix} e^{-t}+\begin{bmatrix} \frac{1}{2} & -\frac{1}{2}\\ -\frac{1}{2} & \frac{1}{2}\end{bmatrix} e^{-2t}\\ & =\begin{bmatrix} \frac{1}{2}e^{-t}\left ( e^{-t}+1\right ) & -\frac{1}{2}e^{-t}\left ( e^{-t}-1\right ) \\ -\frac{1}{2}e^{-t}\left ( e^{-t}-1\right ) & \frac{1}{2}e^{-t}\left ( e^{-t}+1\right ) \end{bmatrix} \end{align*}
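
A quick numerical spot-check of this closed form against scipy's matrix exponential (not part of the original solution):

```python
# Compare the closed form above with expm(A t) at a few sample times
import numpy as np
from scipy.linalg import expm

A = -0.5 * np.array([[3.0, -1.0], [-1.0, 3.0]])

def eAt(t):
    a, b = np.exp(-t), np.exp(-2.0 * t)
    return 0.5 * np.array([[a + b, a - b], [a - b, a + b]])

for t in (0.0, 0.5, 1.0, 3.0):
    assert np.allclose(eAt(t), expm(A * t))
print("closed form matches expm")
```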

2.6.5 Problem 4 Circuit

Part (a)

\[\begin{bmatrix} x_{1}^{\prime }\\ x_{2}^{\prime }\end{bmatrix} =\overset{A}{\overbrace{\begin{bmatrix} -\frac{1}{R\left ( t\right ) } & -1\\ 1 & 0 \end{bmatrix} }}\begin{bmatrix} x_{1}\\ x_{2}\end{bmatrix} +\overset{B}{\overbrace{\begin{bmatrix} \frac{1}{R\left ( t\right ) }\\ 0 \end{bmatrix} }}u\left ( t\right ) \] Since \(A,B\) are continuously differentiable, we can use the shortcut \(M\)-based test to determine whether \(\left ( A,B\right ) \) is controllable at some instant of time, and we do not need to compute the controllability Gramian \(W\). First we find \(M\)\begin{align*} M_{0} & =B\left ( t\right ) =\begin{bmatrix} \frac{1}{R\left ( t\right ) }\\ 0 \end{bmatrix} \\ M_{1}\left ( t\right ) & =-A\left ( t\right ) M_{0}\left ( t\right ) +\frac{d}{dt}M_{0}\left ( t\right ) \\ & =-\begin{bmatrix} -\frac{1}{R\left ( t\right ) } & -1\\ 1 & 0 \end{bmatrix}\begin{bmatrix} \frac{1}{R\left ( t\right ) }\\ 0 \end{bmatrix} +\begin{bmatrix} \frac{-\dot{R}\left ( t\right ) }{R^{2}\left ( t\right ) }\\ 0 \end{bmatrix} \\ & =\begin{bmatrix} \frac{1}{R^{2}\left ( t\right ) }\\ -\frac{1}{R\left ( t\right ) }\end{bmatrix} +\begin{bmatrix} \frac{-\dot{R}\left ( t\right ) }{R^{2}\left ( t\right ) }\\ 0 \end{bmatrix} \\ & =\begin{bmatrix} \frac{1-\dot{R}\left ( t\right ) }{R^{2}\left ( t\right ) }\\ -\frac{1}{R\left ( t\right ) }\end{bmatrix} \end{align*}

Hence\[ M=\begin{bmatrix} \frac{1}{R\left ( t\right ) } & \frac{1-\dot{R}\left ( t\right ) }{R^{2}\left ( t\right ) }\\ 0 & -\frac{1}{R\left ( t\right ) }\end{bmatrix} \] The determinant of \(M\) is \[ \Delta =\frac{-1}{R^{2}\left ( t\right ) }\] The rank test can fail at a given instant only if this determinant is zero there, and for the determinant to become zero \(R\left ( t\right ) \) would have to become \(\infty \). Therefore, assuming \(R\left ( t\right ) \) remains finite for all \(t>0\), which is expected in a working physical circuit, we conclude the system is controllable at any \(t_{0}>0\).
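
The same computation can be repeated with \(R\left ( t\right ) \) left completely general (a sympy sketch, not part of the original solution):

```python
# Symbolic M(t) for the circuit, with R(t) an arbitrary differentiable function
import sympy as sp

t = sp.symbols('t', real=True)
R = sp.Function('R')(t)

A = sp.Matrix([[-1/R, -1], [1, 0]])
B = sp.Matrix([1/R, 0])

M0 = B
M1 = -A * M0 + sp.diff(M0, t)      # M1 = -A*M0 + dM0/dt
M = M0.row_join(M1)

print(sp.simplify(M.det()))        # -1/R(t)**2, nonzero whenever R(t) is finite
```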

Part (b)

A system is differentially controllable at time \(t_{0}\) if there exists \(u\left ( t\right ) \) which will steer \(x\left ( t_{0}\right ) \) to \(x\left ( t_{1}\right ) \) no matter how small \(t_{1}-t_{0}\) is. Clearly, if the system is differentially controllable at \(t_{0}\), then it is also controllable at \(t_{0}\), since \(t_{1}-t_{0}\) can be made as large as we want. The question asks us to show that the system is differentially controllable for \(t_{0}>0\).

This essentially follows from the fact that \(A,B\) are analytic functions: analytic functions on \(\left [ 0,\infty \right ) \) are linearly independent iff they are linearly independent over any subinterval, no matter how small the interval is. But we should prove this directly, so an attempt to do so is given below.

Let \(t_{1}=t_{0}+\varepsilon \), where \(\varepsilon >0\) is the time increment we will make as small as we want. From part (a), \(\rho \left ( M\left ( t\right ) \right ) =n\) for every \(t\), so the system is controllable over \(\left [ t_{0},t_{0}+\varepsilon \right ] \) for every \(\varepsilon >0\); hence \(W\left ( t_{0},t_{0}+\varepsilon \right ) \) is nonsingular.

Now I will use the same result used in proving controllability itself, which is to claim that the following \(u\left ( t\right ) \) steers the system from \(x\left ( t_{0}\right ) \) to \(x\left ( t_{0}+\varepsilon \right ) \)\[ u\left ( t\right ) =-B^{T}\left ( t\right ) \Phi ^{T}\left ( t_{0}+\varepsilon ,t\right ) W^{-1}\left ( t_{0},t_{0}+\varepsilon \right ) \left [ \Phi \left ( t_{0}+\varepsilon ,t_{0}\right ) x\left ( t_{0}\right ) -x\left ( t_{0}+\varepsilon \right ) \right ] \] Here \(W\left ( t_{0},t_{0}+\varepsilon \right ) \) denotes the Gramian formed with \(\Phi \left ( t_{0}+\varepsilon ,\tau \right ) \), that is\[ W\left ( t_{0},t_{0}+\varepsilon \right ) ={\displaystyle \int \limits _{t_{0}}^{t_{0}+\varepsilon }} \Phi \left ( t_{0}+\varepsilon ,\tau \right ) B\left ( \tau \right ) B^{T}\left ( \tau \right ) \Phi ^{T}\left ( t_{0}+\varepsilon ,\tau \right ) d\tau \] which differs from the Gramian in (1) only by the nonsingular factors \(\Phi \left ( t_{0}+\varepsilon ,t_{0}\right ) \) and \(\Phi ^{T}\left ( t_{0}+\varepsilon ,t_{0}\right ) \) on the left and right, so one is nonsingular iff the other is. To show that the above \(u\) moves the system from \(x\left ( t_{0}\right ) \) to \(x\left ( t_{0}+\varepsilon \right ) \), we substitute it into the state solution\[ \Delta =\Phi \left ( t_{0}+\varepsilon ,t_{0}\right ) x\left ( t_{0}\right ) +{\displaystyle \int \limits _{t_{0}}^{t_{0}+\varepsilon }} \Phi \left ( t_{0}+\varepsilon ,\tau \right ) B\left ( \tau \right ) u\left ( \tau \right ) d\tau \] and verify that \(\Delta =x\left ( t_{0}+\varepsilon \right ) \).

\begin{align*} \Delta & =\Phi \left ( t_{0}+\varepsilon ,t_{0}\right ) x\left ( t_{0}\right ) +{\displaystyle \int \limits _{t_{0}}^{t_{0}+\varepsilon }} \Phi \left ( t_{0}+\varepsilon ,\tau \right ) B\left ( \tau \right ) \overbrace{\left ( -B^{T}\left ( \tau \right ) \Phi ^{T}\left ( t_{0}+\varepsilon ,\tau \right ) W^{-1}\left ( t_{0},t_{0}+\varepsilon \right ) \left [ \Phi \left ( t_{0}+\varepsilon ,t_{0}\right ) x\left ( t_{0}\right ) -x\left ( t_{0}+\varepsilon \right ) \right ] \right ) }^{u\left ( \tau \right ) }d\tau \\ & =\Phi \left ( t_{0}+\varepsilon ,t_{0}\right ) x\left ( t_{0}\right ) -\overbrace{\left ({\displaystyle \int \limits _{t_{0}}^{t_{0}+\varepsilon }} \Phi \left ( t_{0}+\varepsilon ,\tau \right ) B\left ( \tau \right ) B^{T}\left ( \tau \right ) \Phi ^{T}\left ( t_{0}+\varepsilon ,\tau \right ) d\tau \right ) }^{W\left ( t_{0},t_{0}+\varepsilon \right ) } W^{-1}\left ( t_{0},t_{0}+\varepsilon \right ) \left [ \Phi \left ( t_{0}+\varepsilon ,t_{0}\right ) x\left ( t_{0}\right ) -x\left ( t_{0}+\varepsilon \right ) \right ] \\ & =\Phi \left ( t_{0}+\varepsilon ,t_{0}\right ) x\left ( t_{0}\right ) -\overbrace{W\left ( t_{0},t_{0}+\varepsilon \right ) W^{-1}\left ( t_{0},t_{0}+\varepsilon \right ) }^{I}\left [ \Phi \left ( t_{0}+\varepsilon ,t_{0}\right ) x\left ( t_{0}\right ) -x\left ( t_{0}+\varepsilon \right ) \right ] \\ & =\overbrace{\Phi \left ( t_{0}+\varepsilon ,t_{0}\right ) x\left ( t_{0}\right ) -\Phi \left ( t_{0}+\varepsilon ,t_{0}\right ) x\left ( t_{0}\right ) }^{0}+x\left ( t_{0}+\varepsilon \right ) \\ & =x\left ( t_{0}+\varepsilon \right ) \end{align*}

The only requirement in the above proof was that \(W\left ( t_{0},t_{0}+\varepsilon \right ) \) be nonsingular for every \(\varepsilon >0\), which follows from part (a).

I give another proof just in case the above is not acceptable. Consider\[ W\left ( t_{0},t_{1}\right ) ={\displaystyle \int \limits _{t_{0}}^{t_{1}}} \Phi \left ( t_{0},\tau \right ) B\left ( \tau \right ) B^{T}\left ( \tau \right ) \Phi ^{T}\left ( t_{0},\tau \right ) d\tau \] We know the above is nonsingular since the system is controllable at \(t_{0}>0\) from part (a). Using \(\Phi \left ( t_{0},\tau \right ) =\Phi \left ( t_{0},t_{0}+\varepsilon \right ) \Phi \left ( t_{0}+\varepsilon ,\tau \right ) \) we can rewrite the above as\begin{align*} W\left ( t_{0},t_{1}\right ) & ={\displaystyle \int \limits _{t_{0}}^{t_{1}}} \Phi \left ( t_{0},t_{0}+\varepsilon \right ) \Phi \left ( t_{0}+\varepsilon ,\tau \right ) B\left ( \tau \right ) B^{T}\left ( \tau \right ) \left ( \Phi \left ( t_{0},t_{0}+\varepsilon \right ) \Phi \left ( t_{0}+\varepsilon ,\tau \right ) \right ) ^{T}d\tau \\ & ={\displaystyle \int \limits _{t_{0}}^{t_{1}}} \Phi \left ( t_{0},t_{0}+\varepsilon \right ) \Phi \left ( t_{0}+\varepsilon ,\tau \right ) B\left ( \tau \right ) B^{T}\left ( \tau \right ) \Phi ^{T}\left ( t_{0}+\varepsilon ,\tau \right ) \Phi ^{T}\left ( t_{0},t_{0}+\varepsilon \right ) d\tau \end{align*}

Now \(\Phi \left ( t_{0},t_{0}+\varepsilon \right ) \) and \(\Phi ^{T}\left ( t_{0},t_{0}+\varepsilon \right ) \) do not depend on \(\tau \) and can be moved outside the integral\[ W\left ( t_{0},t_{1}\right ) =\Phi \left ( t_{0},t_{0}+\varepsilon \right ) \left ({\displaystyle \int \limits _{t_{0}}^{t_{1}}} \Phi \left ( t_{0}+\varepsilon ,\tau \right ) B\left ( \tau \right ) B^{T}\left ( \tau \right ) \Phi ^{T}\left ( t_{0}+\varepsilon ,\tau \right ) d\tau \right ) \Phi ^{T}\left ( t_{0},t_{0}+\varepsilon \right ) \] The integral inside is the controllability Gramian anchored at \(t_{0}+\varepsilon \); call it \(W\left ( t_{0}+\varepsilon ,t_{1}\right ) \). Hence\[ W\left ( t_{0},t_{1}\right ) =\Phi \left ( t_{0},t_{0}+\varepsilon \right ) W\left ( t_{0}+\varepsilon ,t_{1}\right ) \Phi ^{T}\left ( t_{0},t_{0}+\varepsilon \right ) \] Therefore\[ W\left ( t_{0}+\varepsilon ,t_{1}\right ) =\Phi ^{-1}\left ( t_{0},t_{0}+\varepsilon \right ) W\left ( t_{0},t_{1}\right ) \Phi ^{-T}\left ( t_{0},t_{0}+\varepsilon \right ) \] Since \(W\left ( t_{0},t_{1}\right ) \) is nonsingular, and since \(\Phi \left ( t_{0},t_{0}+\varepsilon \right ) \) is also nonsingular, then \(W\left ( t_{0}+\varepsilon ,t_{1}\right ) \) is also nonsingular for any \(\varepsilon \). Therefore the system is controllable at any time after \(t_{0}\), no matter how small \(\varepsilon \) is.
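
A small numerical illustration of this conclusion is given below (not part of the original solution). The resistance \(R\left ( t\right ) =1+0.5\sin t\) is a made-up finite, positive function used only for this sketch; the Gramian over \(\left [ t_{0},t_{0}+\varepsilon \right ] \) stays nonsingular as \(\varepsilon \) shrinks.

```python
# W(t0, t0+eps) for the circuit with an assumed R(t); nonsingular for every eps > 0
import numpy as np
from scipy.integrate import solve_ivp, quad_vec

R = lambda t: 1.0 + 0.5 * np.sin(t)
A = lambda t: np.array([[-1.0 / R(t), -1.0], [1.0, 0.0]])
B = lambda t: np.array([[1.0 / R(t)], [0.0]])

def Phi(t, tau):
    # transition matrix: integrate dX/ds = A(s) X from s=tau to s=t with X(tau) = I
    if np.isclose(t, tau):
        return np.eye(2)
    sol = solve_ivp(lambda s, x: (A(s) @ x.reshape(2, 2)).ravel(),
                    (tau, t), np.eye(2).ravel(), rtol=1e-9, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

t0 = 1.0
for eps in (1.0, 0.1, 0.01):
    f = lambda tau: Phi(t0, tau) @ B(tau) @ B(tau).T @ Phi(t0, tau).T
    W, _ = quad_vec(f, t0, t0 + eps)
    print(eps, np.linalg.det(W))   # determinant shrinks with eps but stays nonzero
```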

2.6.6 Problem 5 Control effort

Future state is given by\begin{equation} x\left ( t_{1}\right ) =e^{A\left ( t_{1}-t_{0}\right ) }x\left ( t_{0}\right ) +{\displaystyle \int \limits _{t_{0}}^{t_{1}}} e^{A\left ( t_{1}-\tau \right ) }Bu\left ( \tau \right ) d\tau \tag{1} \end{equation} Let \(M\) be the controllability Gramian over \(\left [ t_{0},t_{1}\right ] \), which we know is nonsingular since the system is controllable. The following \(u\left ( t\right ) \) will bring the system from \(x\left ( t_{0}\right ) \) to \(x\left ( t_{1}\right ) \)\[ u\left ( t\right ) =-B^{T}\left ( e^{A\left ( t_{1}-t\right ) }\right ) ^{T}M^{-1}\left ( e^{A\left ( t_{1}-t_{0}\right ) }x\left ( t_{0}\right ) -x\left ( t_{1}\right ) \right ) \] Substituting this in (1) shows that this is the case\begin{align*} e^{A\left ( t_{1}-t_{0}\right ) }x\left ( t_{0}\right ) +{\displaystyle \int \limits _{t_{0}}^{t_{1}}} e^{A\left ( t_{1}-\tau \right ) }Bu\left ( \tau \right ) d\tau & =e^{A\left ( t_{1}-t_{0}\right ) }x\left ( t_{0}\right ) +{\displaystyle \int \limits _{t_{0}}^{t_{1}}} e^{A\left ( t_{1}-\tau \right ) }B\left [ -B^{T}\left ( e^{A\left ( t_{1}-\tau \right ) }\right ) ^{T}M^{-1}\left ( e^{A\left ( t_{1}-t_{0}\right ) }x\left ( t_{0}\right ) -x\left ( t_{1}\right ) \right ) \right ] d\tau \\ & =e^{A\left ( t_{1}-t_{0}\right ) }x\left ( t_{0}\right ) -\overbrace{\left ({\displaystyle \int \limits _{t_{0}}^{t_{1}}} e^{A\left ( t_{1}-\tau \right ) }BB^{T}\left ( e^{A\left ( t_{1}-\tau \right ) }\right ) ^{T}d\tau \right ) }^{\text{the Gramian }M\text{ for this LTI system}} M^{-1}\left ( e^{A\left ( t_{1}-t_{0}\right ) }x\left ( t_{0}\right ) -x\left ( t_{1}\right ) \right ) \\ & =e^{A\left ( t_{1}-t_{0}\right ) }x\left ( t_{0}\right ) -MM^{-1}\left ( e^{A\left ( t_{1}-t_{0}\right ) }x\left ( t_{0}\right ) -x\left ( t_{1}\right ) \right ) \\ & =e^{A\left ( t_{1}-t_{0}\right ) }x\left ( t_{0}\right ) -e^{A\left ( t_{1}-t_{0}\right ) }x\left ( t_{0}\right ) +x\left ( t_{1}\right ) \\ & =x\left ( t_{1}\right ) \end{align*}

Hence we know that \(u\left ( t\right ) =-B^{T}\left ( e^{A\left ( t_{1}-t\right ) }\right ) ^{T}M^{-1}\left ( e^{A\left ( t_{1}-t_{0}\right ) }x\left ( t_{0}\right ) -x\left ( t_{1}\right ) \right ) \) steers the system from \(x\left ( t_{0}\right ) \) to \(x\left ( t_{1}\right ) \). Now if we set \(x\left ( t_{1}\right ) =0\) as the goal state, then \(u\left ( t\right ) \) simplifies to\[ u\left ( t\right ) =-B^{T}\left ( e^{A\left ( t_{1}-t\right ) }\right ) ^{T}M^{-1}e^{A\left ( t_{1}-t_{0}\right ) }x\left ( t_{0}\right ) \] This control steers the system from state \(x\left ( t_{0}\right ) \) to state \(0\). Now we need to show that \(\left \Vert u\left ( t\right ) \right \Vert \leq \beta \) for any given \(\beta >0\). In the above, \(B\) and \(x\left ( t_{0}\right ) \) are fixed and do not change with time, and \(M\) is fixed once the interval \(\left [ t_{0},t_{1}\right ] \) is chosen, since this is an LTI system. The only time-varying factor in \(u\left ( t\right ) \) is the matrix \(e^{A\left ( t_{1}-t\right ) }\). Therefore, to reduce the norm of \(u\left ( t\right ) \) we can adjust the time \(t\) at which \(u\left ( t\right ) \) is applied so that the resulting \(e^{A\left ( t_{1}-t\right ) }\) gives \(\left \Vert u\left ( t\right ) \right \Vert \leq \beta \). We might have to make \(t_{1}-t\) very small, but we can always do that in order to achieve \(\left \Vert u\left ( t\right ) \right \Vert \leq \beta \).
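
A numerical sketch of this control (not part of the original solution) is shown below; \(A\), \(B\), \(x\left ( t_{0}\right ) \) and \(t_{1}\) are made-up values used only for illustration.

```python
# u(t) = -B^T e^{A^T(t1-t)} M^{-1} e^{A(t1-t0)} x0 drives x(t1) to the origin
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec, solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, -1.0])
t0, t1 = 0.0, 2.0

# M = int_{t0}^{t1} e^{A(t1-tau)} B B^T e^{A^T(t1-tau)} dtau  (the Gramian used above)
M, _ = quad_vec(lambda tau: expm(A * (t1 - tau)) @ B @ B.T @ expm(A * (t1 - tau)).T, t0, t1)
Minv = np.linalg.inv(M)

def u(t):
    return -(B.T @ expm(A.T * (t1 - t)) @ Minv @ expm(A * (t1 - t0)) @ x0)

sol = solve_ivp(lambda t, x: A @ x + (B @ u(t)).ravel(), (t0, t1), x0, rtol=1e-9)
print(sol.y[:, -1])                                                  # approximately [0, 0]
print(max(np.linalg.norm(u(t)) for t in np.linspace(t0, t1, 200)))   # peak control effort
```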

2.6.7 Problem 6 Range

We need to prove the following: \(x\left ( t_{0}\right ) \) can be steered to \(x\left ( t_{1}\right ) =0\) iff \(x\left ( t_{0}\right ) \) is in the range of \(W\left ( t_{0},t_{1}\right ) \).

Proof: The above is equivalent to proving the following: \(x\left ( t_{0}\right ) \) can be steered to \(x\left ( t_{1}\right ) =0\) iff \(W\left ( t_{0},t_{1}\right ) \vec{v}=x\left ( t_{0}\right ) \) for some vector \(\vec{v}\). But the ability to steer from \(x\left ( t_{0}\right ) \) to \(x\left ( t_{1}\right ) =0\) is the same as saying the system is controllable at \(t_{0}\). Therefore, what we want to prove is the following

The system is controllable at \(t_{0}\) iff \(W\left ( t_{0},t_{1}\right ) \vec{v}=x\left ( t_{0}\right ) \) for some vector \(\vec{v}\)

If the system is controllable, then by definition we can find a control \(u\left ( t\right ) \) that steers \(x\left ( t_{0}\right ) \) to \(x\left ( t_{1}\right ) =0\). We now prove the above.

Necessity \(\Longrightarrow \): If the system is controllable at \(t_{0}\), then \(W\left ( t_{0},t_{1}\right ) \vec{v}=x\left ( t_{0}\right ) \) for some vector \(\vec{v}\).

Sufficiency \(\Longleftarrow \): If \(W\left ( t_{0},t_{1}\right ) \vec{v}=x\left ( t_{0}\right ) \) for some vector \(\vec{v}\), then the system is controllable at \(t_{0}\).

Proof of Necessity: Since the system is controllable at \(t_{0}\), we can find \(u\left ( t\right ) \) such that \[ x\left ( t_{1}\right ) =0=\Phi \left ( t_{1},t_{0}\right ) x\left ( t_{0}\right ) +{\displaystyle \int \limits _{t_{0}}^{t_{1}}} \Phi \left ( t_{1},\tau \right ) B\left ( \tau \right ) u\left ( \tau \right ) d\tau \] Premultiplying both sides by \(\Phi \left ( t_{0},t_{1}\right ) \) gives\begin{align} 0 & =\overbrace{\Phi \left ( t_{0},t_{1}\right ) \Phi \left ( t_{1},t_{0}\right ) }^{I}x\left ( t_{0}\right ) +{\displaystyle \int \limits _{t_{0}}^{t_{1}}} \overbrace{\Phi \left ( t_{0},t_{1}\right ) \Phi \left ( t_{1},\tau \right ) }^{\Phi \left ( t_{0},\tau \right ) } B\left ( \tau \right ) u\left ( \tau \right ) d\tau \nonumber \\ 0 & =x\left ( t_{0}\right ) +{\displaystyle \int \limits _{t_{0}}^{t_{1}}} \Phi \left ( t_{0},\tau \right ) B\left ( \tau \right ) u\left ( \tau \right ) d\tau \nonumber \\ -x\left ( t_{0}\right ) & ={\displaystyle \int \limits _{t_{0}}^{t_{1}}} \Phi \left ( t_{0},\tau \right ) B\left ( \tau \right ) u\left ( \tau \right ) d\tau \tag{1} \end{align}

Let the control be \(u\left ( \tau \right ) =-B^{T}\left ( \tau \right ) \Phi ^{T}\left ( t_{0},\tau \right ) \vec{v}\) for some nonzero constant vector \(\vec{v}.\) Since \(B^{T}\left ( \tau \right ) \) has size \(m\times n\) and \(\Phi ^{T}\left ( t_{0},\tau \right ) \) has size \(n\times n\), the vector \(\vec{v}\) has size \(n\times 1\) (and \(u\) has size \(m\times 1\)). Substituting this control law into (1) gives\[ -x\left ( t_{0}\right ) =\left ( -{\displaystyle \int \limits _{t_{0}}^{t_{1}}} \Phi \left ( t_{0},\tau \right ) B\left ( \tau \right ) B^{T}\left ( \tau \right ) \Phi ^{T}\left ( t_{0},\tau \right ) d\tau \right ) \vec{v}\] where we moved \(\vec{v}\) outside the integral since it does not depend on \(\tau \). But \[{\displaystyle \int \limits _{t_{0}}^{t_{1}}} \Phi \left ( t_{0},\tau \right ) B\left ( \tau \right ) B^{T}\left ( \tau \right ) \Phi ^{T}\left ( t_{0},\tau \right ) d\tau =W\left ( t_{0},t_{1}\right ) \] Hence the above becomes\[ W\left ( t_{0},t_{1}\right ) \vec{v}=\vec{x}\left ( t_{0}\right ) \] Therefore \(x\left ( t_{0}\right ) \) is in the range of \(W\left ( t_{0},t_{1}\right ) \).

Proof of Sufficiency: \(\Longleftarrow \). If \(W\left ( t_{0},t_{1}\right ) \vec{v}=x\left ( t_{0}\right ) \) for some vector \(\vec{v}\), then the system is controllable at \(t_{0}\).

Since \(W\left ( t_{0},t_{1}\right ) \vec{v}=x\left ( t_{0}\right ) \), then\begin{align*} x\left ( t_{0}\right ) & =W\left ( t_{0},t_{1}\right ) \vec{v}\\ & =\left ({\displaystyle \int \limits _{t_{0}}^{t_{1}}} \Phi \left ( t_{0},\tau \right ) B\left ( \tau \right ) B^{T}\left ( \tau \right ) \Phi ^{T}\left ( t_{0},\tau \right ) d\tau \right ) \vec{v}\\ & ={\displaystyle \int \limits _{t_{0}}^{t_{1}}} \Phi \left ( t_{0},\tau \right ) B\left ( \tau \right ) B^{T}\left ( \tau \right ) \Phi ^{T}\left ( t_{0},\tau \right ) \vec{v}\,d\tau \end{align*}

Premultiplying both sides by \(\Phi \left ( t_{1},t_{0}\right ) \) gives\begin{align*} \Phi \left ( t_{1},t_{0}\right ) x\left ( t_{0}\right ) & ={\displaystyle \int \limits _{t_{0}}^{t_{1}}} \Phi \left ( t_{1},t_{0}\right ) \Phi \left ( t_{0},\tau \right ) B\left ( \tau \right ) B^{T}\left ( \tau \right ) \Phi ^{T}\left ( t_{0},\tau \right ) \vec{v}\,d\tau \\ 0 & =-\Phi \left ( t_{1},t_{0}\right ) x\left ( t_{0}\right ) +{\displaystyle \int \limits _{t_{0}}^{t_{1}}} \Phi \left ( t_{1},\tau \right ) B\left ( \tau \right ) B^{T}\left ( \tau \right ) \Phi ^{T}\left ( t_{0},\tau \right ) \vec{v}\,d\tau \end{align*}

where \(\Phi \left ( t_{1},t_{0}\right ) \Phi \left ( t_{0},\tau \right ) =\Phi \left ( t_{1},\tau \right ) \) was used.

Let \(B^{T}\left ( \tau \right ) \Phi ^{T}\left ( t_{0},\tau \right ) \vec{v}=-u\left ( \tau \right ) \), then the above can be written as\[ 0=-\Phi \left ( t_{1},t_{0}\right ) x\left ( t_{0}\right ) -{\displaystyle \int \limits _{t_{0}}^{t_{1}}} \Phi \left ( t_{1},\tau \right ) B\left ( \tau \right ) u\left ( \tau \right ) d\tau \] Multiplying through by \(-1\) gives\[ 0=\Phi \left ( t_{1},t_{0}\right ) x\left ( t_{0}\right ) +{\displaystyle \int \limits _{t_{0}}^{t_{1}}} \Phi \left ( t_{1},\tau \right ) B\left ( \tau \right ) u\left ( \tau \right ) d\tau \] But the right hand side is exactly \(x\left ( t_{1}\right ) \) under the control \(u\), hence \(x\left ( t_{1}\right ) =0\); that is, \(u\) steers \(x\left ( t_{0}\right ) \) to \(x\left ( t_{1}\right ) =0\). This completes the proof.
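
The sufficiency direction can also be illustrated numerically (not part of the original solution). For an LTI example, \(\Phi \left ( t,\tau \right ) =e^{A\left ( t-\tau \right ) }\); the matrices and the vector \(\vec{v}\) below are made up, and \(x\left ( t_{0}\right ) =W\left ( t_{0},t_{1}\right ) \vec{v}\) is in the range of \(W\) by construction.

```python
# If x0 = W v, then u(tau) = -B^T Phi^T(t0,tau) v steers x0 to the origin at t1
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec, solve_ivp

A = np.array([[0.0, 1.0], [-1.0, -1.0]])
B = np.array([[0.0], [1.0]])
t0, t1 = 0.0, 2.0

Phi = lambda t, tau: expm(A * (t - tau))
W, _ = quad_vec(lambda tau: Phi(t0, tau) @ B @ B.T @ Phi(t0, tau).T, t0, t1)

v = np.array([0.7, -0.3])
x0 = W @ v                         # x(t0) in the range of W by construction

u = lambda tau: -(B.T @ Phi(t0, tau).T @ v)
sol = solve_ivp(lambda t, x: A @ x + (B @ u(t)).ravel(), (t0, t1), x0, rtol=1e-9)
print(sol.y[:, -1])                # approximately [0, 0]
```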

2.6.8 key solution

(The key solution is provided as images in the original document and is not reproduced here.)