1.28 Lecture 26. Thursday November 20 2014 (Handout, Kharitonov’s Theorem)

  1.28.1 Lecture: Stability, dual systems


No class next Tuesday.

Note: for LTV, the dual system is \begin{align*} \tilde{A} & =-A^{T}(t)\\ \tilde{B} & =C^{T}(t)\\ \tilde{C} & =B^{T}(t)\\ \tilde{D} & =D^{T}(t) \end{align*}

But for LTI, the dual system is\begin{align*} \tilde{A} & =A^{T}\\ \tilde{B} & =C^{T}\\ \tilde{C} & =B^{T}\\ \tilde{D} & =D^{T} \end{align*}

Reader: show that for the dual system \(\tilde{\Phi }\left ( t,\tau \right ) =\Phi ^{T}\left ( \tau ,t\right ) \) and that \(\left [ \Psi ^{T}\left ( t\right ) \right ] ^{-1}=\tilde{\Psi }\left ( t\right ) \)
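
As a quick numerical illustration of LTI duality (not part of the lecture; the matrices and the helper functions `ctrb`/`obsv` are made-up examples), here is a minimal sketch checking that the controllability matrix of the dual system equals the transpose of the observability matrix of the primal:

```python
# A small numpy check of LTI duality: the controllability matrix of the
# dual (A^T, C^T, B^T) equals the transpose of the observability matrix
# of the primal (A, B, C). The matrices are made-up examples.
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B | AB | ... | A^(n-1) B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    """Observability matrix [C; CA; ...; C A^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

Ad, Bd, Cd = A.T, C.T, B.T               # dual system
assert np.allclose(ctrb(Ad, Bd), obsv(A, C).T)
print("ctrb(dual) == obsv(primal)^T")
```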

Summary of LTI and LTV:



Duality (LTI)

\(\begin{aligned}{A,B,C,D} &\Longleftrightarrow{A^{T},C^{T},B^{T},D^{T}}\\ \begin{aligned} x'(t)&=Ax(t)+B u(t)\\ y(t) &=Cx(t)+D u(t) \end{aligned} &\Longleftrightarrow \begin{aligned} x'(t)&=A^T x(t) + C^T u(t)\\ y(t) &=B^T x(t) + D^T u(t) \end{aligned} \end{aligned}\)



Duality (LTV)

\(\begin{aligned} \text{primal} &\Longleftrightarrow \text{dual}\\{A(t),B(t),C(t),D(t)} &\Longleftrightarrow{-A^{T}(t),C^{T}(t),B^{T}(t),D^{T}(t)}\\ \Phi (t_0,\tau ) &\Longleftrightarrow \Phi ^T(\tau ,t_0)\\ \begin{aligned} x'(t)&=A(t)x(t)+B(t)u(t)\\ y(t) &=C(t)x(t)+D(t)u(t) \end{aligned} &\Longleftrightarrow \begin{aligned} z'(t)&=-A^T(t) z(t) + C^T(t)v(t)\\ w(t) &=B^T(t) z(t) + D^T(t)v(t) \end{aligned} \end{aligned}\)



Controllability Gramian (LTV)

\(W(t_0,t_1) = \int _{t_0}^{t_1} \Phi (t_0,\tau )B(\tau ) B^T(\tau ) \Phi ^T(t_0,\tau ) \, d\tau \)



Observability Gramian (LTV)

\(W_o(t_0,t_1) = \int _{t_0}^{t_1} \Phi ^T(\tau ,t_0)C^T(\tau ) C(\tau ) \Phi (\tau ,t_0) \, d\tau \)
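
A minimal numerical sketch of both Gramians, specialized to the LTI case where \(\Phi (t_0,\tau ) = e^{A(t_0-\tau )}\); the example matrices, time interval, and Riemann-sum resolution below are illustrative assumptions:

```python
# A numerical sketch of both Gramians, specialized to LTI so that
# Phi(t0, tau) = expm(A (t0 - tau)). Example system and a plain
# Riemann sum with N points; these are illustrative choices.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
t0, t1, N = 0.0, 5.0, 2000
dtau = (t1 - t0) / N

Wc = np.zeros((2, 2))
Wo = np.zeros((2, 2))
for tau in np.linspace(t0, t1, N):
    Pc = expm(A * (t0 - tau))            # Phi(t0, tau)
    Po = expm(A * (tau - t0))            # Phi(tau, t0)
    Wc += Pc @ B @ B.T @ Pc.T * dtau
    Wo += Po.T @ C.T @ C @ Po * dtau

# Positive definite Gramians <=> controllable / observable on [t0, t1].
print(np.linalg.eigvalsh(Wc))
print(np.linalg.eigvalsh(Wo))
```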



Controllability Matrix (LTI)

\(\mathbb{C} ={\left [ \begin{array}{c|c|c|c|c} B & AB & A^{2}B & \cdots & A^{n-1}B \end{array} \right ]} \)



Controllability Matrix (LTV)

\(M={\left [ \begin{array}{c|c|c|c} M_{0} & M_{1} & \cdots & M_{n-1} \end{array} \right ]} \)

\(M_{0}=B(t), M_{k+1}=-A(t) M_{k}+\frac{d}{dt}M_{k}\) for \(k=0\cdots n-2\)



Observability Matrix (LTI)

\( \mathbb{Q}= \begin{pmatrix} C\\ CA\\ CA^{2}\\ \vdots \\ CA^{n-1} \end{pmatrix} \)
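
A short sketch of the standard rank tests built from these two matrices, assuming numpy (the example system is made up; full rank \(n\) means controllable/observable):

```python
# A sketch of the LTI rank tests: full rank n of the controllability /
# observability matrix means controllable / observable. Example system.
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

Cmat = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
Qmat = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print("controllable:", np.linalg.matrix_rank(Cmat) == n)
print("observable:  ", np.linalg.matrix_rank(Qmat) == n)
```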



Observability Matrix (LTV)

\(L(t)= \begin{pmatrix} L_{0}\\ L_{1}\\ L_{2}\\ \vdots \\ L_{n-1} \end{pmatrix}\)

\(L_{0}(t)= C(t) ,L_{k+1}(t) =L_{k}(t) A(t) +\frac{d}{dt}L_{k}(t)\) for \(k=0\cdots n-2\)
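
The \(M_k\) and \(L_k\) recursions are easy to carry out symbolically. A sketch using sympy, with an illustrative time-varying \(A(t)\), \(B(t)\), \(C(t)\):

```python
# A symbolic sketch of the LTV recursions M_{k+1} = -A M_k + dM_k/dt and
# L_{k+1} = L_k A + dL_k/dt, using sympy; A(t), B(t), C(t) are examples.
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[0, 1], [0, t]])
B = sp.Matrix([[0], [1]])
C = sp.Matrix([[1, 0]])
n = A.shape[0]

M, L = [B], [C]
for k in range(n - 1):
    M.append(-A * M[k] + sp.diff(M[k], t))
    L.append(L[k] * A + sp.diff(L[k], t))

Mfull = sp.Matrix.hstack(*M)       # [M0 | M1 | ... | M_{n-1}]
Lfull = sp.Matrix.vstack(*L)       # [L0; L1; ... ; L_{n-1}]
print(Mfull.rank(), Lfull.rank())  # generic rank n suggests controllable/observable
```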



Definition of physical controllability (LTI)

System is controllable if for any initial state \(x_{0}\) and any final state \(x_{1}\) there exists an input \(u(t)\) that transfers \(x_{0}\) to \(x_{1}\) in finite time.



Definition of physical controllability (LTV)

System is controllable at \(t_{0}\) if there exists an input \(u\) over \([t_{0},t_{1}]\), for some finite \(t_1>t_0\), that transfers \(x(t_0)\) to any desired \(x(t_1)\).



Definition of physical observability (LTI)

System is observable if there exists a time \(t_1>t_0\) such that knowing the input \(u\) and output \(y\) over \([t_0,t_1]\) suffices to determine the state \(x(t_0)\).



Definition of physical observability (LTV)

System is observable at \(t_0\) if there exists a time \(t_1\geq t_0\) such that, for any (unknown) initial state \(x(t_0)\), knowing \(u(t)\) and \(y(t)\) over \([t_0,t_1]\) suffices to determine \(x(t_0)\).



State solution (LTV). If \(A(t)\) commutes with itself, i.e. \(A(t)A(\tau )=A(\tau )A(t)\) for all \(t,\tau\), then \[ \begin{aligned} \Psi (t) &= e^{\int _{t_0}^t A(\zeta ) \, d\zeta } \\ \Phi (t,\tau ) &= \Psi (t) \Psi ^{-1}(\tau ) \\ \Phi (t,\tau ) &= e^{\int _\tau ^t A(\zeta ) \, d\zeta } \end{aligned} \]

\(x(t_1)= \Phi (t_1,t_0) x(t_0) + \int _{t_0}^{t_1} \Phi (t_1,\tau ) B(\tau ) u(\tau ) \, d\tau \)
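
A symbolic sketch of this commuting case, using sympy with a diagonal \(A(t)\) (which trivially commutes with itself; the particular \(A(t)\) is an example), verifying that \(\Phi (t,\tau )=e^{\int _\tau ^t A(\zeta )\,d\zeta }\) indeed satisfies \(\frac{d}{dt}\Phi =A(t)\Phi\):

```python
# A symbolic check of the commuting case with sympy: take a diagonal A(t),
# form Phi(t, tau) = exp(int_tau^t A(zeta) dzeta), and verify that it
# satisfies d/dt Phi = A(t) Phi. The particular A(t) is an example.
import sympy as sp

t, tau, zeta = sp.symbols('t tau zeta')
A = sp.Matrix([[t, 0], [0, 1]])                    # commutes with itself

Phi = sp.integrate(A.subs(t, zeta), (zeta, tau, t)).exp()
residual = sp.simplify(sp.diff(Phi, t) - A * Phi)
print(residual == sp.zeros(2, 2))                  # True
print(Phi)
```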



State solution (LTV)
\(A(t)\) does not commute with itself, but \(A(t)\) commutes with its integral, i.e. \(A(t) e^{\int _0^t A(\tau ) \, d\tau } = e^{\int _0^t A(\tau ) \, d\tau } A(t)\). Then the same formulas as above apply.

\(x(t_1)= \Phi (t_1,t_0) x(t_0) + \int _{t_0}^{t_1} \Phi (t_1,\tau ) B(\tau ) u(\tau ) \, d\tau \)



State solution (LTV)
None of the above conditions applies. This is the hard case: one must actually solve the state equations to find \(\Phi (t,\tau )\).

\(x(t_1)= \Phi (t_1,t_0) x(t_0) + \int _{t_0}^{t_1} \Phi (t_1,\tau ) B(\tau ) u(\tau ) \, d\tau \)



State solution (LTI)

\(x(t_1)= e^{A(t_1-t_0)} x(t_0) + \int _{t_0}^{t_1} e^{A(t_1-\tau )} B u(\tau ) \, d\tau \)
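
A numerical sketch of this formula, approximating the convolution integral with a Riemann sum (the system matrices, input, and step count are example choices):

```python
# A numerical sketch of the LTI state solution, using scipy's expm and a
# Riemann sum for the input integral. System, input, and N are examples.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [0.0]])
t0, t1, N = 0.0, 2.0, 2000
dtau = (t1 - t0) / N

def u(tau):
    return np.array([[1.0]])            # unit-step input (example)

x = expm(A * (t1 - t0)) @ x0            # zero-input response
for tau in np.linspace(t0, t1, N):
    x += expm(A * (t1 - tau)) @ B @ u(tau) * dtau
print(x.ravel())
```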



State solution (LTI) with \(t_0=0\)

\(x(t_1)= e^{A t_1} x(0) + \int _0^{t_1} e^{A(t_1-\tau )} B u(\tau ) \, d\tau \)



Back to canonical decomposition. Note that the decomposition applies to LTI systems only, not to LTV.

Stability:

The system characteristic polynomial is \(P(s) = \displaystyle \sum \limits _{i=0}^{n} a_{i}s^{n-i} = a_0 s^n + a_1 s^{n-1} + \cdots + a_n\). Let us assume all coefficients of \(P(s)\) have the same sign to start with (if they do not, the polynomial is not stable). Also, assume all are positive (we can always multiply by \(-1\) to force this if needed).

To find the roots of \(P(s)\) we could solve for them and check that \(\operatorname{Re}\left ( .\right ) \) of each root is negative; if so, we say the system is stable. But we can check for stability without finding the roots using Routh-Hurwitz. The proof is complicated. To use it, here is an example for \(n=5\)\[ H_{Hurwitz}=\begin{pmatrix} a_{1} & a_{3} & a_{5} & 0 & 0\\ a_{0} & a_{2} & a_{4} & 0 & 0\\ 0 & a_{1} & a_{3} & a_{5} & 0\\ 0 & a_{0} & a_{2} & a_{4} & 0\\ 0 & 0 & a_{1} & a_{3} & a_{5}\end{pmatrix} \] Now find \(\Delta _{i}\), the leading principal minors. Hence \begin{align*} \Delta _{1} & =a_{1}\\ \Delta _{2} & =\begin{vmatrix} a_{1} & a_{3}\\ a_{0} & a_{2}\end{vmatrix} \\ \Delta _{3} & =\begin{vmatrix} a_{1} & a_{3} & a_{5}\\ a_{0} & a_{2} & a_{4}\\ 0 & a_{1} & a_{3}\end{vmatrix} \\ & \vdots \end{align*}

The system is stable if and only if all \(\Delta _{i}>0\). A necessary condition for stability is that all \(a_{i}\) have the same sign, but this is not sufficient. Therefore, always start by checking the signs: if there is a sign change, there is no need to do the Hurwitz test, since the system is not stable. Otherwise, apply the test above to determine stability.

Example: \(P\left ( s\right ) =s^{3}+3s^{2}+3s+1\). \[ H_{Hurwitz}=\begin{pmatrix} 3 & 1 & 0\\ 1 & 3 & 0\\ 0 & 3 & 1 \end{pmatrix} \] \(\Delta _{1}=3,\Delta _{2}=8,\Delta _{3}=8\), all positive, hence stable.
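
A small sketch automating this test, assuming numpy (the helper names are mine; coefficients are taken in descending powers, \(P(s)=a_0 s^n+\cdots +a_n\), matching the matrix above):

```python
# A sketch of the Routh-Hurwitz test via leading principal minors.
# Coefficients are in descending powers: P(s) = a0 s^n + ... + an.
import numpy as np

def hurwitz_matrix(a):
    """n x n Hurwitz matrix with H[i,j] = a_{2(j+1)-(i+1)} (0 outside range)."""
    n = len(a) - 1
    H = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            k = 2 * j - i
            if 0 <= k <= n:
                H[i - 1, j - 1] = a[k]
    return H

def is_hurwitz_stable(a):
    H = hurwitz_matrix(a)
    minors = [np.linalg.det(H[:k, :k]) for k in range(1, len(H) + 1)]
    return all(m > 0 for m in minors), minors

# Lecture example: P(s) = s^3 + 3 s^2 + 3 s + 1 = (s + 1)^3
stable, minors = is_hurwitz_stable([1, 3, 3, 1])
print(stable, minors)   # True, [3.0, 8.0, 8.0]
```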

Reader: Suppose we want to check for stability where \(\operatorname{Re}\left ( .\right ) \) of all roots satisfies \(\operatorname{Re}\left ( .\right ) <-\alpha \). Modify \(P\left ( s\right ) \) to become \(P\left ( s+\alpha \right ) \) and apply the same test. Now we want to generalize to robust control. When we created \(\Sigma =\left ( A,B,C,D\right ) \) we had only an approximation to the system. So actually \(A_{true}=A+\Delta A\), i.e. some perturbation of \(A\). Hence the true system can become unstable even when the nominal one is stable. So we want \(P(s)\) plus some perturbation. Consider

\(P_{true}\left ( s\right ) =s^{n}+a_{n-1}s^{n-1}+\cdots +a_{0}\) where now we say that \(a_{i}^{-}\leq a_{i}\leq a_{i}^{+}\) and the limits are known. This is called an interval polynomial.

The polynomial is robustly stable if it is stable no matter what values the \(a_{i}\) take between their limits. The robust analysis problem was solved only in the last 30 years. As motivation for the solution, assume we have only two uncertain coefficients \(a_{1},a_{2}\), each with known limits. Hence we have the following diagram.

(Figure: the rectangular uncertainty box in the \((a_1,a_2)\) coefficient plane.)

One approach is to make a grid and check stability for each combination, but this becomes very expensive as more \(a_i\)'s are added. Kharitonov’s theorem reduces the problem to only 4 polynomials; see the handout on Kharitonov’s theorem. So we only have to check stability of 4 polynomials instead of the thousands or millions required by the grid method. A sketch of the construction is below.
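
A sketch of the construction (the interval bounds below are made up for illustration; the four fixed coefficient patterns are the standard Kharitonov vertex polynomials, here checked for stability by direct root computation rather than the Hurwitz test):

```python
# A sketch of Kharitonov's four polynomials, assuming numpy. Coefficients
# are ascending: P(s) = a0 + a1 s + ... + an s^n, with lo[i] <= ai <= hi[i].
import numpy as np

def kharitonov(lo, hi):
    """Return the 4 Kharitonov polynomials (ascending coefficient lists)."""
    patterns = [  # lower ('l') or upper ('h') bound per coefficient, period 4
        "llhh",   # K1: a0^-, a1^-, a2^+, a3^+, a4^-, ...
        "hhll",   # K2
        "hllh",   # K3
        "lhhl",   # K4
    ]
    return [[lo[i] if p[i % 4] == "l" else hi[i] for i in range(len(lo))]
            for p in patterns]

def stable(coeffs_ascending):
    """Check that all roots lie in the open left half plane."""
    roots = np.roots(coeffs_ascending[::-1])  # np.roots wants descending order
    return np.all(roots.real < 0)

# Example interval polynomial (bounds are made up):
lo = [0.5, 2.0, 2.5, 1.0]   # a0^-, a1^-, a2^-, a3^-
hi = [1.5, 4.0, 3.5, 1.0]   # a0^+, a1^+, a2^+, a3^+

# Robustly stable iff all four Kharitonov polynomials are stable.
print(all(stable(k) for k in kharitonov(lo, hi)))
```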