2.8 special problem

  2.8.1 special problem
  2.8.2 Appendix
  2.8.3 References
  2.8.4 key solution

2.8.1 special problem


Part (a)

The first step is to obtain the characteristic polynomial of the matrix \(A(q)\), where\[ A\left ( q\right ) =\begin{bmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ -\left ( 0.5+\varepsilon +3q_{1}+3q_{2}+2q_{1}q_{2}\right ) & -\left ( 1+q_{1}+q_{2}\right ) & -\left ( 1+q_{1}+q_{2}\right ) \end{bmatrix} \] Therefore \begin{align*} p\left ( s,q\right ) & =\left \vert sI-A\left ( q\right ) \right \vert \\ & =\begin{vmatrix} s & -1 & 0\\ 0 & s & -1\\ \left ( 0.5+\varepsilon +3q_{1}+3q_{2}+2q_{1}q_{2}\right ) & \left ( 1+q_{1}+q_{2}\right ) & s+\left ( 1+q_{1}+q_{2}\right ) \end{vmatrix} \\ & =\left ( \varepsilon +3q_{1}+3q_{2}+2q_{1}q_{2}+\frac{1}{2}\right ) +\left ( 1+q_{1}+q_{2}\right ) s+\left ( 1+q_{1}+q_{2}\right ) s^{2}+s^{3}\\ & =a_{0}\left ( q\right ) +a_{1}\left ( q\right ) s+a_{2}\left ( q\right ) s^{2}+a_{3}\left ( q\right ) s^{3} \end{align*}
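As a quick numerical check of this computation (a sketch, not part of the original solution), one can compare the characteristic polynomial coefficients returned by `numpy.poly` against the closed-form coefficients \(a_{i}(q)\) at a few sample values of \(q\); the sample values and seed are arbitrary:

```python
import numpy as np

def A(q1, q2, eps):
    # State matrix A(q), in companion form
    a0 = 0.5 + eps + 3*q1 + 3*q2 + 2*q1*q2
    a1 = 1 + q1 + q2
    a2 = 1 + q1 + q2
    return np.array([[0, 1, 0],
                     [0, 0, 1],
                     [-a0, -a1, -a2]])

def char_poly_coeffs(q1, q2, eps):
    # Closed-form coefficients [a3, a2, a1, a0] of |sI - A(q)|
    return [1, 1 + q1 + q2, 1 + q1 + q2,
            0.5 + eps + 3*q1 + 3*q2 + 2*q1*q2]

# Compare against numpy's characteristic polynomial at random q
rng = np.random.default_rng(0)
for _ in range(5):
    q1, q2 = rng.uniform(0, 1, size=2)
    assert np.allclose(np.poly(A(q1, q2, 0.05)),
                       char_poly_coeffs(q1, q2, 0.05))
```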

Checking for robust stability

The method of polynomial over-bounding was tried first, but it was inconclusive; the attempt is included in the appendix. A graphical method based on the zero exclusion principle, using the value set of a polytope of polynomials, was tried next, but it was also inconclusive, since the polygon was seen to cross the origin as the frequency increased. The result of this attempt is also described in the appendix.

Analysis using the Hurwitz matrix Using \(P\left ( s\right ) =\left ( \varepsilon +3q_{1}+3q_{2}+2q_{1}q_{2}+\frac{1}{2}\right ) +s\left ( 1+q_{1}+q_{2}\right ) +s^{2}\left ( 1+q_{1}+q_{2}\right ) +s^{3}\) with \(0\leq q_{1}\leq 1,0\leq q_{2}\leq 1\), we set up the Hurwitz matrix for this polynomial to find the conditions under which it is stable.\[ H=\begin{bmatrix} 1+q_{1}+q_{2} & 1 & 0\\ 0.5+\varepsilon +3q_{1}+3q_{2}+2q_{1}q_{2} & 1+q_{1}+q_{2} & 0\\ 0 & 1+q_{1}+q_{2} & 1 \end{bmatrix} \] The leading minors are\begin{align*} \Delta _{1} & =1+q_{1}+q_{2}\\ \Delta _{2} & =0.5-\varepsilon -q_{1}+q_{1}^{2}-q_{2}+q_{2}^{2}\\ \Delta _{3} & =\Delta _{2} \end{align*}
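The minors can be double-checked numerically: \(\Delta_2\) should equal \((1+q_1+q_2)^2-a_0(q)\), which simplifies to the closed form above, and \(\Delta_3=\det H\) should equal \(\Delta_2\). A sketch of that check at random sample points (the seed is arbitrary):

```python
import numpy as np

def minors(q1, q2, eps):
    # Hurwitz matrix of p(s,q) and its three leading principal minors
    a0 = 0.5 + eps + 3*q1 + 3*q2 + 2*q1*q2
    H = np.array([[1 + q1 + q2, 1, 0],
                  [a0, 1 + q1 + q2, 0],
                  [0, 1 + q1 + q2, 1]])
    d1 = H[0, 0]
    d2 = np.linalg.det(H[:2, :2])
    d3 = np.linalg.det(H)
    return d1, d2, d3

rng = np.random.default_rng(1)
for _ in range(5):
    q1, q2, eps = rng.uniform(0, 1, 3)
    d1, d2, d3 = minors(q1, q2, eps)
    # Closed form: Delta_2 = 0.5 - eps - q1 + q1^2 - q2 + q2^2
    assert np.isclose(d2, 0.5 - eps - q1 + q1**2 - q2 + q2**2)
    assert np.isclose(d3, d2)   # Delta_3 equals Delta_2
```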

Hence we only need to examine two cases. For \(\Delta _{1}=1+q_{1}+q_{2}\), we see this is positive for all \(q\), since \(0\leq q_{i}\leq 1\).

For \(\Delta _{2}=0.5-\varepsilon -q_{1}+q_{1}^{2}-q_{2}+q_{2}^{2}\) we need to determine the conditions which make it positive.\[ 0.5-\varepsilon -q_{1}+q_{1}^{2}-q_{2}+q_{2}^{2}>0 \] In other words, the minimum of \(\Delta _{2}\) should be positive to ensure stability. This is a constrained global minimization problem. However, an algebraic reduction method was used instead to obtain the limits on \(q_{1},q_{2}\) for each \(\varepsilon _{i}\), where \(\varepsilon _{i}\) was incremented by \(0.005\) from \(0\) to \(0.1\).

There are \(20\) increments, and for each \(\varepsilon _{i}\) an algebraic condition on \(q_{1},q_{2}\) which ensures that \(\Delta _{2}>0\) was found using the computer. The result is tabulated below. In addition, a 3D plot of \(\Delta _{2}\) is given, using \(q_{1},q_{2}\) as the \(x,y\) axes and the value of \(\Delta _{2}\) as the \(z\)-axis. This gives a visual view of \(\Delta _{2}\), showing that it is indeed positive everywhere under the constraints found on \(q_{i}\) for the specific \(\varepsilon \) used. A typical 3D plot for one \(\varepsilon \) is shown below for illustration.


The above shows that for \(\varepsilon =0.005\), \(\Delta _{2}>0\), and hence the system is stable under the conditions \(0\leq q_{1}\leq 0.4294,0\leq q_{2}\leq 1\).
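The quoted limit on \(q_{1}\) can be reproduced directly (a sketch, assuming the worst case over \(q_{2}\)): for fixed \(q_{1}\), \(\Delta_{2}\) is minimized at \(q_{2}=\tfrac{1}{2}\), and solving \(\Delta_{2}=0\) there gives \(q_{1}=\tfrac{1}{2}\left(1-2\sqrt{\varepsilon}\right)\). The grid resolution below is arbitrary:

```python
import numpy as np

eps = 0.005
# Worst case over q2 is at q2 = 1/2; solve 0.25 - eps - q1 + q1^2 = 0 for q1
q1_limit = (1 - 2*np.sqrt(eps)) / 2
print(round(q1_limit, 4))          # 0.4293, close to the quoted 0.4294

# Verify Delta_2 > 0 on a grid over the claimed region
q1 = np.linspace(0, q1_limit - 1e-6, 200)[:, None]
q2 = np.linspace(0, 1, 200)[None, :]
delta2 = 0.5 - eps - q1 + q1**2 - q2 + q2**2
assert delta2.min() > 0
```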

Figure 2.1: Conditions on \(q_1\) and \(q_2\) for each \(\varepsilon \) used for the visualization that follows
Visualization of the solution space for positive \(\Delta _{2}\)

Robust stability depends on \(\Delta _{2}\) being positive. In the above, we obtained conditions on \(\varepsilon ,q_{1},q_{2}\) which, when met, ensure \(\Delta _{2}>0\) and hence a stable polynomial. To visualize the solution in 3D, we use \(q_{1},q_{2},\varepsilon \) as the three axes and two colors: green to indicate the region where \(\Delta _{2}>0\) and red for the remaining region where \(\Delta _{2}\leq 0\). The space is therefore the box \(\left [ 0,1\right ] \times \left [ 0,1\right ] \times \left [ 0,\varepsilon ^{+}\right ] \). Using the above inequalities, the 3D plots are made.

Figure 2.2: Points which result in a stable polynomial are green; otherwise they are red

Another 3D plot was made using a different kind of plot, the surface plot, to help visualize the regions in a different way. (Some parts of the cube are not fully shown due to the limited sampling used in producing the data.)

Figure 2.3: The surface around the region of stable points is colored green, otherwise red

We see from the above that stability holds in the region enclosed by \(0\leq q_{1}\leq 0.5\) and \(0\leq q_{2}\leq 1\) for low values of \(\varepsilon \), and in the region \(0\leq q_{1}\leq 0.2\) and \(0\leq q_{2}\leq 1\) for the large values of \(\varepsilon \). The smaller \(\varepsilon \) is, and the smaller \(q_{1}\) is, the larger the stability region becomes.

Theoretical support for the numerical computation

Attempts were first made to obtain an answer using the zero exclusion condition, but both attempts were inconclusive, as the polygon crosses the origin. Algebraic reduction was then used to obtain the conditions on \(q_{1},q_{2}\) which make \(\Delta _{2}=0.5-\varepsilon -q_{1}+q_{1}^{2}-q_{2}+q_{2}^{2}\) positive as \(\varepsilon \) was incremented by small amounts as shown above. Writing \(\Delta _{2}\) as \[ \Delta _{2}=0.5-\varepsilon -\overbrace{\left ( q_{1}+q_{2}\right ) }^A+\overbrace{\left ( q_{1}^{2}+q_{2}^{2}\right ) }^B \] Since each \(0\leq q_{i}\leq 1\), we have \(q_{i}^{2}<q_{i}\) for \(0<q_{i}<1\). This means that the smaller \(q_{i}\) becomes, the more \(A\) dominates \(B\) in size, but both terms are then small, and hence \(\Delta _{2}\) remains positive.

When both \(q\) are around mid point of their range, for example \(q_{i}=\frac{1}{2}\), then \(-A+B=-\frac{1}{2}\) and the result is \(\Delta _{2}=-\varepsilon \), hence not stable.

As the \(q_{i}\) become larger than \(\frac{1}{2}\), \(A\) does not dominate \(B\) as much; both terms are larger now, and the difference between \(A\) and \(B\) becomes small again, as it was when both \(q_{i}\) were below \(\frac{1}{2}\). This means \(\Delta _{2}\) becomes larger again, hence stable.

Here is a small table showing this variation. The above implies that the critical condition is where both \(q_{i}\) are close to each other in value. As they get closer to \(0.5\), \(\varepsilon \) has to become smaller in order to keep \(\Delta _{2}\) positive.
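This variation can be illustrated by evaluating \(\Delta_2\) along the diagonal \(q_1=q_2=q\), where \(\Delta_2=0.5-\varepsilon-2q+2q^2\); it dips negative only around the midpoint. A sketch, with the illustrative value \(\varepsilon=0.05\) assumed:

```python
# Delta_2 along the diagonal q1 = q2 = q, for an illustrative eps = 0.05
eps = 0.05
rows = []
for q in [0.0, 0.25, 0.5, 0.75, 1.0]:
    d2 = 0.5 - eps - 2*q + 2*q**2
    rows.append((q, d2))
    print(f"q = {q:4.2f}  Delta_2 = {d2:+.3f}")
# Delta_2 is +0.450 at the endpoints and -0.050 at q = 0.5
```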


Solving as a constrained minimization problem Another attempt at theoretical support for the numerical computation is to view this as a constrained minimization problem: we want to find the minimum of \(\Delta _{2}\) subject to the constraints \(0\leq q_{1}\leq 1,0\leq q_{2}\leq 1\) and \(0<\varepsilon <0.1\), so the method of Lagrange multipliers can be used.

Let the objective function be \(Q=\Delta _{2}=0.5-\varepsilon -q_{1}+q_{1}^{2}-q_{2}+q_{2}^{2}\) subject to \(0\leq q_{1}\leq 1,0\leq q_{2}\leq 1,0<\varepsilon <0.1\). Hence the Lagrangian \(L\) is\begin{align*} L & =Q+\lambda _{1}\left ( 1-q_{1}\right ) +\lambda _{2}\left ( 1-q_{2}\right ) +\lambda _{3}\left ( 0.1-\varepsilon \right ) +\lambda _{4}q_{1}+\lambda _{5}q_{2}+\lambda _{6}\varepsilon \\ & =0.5-\varepsilon -q_{1}+q_{1}^{2}-q_{2}+q_{2}^{2}+\lambda _{1}\left ( 1-q_{1}\right ) +\lambda _{2}\left ( 1-q_{2}\right ) +\lambda _{3}\left ( 0.1-\varepsilon \right ) +\lambda _{4}q_{1}+\lambda _{5}q_{2}+\lambda _{6}\varepsilon \end{align*}

Therefore \begin{align*} \frac{\partial L}{\partial q_{1}} & =-1+2q_{1}-\lambda _{1}+\lambda _{4}=0\\ \frac{\partial L}{\partial q_{2}} & =-1+2q_{2}-\lambda _{2}+\lambda _{5}=0\\ \frac{\partial L}{\partial \varepsilon } & =-1-\lambda _{3}+\lambda _{6}=0\\ \frac{\partial L}{\partial \lambda _{1}} & =1-q_{1}=0\\ \frac{\partial L}{\partial \lambda _{2}} & =1-q_{2}=0\\ \frac{\partial L}{\partial \lambda _{3}} & =0.1-\varepsilon =0\\ \frac{\partial L}{\partial \lambda _{4}} & =q_{1}=0\\ \frac{\partial L}{\partial \lambda _{5}} & =q_{2}=0\\ \frac{\partial L}{\partial \lambda _{6}} & =\varepsilon =0 \end{align*}

And \[\begin{bmatrix} 2 & 0 & 0 & -1 & 0 & 0 & 1 & 0 & 0\\ 0 & 2 & 0 & 0 & -1 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 1\\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} q_{1}\\ q_{2}\\ \varepsilon \\ \lambda _{1}\\ \lambda _{2}\\ \lambda _{3}\\ \lambda _{4}\\ \lambda _{5}\\ \lambda _{6}\end{bmatrix} =\begin{bmatrix} 1\\ 1\\ 1\\ -1\\ -1\\ -0.1\\ 0\\ 0\\ 0 \end{bmatrix} \] Solving the above \(Ax=b\) system using least squares gave the solution as \(q_{1}=0.5,q_{2}=0.5,\varepsilon =0.05\), which is where the minimum is. Using the solution we can find \(\Delta _{2}\) at these values \[ \Delta _{2}=-0.05 \] The above Lagrange multiplier method to find the minimum was implemented in a separate program, allowing one to change the limits of \(\varepsilon \) and \(q_{1},q_{2}\) to see where the minimum shows up for each different combination. Here are a few screenshots showing different results. This method was used to verify the numerical 3D based plots shown above, by verifying that the stable points are where they are shown in the 3D plots.
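The least-squares step can be reproduced with `numpy.linalg.lstsq` (a sketch of the computation described above; note the \(\lambda\) components of the solution are not unique, since their columns are linearly dependent):

```python
import numpy as np

# Stationarity/constraint system from the Lagrangian, with unknowns ordered
# x = [q1, q2, eps, lam1, ..., lam6]
A = np.array([
    [ 2,  0,  0, -1,  0,  0,  1,  0,  0],
    [ 0,  2,  0,  0, -1,  0,  0,  1,  0],
    [ 0,  0,  0,  0,  0, -1,  0,  0,  1],
    [-1,  0,  0,  0,  0,  0,  0,  0,  0],
    [ 0, -1,  0,  0,  0,  0,  0,  0,  0],
    [ 0,  0, -1,  0,  0,  0,  0,  0,  0],
    [ 1,  0,  0,  0,  0,  0,  0,  0,  0],
    [ 0,  1,  0,  0,  0,  0,  0,  0,  0],
    [ 0,  0,  1,  0,  0,  0,  0,  0,  0],
], dtype=float)
b = np.array([1, 1, 1, -1, -1, -0.1, 0, 0, 0], dtype=float)

# Least-squares (minimum-norm) solution of the inconsistent system
x, *_ = np.linalg.lstsq(A, b, rcond=None)
q1, q2, eps = x[:3]
print(q1, q2, eps)                        # ~ 0.5, 0.5, 0.05
delta2 = 0.5 - eps - q1 + q1**2 - q2 + q2**2
print(delta2)                             # ~ -0.05
```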


Part (b)

Plotting \(P_{\varepsilon }\) We now treat \(q_{1},q_{2}\) as random variables. Stability is still decided by \[ \Delta _{2}=0.5-\varepsilon -q_{1}+q_{1}^{2}-q_{2}+q_{2}^{2}\] If we call \(\Delta _{2}\) the random variable \(Z\), which is now a function of the random variables \(q_{1},q_{2}\), renamed \(X,Y\) and drawn from a uniform distribution, then we can write \[ Z=0.5-\varepsilon -X+X^{2}-Y+Y^{2}\] Then \[ \Pr \left ( z_{1}\leq Z\leq z_{2}\right ) ={\displaystyle \int \limits _{z_{1}}^{z_{2}}} f_{Z}\left ( z\right ) dz \] or\[ \Pr \left ( Z\leq z\right ) =F_{Z}\left ( z\right ) ={\displaystyle \int \limits _{-\infty }^{z}} f_{Z}\left ( z\right ) dz \] Where \(F_{Z}\left ( z\right )\) is the cumulative distribution function. To find \(\Pr \left ( Z>0\right )\) then\[ \Pr \left ( Z>0\right ) ={\displaystyle \int \limits _{0}^{\infty }} f_{Z}\left ( z\right ) dz \] \(\varepsilon \) is fixed each time before finding \(f_{Z}\left ( z\right ) \). Hence for each \(\varepsilon \) there will be a different \(f_{Z}\left ( z\right )\), which we then use to find \(\Pr \left ( Z>0\right ) \) for robust stability, since \(\Pr \left ( Z>0\right ) \) is the same as asking what the probability is that \(\Delta _{2}>0\) for a given \(\varepsilon \).

\(P_{\varepsilon }\) was plotted for \(\varepsilon =0\cdots 1\) to see its overall shape before zooming in.


The above shows that for \(0\leq \varepsilon \leq 0.1\) the probability of robust stability is high. We can zoom in on that region.


We see that at \(\varepsilon =0.1\) there is about a \(68\%\) chance that the system will be stable, and for \(\varepsilon =0\) the probability the system is stable is \(100\%\).
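A quick Monte Carlo check of this reading (a sketch; the sample size and seed are arbitrary): draw \(q_1,q_2\) uniformly and count how often \(\Delta_2>0\) at \(\varepsilon=0.1\).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
q1 = rng.uniform(0, 1, n)
q2 = rng.uniform(0, 1, n)

eps = 0.1
delta2 = 0.5 - eps - q1 + q1**2 - q2 + q2**2
p_stable = np.mean(delta2 > 0)
print(p_stable)   # close to the ~68% read off the plot
```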

Finding the formula for the probability \(P_{\varepsilon }\) \(f_{Z}\left ( z\right ) \) is the probability density function of the random variable \(Z\). To obtain \(f_{Z}\left ( z\right ) \), we use the following two definitions. For a random variable \(Z=X+Y\) where \(X\) is drawn from the distribution \(f_{X}\) and \(Y\) is drawn from the distribution \(f_{Y}\), with \(X,Y\) independent, \(Z\) will be drawn from the distribution given by the convolution of \(f_{X}\) with \(f_{Y}\)\[ f_{Z}\left ( z\right ) =\int _{-\infty }^{\infty }f_{X}\left ( z-x\right ) f_{Y}\left ( x\right ) dx \] Where \(f_{X}\) is the pdf of the uniform distribution for \(Q\), defined for \(q=\left [ 0,1\right ] \) as\[ f_{X}\left ( x\right ) =\left \{ \begin{array} [c]{cc}1 & 0\leq x\leq 1\\ 0 & \text{otherwise}\end{array} \right . \] For a product \(Z=XY\) of two independent random variables, the density is given by\[ f_{Z}\left ( z\right ) =\int _{-\infty }^{\infty }f_{X}\left ( x\right ) f_{Y}\left ( z/x\right ) \frac{1}{\left \vert x\right \vert }dx \] Using the above, we can find \(f_{Z}\left ( z\right ) \) for \(Z=0.5-\varepsilon -X+X^{2}-Y+Y^{2}\).

The terms \(-X\) and \(X^{2}\) are not independent, so they must be handled together by a change of variables rather than by convolving their individual densities. Let \(W=X^{2}-X\). For \(-\frac{1}{4}<w<0\) this map has the two branches \(x=\frac{1}{2}\left ( 1\pm \sqrt{1+4w}\right ) \), each with Jacobian \(\frac{1}{\sqrt{1+4w}}\), hence\[ f_{W}\left ( w\right ) =\frac{2}{\sqrt{1+4w}}\qquad -\frac{1}{4}<w<0 \] Now let \(W_{1}=X^{2}-X\) and \(W_{2}=Y^{2}-Y\). These are independent, so their sum \(S=W_{1}+W_{2}\) has the convolution density\[ f_{S}\left ( s\right ) =\int _{-\infty }^{\infty }f_{W}\left ( s-w\right ) f_{W}\left ( w\right ) dw \] Finally, \(Z=0.5-\varepsilon +S\) is just a shift of \(S\), which has the pdf\[ f_{Z}\left ( z\right ) =f_{S}\left ( z-\left ( 0.5-\varepsilon \right ) \right ) \] With the help of the computer, the formula for the pdf \(f_{Z}\left ( z\right ) \) was obtained, which was used to generate the above plots of \(P_{\varepsilon }\)\[ f_{Z}\left ( z\right ) =\left \{ \begin{array} [c]{ccc}\pi & & 0<z+\varepsilon \leq 0.25\\ 2\left ( \operatorname{arccsc}\left ( 2\sqrt{z+\varepsilon }\right ) -\arctan \left ( \sqrt{-1+4z+4\varepsilon }\right ) \right ) & & 0.25\leq z+\varepsilon <0.5\\ 0 & & \text{otherwise}\end{array} \right . \]

2.8.2 Appendix

polynomial over-bounding method

The first method to try is the method of polynomial over-bounding. If this method says the polynomial is robustly stable, then we are done. However, if this method says the polynomial is not robustly stable, the original polynomial can still be robustly stable, since over-bounding is conservative. In that case the polynomial over-bounding method is inconclusive, and we need to try other methods.

We start with this method since it is simple to check. Using the method of over-bounding, we first need to determine the bounds on \(\bar{q}_{i}\). In other words, we convert the polynomial in \(q\) to an interval polynomial in \(\bar{q}\)\[ \bar{q}_{0}^{-}=\min _{q\in Q}a_{0}\left ( q\right ) =\min _{0\leq q\leq 1}\left ( \varepsilon +3q_{1}+3q_{2}+2q_{1}q_{2}+\frac{1}{2}\right ) \] Setting up the following table




\(q_{1}\) \(q_{2}\) \(\varepsilon +3q_{1}+3q_{2}+2q_{1}q_{2}+\frac{1}{2}\)
\(0\) \(0\) \(\varepsilon +0.5\)
\(0\) \(1\) \(\varepsilon +3.5\)
\(1\) \(0\) \(\varepsilon +3.5\)
\(1\) \(1\) \(\varepsilon +3+3+2+\frac{1}{2}=\varepsilon +8.5\)



Hence\[ \bar{q}_{0}^{-}=\varepsilon +0.5 \] And\[ \bar{q}_{0}^{+}=\max _{q\in Q}a_{0}\left ( q\right ) =\varepsilon +8.5 \] And\[ \bar{q}_{1}^{-}=\min _{q\in Q}a_{1}\left ( q\right ) =\min _{0\leq q\leq 1}\left ( 1+q_{1}+q_{2}\right ) \] Setting up a table




\(q_{1}\) \(q_{2}\) \(1+q_{1}+q_{2}\)
\(0\) \(0\) \(1\)
\(0\) \(1\) \(2\)
\(1\) \(0\) \(2\)
\(1\) \(1\) \(3\)



Hence\[ \bar{q}_{1}^{-}=1 \] And\[ \bar{q}_{1}^{+}=\max _{q\in Q}a_{1}\left ( q\right ) =3 \] And similarly, \(\bar{q}_{2}^{-}=1,\bar{q}_{2}^{+}=3\). And for \(\bar{q}_{3}\), since it has no uncertainties, \(\bar{q}_{3}^{-}=\bar{q}_{3}^{+}=1\). Hence the over-bounding interval polynomial is\begin{align*} \bar{p}\left ( s,\bar{q}\right ) & =\left [ \bar{q}_{0}^{-},\bar{q}_{0}^{+}\right ] +\left [ \bar{q}_{1}^{-},\bar{q}_{1}^{+}\right ] s+\left [ \bar{q}_{2}^{-},\bar{q}_{2}^{+}\right ] s^{2}+\left [ \bar{q}_{3}^{-},\bar{q}_{3}^{+}\right ] s^{3}\\ & =\left [ \varepsilon +0.5,\varepsilon +8.5\right ] +\left [ 1,3\right ] s+\left [ 1,3\right ] s^{2}+\left [ 1,1\right ] s^{3} \end{align*}
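Since each coefficient is multilinear in \(q\), its extrema over the box occur at the corners, so the tabulated bounds can be checked by enumerating the four corners. A sketch, treating \(\varepsilon\) as an additive offset split off from \(a_{0}\):

```python
from itertools import product

# Coefficients of p(s,q), with the eps term split off from a0
def a0_minus_eps(q1, q2):
    return 0.5 + 3*q1 + 3*q2 + 2*q1*q2

def a1(q1, q2):
    return 1 + q1 + q2

corners = list(product([0, 1], repeat=2))
a0_vals = [a0_minus_eps(*c) for c in corners]
a1_vals = [a1(*c) for c in corners]

# Interval bounds: a0 in [eps+0.5, eps+8.5], a1 = a2 in [1, 3]
assert min(a0_vals) == 0.5 and max(a0_vals) == 8.5
assert min(a1_vals) == 1 and max(a1_vals) == 3
```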

We now construct the four Kharitonov polynomials to check for stability using Hurwitz matrix method\begin{align*} K_{1}\left ( s\right ) & =\left ( \varepsilon +0.5\right ) +s+3s^{2}+s^{3}\\ K_{2}\left ( s\right ) & =\left ( \varepsilon +8.5\right ) +3s+s^{2}+s^{3}\\ K_{3}\left ( s\right ) & =\left ( \varepsilon +8.5\right ) +s+s^{2}+s^{3}\\ K_{4}\left ( s\right ) & =\left ( \varepsilon +0.5\right ) +3s+3s^{2}+s^{3} \end{align*}

Hence \(H_{1}=\begin{bmatrix} 1 & 1 & 0\\ \left ( \varepsilon +0.5\right ) & 3 & 0\\ 0 & 1 & 1 \end{bmatrix} \) and for stability we require that \(\Delta _{1}=1>0,\) \(\Delta _{2}=3-\left ( \varepsilon +0.5\right ) =2.5-\varepsilon >0\) and \(\Delta _{3}=2.5-\varepsilon >0\). Since \(\,0<\varepsilon <0.1\), then we see that \(\Delta _{i},i=1\cdots 3\) are all positive. Now we consider \(K_{2}.\)

\(H_{2}=\begin{bmatrix} 3 & 1 & 0\\ \left ( \varepsilon +8.5\right ) & 1 & 0\\ 0 & 3 & 1 \end{bmatrix} \) and for stability we require that \(\Delta _{1}=3>0\) and \(\Delta _{2}=-5.5-\varepsilon >0\). But \(\Delta _{2}<0\) since \(0<\varepsilon <0.1\), so \(K_{2}\) is unstable. Because over-bounding is conservative, instability of the over-bounded family does not imply instability of the original family; hence the over-bounding polynomial method was not conclusive, and we need to try a different method.
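The Hurwitz test of all four Kharitonov polynomials can be scripted; as in the hand computation, \(K_2\) (and also \(K_3\)) fails the \(\Delta_2\) condition for every \(\varepsilon\) in the range. A sketch of that check, using the same matrix arrangement as in the text:

```python
import numpy as np

def hurwitz_minors(a0, a1, a2, a3=1.0):
    # Hurwitz matrix of a3 s^3 + a2 s^2 + a1 s + a0, arranged as in the text
    H = np.array([[a1, a3, 0],
                  [a0, a2, 0],
                  [0, a1, a3]])
    return H[0, 0], np.linalg.det(H[:2, :2]), np.linalg.det(H)

eps = 0.05                      # any representative value in (0, 0.1)
K = {                           # (a0, a1, a2) of the four Kharitonov polynomials
    "K1": (eps + 0.5, 1, 3),
    "K2": (eps + 8.5, 3, 1),
    "K3": (eps + 8.5, 1, 1),
    "K4": (eps + 0.5, 3, 3),
}
for name, coeffs in K.items():
    d1, d2, d3 = hurwitz_minors(*coeffs)
    print(name, "stable" if min(d1, d2, d3) > 0 else "unstable")
```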

Graphical method based on zero exclusion using the value set of a polytope of polynomials The polynomial \(P\left ( s\right ) =\left ( \varepsilon +3q_{1}+3q_{2}+2q_{1}q_{2}+\frac{1}{2}\right ) +s\left ( 1+q_{1}+q_{2}\right ) +s^{2}\left ( 1+q_{1}+q_{2}\right ) +s^{3}\) with \(0\leq q_{1}\leq 1,0\leq q_{2}\leq 1\) is multilinear in \(q\). To use the value set for a polytope of polynomials and apply the graphical zero exclusion method in the hope of confirming robust stability, we have to convert the polynomial to an over-bounding affine linear polynomial in \(q\) by introducing a new parameter \(q_{3}=q_{1}q_{2}\).

The polynomial becomes \(P\left ( s\right ) =\left ( \varepsilon +3q_{1}+3q_{2}+2q_{3}+\frac{1}{2}\right ) +s\left ( 1+q_{1}+q_{2}\right ) +s^{2}\left ( 1+q_{1}+q_{2}\right ) +s^{3}\) with \(0\leq q_{1}\leq 1,0\leq q_{2}\leq 1,0\leq q_{3}\leq 1\) and \(0<\varepsilon <\varepsilon ^{+}\). Hence the uncertainty bounding set \(Q\) has the 8 extremes \(q^{1}=\left ( 0,0,0\right ) ,q^{2}=\left ( 0,0,1\right ) ,q^{3}=\left ( 0,1,0\right ) ,q^{4}=\left ( 0,1,1\right ) \) \(,q^{5}=\left ( 1,0,0\right ) ,q^{6}=\left ( 1,0,1\right ) ,q^{7}=\left ( 1,1,0\right ) ,q^{8}=\left ( 1,1,1\right ) \). The eight associated polynomials generated are\begin{align*} P\left ( s,q^{1}\right ) & =\left ( \varepsilon +0.5\right ) +s+s^{2}+s^{3}\\ P\left ( s,q^{2}\right ) & =\varepsilon +2.5+s+s^{2}+s^{3}\\ P\left ( s,q^{3}\right ) & =\left ( \varepsilon +3.5\right ) +2s+2s^{2}+s^{3}\\ P\left ( s,q^{4}\right ) & =\left ( \varepsilon +5.5\right ) +2s+2s^{2}+s^{3}\\ P\left ( s,q^{5}\right ) & =\left ( \varepsilon +3.5\right ) +2s+2s^{2}+s^{3}\\ P\left ( s,q^{6}\right ) & =\left ( \varepsilon +5.5\right ) +2s+2s^{2}+s^{3}\\ P\left ( s,q^{7}\right ) & =\left ( \varepsilon +6.5\right ) +3s+3s^{2}+s^{3}\\ P\left ( s,q^{8}\right ) & =\left ( \varepsilon +8.5\right ) +3s+3s^{2}+s^{3} \end{align*}
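Generating the eight vertex polynomials by enumerating the corners of \(Q\) confirms this list, and that only six of them are distinct (a sketch; \(\varepsilon\) enters \(a_0\) as a common offset and is left out of the comparison):

```python
from itertools import product

def coeffs(q1, q2, q3):
    # (a0 - eps, a1, a2, a3) at a corner of Q, with q3 standing in for q1*q2
    return (0.5 + 3*q1 + 3*q2 + 2*q3, 1 + q1 + q2, 1 + q1 + q2, 1)

vertex_polys = [coeffs(*c) for c in product([0, 1], repeat=3)]
print(len(vertex_polys))              # 8
print(len(set(vertex_polys)))         # 6 distinct polynomials
```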

From the above, the vertices of the value set are determined only by the distinct polynomials. Since \(P\left ( s,q^{5}\right ) =P\left ( s,q^{3}\right ) \) and \(P\left ( s,q^{6}\right ) =P\left ( s,q^{4}\right ) \), we use only six of the eight above:\begin{align*} P\left ( s,q^{1}\right ) & =\left ( \varepsilon +0.5\right ) +s+s^{2}+s^{3}\\ P\left ( s,q^{2}\right ) & =\varepsilon +2.5+s+s^{2}+s^{3}\\ P\left ( s,q^{3}\right ) & =\left ( \varepsilon +3.5\right ) +2s+2s^{2}+s^{3}\\ P\left ( s,q^{4}\right ) & =\left ( \varepsilon +5.5\right ) +2s+2s^{2}+s^{3}\\ P\left ( s,q^{7}\right ) & =\left ( \varepsilon +6.5\right ) +3s+3s^{2}+s^{3}\\ P\left ( s,q^{8}\right ) & =\left ( \varepsilon +8.5\right ) +3s+3s^{2}+s^{3} \end{align*}

We need to check that we have at least one nominally stable polynomial in the family. Using \(P\left ( s,q^{1}\right ) =0.5+s+s^{2}+s^{3}\) as the stable member, we check it for stability first:\[ H=\begin{bmatrix} 1 & 1 & 0\\ 0.5 & 1 & 0\\ 0 & 1 & 1 \end{bmatrix} \] For \(\Delta _{1}=1>0\), \(\Delta _{2}=0.5>0\) and \(\Delta _{3}=\Delta _{2}>0\) as well. Hence we verified that the stable member exists. Now we need to generate the polygonal value set for each of the polynomials above. \begin{align*} P\left ( j\omega ,q^{1}\right ) & =0.5+j\omega +\left ( j\omega \right ) ^{2}+\left ( j\omega \right ) ^{3}\\ & =0.5+j\omega -\omega ^{2}-j\omega ^{3}\\ & =0.5-\omega ^{2}+j\left ( \omega -\omega ^{3}\right ) \end{align*}

And\begin{align*} P\left ( j\omega ,q^{2}\right ) & =3.5+2j\omega +2\left ( j\omega \right ) ^{2}+\left ( j\omega \right ) ^{3}\\ & =3.5+2j\omega -2\omega ^{2}-j\omega ^{3}\\ & =3.5-2\omega ^{2}+j\left ( 2\omega -\omega ^{3}\right ) \end{align*}

Polynomial \(P\left ( s,q^{3}\right ) \) is the same as \(P\left ( s,q^{2}\right ) \), so corner \(q^{2}\) and \(q^{3}\) map to same point in complex plane.\begin{align*} P\left ( j\omega ,q^{4}\right ) & =8.5+3j\omega +3\left ( j\omega \right ) ^{2}+\left ( j\omega \right ) ^{3}\\ & =8.5+3j\omega -3\omega ^{2}-j\omega ^{3}\\ & =8.5-3\omega ^{2}+j\left ( 3\omega -\omega ^{3}\right ) \end{align*}

And\begin{align*} P\left ( j\omega ,q^{5}\right ) & =\left ( \varepsilon ^{+}+0.5\right ) +j\omega +\left ( j\omega \right ) ^{2}+\left ( j\omega \right ) ^{3}\\ & =\left ( \varepsilon ^{+}+0.5\right ) +j\omega -\omega ^{2}-j\omega ^{3}\\ & =\left ( \varepsilon ^{+}+0.5\right ) -\omega ^{2}+j\left ( \omega -\omega ^{3}\right ) \end{align*}

And\begin{align*} P\left ( j\omega ,q^{6}\right ) & =\left ( \varepsilon ^{+}+3.5\right ) +2j\omega +2\left ( j\omega \right ) ^{2}+\left ( j\omega \right ) ^{3}\\ & =\left ( \varepsilon ^{+}+3.5\right ) +2j\omega -2\omega ^{2}-j\omega ^{3}\\ & =\left ( \varepsilon ^{+}+3.5\right ) -2\omega ^{2}+j\left ( 2\omega -\omega ^{3}\right ) \end{align*}

Polynomial \(P\left ( s,q^{7}\right ) \) is the same as \(P\left ( s,q^{6}\right ) \), so corner \(q^{6}\) and \(q^{7}\) map to same point in complex plane.\begin{align*} P\left ( j\omega ,q^{8}\right ) & =\left ( \varepsilon ^{+}+8.5\right ) +3j\omega +3\left ( j\omega \right ) ^{2}+\left ( j\omega \right ) ^{3}\\ & =\left ( \varepsilon ^{+}+8.5\right ) +3j\omega -3\omega ^{2}-j\omega ^{3}\\ & =\left ( \varepsilon ^{+}+8.5\right ) -3\omega ^{2}+j\left ( 3\omega -\omega ^{3}\right ) \end{align*}

This diagram illustrates the mapping


To find the cut off frequency \(\omega _{c}\), we use \(\omega _{c}=1+\frac{\max \left \{ q_{0}^{+},q_{1}^{+},\cdots ,q_{n-1}^{+}\right \} }{\bar{q}_{n}^{-}}\), with the interval polynomial \(\bar{p}\left ( s,\bar{q}\right ) =\left [ \varepsilon +0.5,\varepsilon +8.5\right ] +\left [ 1,3\right ] s+\left [ 1,3\right ] s^{2}+\left [ 1,1\right ] s^{3}\) found above. This results in\[ \omega _{c}=1+\frac{\max \left \{ \varepsilon +8.5,3,3\right \} }{1}=9.5+\varepsilon \approx 9.51 \] Therefore the sweep frequency will be \(0<\omega <9.51\) for the simulation. A program was written to simulate the value sets for this problem. For each \(\varepsilon \) value, the value set is displayed over the sweep frequency to see if the polygon includes the origin or not. \(\varepsilon \) was changed by small increments and the simulation was run again.
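The zero-exclusion check itself can be sketched in a few lines: map the six distinct vertex polynomials (at both \(\varepsilon\) extremes, taking \(\varepsilon ^{+}=0.1\) as in the problem) to the complex plane at a given \(\omega\), and test whether the origin lies inside the convex hull of the resulting points. Assumptions: the value set is taken as the convex hull of these vertex images, and the hull code is a standard monotone chain:

```python
# (a0 offset, a1, a2) of the six distinct vertex polynomials; a3 = 1
VERTS = [(0.5, 1, 1), (2.5, 1, 1), (3.5, 2, 2),
         (5.5, 2, 2), (6.5, 3, 3), (8.5, 3, 3)]

def value_set_points(w, eps_plus=0.1):
    # P(jw) = (a0 - a2 w^2) + j(a1 w - w^3), at eps = 0 and eps = eps_plus
    pts = []
    for a0, a1, a2 in VERTS:
        for eps in (0.0, eps_plus):
            pts.append((a0 + eps - a2*w**2, a1*w - w**3))
    return pts

def origin_in_hull(pts):
    # Monotone-chain convex hull, then a strict point-in-polygon test at (0,0)
    pts = sorted(set(pts))
    if len(pts) < 3:
        return False
    def half(points):
        chain = []
        for p in points:
            while len(chain) >= 2:
                (x1, y1), (x2, y2) = chain[-2], chain[-1]
                if (x2 - x1)*(p[1] - y1) - (y2 - y1)*(p[0] - x1) <= 0:
                    chain.pop()
                else:
                    break
            chain.append(p)
        return chain
    hull = half(pts)[:-1] + half(pts[::-1])[:-1]
    if len(hull) < 3:
        return False
    for i in range(len(hull)):
        (x1, y1), (x2, y2) = hull[i], hull[(i + 1) % len(hull)]
        if (x2 - x1)*(0 - y1) - (y2 - y1)*(0 - x1) <= 0:
            return False
    return True

print(origin_in_hull(value_set_points(0.0)))   # False: origin excluded at w = 0
print(origin_in_hull(value_set_points(1.5)))   # True: origin inside, so inconclusive
```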

Unfortunately, the result of this method was inconclusive as well: the polygons generated did include the origin as the frequency \(\omega \) was increased, for the different values of \(\varepsilon \) tried. Below is an example from a program written to implement this method.


2.8.3 References

1. Barmish, B. Ross. New Tools for Robustness of Linear Systems. Macmillan Publishing Company, 1994.
2. Class notes, ECE 717, University of Wisconsin, Madison. Fall 2014.
3. Strang, Gilbert. Introduction to Applied Mathematics. Wellesley-Cambridge Press, 1986.

2.8.4 key solution
