4.3 Final exam, May 10, 2020

  4.3.1 What will be covered
  4.3.2 Questions
  4.3.3 My exam

4.3.1 What will be covered

PDF

4.3.2 Questions

PDF

4.3.3 My exam

   4.3.3.1 Problem 1
   4.3.3.2 Problem 2

4.3.3.1 Problem 1

Figure 4.8: Problem description

\begin{align*} \dot{x} & =-x+y+xy\\ \dot{y} & =x-y-x^{2}-y^{3} \end{align*}

Part 1 Equilibrium points are found by solving for \(x,y\) in\begin{align} -x+y+xy & =0\tag{1}\\ x-y-x^{2}-y^{3} & =0 \tag{2} \end{align}

The first obvious solution is \(x=0,y=0\). To find other solutions, solving (1) for \(x\) gives\begin{align} y+x\left ( y-1\right ) & =0\nonumber \\ x & =\frac{-y}{y-1}=\frac{y}{1-y} \tag{3} \end{align}

Substituting (3) into (2) results in\begin{align*} \left ( \frac{y}{1-y}\right ) -y-\left ( \frac{y}{1-y}\right ) ^{2}-y^{3} & =0\\ \frac{y}{1-y}-y-\frac{y^{2}}{\left ( 1-y\right ) ^{2}}-y^{3} & =0\\ y\left ( 1-y\right ) -y\left ( 1-y\right ) ^{2}-y^{2}-y^{3}\left ( 1-y\right ) ^{2} & =0\\ y\left ( \left ( 1-y\right ) -\left ( 1-y\right ) ^{2}-y-y^{2}\left ( 1-y\right ) ^{2}\right ) & =0 \end{align*}

The above shows that \(y=0\) is one solution, and that the remaining solutions satisfy \(\left ( 1-y\right ) -\left ( 1-y\right ) ^{2}-y-y^{2}\left ( 1-y\right ) ^{2}=0\). But \(y=0\) gives \(x=0\), which we already found earlier. So we now solve this second factor for \(y\), which gives the following\begin{align*} \left ( 1-y\right ) -\left ( 1-y\right ) ^{2}-y-y^{2}\left ( 1-y\right ) ^{2} & =0\\ \left ( 1-y\right ) -\left ( 1+y^{2}-2y\right ) -y-y^{2}\left ( 1+y^{2}-2y\right ) & =0\\ 1-y-1-y^{2}+2y-y-\left ( y^{2}+y^{4}-2y^{3}\right ) & =0\\ 1-y-1-y^{2}+2y-y-y^{2}-y^{4}+2y^{3} & =0\\ -y^{2}-y^{2}-y^{4}+2y^{3} & =0\\ -2y^{2}-y^{4}+2y^{3} & =0\\ y^{2}\left ( -2-y^{2}+2y\right ) & =0 \end{align*}

This gives the solutions \(y=0\) or \(-2-y^{2}+2y=0\). But \(y=0\) gives \(x=0\) from (3), which we already found earlier. So we look at the second factor, which gives\begin{align*} -2-y^{2}+2y & =0\\ y^{2}-2y+2 & =0 \end{align*}

By the quadratic formula \(y=-\frac{b}{2a}\pm \frac{1}{2a}\sqrt{b^{2}-4ac}\), the roots are \begin{align*} y & =\frac{2}{2}\pm \frac{1}{2}\sqrt{4-8}\\ & =1\pm \frac{1}{2}\sqrt{-4}\\ & =1\pm i \end{align*}

Since we are looking for real solutions, these complex roots are rejected. This shows that there is only one equilibrium point\[ \left ( x^{\ast },y^{\ast }\right ) =\left ( 0,0\right ) \] Using the computer, the phase plot for the non-linear system is given below. The red point is the equilibrium point \(\left ( 0,0\right ) \)

Figure 4.9: Phase plot

The following is the same phase plot, but over a much larger domain of the state variables \(x,y\).

Figure 4.10: Phase plot using larger domain

Figure 4.11: Code used for the above plot
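
The code in Figure 4.11 is only available as an image. As a rough substitute (a sketch, not the original code), a similar phase portrait can be produced with Python and matplotlib; the domain limits below are an arbitrary choice.

    import numpy as np
    import matplotlib.pyplot as plt

    # right-hand side of the nonlinear system
    def rhs(x, y):
        return -x + y + x*y, x - y - x**2 - y**3

    # grid over a region around the origin (domain size is arbitrary)
    X, Y = np.meshgrid(np.linspace(-4, 4, 40), np.linspace(-4, 4, 40))
    U, V = rhs(X, Y)

    plt.streamplot(X, Y, U, V, density=1.2, color="gray")
    plt.plot(0, 0, "ro")                     # equilibrium point (0, 0) in red
    plt.xlabel("x"); plt.ylabel("y")
    plt.title("Phase plot of the nonlinear system")
    plt.show()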

Part 2 The linearized system at the equilibrium point is given by\begin{equation} \begin{pmatrix} \dot{x}\\ \dot{y}\end{pmatrix} =\left [ A\right ] \begin{pmatrix} x\\ y \end{pmatrix} \tag{1} \end{equation} Where the matrix \(A\) is the Jacobian matrix \(J\) when evaluated at the equilibrium point. The Jacobian matrix is given by\begin{equation} J=\begin{pmatrix} \frac{\partial \dot{x}}{\partial x} & \frac{\partial \dot{x}}{\partial y}\\ \frac{\partial \dot{y}}{\partial x} & \frac{\partial \dot{y}}{\partial y}\end{pmatrix} \tag{2} \end{equation} Where \(\dot{x}=-x+y+xy,\dot{y}=x-y-x^{2}-y^{3}\). Therefore\begin{align*} \frac{\partial \dot{x}}{\partial x} & =-1+y\\ \frac{\partial \dot{x}}{\partial y} & =1+x\\ \frac{\partial \dot{y}}{\partial x} & =1-2x\\ \frac{\partial \dot{y}}{\partial y} & =-1-3y^{2} \end{align*}

Using the above in (2) gives the Jacobian matrix as\[ J=\begin{pmatrix} -1+y & 1+x\\ 1-2x & -1-3y^{2}\end{pmatrix} \] Then the linearized system around \(x=0,y=0\) now is found as\begin{align*} \begin{pmatrix} \dot{x}\\ \dot{y}\end{pmatrix} & =A\begin{pmatrix} x\\ y \end{pmatrix} \\ & =\begin{pmatrix} -1+y & 1+x\\ 1-2x & -1-3y^{2}\end{pmatrix} _{\substack{x=0\\y=0}}\begin{pmatrix} x\\ y \end{pmatrix} \\ & =\begin{pmatrix} -1 & 1\\ 1 & -1 \end{pmatrix}\begin{pmatrix} x\\ y \end{pmatrix} \end{align*}
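
As a quick symbolic check of the Jacobian and of the linearization at the origin (a sketch assuming sympy is available; it is not part of the original solution):

    import sympy as sp

    x, y = sp.symbols('x y')
    xdot = -x + y + x*y
    ydot = x - y - x**2 - y**3

    # Jacobian of the vector field and its value at the equilibrium point
    J = sp.Matrix([xdot, ydot]).jacobian(sp.Matrix([x, y]))
    print(J)                       # equals [[-1+y, 1+x], [1-2x, -1-3y^2]]
    print(J.subs({x: 0, y: 0}))    # equals [[-1, 1], [1, -1]]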

Part 3 From part (2) above, we found the linearized system around \(x=0,y=0\) to be\[\begin{pmatrix} \dot{x}\\ \dot{y}\end{pmatrix} =\begin{pmatrix} -1 & 1\\ 1 & -1 \end{pmatrix}\begin{pmatrix} x\\ y \end{pmatrix} \] Now we find the eigenvalues of \(A\). Solving\begin{align*} \left \vert A-\lambda I\right \vert & =0\\\begin{vmatrix} -1-\lambda & 1\\ 1 & -1-\lambda \end{vmatrix} & =0\\ \left ( -1-\lambda \right ) \left ( -1-\lambda \right ) -1 & =0\\ \lambda ^{2}+2\lambda & =0\\ \lambda \left ( \lambda +2\right ) & =0 \end{align*}

Therefore the eigenvalues are \(\lambda _{1}=0,\lambda _{2}=-2\). Now we find the corresponding eigenvectors of \(A\).

For \(\lambda _{1}=0\) we solve for \(v\) from\begin{align*} \begin{pmatrix} -1-\lambda _{1} & 1\\ 1 & -1-\lambda _{1}\end{pmatrix}\begin{pmatrix} v_{1}\\ v_{2}\end{pmatrix} & =\begin{pmatrix} 0\\ 0 \end{pmatrix} \\\begin{pmatrix} -1 & 1\\ 1 & -1 \end{pmatrix}\begin{pmatrix} v_{1}\\ v_{2}\end{pmatrix} & =\begin{pmatrix} 0\\ 0 \end{pmatrix} \end{align*}

The first equation gives \(-v_{1}+v_{2}=0\). Let \(v_{1}=1\); then \(v_{2}=1\). Hence the eigenvector associated with \(\lambda _{1}=0\) is \(\begin{pmatrix} 1\\ 1 \end{pmatrix} \)

For \(\lambda _{2}=-2\) we solve for \(v\) from\begin{align*} \begin{pmatrix} -1-\lambda _{2} & 1\\ 1 & -1-\lambda _{2}\end{pmatrix}\begin{pmatrix} v_{1}\\ v_{2}\end{pmatrix} & =\begin{pmatrix} 0\\ 0 \end{pmatrix} \\\begin{pmatrix} -1+2 & 1\\ 1 & -1+2 \end{pmatrix}\begin{pmatrix} v_{1}\\ v_{2}\end{pmatrix} & =\begin{pmatrix} 0\\ 0 \end{pmatrix} \\\begin{pmatrix} 1 & 1\\ 1 & 1 \end{pmatrix}\begin{pmatrix} v_{1}\\ v_{2}\end{pmatrix} & =\begin{pmatrix} 0\\ 0 \end{pmatrix} \end{align*}

The first equation gives \(v_{1}+v_{2}=0\). Let \(v_{1}=1\); then \(v_{2}=-1\). Hence the eigenvector associated with \(\lambda _{2}=-2\) is \(\begin{pmatrix} 1\\ -1 \end{pmatrix} \)
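
These eigenvalues and eigenvectors can be confirmed numerically (a sketch assuming numpy; note that numpy returns normalized eigenvectors, so the columns are scalar multiples of the ones found by hand, and the order may vary):

    import numpy as np

    A = np.array([[-1.0,  1.0],
                  [ 1.0, -1.0]])
    eigvals, eigvecs = np.linalg.eig(A)
    print(eigvals)     # approximately 0 and -2
    print(eigvecs)     # columns proportional to (1, 1) and (1, -1)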

Summary of results for part 3

\(x^{\ast }\) | Linearized system at \(x^{\ast }\) | Eigenvalues | Eigenvectors

\(\left ( 0,0\right ) \) | \(\begin{pmatrix} \dot{x}\\ \dot{y}\end{pmatrix} =\begin{pmatrix} -1 & 1\\ 1 & -1 \end{pmatrix}\begin{pmatrix} x\\ y \end{pmatrix} \) | \(\lambda _{1}=0,\lambda _{2}=-2\) | \(\begin{pmatrix} 1\\ 1 \end{pmatrix} ,\begin{pmatrix} 1\\ -1 \end{pmatrix} \)

Part 4 Since the system is non-linear and one of the eigenvalues is zero, the equilibrium point is degenerate (non-hyperbolic). What this means is that the linearization cannot tell us whether the origin is stable or not. Even though the second eigenvalue is negative, we can not conclude that the non-linear system is stable at the origin, since one eigenvalue is zero.

This only happens for non-linear systems. If the actual system were linear, then we could have concluded it is stable, but this conclusion does not carry over to the non-linear system.

Part 5 Consider the system \(\mathbf{\dot{x}}=f\left ( \mathbf{x},t\right ) \) and a neighborhood \(D\subset \mathbb{R} ^{n}\) of the origin \(\mathbf{x}=\mathbf{0}\). Here the origin is taken as the equilibrium point. Any other equilibrium point works just as well in this definition, since we can always translate the system so that the equilibrium point becomes the origin; it is simply easier to always take the equilibrium point to be the origin.

Now, let the solution that starts at time \(t=t_{0}\) from the point \(\mathbf{x}_{0}\in D\) be denoted \(\mathbf{x}\left ( t;t_{0},\mathbf{x}_{0}\right ) \). Then we say that the equilibrium point \(\mathbf{x}=\mathbf{0}\) is stable in the sense of Lyapunov if for each \(\epsilon >0\) and \(t_{0}\) we can find \(\delta \left ( \epsilon ,t_{0}\right ) \) such that \(\left \Vert \mathbf{x}_{0}\right \Vert \leq \delta \) implies \(\left \Vert \mathbf{x}\left ( t;t_{0},\mathbf{x}_{0}\right ) \right \Vert \leq \epsilon \) for all \(t\geq t_{0}\).

The above is basically what the book gives as the definition of Lyapunov stability.

The following is a diagram made to help explain what the above means; I also give a somewhat simpler informal statement of the definition.

Intuitively, Lyapunov stability says the following: if we start with initial conditions \(x_{0}\) at time \(t_{0}\) somewhere near the equilibrium point (inside the domain \(D\)), and the solution \(x(t)\) remains bounded for all future time \(t\) by some limit (which depends on how far the initial conditions are from the origin and on the time \(t_{0}\) at which the solution started), then the origin is called a stable equilibrium point in the sense of Lyapunov.

This basically says that solutions that start near the equilibrium point will never go too far away from the origin for all time.

To make this more mathematically precise: for every \(\epsilon >0\) we can find \(\delta >0\) such that \(||x_{0}||\leq \delta \) implies \(||x(t)||\leq \epsilon \) for all \(t\geq t_{0}\). Here both \(\delta \) and \(\epsilon \) are positive quantities, and \(\delta \) depends on the choice of \(\epsilon \) and on \(t_{0}\).

This diagram helps illustrate the above definition.

Figure 4.12: Graphical representation of Lyapunov stability

In the above diagram, we start with the system in some initial state shown on the left, where the norm satisfies \(\left \Vert x_{0}\right \Vert \leq \delta \). If for every \(\epsilon \) we can find such a \(\delta \) (depending on \(\epsilon \) and \(t_{0}\)) so that the solution norm satisfies \(\left \Vert x\left ( t\right ) \right \Vert \leq \epsilon \) for all future time \(t>t_{0}\), then we say the equilibrium point is stable in the sense of Lyapunov.

Part 6 The theorem that gives the conditions for Lyapunov stability is theorem 8.8 in the book. It basically says the following. Given the system \(\mathbf{\dot{x}}=f\left ( \mathbf{x},t\right ) \) with \(f\left ( \mathbf{0},t\right ) =\mathbf{0}\) and \(\mathbf{x}\in D\subset \mathbb{R} ^{n}\), \(t\geq t_{0}\), suppose we can find what is called a Lyapunov function \(V\left ( \mathbf{x}\right ) \) for this system satisfying the following three conditions

  1. \(V\left ( \mathbf{x}\right ) \) is a continuously differentiable function in \(\mathbb{R} ^{n}\) and \(V\left ( \mathbf{x}\right ) \geq 0\) (positive definite or positive semidefinite) for all points away from the origin, or everywhere inside some fixed region around the origin. For Hamiltonian systems this function represents the total energy of the system; for non-Hamiltonian systems we have to work harder to find it.
  2. \(V\left ( \mathbf{0}\right ) =0\). This condition says the system has no energy when it is at the equilibrium point (rest state).
  3. The orbital derivative along any solution trajectory satisfies \(\frac{dV}{dt}\leq 0\) (negative definite or negative semi-definite) for all points, or inside some fixed region around the origin. This condition says that the total energy is either constant in time (the zero case) or decreasing in time (the negative definite case), both of which indicate that the origin is a stable equilibrium point.

If such a \(V\left ( \mathbf{x}\right ) \) can be found, then these are sufficient conditions for stability of the equilibrium point. If \(\frac{dV}{dt}\) is strictly negative definite, we say the equilibrium point is asymptotically stable. If \(\frac{dV}{dt}\) is only negative semidefinite, the equilibrium point is stable in the sense of Lyapunov. Asymptotic stability is the stronger notion.

Negative semi-definite means that when the system is perturbed away from the origin, the solution trajectory remains near the origin since its energy does not increase. So it is stable. But an asymptotically stable equilibrium has a stronger form of stability: when perturbed from the origin, the solution eventually returns to the origin since the energy is always decreasing. Global stability means \(\frac{dV}{dt}\leq 0\) everywhere, and not just in some closed region around the origin; local stability means \(\frac{dV}{dt}\leq 0\) in some closed region around the origin. Global stability is stronger than local stability, and sometimes it is easier to determine local stability than global stability.

Part 7 Let \(V\left ( x,y\right ) =ax^{2}+2y^{2}\). Condition (2) \(V\left ( \mathbf{0}\right ) =0\) is satisfied, since when \(x=0,y=0\) then \(V\left ( x,y\right ) =0.\)

Condition (1) is also satisfied since both terms are positive if we choose \(a>0\). This makes \(V\left ( x,y\right ) >0\) for non zero \(x,y\). We now need to check the third condition. This condition is always the hardest one to check. The orbital derivative \(\frac{dV}{dt}\) is\begin{align} \frac{dV}{dt} & =\frac{\partial V}{\partial x}\dot{x}+\frac{\partial V}{\partial y}\dot{y}\nonumber \\ & =2ax\dot{x}+4y\dot{y} \tag{1} \end{align}

But \begin{align*} \dot{x} & =-x+y+xy\\ \dot{y} & =x-y-x^{2}-y^{3} \end{align*}

Eq(1) now becomes\begin{align*} \frac{dV}{dt} & =2ax\left ( -x+y+xy\right ) +4y\left ( x-y-x^{2}-y^{3}\right ) \\ & =-2ax^{2}+2axy+2ax^{2}y+4yx-4y^{2}-4yx^{2}-4y^{4}\\ & =-\left ( 2ax^{2}+4y^{2}+4y^{4}\right ) +2axy+2ax^{2}y+4yx-4yx^{2}\\ & =-\left ( 2ax^{2}+4y^{2}+4y^{4}\right ) +xy\left ( 2a+4\right ) +2ax^{2}y-4yx^{2}\\ & =-\left ( 2ax^{2}+4y^{2}+4y^{4}\right ) -\left ( -xy\left ( 2a+4\right ) -2ax^{2}y+4yx^{2}\right ) \end{align*}

We see that if we choose \(a>0\) then the first term above, which is \(-\left ( 2ax^{2}+4y^{2}+4y^{4}\right ) \), is never positive (it is zero only at \(x=0,y=0\)).

Let us try \(a=2\) (we only need to find one value of \(a\) that makes \(V\) a valid Lyapunov function). This means our choice of Lyapunov function becomes\[ \fbox{$V\left ( x,y\right ) =2x^2+2y^2$}\] The above \(\frac{dV}{dt}\) now becomes\begin{align*} \frac{dV}{dt} & =-\left ( 4x^{2}+4y^{2}+4y^{4}\right ) -\left ( -xy\left ( 4+4\right ) -4x^{2}y+4yx^{2}\right ) \\ & =-4x^{2}+8xy-4y^{2}-4y^{4}\\ & =-\left ( 4x^{2}-8xy+4y^{2}+4y^{4}\right ) \\ & =-\left ( \left ( 2x-2y\right ) ^{2}+4y^{4}\right ) \end{align*}

Since the terms inside the parentheses are a square and a fourth power, both non-negative, this shows \(\frac{dV}{dt}\leq 0\); it can not be positive. The maximum it can be is zero, and this happens at the origin only. This shows the origin is indeed stable in the sense of Lyapunov, because all three conditions given above are now satisfied. Plotting the Lyapunov function \(2x^{2}+2y^{2}\) over some region around the origin gives

Figure 4.13: Graphical representation of Lyapunov function used

Figure 4.14: Code used for the above

The following shows the orbital derivative \(\frac{dV}{dt}\), plotted over a region around the origin, showing that it is indeed negative definite.

Figure 4.15: Graphical representation of \(\frac{dV}{dt}\)

Figure 4.16: Code used for the above
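
Independently of the plots above (whose code appears only as images), the closed form of the orbital derivative for \(a=2\) can be checked symbolically; this is a sketch assuming sympy and is not the code used for the figures.

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    xdot = -x + y + x*y
    ydot = x - y - x**2 - y**3

    V    = 2*x**2 + 2*y**2                          # Lyapunov candidate with a = 2
    Vdot = sp.diff(V, x)*xdot + sp.diff(V, y)*ydot  # orbital derivative dV/dt

    # difference with the closed form -((2x - 2y)^2 + 4 y^4) should simplify to 0
    print(sp.simplify(Vdot + (2*x - 2*y)**2 + 4*y**4))   # 0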

The following plot shows the Lyapunov function and the orbital derivative found above on the same axes. If the system is stable in the sense of Lyapunov, these two surfaces can only meet at the equilibrium point, which is the origin in this case.

Figure 4.17: Combined graphical representation of \(\frac{dV}{dt}\) and the Lyapunov function

Figure 4.18: Code used for the above

Part 8 The \(\omega \) limit set is the set of all points that are limits of positive orbits \(\gamma ^{+}\left ( x\right ) \). In other words, given a specific orbit \(\gamma ^{+}\left ( x\right ) \) that starts at some initial point \(x_{0}\), if this orbit approaches a point \(p\) as \(t\rightarrow \infty \), then \(p\) is in the \(\omega \) limit set of that orbit.

To find the \(\omega \) limit set, we need to find the points that solutions eventually approach (attracting points or saddle points). But from the above, we found that there is only one critical point, the origin, and that this point is stable. And since \(\frac{dV}{dt}<0\) for all points away from the origin and zero only at the origin, the origin is an asymptotically stable equilibrium. This means all orbits \(\gamma ^{+}\left ( x\right ) \) have the origin as their limit. Hence the \(\omega \) limit set is the origin.

Part 9 The Poincaré-Bendixson theorem for \(\mathbb{R} ^{2}\) says that for a positive, bounded, non-periodic orbit \(\gamma ^{+}\) of the system \(\mathbf{\dot{x}}=f\left ( \mathbf{x}\right ) \), the \(\omega \) limit set \(\omega \left ( \gamma ^{+}\right ) \) either contains a critical point or consists of a closed orbit. Here \(\gamma ^{+}\) denotes a solution orbit which, as \(t\rightarrow \infty \), approaches the points of its \(\omega \) limit set. We also require that \(f:\mathbb{R} ^{2}\rightarrow \mathbb{R} ^{2}\) has continuous first partial derivatives and that solutions exist for all time \(-\infty <t<\infty \).

Part 10 Since a limit cycle is a closed orbit, and since we found in part 8 that the \(\omega \) limit set contains only a critical point (the origin), then by the Poincaré-Bendixson theorem it is not possible for the system to have a limit cycle in its \(\omega \) limit set.

4.3.3.2 Problem 2

Figure 4.19: Problem description

Part 11 A fixed point of a map \(f\left ( x\right ) \) is a point which is mapped to itself. In other words, it is any point \(x^{\ast }\) that satisfies \(f\left ( x^{\ast }\right ) =x^{\ast }\), where \(f\left ( x\right ) \) is the map.

Part 12 From (2)

\[ f\left ( x\right ) =\frac{x}{1+x^{2}}-ax \] Hence we need to solve for \(x\) in the following\begin{align*} \frac{x}{1+x^{2}}-ax & =x\\ x-ax\left ( 1+x^{2}\right ) & =x\left ( 1+x^{2}\right ) \\ x\left ( 1+x^{2}\right ) +ax\left ( 1+x^{2}\right ) -x & =0\\ ax^{3}+ax+x^{3} & =0\\ x\left ( ax^{2}+a+x^{2}\right ) & =0\\ x\left ( x^{2}\left ( 1+a\right ) +a\right ) & =0 \end{align*}

Hence \(x=0\) is a fixed point, and the remaining fixed points satisfy \(x^{2}\left ( 1+a\right ) +a=0\), or \[ x^{2}=\frac{-a}{1+a}\] For real non-zero \(x\), the RHS must be positive, and also \(a\neq -1\). Hence we need \(-1<a<0\), and the remaining fixed points are given by \(x=\pm \sqrt{\frac{-a}{1+a}}\). Hence the fixed points are\begin{align*} x_{1}^{\ast } & =0\qquad \text{for all }a\\ x_{2}^{\ast } & =\sqrt{\frac{-a}{1+a}}\qquad -1<a<0\\ x_{3}^{\ast } & =-\sqrt{\frac{-a}{1+a}}\qquad -1<a<0 \end{align*}
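
These fixed points can be checked symbolically (a sketch assuming sympy; the symbol a is left as a free parameter):

    import sympy as sp

    x, a = sp.symbols('x a')
    f = x/(1 + x**2) - a*x

    # fixed points satisfy f(x) = x; expected: 0 and +/- sqrt(-a/(1 + a))
    print(sp.solve(sp.Eq(f, x), x))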

Part 13 By definition, for a map \(f\left ( x\right ) \) with fixed point \(x^{\ast }\) then

  1. \(x^{\ast }\) is a sink if \(\left \vert f^{\prime }\left ( x^{\ast }\right ) \right \vert <1\)
  2. \(x^{\ast }\) is a source if \(\left \vert f^{\prime }\left ( x^{\ast }\right ) \right \vert >1\)
  3. The test is inconclusive if \(\left \vert f^{\prime }\left ( x^{\ast }\right ) \right \vert =1\)

Therefore, we now apply the above to the two non-zero fixed points found in part 12.

For \(x_{2}^{\ast }=\sqrt{\frac{-a}{1+a}}\)\begin{align} f^{\prime }\left ( x\right ) & =\frac{d}{dx}\left ( \frac{x}{1+x^{2}}-ax\right ) \nonumber \\ & =\frac{d}{dx}\frac{x}{1+x^{2}}-\frac{d}{dx}ax\nonumber \\ & =\frac{\left ( 1+x^{2}\right ) -x\left ( 2x\right ) }{\left ( 1+x^{2}\right ) ^{2}}-a\nonumber \\ & =\frac{1+x^{2}-2x^{2}}{\left ( 1+x^{2}\right ) ^{2}}-a\nonumber \\ & =\frac{1-x^{2}}{\left ( 1+x^{2}\right ) ^{2}}-a \tag{1} \end{align}

Evaluating the above at \(x=x_{2}^{\ast }=\sqrt{\frac{-a}{1+a}}\) gives\begin{align*} f^{\prime }\left ( x_{2}^{\ast }\right ) & =\frac{1-\left ( \sqrt{\frac{-a}{1+a}}\right ) ^{2}}{\left ( 1+\left ( \sqrt{\frac{-a}{1+a}}\right ) ^{2}\right ) ^{2}}-a\\ & =\frac{1-\frac{-a}{1+a}}{\left ( 1+\frac{-a}{1+a}\right ) ^{2}}-a\\ & =\frac{\frac{1+a+a}{1+a}}{\left ( \frac{1+a-a}{1+a}\right ) ^{2}}-a\\ & =\frac{\frac{1+2a}{1+a}}{\left ( \frac{1}{1+a}\right ) ^{2}}-a\\ & =\frac{\frac{1+2a}{1+a}}{\frac{1}{\left ( 1+a\right ) ^{2}}}-a\\ & =\frac{1+2a}{\frac{1}{1+a}}-a\\ & =\left ( 1+2a\right ) \left ( 1+a\right ) -a\\ & =2a^{2}+2a+1\\ & =1+2a\left ( 1+a\right ) \end{align*}

Since \(-1<a<0\), then \(0<1+a<1\) and \(-1<2a<0\). Hence \(0<1+2a\left ( 1+a\right ) <1\). This means \(\left \vert f^{\prime }\left ( x_{2}^{\ast }\right ) \right \vert <1\), which implies that \(x_{2}^{\ast }\) is a sink. To verify this, \(f^{\prime }\left ( x_{2}^{\ast }\right ) =1+2a\left ( 1+a\right ) \) was plotted for \(-1<a<0\), which shows it is indeed smaller than one over this range of \(a\).

Figure 4.20: Plot of \(f'(x^{*}_2)\) showing it is less than \(1\)
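
A plot such as Figure 4.20 can be reproduced with a few lines (a sketch assuming matplotlib, not necessarily the code behind the figure):

    import numpy as np
    import matplotlib.pyplot as plt

    a = np.linspace(-0.999, -0.001, 500)    # sample the open interval -1 < a < 0
    fprime = 1 + 2*a*(1 + a)                # f'(x2*) at the non-zero fixed point

    plt.plot(a, fprime)
    plt.axhline(1.0, linestyle="--")        # reference line at 1
    plt.xlabel("a")
    plt.ylabel("f'(x2*)")
    plt.title("f'(x2*) = 1 + 2a(1 + a) stays below 1 for -1 < a < 0")
    plt.show()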

For \(x_{3}^{\ast }=-\sqrt{\frac{-a}{1+a}}\), evaluating \(f^{\prime }\left ( x\right ) =\frac{1-x^{2}}{\left ( 1+x^{2}\right ) ^{2}}-a\) found in Eq (1) above, at this fixed point gives\begin{align*} f^{\prime }\left ( x_{3}^{\ast }\right ) & =\frac{1-\left ( -\sqrt{\frac{-a}{1+a}}\right ) ^{2}}{\left ( 1+\left ( -\sqrt{\frac{-a}{1+a}}\right ) ^{2}\right ) ^{2}}-a\\ & =\frac{1-\frac{-a}{1+a}}{\left ( 1+\frac{-a}{1+a}\right ) ^{2}}-a \end{align*}

This gives the same result found above for \(x_{2}^{\ast }\), namely \(f^{\prime }\left ( x_{3}^{\ast }\right ) =1+2a\left ( 1+a\right ) \). This means \(x_{3}^{\ast }\) is also a sink.

What the above analysis means is that if we start near one of these fixed points, the map iteration (the discrete orbit sequence) will converge to the sink fixed point. For illustration, let us choose \(a=-\frac{1}{2}\). For this \(a\) the fixed point is \(x_{2}^{\ast }=\sqrt{\frac{-a}{1+a}}=\sqrt{\frac{\frac{1}{2}}{1-\frac{1}{2}}}=1\). We expect that if we start the sequence near \(1\), say at \(1.2\), then the discrete orbit will approach \(1\) as more iterations of the map are made. Let us find out.\begin{align*} x_{0} & =1.2\\ x_{1} & =f\left ( x_{0}\right ) \\ x_{2} & =f\left ( x_{1}\right ) \\ x_{3} & =f\left ( x_{2}\right ) \\ x_{4} & =f\left ( x_{3}\right ) \\ x_{5} & =f\left ( x_{4}\right ) \\ & \vdots \end{align*}

Plugging in numerical values gives\begin{align*} x_{0} & =1.2\\ x_{1} & =f\left ( x_{0}\right ) =\frac{x_{0}}{1+x_{0}^{2}}-\left ( -\frac{1}{2}\right ) x_{0}=\frac{1.2}{1+\left ( 1.2\right ) ^{2}}-\left ( -\frac{1}{2}\right ) \left ( 1.2\right ) =1.0918\\ x_{2} & =f\left ( x_{1}\right ) =\frac{x_{1}}{1+x_{1}^{2}}-\left ( -\frac{1}{2}\right ) x_{1}=\frac{1.0918}{1+\left ( 1.0918\right ) ^{2}}-\left ( -\frac{1}{2}\right ) \left ( 1.0918\right ) =1.044\,\\ x_{3} & =f\left ( x_{2}\right ) =\frac{x_{2}}{1+x_{2}^{2}}-\left ( -\frac{1}{2}\right ) x_{2}=\frac{1.044}{1+\left ( 1.044\right ) ^{2}}-\left ( -\frac{1}{2}\right ) \left ( 1.044\right ) =1.0215\\ x_{4} & =f\left ( x_{3}\right ) =\frac{x_{3}}{1+x_{3}^{2}}-\left ( -\frac{1}{2}\right ) x_{3}=\frac{1.0215}{1+\left ( 1.0215\right ) ^{2}}-\left ( -\frac{1}{2}\right ) \left ( 1.0215\right ) =1.0106\\ x_{5} & =f\left ( x_{4}\right ) =\frac{x_{4}}{1+x_{4}^{2}}-\left ( -\frac{1}{2}\right ) x_{4}=\frac{1.0106}{1+\left ( 1.0106\right ) ^{2}}-\left ( -\frac{1}{2}\right ) \left ( 1.0106\right ) =1.0053\\ & \vdots \end{align*}

We see that the discrete orbit of the map is given by \[ 1.2,1.0918,1.044,1.0215,1.0106,1.0053,\cdots ,x_{2}^{\ast }\] where \(x_{2}^{\ast }=1\) in this case. The same thing happens if we start near the other fixed point \(x_{3}^{\ast }\) using the same \(a\) as in this example, which now gives \(x_{3}^{\ast }=-\sqrt{\frac{-a}{1+a}}=-\sqrt{\frac{\frac{1}{2}}{1-\frac{1}{2}}}=-1\). If we start the sequence near \(-1\), say at \(-1.2\), then the discrete orbit should approach \(-1\) as more iterations of the map are made. Let us find out.\begin{align*} x_{0} & =-1.2\\ x_{1} & =f\left ( x_{0}\right ) \\ x_{2} & =f\left ( x_{1}\right ) \\ x_{3} & =f\left ( x_{2}\right ) \\ x_{4} & =f\left ( x_{3}\right ) \\ x_{5} & =f\left ( x_{4}\right ) \\ & \vdots \end{align*}

Plugging in numerical values gives\begin{align*} x_{0} & =-1.2\\ x_{1} & =f\left ( x_{0}\right ) =\frac{x_{0}}{1+x_{0}^{2}}-\left ( -\frac{1}{2}\right ) x_{0}=\frac{-1.2}{1+\left ( -1.2\right ) ^{2}}-\left ( -\frac{1}{2}\right ) \left ( -1.2\right ) =-1.0918\\ x_{2} & =f\left ( x_{1}\right ) =\frac{x_{1}}{1+x_{1}^{2}}-\left ( -\frac{1}{2}\right ) x_{1}=\frac{-1.0918}{1+\left ( -1.0918\right ) ^{2}}-\left ( -\frac{1}{2}\right ) \left ( -1.0918\right ) =-1.044\,\\ x_{3} & =f\left ( x_{2}\right ) =\frac{x_{2}}{1+x_{2}^{2}}-\left ( -\frac{1}{2}\right ) x_{2}=\frac{-1.044}{1+\left ( -1.044\right ) ^{2}}-\left ( -\frac{1}{2}\right ) \left ( -1.044\right ) =-1.0215\\ x_{4} & =f\left ( x_{3}\right ) =\frac{x_{3}}{1+x_{3}^{2}}-\left ( -\frac{1}{2}\right ) x_{3}=\frac{-1.0215}{1+\left ( -1.0215\right ) ^{2}}-\left ( -\frac{1}{2}\right ) \left ( -1.0215\right ) =-1.0106\\ x_{5} & =f\left ( x_{4}\right ) =\frac{x_{4}}{1+x_{4}^{2}}-\left ( -\frac{1}{2}\right ) x_{4}=\frac{-1.0106}{1+\left ( -1.0106\right ) ^{2}}-\left ( -\frac{1}{2}\right ) \left ( -1.0106\right ) =-1.0053\\ & \vdots \end{align*}

We see that the discrete orbit of the map is given by \[ -1.2,-1.0918,-1.044,-1.0215,-1.0106,-1.0053,\cdots ,x_{3}^{\ast }\] where \(x_{3}^{\ast }=-1\) in this case. The above verifies that \(x_{2}^{\ast },x_{3}^{\ast }\) are fixed points of sink type.
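
The two hand-computed orbits above can be reproduced with a short loop (a sketch in Python; the printed values agree with the ones above to the digits shown):

    def f(x, a=-0.5):
        # the map f(x) = x/(1 + x^2) - a*x, here with a = -1/2
        return x/(1 + x**2) - a*x

    for x0 in (1.2, -1.2):
        orbit = [x0]
        for _ in range(5):
            orbit.append(f(orbit[-1]))
        print([round(v, 4) for v in orbit])
    # [1.2, 1.0918, 1.044, 1.0215, 1.0106, 1.0053]
    # [-1.2, -1.0918, -1.044, -1.0215, -1.0106, -1.0053]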

Part 14 \[ g\left ( x\right ) =\left ( 1-a\right ) x \] The fixed point is found by solving \(\left ( 1-a\right ) x=x\), which gives\[ x^{\ast }=0 \] Let us iterate the map \(g\left ( x\right ) \) using the seed \(x_{0}=\epsilon >0\), a very small value. Therefore\begin{align*} x_{0} & =\epsilon \\ x_{1} & =g\left ( x_{0}\right ) =\left ( 1-a\right ) x_{0}=\left ( 1-a\right ) \epsilon \\ x_{2} & =g\left ( x_{1}\right ) =\left ( 1-a\right ) x_{1}=\left ( 1-a\right ) \left ( 1-a\right ) \epsilon =\left ( 1-a\right ) ^{2}\epsilon \\ x_{3} & =g\left ( x_{2}\right ) =\left ( 1-a\right ) x_{2}=\left ( 1-a\right ) \left ( 1-a\right ) \left ( 1-a\right ) \epsilon =\left ( 1-a\right ) ^{3}\epsilon \\ & \vdots \\ x_{n} & =g\left ( x_{n-1}\right ) =\left ( 1-a\right ) x_{n-1}=\left ( 1-a\right ) ^{n}\epsilon \end{align*}

Choosing \(a=2\) results in\begin{align*} x_{n} & =g\left ( x_{n-1}\right ) \\ & =\left ( 1-a\right ) ^{n}\epsilon \\ & =\left ( -1\right ) ^{n}\epsilon \end{align*}

We now see that for \(n=0\), \(x_{0}=\epsilon >0\); for \(n=1\), \(x_{1}=-\epsilon \); for \(n=2\), \(x_{2}=\epsilon \); for \(n=3\), \(x_{3}=-\epsilon \); and so on. In other words, the sequence is\[ \left \{ \epsilon ,-\epsilon ,\epsilon ,-\epsilon ,\cdots \right \} \] Hence the map \(g\left ( \cdot \right ) \) has a discrete orbit of period \(2\). We also notice that the orbit switches back and forth around \(x^{\ast }=0\), the fixed point found above for \(g\left ( x\right ) \).
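
A tiny numerical check of this period-2 behaviour (a sketch in Python, with a small seed standing in for \(\epsilon \)):

    def g(x, a=2):
        return (1 - a)*x            # with a = 2 this is g(x) = -x

    eps = 0.001                     # small positive seed playing the role of epsilon
    orbit = [eps]
    for _ in range(6):
        orbit.append(g(orbit[-1]))
    print(orbit)                    # [0.001, -0.001, 0.001, -0.001, 0.001, -0.001, 0.001]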