2.1 HW 1

  2.1.1 Problem 1
  2.1.2 Problem 2
  2.1.3 Problem 3


2.1.1 Problem 1

   2.1.1.1 Euler method
   2.1.1.2 Modified Euler method
   2.1.1.3 Second order Runge-Kutta
   2.1.1.4 Results
   2.1.1.5 Discussion of results
   2.1.1.6 Source code listing

problem description

[problem statement figure]

solution

\begin{align} y^{\prime }\left ( t\right ) & =f(t)\nonumber \\ & =t^{2}-t \tag{1} \end{align}

This ODE is separable. Integrating both sides gives\[ y=\frac{t^{3}}{3}-\frac{t^{2}}{2}+C \] where \(C\) is the constant of integration, found from the initial condition. At \(t=0\) we are given \(y\left ( 0\right ) =1\), which results in \(C=1\). The solution becomes\[ \fbox{$y=1+\frac{t^3}{3}-\frac{t^2}{2}$}\] A Matlab program was written that implements the Euler, modified Euler and second order Runge-Kutta methods using a time step of \(0.1\) seconds, run to \(4\) seconds.

2.1.1.1 Euler method

\[ y_{n+1}=y_{n}+hy_{n}^{\prime }+O\left ( h^{2}\right ) \] In the above, \(y_{0}\) is taken from the given initial condition and \(y_{n}^{\prime }=f_{n}\) is the RHS of (1) evaluated at each time step. \(h\) is the time step (\(0.1\) seconds in this problem), hence \(t_{n}=nh\). The Euler method has local error (per step) \(O\left ( h^{2}\right ) \) and overall global error \(O\left ( h\right ) \).
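As an illustration, the Euler update for this problem can be sketched as follows (a Python stand-in for the Matlab program described above, not the original source listing; step size and time span mirror the problem):

```python
# Euler method for y'(t) = f(t) = t^2 - t with y(0) = 1,
# using h = 0.1 and integrating to t = 4 seconds.

def f(t):
    return t**2 - t

def euler(h=0.1, t_end=4.0, y0=1.0):
    y = y0
    ys = [y]
    for n in range(round(t_end / h)):
        y = y + h * f(n * h)      # y_{n+1} = y_n + h * y'_n
        ys.append(y)
    return ys

def exact(t):
    return 1 + t**3 / 3 - t**2 / 2
```

At \(t=4\) the Euler result differs from the exact solution by about \(0.6\), consistent with the \(O(h)\) global error.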

2.1.1.2 Modified Euler method

\[ y_{n+1}=y_{n}+h\left ( \frac{y_{n}^{\prime }+y_{n+1}^{\prime }}{2}\right ) +O\left ( h^{3}\right ) \] The modified Euler method has local error (per step) \(O\left ( h^{3}\right ) \) and overall global error \(O\left ( h^{2}\right ) \), so it is more accurate than the standard Euler method. In this problem the RHS \(y_{n}^{\prime }=f\left ( t_{n}\right ) \) depends only on \(t\) and not on \(y_{n}\).
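Because the RHS here depends only on \(t\), the modified Euler step is simply the trapezoid rule applied to \(f\). A minimal Python sketch under the same settings as before:

```python
# Modified Euler for y'(t) = t^2 - t with y(0) = 1 (equivalent to the
# trapezoid rule here, since f does not depend on y); h = 0.1, t up to 4 s.

def f(t):
    return t**2 - t

def modified_euler(h=0.1, t_end=4.0, y0=1.0):
    y = y0
    for n in range(round(t_end / h)):
        # y_{n+1} = y_n + (h/2) * (y'_n + y'_{n+1})
        y = y + 0.5 * h * (f(n * h) + f((n + 1) * h))
    return y
```

The endpoint error drops to about \(7\times 10^{-3}\), reflecting the \(O(h^{2})\) global error.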

2.1.1.3 Second order Runge-Kutta

This method has local error \(O\left ( h^{3}\right ) \) and global error \(O\left ( h^{2}\right ) \)\begin{align*} t_{n} & =nh\\ k_{1} & =hf\left ( t_{n},y_{n}\right ) \\ k_{2} & =hf\left ( t_{n}+\alpha h,y_{n}+\beta k_{1}\right ) \\ y_{n+1} & =y_{n}+ak_{1}+bk_{2} \end{align*}

In this problem \(y\) does not appear in the RHS (hence \(\beta \) is not used). The above reduces to \begin{align*} k_{1} & =hf\left ( t_{n}\right ) \\ k_{2} & =hf\left ( t_{n}+\alpha h\right ) \\ y_{n+1} & =y_{n}+ak_{1}+bk_{2} \end{align*}

Using \(a=\frac{2}{3},b=\frac{1}{3},\alpha =\frac{3}{2}\) (as in the exercise one handout), the above becomes \begin{align*} k_{1} & =hf\left ( nh\right ) \\ k_{2} & =hf\left ( nh+\frac{3}{2}h\right ) =hf\left ( \left ( n+\frac{3}{2}\right ) h\right ) \\ y_{n+1} & =y_{n}+\frac{2}{3}hf\left ( nh\right ) +\frac{1}{3}hf\left ( \left ( n+\frac{3}{2}\right ) h\right ) \\ & =y_{n}+\frac{2}{3}h\left [ f\left ( nh\right ) +\frac{1}{2}f\left ( \left ( n+\frac{3}{2}\right ) h\right ) \right ] \end{align*}
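The reduced RK2 update can be sketched in Python as (again a stand-in for the Matlab code, with the same \(a\), \(b\), \(\alpha \) as above):

```python
# Second order Runge-Kutta with a = 2/3, b = 1/3, alpha = 3/2 for
# y'(t) = t^2 - t, y(0) = 1; beta drops out since f does not depend on y.

def f(t):
    return t**2 - t

def rk2(h=0.1, t_end=4.0, y0=1.0):
    y = y0
    for n in range(round(t_end / h)):
        k1 = h * f(n * h)                 # k1 = h * f(t_n)
        k2 = h * f((n + 1.5) * h)         # k2 = h * f(t_n + alpha*h)
        y = y + (2.0 / 3.0) * k1 + (1.0 / 3.0) * k2
    return y
```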

The following is a summary table of the three methods.

scheme name | main computation | global error
Euler | \(y_{n+1}=y_{n}+hy_{n}^{\prime }\) | \(O(h)\)
Modified Euler | \(y_{n+1}=y_{n}+\frac{h}{2}\left ( y_{n}^{\prime }+y_{n+1}^{\prime }\right ) \) | \(O(h^{2})\)
RK2 | \(y_{n+1}=y_{n}+\frac{2}{3}h\left ( y_{n}^{\prime }+\frac{1}{2}y_{n+\frac{3}{2}}^{\prime }\right ) \) | \(O(h^{2})\)
2.1.1.4 Results

Four plots were generated: three show the solution of each scheme next to the exact solution, and the fourth shows the relative error of each scheme. The relative error was found by calculating \(\frac{\left \vert \text{exact solution}-\text{numerical}\right \vert }{\left \vert \text{exact}\right \vert }\) at each time instance.

[plot: Euler solution vs exact solution]

[plot: modified Euler solution vs exact solution]

[plot: RK2 solution vs exact solution]

[plot: relative error of the three schemes]

2.1.1.5 Discussion of results

The Euler method was the least accurate, as expected, since it has global error \(O(h)\). There is no large difference between RK2 and modified Euler, since both have global error \(O(h^{2})\).
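The expected orders can also be checked empirically: halving \(h\) should roughly halve the endpoint error for Euler and quarter it for modified Euler. A quick Python sketch of this check:

```python
# Empirical order check for y' = t^2 - t, y(0) = 1: compare endpoint
# errors at step h and h/2. Euler's error ratio should be about 2
# (global O(h)); modified Euler's about 4 (global O(h^2)).

def f(t):
    return t**2 - t

def endpoint_error(step_fn, h, t_end=4.0, y0=1.0):
    y = y0
    for n in range(round(t_end / h)):
        y = step_fn(y, n * h, h)
    return abs(y - (1 + t_end**3 / 3 - t_end**2 / 2))

euler_step = lambda y, t, h: y + h * f(t)
mod_euler_step = lambda y, t, h: y + 0.5 * h * (f(t) + f(t + h))

euler_ratio = endpoint_error(euler_step, 0.1) / endpoint_error(euler_step, 0.05)
mod_ratio = endpoint_error(mod_euler_step, 0.1) / endpoint_error(mod_euler_step, 0.05)
```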

2.1.1.6 Source code listing

2.1.2 Problem 2

   2.1.2.1 Results
   2.1.2.2 Source code listing

problem description

[problem statement figure]

solution\begin{align*} x^{\prime }\left ( t\right ) & =xy+t\\ y^{\prime }\left ( t\right ) & =ty+x \end{align*}

With \(x\left ( 0\right ) =1,y\left ( 0\right ) =-1.\)

The following calculation evaluates all the derivatives needed for the Taylor expansion about \(t=0\) (repeated terms from the product rule have been collected):\begin{align*} x^{\prime }\left ( t\right ) & =xy+t\\ y^{\prime }\left ( t\right ) & =ty+x\\ x^{\prime \prime }\left ( t\right ) & =x^{\prime }y+xy^{\prime }+1\\ y^{\prime \prime }\left ( t\right ) & =y+ty^{\prime }+x^{\prime }\\ x^{\prime \prime \prime }\left ( t\right ) & =x^{\prime \prime }y+2x^{\prime }y^{\prime }+xy^{\prime \prime }\\ y^{\prime \prime \prime }\left ( t\right ) & =2y^{\prime }+ty^{\prime \prime }+x^{\prime \prime }\\ x^{\left ( 4\right ) }\left ( t\right ) & =x^{\prime \prime \prime }y+3x^{\prime \prime }y^{\prime }+3x^{\prime }y^{\prime \prime }+xy^{\prime \prime \prime }\\ y^{\left ( 4\right ) }\left ( t\right ) & =3y^{\prime \prime }+ty^{\prime \prime \prime }+x^{\prime \prime \prime }\\ x^{\left ( 5\right ) }\left ( t\right ) & =x^{\left ( 4\right ) }y+4x^{\prime \prime \prime }y^{\prime }+6x^{\prime \prime }y^{\prime \prime }+4x^{\prime }y^{\prime \prime \prime }+xy^{\left ( 4\right ) }\\ y^{\left ( 5\right ) }\left ( t\right ) & =4y^{\prime \prime \prime }+ty^{\left ( 4\right ) }+x^{\left ( 4\right ) } \end{align*}

The numerical values of these derivatives at \(t=0\), using \(x\left ( 0\right ) =1\) and \(y\left ( 0\right ) =-1\), are\begin{align*} x^{\prime }\left ( 0\right ) & =\left ( 1\right ) \left ( -1\right ) +0=-1\\ y^{\prime }\left ( 0\right ) & =\left ( 0\right ) \left ( -1\right ) +1=1\\ x^{\prime \prime }\left ( 0\right ) & =\left ( -1\right ) \left ( -1\right ) +\left ( 1\right ) \left ( 1\right ) +1=3\\ y^{\prime \prime }\left ( 0\right ) & =-1+0\left ( 1\right ) +\left ( -1\right ) =-2\\ x^{\prime \prime \prime }\left ( 0\right ) & =\left ( 3\right ) \left ( -1\right ) +2\left ( -1\right ) \left ( 1\right ) +\left ( 1\right ) \left ( -2\right ) =-7\\ y^{\prime \prime \prime }\left ( 0\right ) & =2\left ( 1\right ) +0\left ( -2\right ) +3=5\\ x^{\left ( 4\right ) }\left ( 0\right ) & =\left ( -7\right ) \left ( -1\right ) +3\left ( 3\right ) \left ( 1\right ) +3\left ( -1\right ) \left ( -2\right ) +\left ( 1\right ) \left ( 5\right ) =27\\ y^{\left ( 4\right ) }\left ( 0\right ) & =3\left ( -2\right ) +0\left ( 5\right ) +\left ( -7\right ) =-13\\ x^{\left ( 5\right ) }\left ( 0\right ) & =\left ( 27\right ) \left ( -1\right ) +4\left ( -7\right ) \left ( 1\right ) +6\left ( 3\right ) \left ( -2\right ) +4\left ( -1\right ) \left ( 5\right ) +\left ( 1\right ) \left ( -13\right ) =-124\\ y^{\left ( 5\right ) }\left ( 0\right ) & =4\left ( 5\right ) +0\left ( -13\right ) +27=47 \end{align*}

The Taylor expansion for \(x\left ( t\right ) ,y\left ( t\right ) \) is\begin{align*} x\left ( t\right ) & =x\left ( 0\right ) +tx^{\prime }\left ( 0\right ) +\frac{t^{2}}{2}x^{\prime \prime }\left ( 0\right ) +\frac{t^{3}}{3!}x^{\prime \prime \prime }\left ( 0\right ) +\frac{t^{4}}{4!}x^{\left ( 4\right ) }\left ( 0\right ) +\frac{t^{5}}{5!}x^{\left ( 5\right ) }\left ( 0\right ) +O\left ( t^{6}\right ) \\ y\left ( t\right ) & =y\left ( 0\right ) +ty^{\prime }\left ( 0\right ) +\frac{t^{2}}{2}y^{\prime \prime }\left ( 0\right ) +\frac{t^{3}}{3!}y^{\prime \prime \prime }\left ( 0\right ) +\frac{t^{4}}{4!}y^{\left ( 4\right ) }\left ( 0\right ) +\frac{t^{5}}{5!}y^{\left ( 5\right ) }\left ( 0\right ) +O\left ( t^{6}\right ) \end{align*}

Applying values from the table above gives\begin{align*} x\left ( t\right ) & =1-t+\frac{3}{2}t^{2}-7\frac{t^{3}}{3!}+27\frac{t^{4}}{4!}-124\frac{t^{5}}{5!}+O\left ( t^{6}\right ) \\ & =1-t+\frac{3}{2}t^{2}-\frac{7}{6}t^{3}+\frac{9}{8}t^{4}-\frac{31}{30}t^{5}+O\left ( t^{6}\right ) \\ y\left ( t\right ) & =-1+t-t^{2}+5\frac{t^{3}}{3!}-13\frac{t^{4}}{4!}+47\frac{t^{5}}{5!}+O\left ( t^{6}\right ) \\ & =-1+t-t^{2}+\frac{5}{6}t^{3}-\frac{13}{24}t^{4}+\frac{47}{120}t^{5}+O\left ( t^{6}\right ) \end{align*}
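As a cross-check on the hand differentiation, the same coefficients can be generated by Picard iteration on truncated power series, using exact rational arithmetic (a Python sketch; each pass of the recursion \(x=1+\int (xy+t)\,dt\), \(y=-1+\int (ty+x)\,dt\) fixes at least one more order of the series):

```python
# Picard iteration on truncated power series for x' = xy + t, x(0) = 1,
# y' = ty + x, y(0) = -1, keeping terms up to t^5 in exact rationals.
from fractions import Fraction as F

N = 6  # number of coefficients kept: t^0 .. t^5

def mul(a, b):
    """Product of two coefficient lists, truncated to N terms."""
    c = [F(0)] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

def integ(a, c0):
    """Antiderivative with constant term c0, truncated to N terms."""
    return [c0] + [a[k] / (k + 1) for k in range(N - 1)]

t = [F(0), F(1)] + [F(0)] * (N - 2)
x = [F(1)] + [F(0)] * (N - 1)
y = [F(-1)] + [F(0)] * (N - 1)
for _ in range(N):  # N passes are more than enough for degree N-1
    x, y = (integ([p + q for p, q in zip(mul(x, y), t)], F(1)),
            integ([p + q for p, q in zip(mul(t, y), x)], F(-1)))
```

This reproduces the coefficients above: \(1,-1,\frac{3}{2},-\frac{7}{6},\frac{9}{8},-\frac{31}{30}\) for \(x\left ( t\right ) \) and \(-1,1,-1,\frac{5}{6},-\frac{13}{24},\frac{47}{120}\) for \(y\left ( t\right ) \).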

2.1.2.1 Results

The following plot shows the solution using the Taylor series approximation next to the ODE45 solution.

[plot: Taylor series approximation vs ODE45 solution]

We see from the above that the Taylor series approximation is good only for a small distance from the expansion point \(t=0\). The \(x\left ( t\right ) \) Taylor solution deteriorated faster than the \(y\left ( t\right ) \) solution did. The relative error was computed and plotted up to \(t=0.5\) to better compare the results.

The relative error for both \(x\left ( t\right ) \) and \(y\left ( t\right ) \) was computed using \[ \frac{\left \vert \text{ODE45 solution-Taylor solution}\right \vert }{\left \vert \text{ODE45 solution}\right \vert } \]

[plot: relative error of the Taylor approximation relative to ODE45]

The above plot shows that the error relative to ODE45 increases as the distance from the expansion point grows, with the \(x\left ( t\right ) \) solution showing a worse approximation than \(y\left ( t\right ) \).

2.1.2.2 Source code listing

2.1.3 Problem 3

   2.1.3.1 Results
   2.1.3.2 Source code listing

problem description

[problem statement figures]

solution\begin{align} mx^{\prime \prime }+k_{11}\left ( x-l_{1}\theta \right ) +k_{12}\left ( x-l_{1}\theta \right ) ^{3}+k_{21}\left ( x+l_{2}\theta \right ) +k_{22}\left ( x+l_{2}\theta \right ) ^{3} & =0\tag{1}\\ J_{0}\theta ^{\prime \prime }-k_{11}\left ( x-l_{1}\theta \right ) l_{1}-k_{12}\left ( x-l_{1}\theta \right ) ^{3}l_{1}+k_{21}\left ( x+l_{2}\theta \right ) l_{2}+k_{22}\left ( x+l_{2}\theta \right ) ^{3}l_{2} & =0\nonumber \end{align}

The first step is to convert the two second order differential equations into a set of four first order differential equations, since numerical ODE solvers work with first order ODEs. We need to select the states to use. Using \begin{align*} x_{1} & =x\\ x_{2} & =x^{\prime }\\ x_{3} & =\theta \\ x_{4} & =\theta ^{\prime } \end{align*}

After taking time derivatives, the above becomes\begin{align*} \dot{x}_{1} & =x^{\prime }=x_{2}\\ \dot{x}_{2} & =x^{\prime \prime }=-\frac{1}{m}\left [ k_{11}\left ( x_{1}-l_{1}x_{3}\right ) +k_{12}\left ( x_{1}-l_{1}x_{3}\right ) ^{3}+k_{21}\left ( x_{1}+l_{2}x_{3}\right ) +k_{22}\left ( x_{1}+l_{2}x_{3}\right ) ^{3}\right ] \\ \dot{x}_{3} & =\theta ^{\prime }=x_{4}\\ \dot{x}_{4} & =\theta ^{\prime \prime }=-\frac{1}{J_{0}}\left [ -k_{11}\left ( x_{1}-l_{1}x_{3}\right ) l_{1}-k_{12}\left ( x_{1}-l_{1}x_{3}\right ) ^{3}l_{1}+k_{21}\left ( x_{1}+l_{2}x_{3}\right ) l_{2}+k_{22}\left ( x_{1}+l_{2}x_{3}\right ) ^{3}l_{2}\right ] \end{align*}
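The state derivative function can be sketched as follows (shown in Python rather than Matlab for illustration; the parameter values below are placeholders, since the actual \(m\), \(J_{0}\), \(k_{ij}\), \(l_{1}\), \(l_{2}\) are given in the problem figure, which is not reproduced here):

```python
# RHS of the first-order system X' = F(t, X), X = [x, x', theta, theta'].
# Parameter values are illustrative placeholders only; the real values
# come from the problem statement figure.
m, J0 = 1000.0, 800.0
k11, k12 = 4.0e4, 2.0e3
k21, k22 = 5.0e4, 3.0e3
l1, l2 = 1.0, 1.5

def rhs(t, X):
    x1, x2, x3, x4 = X
    d1 = x1 - l1 * x3    # deflection of spring 1
    d2 = x1 + l2 * x3    # deflection of spring 2
    xdd = -(k11 * d1 + k12 * d1**3 + k21 * d2 + k22 * d2**3) / m
    tdd = -(-k11 * d1 * l1 - k12 * d1**3 * l1
            + k21 * d2 * l2 + k22 * d2**3 * l2) / J0
    return [x2, xdd, x4, tdd]
```

This function has the signature expected by standard solvers (e.g. `scipy.integrate.solve_ivp` in Python, or, transcribed to Matlab, the function handle passed to ode45).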

The vector \(\dot{X}\) on the LHS is what the derivative function given to Matlab's ode45 returns at each call. The following shows the Matlab implementation and the plots generated. It was found that the maximum displacement is \(x_{\max }=2.032\) mm, occurring at \(t=5.14\) seconds, and the maximum angular displacement is \(\theta _{\max }=1.213\) mrad (or \(0.069\) degree), occurring at \(t=2.02\) seconds.

2.1.3.1 Results

The following are the \(x\) and \(\theta \) solution plots up to \(t=10\) seconds. The Matlab source code used is given in the next section.

[plot: \(x\left ( t\right ) \) solution]

[plot: \(\theta \left ( t\right ) \) solution]

2.1.3.2 Source code listing