2.2 HW 1

  2.2.1 Problem description
  2.2.2 Problem 1
  2.2.3 Problem 2
  2.2.4 Problem 3
  2.2.5 Appendix (Source code)

2.2.1 Problem description

Figure 2.1: Problem description

2.2.2 Problem 1

   2.2.2.1 Part (a)
   2.2.2.2 Part (b)
   2.2.2.3 Part (c)
   2.2.2.4 Part (d)

\(L\) is a second-order differential operator defined by \(Lu\equiv u_{xx}\) with boundary conditions on \(u\) given as \(u_{x}\left ( 0\right ) =u_{x}\left ( 1\right ) =0\).

2.2.2.1 Part (a)

Let \(\phi \left ( x\right ) \) be an eigenfunction of the operator \(L\) associated with an eigenvalue \(\lambda \). To obtain the eigenfunctions and eigenvalues, we solve an eigenvalue problem \(L\phi =\lambda \phi \) where \(\lambda \) is a scalar. Hence the problem is to solve the differential equation \begin{equation} \phi _{xx}-\lambda \phi =0 \tag{1} \end{equation} with B.C. given as \(\phi ^{\prime }\left ( 0\right ) =\phi ^{\prime }\left ( 1\right ) =0\). The characteristic equation is \[ r^{2}-\lambda =0 \] The roots are \(r=\pm \sqrt{\lambda }\), therefore the solution to the eigenvalue problem (1) is\begin{equation} \phi \left ( x\right ) =c_{1}e^{\sqrt{\lambda }x}+c_{2}e^{-\sqrt{\lambda }x} \tag{2} \end{equation} Where \(c_{1}\),\(c_{2}\) are constants.\begin{equation} \phi ^{\prime }\left ( x\right ) =c_{1}\sqrt{\lambda }e^{\sqrt{\lambda }x}-\sqrt{\lambda }c_{2}e^{-\sqrt{\lambda }x} \tag{3} \end{equation} First we determine the allowed values of the eigenvalue \(\lambda \) which satisfy the boundary conditions.

1.
Assume \(\lambda =0\) The solution (2) becomes \(\phi \left ( x\right ) =c_{1}+c_{2}\). Hence the solution is a constant. In other words, when the eigenvalue is zero, the eigenfunction is a constant. Let us now see if this eigenfunction satisfies the B.C. Since \(\phi \left ( x\right ) \) is constant, then \(\phi ^{\prime }\left ( x\right ) =0\), and this does satisfy the B.C. at both \(x=0\) and \(x=1\). Hence \(\lambda =0\) is an eigenvalue with a corresponding eigenfunction being a constant. We can take the constant as \(1\).
2.
Assume \(\lambda >0\) From the first BC we have, from (3), that \(\phi ^{\prime }\left ( 0\right ) =0=c_{1}\sqrt{\lambda }-\sqrt{\lambda }c_{2}\) or \[ c_{1}=c_{2}\] and from the second BC we have that \(\phi ^{\prime }\left ( 1\right ) =0=c_{1}\sqrt{\lambda }e^{\sqrt{\lambda }}-\sqrt{\lambda }c_{2}e^{-\sqrt{\lambda }}\) or \[ c_{1}e^{\sqrt{\lambda }}-c_{2}e^{-\sqrt{\lambda }}=0 \] From the above 2 equations, we find that \(e^{\sqrt{\lambda }}=e^{-\sqrt{\lambda }}\) which is not possible for positive \(\lambda \). Hence \(\lambda \) can not be positive.
3.
Assume \(\lambda <0\). Let \(\lambda =-\beta ^{2}\) for some positive \(\beta \). Then the solution (2) becomes\[ \phi \left ( x\right ) =c_{1}e^{i\beta x}+c_{2}e^{-i\beta x}\] which can be transformed using the Euler relation to obtain\begin{align} \phi \left ( x\right ) & =c_{1}\cos \beta x+c_{2}\sin \beta x\nonumber \\ \phi ^{\prime }\left ( x\right ) & =-c_{1}\beta \sin \beta x+c_{2}\beta \cos \beta x \tag{4} \end{align}

Now consider the BC’s. Since \(\phi ^{\prime }\left ( 0\right ) =0\) we obtain \(c_{2}=0\) and from \(\phi ^{\prime }\left ( 1\right ) =0\) we obtain \(0=c_{1}\beta \sin \beta \) and hence for a nontrivial solution, i.e. for \(c_{1}\neq 0\), we must have that \[ \sin \beta =0 \] or \[ \beta =\pm n\pi \] but since \(\beta \) is positive, we consider only \(\beta _{n}=n\pi \), where \(n\) is a positive integer \(n=1,2,3,\cdots \)

Conclusion: The eigenvalues are \[ \lambda _{n}=-\left ( \beta _{n}\right ) ^{2}=-\left ( n\pi \right ) ^{2}=\left \{ 0,-\pi ^{2},-\left ( 2\pi \right ) ^{2},-\left ( 3\pi \right ) ^{2},\cdots \right \} \] And the corresponding eigenfunctions are \(\phi _{n}\left ( x\right ) =\cos \beta _{n}x=\cos n\pi x=\left \{ 1,\cos \pi x,\cos 2\pi x,\cos 3\pi x,\cdots \right \} \)

where \(n=0,1,2,\cdots \)
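These eigenpairs can be sanity-checked numerically. The sketch below (an illustrative check, not part of the original solution; the grid size \(N\) is an arbitrary choice) discretizes \(u_{xx}\) with the Neumann conditions on a cell-centered grid and compares the eigenvalues of the matrix closest to zero with \(-\left ( n\pi \right ) ^{2}\).

% Illustrative check (not part of the original HW): discrete Neumann Laplacian
% on a cell-centered grid; its eigenvalues approximate -(n*pi)^2, n = 0,1,2,...
N = 200;  h = 1/N;
e = ones(N,1);
A = spdiags([e -2*e e], -1:1, N, N);
A(1,1) = -1;  A(N,N) = -1;            % reflecting end rows enforce u_x = 0
A = A/h^2;
lam = sort(eig(full(A)), 'descend');  % eigenvalues closest to zero first
disp([lam(1:4), -((0:3)'*pi).^2])     % compare with 0, -pi^2, -(2*pi)^2, -(3*pi)^2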

2.2.2.2 Part (b)

Given the inner product defined as \(\left \langle u,v\right \rangle ={\displaystyle \int \limits _{0}^{1}} uvdx\), then \begin{align*} \left \langle \phi _{n},\phi _{m}\right \rangle & ={\displaystyle \int \limits _{0}^{1}} \left ( \cos \beta _{n}x\right ) \left ( \cos \beta _{m}x\right ) dx\\ & ={\displaystyle \int \limits _{0}^{1}} \left ( \cos n\pi x\right ) \left ( \cos m\pi x\right ) dx\\ & =\left \{ \begin{array} [c]{cc}0 & n\neq m\\ \frac{1}{2} & n=m\geq 1\\ 1 & n=m=0 \end{array} \right . \end{align*}

Also, the first eigenfunction, \(\phi _{0}\left ( x\right ) =1\), is orthogonal to all other eigenfunctions, since \({\displaystyle \int \limits _{0}^{1}} \left ( \cos n\pi x\right ) dx=\frac{1}{n\pi }\left [ \sin n\pi x\right ] _{0}^{1}=0\) for any integer \(n>0\).

Hence all the eigenfunctions are orthogonal to each other in the \(L^{2}\left [ 0,1\right ] \) space.
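As a quick numerical illustration (not part of the original solution; the indices \(n=3\), \(m=5\) are arbitrary samples), these inner products can be checked with Matlab's integral:

% Illustrative orthogonality check (arbitrary sample indices n and m)
n = 3;  m = 5;
disp(integral(@(x) cos(n*pi*x).*cos(m*pi*x), 0, 1))   % ~0   for n ~= m
disp(integral(@(x) cos(n*pi*x).^2,           0, 1))   % 1/2  for n = m >= 1
disp(integral(@(x) cos(0*pi*x).^2,           0, 1))   % 1    for n = m = 0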

2.2.2.3 Part (c)

Given \[ u_{xx}=f \] \(u_{x}\left ( 0\right ) =u_{x}\left ( 1\right ) =0\). This is \(Lu=f\). We have found the eigenfunctions \(\phi \left ( x\right ) \) of \(L\) above. These form a basis of the function space of \(L\) in which \(f\) resides. We can express \(f\) as a linear combination of the eigenfunctions of the operator \(L\), hence we write\[ f\left ( x\right ) ={\displaystyle \sum \limits _{n=0}^{\infty }} a_{n}\phi _{n}\left ( x\right ) \] where \(\phi _{n}\left ( x\right ) \) is the \(n^{th}\) eigenfunction of \(L\) and \(a_{n}\) is the corresponding coordinate (scalar).

Therefore the differential equation above can be written as \begin{equation} Lu=f\left ( x\right ) ={\displaystyle \sum \limits _{n=0}^{\infty }} a_{n}\phi _{n}\left ( x\right ) \tag{1} \end{equation} But since\[ L\phi _{n}=\lambda _{n}\phi _{n}\] Then \[ L^{-1}\phi _{n}=\frac{1}{\lambda _{n}}\phi _{n}\qquad \left ( \lambda _{n}\neq 0\right ) \] Therefore, using (1), the solution is \begin{equation} u\left ( x\right ) ={\displaystyle \sum \limits _{n}} \left ( \frac{a_{n}}{\lambda _{n}}\right ) \phi _{n}\left ( x\right ) \tag{2} \end{equation} Now to find \(a_{n}\), using \(f\left ( x\right ) ={\displaystyle \sum \limits _{n}} a_{n}\phi _{n}\left ( x\right ) ,\) we multiply each side by an eigenfunction, say \(\phi _{m}\left ( x\right ) \), and integrate\begin{align*}{\displaystyle \int \limits _{0}^{1}} \phi _{m}\left ( x\right ) f\left ( x\right ) dx & ={\displaystyle \int \limits _{0}^{1}} \phi _{m}\left ( x\right ){\displaystyle \sum \limits _{n}} a_{n}\phi _{n}\left ( x\right ) dx\\ & ={\displaystyle \int \limits _{0}^{1}}{\displaystyle \sum \limits _{n}} a_{n}\phi _{m}\left ( x\right ) \phi _{n}\left ( x\right ) dx\\ & ={\displaystyle \sum \limits _{n}} a_{n}{\displaystyle \int \limits _{0}^{1}} \phi _{m}\left ( x\right ) \phi _{n}\left ( x\right ) dx \end{align*}

The integral in the RHS is \(1/2\) when \(n=m\geq 1\) and zero when \(n\neq m\), hence the above becomes\[{\displaystyle \int \limits _{0}^{1}} \phi _{n}\left ( x\right ) f\left ( x\right ) dx=\frac{a_{n}}{2}\] Or\begin{equation} a_{n}=2{\displaystyle \int \limits _{0}^{1}} \cos \left ( n\pi x\right ) f\left ( x\right ) dx\qquad n\geq 1 \tag{3} \end{equation} For \(n=0\) the eigenfunction is the constant \(1\) and \({\displaystyle \int \limits _{0}^{1}} \phi _{0}^{2}dx=1\), so \(a_{0}={\displaystyle \int \limits _{0}^{1}} f\left ( x\right ) dx\).

If we know \(f\left ( x\right ) \) we can determine \(a_{n}\) and hence the solution is now found.
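As an illustration (not part of the original solution), take \(f\left ( x\right ) =\cos \left ( 2\pi x\right ) \), which has \({\displaystyle \int \limits _{0}^{1}} f\,dx=0\). Then \(a_{2}=1\), all other \(a_{n}\) vanish, and \(u\left ( x\right ) =-\cos \left ( 2\pi x\right ) /\left ( 4\pi ^{2}\right ) \) up to an additive constant. The sketch below builds the truncated series for a zero-mean \(f\); the cutoff and the sample \(f\) are arbitrary choices.

% Illustrative sketch: truncated eigenfunction-series solution of u_xx = f
% with Neumann B.C., for a right-hand side f having zero mean.
f    = @(x) cos(2*pi*x);          % sample f, integral over [0,1] is zero
nMax = 50;
a    = zeros(nMax,1);
for n = 1:nMax
    a(n) = 2*integral(@(x) cos(n*pi*x).*f(x), 0, 1);   % coefficients a_n
end
x = linspace(0,1,201);
u = zeros(size(x));
for n = 1:nMax
    u = u + a(n)/(-(n*pi)^2) * cos(n*pi*x);   % (a_n/lambda_n) * phi_n(x)
end
plot(x, u, x, -cos(2*pi*x)/(4*pi^2), '--')    % series vs. exact (both zero mean)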

2.2.2.4 Part (d)

The solution found above\[ u\left ( x\right ) ={\displaystyle \sum \limits _{n}} \left ( \frac{a_{n}}{\lambda _{n}}\right ) \phi _{n}\left ( x\right ) \] is not possible for every \(f\). Only an \(f\) with \(a_{0}=0\) admits a solution. This is because \(\lambda _{0}=0\), so \(a_{0}\) must be zero for the term \(a_{0}/\lambda _{0}\) to make sense (equivalently, \(L^{-1}\) does not exist when an eigenvalue is zero).

\(a_{0}=0\) implies, by the expression for \(a_{0}\) found above, that\[ 0={\displaystyle \int \limits _{0}^{1}} f\left ( x\right ) dx \] So only those functions \(f\left ( x\right ) \) which satisfy the above admit a solution of \(Lu=f\) with the B.C. given.

To review: we found that \(\lambda =0\) is a valid eigenvalue because the B.C. are Neumann boundary conditions. This forced \(a_{0}\) to be zero, which implied that \({\displaystyle \int \limits _{0}^{1}} f\left ( x\right ) dx=0\).

Having a zero eigenvalue effectively removes one of the dimensions of the space in which \(f\left ( x\right ) \) can reside.
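A minimal numerical check of this solvability restriction (an illustration, not from the original solution; the two sample right-hand sides are arbitrary):

% Illustrative solvability check for Lu = f with Neumann B.C.
disp(integral(@(x) cos(2*pi*x), 0, 1))   % ~0  -> admissible right-hand side
disp(integral(@(x) x,           0, 1))   % 1/2 -> no solution with these B.C.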

In addition to this restriction, the function \(f(x)\) is assumed to meet the Dirichlet conditions for Fourier series expansion, which are:

1.
\(f(x)\) must have a finite number of extrema in any given interval.
2.
\(f(x)\) must have a finite number of discontinuities in any given interval.
3.
\(f(x)\) must be absolutely integrable over a period.
4.
\(f(x)\) must be bounded.

2.2.3 Problem 2

   2.2.3.1 Part (a)
   2.2.3.2 Part (b)

2.2.3.1 Part (a)

Applying the definition given\begin{equation} F^{\prime }\left ( u\right ) v=\lim _{\varepsilon \rightarrow 0}\frac{F\left ( u+\varepsilon v\right ) -F\left ( u\right ) }{\varepsilon } \tag{1} \end{equation} And using \(F\left ( u\right ) ={\displaystyle \int \limits _{0}^{1}} \frac{1}{2}\left ( u_{x}\right ) ^{2}+f\ u\ dx\), then (1) becomes\[ F^{\prime }\left ( u\right ) v=\lim _{\varepsilon \rightarrow 0}\frac{1}{\varepsilon }\left ({\displaystyle \int \limits _{0}^{1}} \frac{1}{2}\left [ \left ( u+\varepsilon v\right ) _{x}\right ] ^{2}+f\ \left ( u+\varepsilon v\right ) \ dx-{\displaystyle \int \limits _{0}^{1}} \frac{1}{2}\left ( u_{x}\right ) ^{2}+f\ u\ dx\right ) \] Simplifying the above, we obtain\[ F^{\prime }\left ( u\right ) v=\lim _{\varepsilon \rightarrow 0}\left ({\displaystyle \int \limits _{0}^{1}} \frac{\varepsilon }{2}v_{x}^{2}dx+{\displaystyle \int \limits _{0}^{1}} u_{x}v_{x}dx+{\displaystyle \int \limits _{0}^{1}} fvdx\right ) \] Hence, as \(\varepsilon \rightarrow 0\) only the first integral above vanishes (since \(v_{x}\) is bounded), and we have\begin{equation} F^{\prime }\left ( u\right ) v={\displaystyle \int \limits _{0}^{1}} u_{x}v_{x}+fvdx \tag{1A} \end{equation}
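For completeness, the expansion behind that simplification is \begin{align*}{\displaystyle \int \limits _{0}^{1}} \frac{1}{2}\left [ \left ( u+\varepsilon v\right ) _{x}\right ] ^{2}+f\left ( u+\varepsilon v\right ) dx & ={\displaystyle \int \limits _{0}^{1}} \frac{1}{2}u_{x}^{2}+\varepsilon u_{x}v_{x}+\frac{\varepsilon ^{2}}{2}v_{x}^{2}+fu+\varepsilon fv\ dx \end{align*} so that subtracting \(F\left ( u\right ) \) and dividing by \(\varepsilon \) leaves exactly the three integrals shown above.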

2.2.3.2 Part (b)

The solution to \(u_{xx}=f\left ( x\right ) \) with \(u\left ( 0\right ) =u\left ( 1\right ) =0\) was found in class to be \begin{equation} u\left ( x\right ) ={\displaystyle \sum \limits _{n}} \left ( \frac{a_{n}}{\lambda _{n}}\right ) \phi _{n}\left ( x\right ) \tag{2} \end{equation} where\[ \phi _{n}\left ( x\right ) =\sin \left ( n\pi x\right ) \] are the eigenfunctions associated with the eigenvalues \(\lambda _{n}=-n^{2}\pi ^{2}.\)

Now we can use this solution in the definition of \(F^{\prime }\left ( u\right ) v\) found in (1A) from part (a). Substitute \(u\left ( x\right ) \) from (2) into (1A), and also substitute \(f=\) \({\displaystyle \sum \limits _{n}} a_{n}\phi _{n}\left ( x\right ) \) into (1A), we obtain\begin{equation} F^{\prime }\left ( u\right ) v={\displaystyle \int \limits _{0}^{1}} \left ({\displaystyle \sum \limits _{n}} \left ( \frac{a_{n}}{\lambda _{n}}\right ) \phi _{n}\left ( x\right ) \right ) ^{\prime }v^{\prime }+\left ({\displaystyle \sum \limits _{n}} a_{n}\phi _{n}\left ( x\right ) \right ) v\ dx \tag{4} \end{equation} We need to show that the above becomes zero for any \(v\left ( x\right ) \in X\).\begin{align} F^{\prime }\left ( u\right ) v & ={\displaystyle \int \limits _{0}^{1}}{\displaystyle \sum \limits _{n}} v^{\prime }\left ( \frac{a_{n}}{\lambda _{n}}\right ) \phi _{n}^{\prime }\left ( x\right ) +{\displaystyle \sum \limits _{n}} va_{n}\phi _{n}\left ( x\right ) dx\nonumber \\ & ={\displaystyle \int \limits _{0}^{1}}{\displaystyle \sum \limits _{n}} \left ( v^{\prime }\left ( \frac{a_{n}}{\lambda _{n}}\right ) \phi _{n}^{\prime }\left ( x\right ) +va_{n}\phi _{n}\left ( x\right ) \right ) dx\nonumber \\ & ={\displaystyle \sum \limits _{n}} a_{n}\left ({\displaystyle \int \limits _{0}^{1}} \frac{1}{\lambda _{n}}v^{\prime }\phi _{n}^{\prime }\left ( x\right ) +v\phi _{n}\left ( x\right ) dx\right ) \tag{5} \end{align}

Now we pay attention to the integral term above. If we can show this is zero, then we are done. \begin{align} I & =\frac{1}{\lambda _{n}}{\displaystyle \int \limits _{0}^{1}} v^{\prime }\phi _{n}^{\prime }\left ( x\right ) dx+{\displaystyle \int \limits _{0}^{1}} v\phi _{n}\left ( x\right ) dx\nonumber \\ & =I_{1}+I_{2} \tag{6} \end{align}

Integrate by parts \(I_{1}\)\begin{align*} I_{1} & =\frac{1}{\lambda _{n}}{\displaystyle \int \limits _{0}^{1}} \overset{udv}{\overbrace{\phi _{n}^{\prime }\left ( x\right ) v^{\prime }dx}}\\ & =\frac{1}{\lambda _{n}}\left ( \left [ \phi _{n}^{\prime }\left ( x\right ) v\right ] _{0}^{1}-{\displaystyle \int \limits _{0}^{1}} v\left ( x\right ) \phi _{n}^{\prime \prime }\left ( x\right ) dx\right ) \\ & =\frac{1}{\lambda _{n}}\left ( \overset{\text{zero\ due\ to\ boundaries\ on}\ v(x)\in X}{\overbrace{\left [ v\left ( 1\right ) \phi _{n}^{\prime }\left ( 1\right ) -v\left ( 0\right ) \phi _{n}^{\prime }\left ( 0\right ) \right ] }}-{\displaystyle \int \limits _{0}^{1}} v\left ( x\right ) \phi _{n}^{\prime \prime }\left ( x\right ) dx\right ) \\ & =-\frac{1}{\lambda _{n}}{\displaystyle \int \limits _{0}^{1}} v\left ( x\right ) \phi _{n}^{\prime \prime }\left ( x\right ) dx \end{align*}

But since \(\phi _{n}\left ( x\right ) =\sin n\pi x\), then \(\phi _{n}^{\prime }\left ( x\right ) =n\pi \cos n\pi x\) and \(\phi _{n}^{\prime \prime }\left ( x\right ) =-n^{2}\pi ^{2}\sin n\pi x=-n^{2}\pi ^{2}\) \(\phi _{n}\left ( x\right ) \) then \[ I_{1}=\frac{n^{2}\pi ^{2}}{\lambda _{n}}{\displaystyle \int \limits _{0}^{1}} v\left ( x\right ) \phi _{n}\left ( x\right ) dx \] But also \(\lambda _{n}=-n^{2}\pi ^{2}\) hence the above becomes\[ I_{1}=-{\displaystyle \int \limits _{0}^{1}} v\left ( x\right ) \phi _{n}\left ( x\right ) dx \] Therefore (6) can be written as\begin{align*} I & =I_{1}+I_{2}\\ & =-{\displaystyle \int \limits _{0}^{1}} v\left ( x\right ) \phi _{n}\left ( x\right ) dx+{\displaystyle \int \limits _{0}^{1}} v\left ( x\right ) \phi _{n}\left ( x\right ) dx\\ & =0 \end{align*}

Therefore, from (5), we see that\[ F^{\prime }\left ( u\right ) v=0 \] Hence we showed that if \(u\) is solution to \(u_{xx}=f\) with \(u\left ( 0\right ) =u\left ( 1\right ) =0\), then \(F^{\prime }\left ( u\right ) v=0.\)
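This can also be checked numerically. The sketch below is illustrative only (it assumes the particular choice \(f\left ( x\right ) =\cos \left ( 2\pi x\right ) \), for which \(u\left ( x\right ) =\left ( 1-\cos 2\pi x\right ) /\left ( 4\pi ^{2}\right ) \) solves \(u_{xx}=f\) with \(u\left ( 0\right ) =u\left ( 1\right ) =0\), and an arbitrary test function \(v\)); it evaluates the difference quotient defining \(F^{\prime }\left ( u\right ) v\) for a small \(\varepsilon \).

% Illustrative check (assumes f = cos(2*pi*x); not the original HW code):
% the difference quotient approximating F'(u)v should be ~0 (of size O(eps_)).
f    = @(x) cos(2*pi*x);
u    = @(x) (1 - cos(2*pi*x))/(4*pi^2);   % exact solution, u(0) = u(1) = 0
ux   = @(x) sin(2*pi*x)/(2*pi);
v    = @(x) sin(3*pi*x);                  % test function with v(0) = v(1) = 0
vx   = @(x) 3*pi*cos(3*pi*x);
F    = @(w,wx) integral(@(x) 0.5*wx(x).^2 + f(x).*w(x), 0, 1);
eps_ = 1e-6;
dF   = ( F(@(x) u(x)+eps_*v(x), @(x) ux(x)+eps_*vx(x)) - F(u,ux) ) / eps_;
disp(dF)                                  % ~0 up to a term of size O(eps_)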

2.2.4 Problem 3

   2.2.4.1 Part (a)
   2.2.4.2 Part (b): Refinement study
   2.2.4.3 Part (c)
   2.2.4.4 Part (d)

2.2.4.1 Part (a)

Notation: \(\tilde{f}\) denotes the approximate (discrete) value at a grid point, and \(f\) denotes the exact value.

Using the method of undetermined coefficients, let the second derivative approximation be \begin{equation} \tilde{f}^{\prime \prime }\left ( x\right ) =af\left ( x-\frac{h}{2}\right ) +bf\left ( x\right ) +cf\left ( x+h\right ) \tag{1} \end{equation} Where \(a,b,c\) are constants to be found. Now using Taylor expansion, since\[ f\left ( x+\Delta \right ) =f\left ( x\right ) +\Delta f^{\prime }\left ( x\right ) +\frac{\Delta ^{2}}{2!}f^{\prime \prime }\left ( x\right ) +\frac{\Delta ^{3}}{3!}f^{\prime \prime \prime }\left ( x\right ) +O\left ( \Delta ^{4}\right ) \] we apply the above to each of the terms in the RHS of (1) and simplify\begin{align*} f\left ( x-\frac{h}{2}\right ) & =f\left ( x\right ) -\frac{h}{2}f^{\prime }\left ( x\right ) +\frac{\left ( -\frac{h}{2}\right ) ^{2}}{2!}f^{\prime \prime }\left ( x\right ) +\frac{\left ( -\frac{h}{2}\right ) ^{3}}{3!}f^{\prime \prime \prime }\left ( x\right ) +\frac{\left ( -\frac{h}{2}\right ) ^{4}}{4!}f^{\left ( 4\right ) }\left ( x\right ) +O\left ( h^{5}\right ) \\ f\left ( x\right ) & =f\left ( x\right ) \\ f\left ( x+h\right ) & =f\left ( x\right ) +hf^{\prime }\left ( x\right ) +\frac{h^{2}}{2!}f^{\prime \prime }\left ( x\right ) +\frac{h^{3}}{3!}f^{\prime \prime \prime }\left ( x\right ) +\frac{h^{4}}{4!}f^{\left ( 4\right ) }\left ( x\right ) +O\left ( h^{5}\right ) \end{align*}

Substitute the above 3 terms in (1)\begin{align*} \tilde{f}^{\prime \prime }\left ( x\right ) & =a\left ( f\left ( x\right ) -\frac{h}{2}f^{\prime }\left ( x\right ) +\frac{h^{2}}{8}f^{\prime \prime }\left ( x\right ) -\frac{h^{3}}{8\times 6}f^{\prime \prime \prime }\left ( x\right ) +\frac{h^{4}}{16\times 24}f^{\left ( 4\right ) }\left ( x\right ) +O\left ( h^{5}\right ) \right ) \\ & +bf\left ( x\right ) \\ & +c\left ( f\left ( x\right ) +hf^{\prime }\left ( x\right ) +\frac{h^{2}}{2!}f^{\prime \prime }\left ( x\right ) +\frac{h^{3}}{6}f^{\prime \prime \prime }\left ( x\right ) +\frac{h^{4}}{24}f^{\left ( 4\right ) }\left ( x\right ) +O\left ( h^{5}\right ) \right ) \end{align*}

Collect terms\begin{align} \tilde{f}^{\prime \prime }\left ( x\right ) & =\left ( a+b+c\right ) f\left ( x\right ) +f^{\prime }\left ( x\right ) h\left ( -\frac{a}{2}+c\right ) +f^{\prime \prime }\left ( x\right ) h^{2}\left ( \frac{a}{8}+\frac{c}{2}\right ) +f^{\prime \prime \prime }\left ( x\right ) h^{3}\left ( \frac{-a}{8\times 6}+\frac{c}{6}\right ) \tag{2}\\ & +f^{\left ( 4\right ) }h^{4}\left ( \frac{a}{16\times 24}+\frac{c}{24}\right ) +O\left ( h^{5}\right ) \nonumber \end{align}

Hence for \(\tilde{f}^{\prime \prime }\left ( x\right ) \) to best approximate \(f^{\prime \prime }\left ( x\right ) \), we need \begin{align*} \left ( a+b+c\right ) & =0\\ -\frac{a}{2}+c & =0\\ h^{2}\left ( \frac{a}{8}+\frac{c}{2}\right ) & =1 \end{align*}

Solving the above 3 equations we find \begin{align*} a & =\frac{8}{3h^{2}}\\ b & =-\frac{4}{h^{2}}\\ c & =\frac{4}{3h^{2}} \end{align*}

Hence (1) becomes\begin{align*} \tilde{f}^{\prime \prime }\left ( x\right ) & =af\left ( x-\frac{h}{2}\right ) +bf\left ( x\right ) +cf\left ( x+h\right ) \\ & =\frac{8}{3h^{2}}f\left ( x-\frac{h}{2}\right ) -\frac{4}{h^{2}}f\left ( x\right ) +\frac{4}{3h^{2}}f\left ( x+h\right ) \end{align*}
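These coefficients can be confirmed by solving the three conditions directly; a minimal illustrative sketch (the sample value of \(h\) is arbitrary):

% Illustrative check of the undetermined-coefficients system (sample h only)
% Rows encode: a + b + c = 0,  -a/2 + c = 0,  a/8 + c/2 = 1/h^2
h   = 0.1;
A   = [1 1 1; -1/2 0 1; 1/8 0 1/2];
rhs = [0; 0; 1/h^2];
abc = (A\rhs).';
disp(abc)                                % numerical a, b, c
disp([8/(3*h^2), -4/h^2, 4/(3*h^2)])     % closed-form a, b, c for comparison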

To examine the local truncation error, from (2), and using the solution we just found for \(a,b,c\), we find\begin{align*} \tilde{f}^{\prime \prime }\left ( x\right ) & =f^{\prime \prime }\left ( x\right ) +f^{\prime \prime \prime }\left ( x\right ) h^{3}\left ( \frac{-\left ( \frac{8}{3h^{2}}\right ) }{8\times 6}+\frac{\left ( \frac{4}{3h^{2}}\right ) }{6}\right ) +f^{\left ( 4\right ) }h^{4}\left ( \frac{\left ( \frac{8}{3h^{2}}\right ) }{16\times 24}+\frac{\left ( \frac{4}{3h^{2}}\right ) }{24}\right ) +O\left ( h^{3}\right ) \\ & =f^{\prime \prime }\left ( x\right ) +f^{\prime \prime \prime }\left ( x\right ) h^{3}\left ( \frac{1}{6h^{2}}\right ) +f^{\left ( 4\right ) }h^{4}\left ( \frac{1}{16h^{2}}\right ) +O\left ( h^{3}\right ) \\ & =f^{\prime \prime }\left ( x\right ) +f^{\prime \prime \prime }\left ( x\right ) \left ( \frac{h}{6}\right ) +f^{\left ( 4\right ) }h^{2}\left ( \frac{1}{16}\right ) +O\left ( h^{3}\right ) \end{align*}

The order depends on which term we regard as the leading error. If we truncate at the \(f^{\prime \prime \prime }\left ( x\right ) \) term, we obtain\[ \tilde{f}^{\prime \prime }\left ( x\right ) =f^{\prime \prime }\left ( x\right ) +O\left ( h\right ) \] where \(O\left ( h\right ) =\frac{f^{\prime \prime \prime }\left ( x\right ) }{6}h\); hence \(p=1\) in this case, with the truncation error \(\tau =\frac{f^{\prime \prime \prime }\left ( x_{j}\right ) }{6}h\) at each grid point.

If instead we account for that term and truncate at \(f^{\left ( 4\right ) }\), we obtain\[ \tilde{f}^{\prime \prime }\left ( x\right ) =f^{\prime \prime }\left ( x\right ) +\frac{f^{\prime \prime \prime }\left ( x\right ) }{6}h+O\left ( h^{2}\right ) \] where \(O\left ( h^{2}\right ) =\frac{f^{\left ( 4\right ) }}{16}h^{2}\); hence \(p=2\) in this case, with the truncation error \(\tau =\frac{f^{\left ( 4\right ) }\left ( x_{j}\right ) }{16}h^{2}\) at each grid point, which is smaller than the \(p=1\) error.

The observed accuracy then depends on which term dominates: in general the error is dominated by the \(O\left ( h\right ) \) term, but at points where \(f^{\prime \prime \prime }\left ( x\right ) =0\) the dominant error is \(O\left ( h^{2}\right ) \).

2.2.4.2 Part (b): Refinement study

Given \(f\left ( x\right ) =\cos \left ( 2\pi x\right ) \), first, let us find the accuracy of this scheme. The finite difference approximation formula found is \begin{equation} \tilde{f}^{\prime \prime }\left ( x\right ) =\frac{8}{3h^{2}}f\left ( x-\frac{h}{2}\right ) -\frac{4}{h^{2}}f\left ( x\right ) +\frac{4}{3h^{2}}f\left ( x+h\right ) \tag{1} \end{equation} And the exact value is \begin{equation} \frac{d^{2}}{dx^{2}}\cos \left ( 2\pi x\right ) =-4\pi ^{2}\cos 2\pi x \tag{2} \end{equation} To find the local error \(\tau \) \[ \tau =\tilde{f}^{\prime \prime }\left ( x\right ) -f^{\prime \prime }\left ( x\right ) \] we substitute \(f\left ( x\right ) =\cos \left ( 2\pi x\right ) \) in the RHS of (1) to find the approximation of the second derivative and subtract the exact value of the second derivative from it.

Plugging \(f\left ( x\right ) =\cos \left ( 2\pi x\right ) \) into the RHS of (1), we obtain\begin{align*} \tilde{f}^{\prime \prime }\left ( x\right ) & =\frac{8}{3h^{2}}\cos \left ( 2\pi \left ( x-\frac{h}{2}\right ) \right ) -\frac{4}{h^{2}}\cos \left ( 2\pi x\right ) +\frac{4}{3h^{2}}\cos \left ( 2\pi \left ( x+h\right ) \right ) \\ & =\frac{8}{3h^{2}}\cos \left ( 2\pi x-\pi h\right ) -\frac{4}{h^{2}}\cos \left ( 2\pi x\right ) +\frac{4}{3h^{2}}\cos \left ( 2\pi x+2\pi h\right ) \end{align*}

Hence the local error \(\tau \) is \begin{align*} \tau & =\tilde{f}^{\prime \prime }\left ( x\right ) -f^{\prime \prime }\left ( x\right ) \\ & =\left [ \frac{8}{3h^{2}}\cos \left ( 2\pi x-\pi h\right ) -\frac{4}{h^{2}}\cos \left ( 2\pi x\right ) +\frac{4}{3h^{2}}\cos \left ( 2\pi x+2\pi h\right ) \right ] +4\pi ^{2}\cos 2\pi x \end{align*}

We notice that \(\tau \) depends on \(h\) and \(x.\) At \(x=1\), \begin{align*} \tau & =\left [ \frac{8}{3h^{2}}\cos \left ( 2\pi -\pi h\right ) -\frac{4}{h^{2}}+\frac{4}{3h^{2}}\cos \left ( 2\pi +2\pi h\right ) \right ] +4\pi ^{2}\\ & =\frac{4}{3h^{2}}\left ( \cos \left ( 2\pi h\right ) +2\cos \left ( \pi h\right ) +3h^{2}\pi ^{2}-3\right ) \end{align*}

In the following we plot local error \(\tau \) as a function of \(h\) in linear scale and log scale. Here is the result.

Figure 2.2: Matlab HW1 part (b)

We notice that the log plot shows the slope \(p=2\) and not \(p=1\). This is because the \(O\left ( h\right ) \) part turned out to be zero at \(x=1\): since \(O\left ( h\right ) =\frac{f^{\prime \prime \prime }\left ( x\right ) }{6}h=\frac{8\pi ^{3}\sin \left ( 2\pi x\right ) }{6}h\), this term vanishes at \(x=1\), so the dominant error term became the \(O\left ( h^{2}\right ) \) term \(\frac{f^{\left ( 4\right ) }\left ( x_{j}\right ) }{16}h^{2}=\frac{16\pi ^{4}\cos \left ( 2\pi x\right ) }{16}h^{2}\), which equals \(\pi ^{4}h^{2}\) at \(x=1\).

This is why we obtained \(p=2\) and not \(p=1\) at \(x=1\).

The following table shows the ratio of the absolute local error between each two successive halvings of the spacing \(h\). At each step \(h\) is halved and the ratio of the errors is recorded. We see for \(x=1\) that the ratio approaches \(4\), which indicates that \(p=2\).
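The table itself can be generated with a short refinement loop; the sketch below is an assumed reconstruction (not the original script), with the starting spacing and number of levels chosen arbitrarily.

% Refinement-study sketch at x = 1 (assumed reconstruction, not the HW script)
x  = 1;
f  = @(z) cos(2*pi*z);
d2 = @(z) -4*pi^2*cos(2*pi*z);        % exact second derivative
h  = 0.1;  nLevels = 8;  err = zeros(nLevels,1);
for k = 1:nLevels
    approx = 8/(3*h^2)*f(x-h/2) - 4/h^2*f(x) + 4/(3*h^2)*f(x+h);
    err(k) = abs(approx - d2(x));
    h = h/2;
end
disp(err(1:end-1)./err(2:end))        % ratios approach 4, i.e. p = 2 at x = 1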

2.2.4.3 Part (c)

The refinement study in part (b) showed that the local error became smaller as \(h\) became smaller, and the error was \(O\left ( h^{2}\right ) \) since \(p=2\) in the log plot.

But this is not a good test, as it was done only for the one point \(x=1\). We need to examine the approximation scheme at other points as well. The reason is that the local error at a location \(x\) is \[ \tau =\left [ \frac{8}{3h^{2}}\cos \left ( 2\pi x-\pi h\right ) -\frac{4}{h^{2}}\cos \left ( 2\pi x\right ) +\frac{4}{3h^{2}}\cos \left ( 2\pi x+2\pi h\right ) \right ] +4\pi ^{2}\cos 2\pi x \] which is a function of both \(x\) and \(h\). In (b) we found that at \(x=1\), \(\tau =O\left ( h^{2}\right ) \), and this was because the dominant error term \(O(h)\) happened to vanish at \(x=1\).

But if we examine \(\tau \) at a different point, say \(x=0.2\), then we will see that \(\tau \) is \(O\left ( h\right ) \) and \(p=1\).

Here is a plot of \(\tau \) at \(x=0.2\) and at \(x=1\), both showing what happens as \(h\) becomes smaller. We see that at \(x=1\) the approximation was more accurate (\(p=2\)), while at \(x=0.2\) it was less accurate (\(p=1\)). What we conclude from this is that a single test is not sufficient to determine the accuracy for all points; more tests are needed at other points to have more confidence. To verify that at \(x=0.2\) we indeed have \(p=1\), we generate the error table as shown above, but for \(x=0.2\) this time.

Figure 2.3: Matlab HW1 part (c)

We see that the ratio becomes \(2\) this time, not \(4\), as we halve the spacing each time. This means \(p=1\), and therefore the accuracy of the formula used can depend on the location.

2.2.4.4 Part (d)

The points that we need to interpolate are \(\left [ \left [ x-\frac{h}{2},u\left ( x-\frac{h}{2}\right ) \right ] ,\left [ x,u\left ( x\right ) \right ] ,\left [ x+h,u\left ( x+h\right ) \right ] \right ] \) where \(u=\cos \left ( 2\pi x\right ) \)

Since we require a quadratic polynomial, we write\[ p\left ( x\right ) =a+bx+cx^{2}\] Where \(p\left ( x\right ) \) is the interpolant. Evaluate the above at each of the 3 points. Choose \(x=1\), hence the points are\[\begin{pmatrix} 1-\frac{h}{2},u\left ( 1-\frac{h}{2}\right ) \\ 1,u\left ( 1\right ) \\ 1+h,u\left ( 1+h\right ) \end{pmatrix} \] Evaluate \(p\left ( x\right ) \) at each of these points \begin{align*} p\left ( 1-\frac{h}{2}\right ) & =\cos \left ( 2\pi \left ( 1-\frac{h}{2}\right ) \right ) =a+b\left ( 1-\frac{h}{2}\right ) +c\left ( 1-\frac{h}{2}\right ) ^{2}\\ p\left ( 1\right ) & =\cos \left ( 2\pi \right ) =a+b+c\\ p\left ( 1+h\right ) & =\cos \left ( 2\pi \left ( 1+h\right ) \right ) =a+b\left ( 1+h\right ) +c\left ( 1+h\right ) ^{2} \end{align*}

or\begin{align*} \begin{pmatrix} \left ( 1-\frac{h}{2}\right ) ^{2} & \left ( 1-\frac{h}{2}\right ) & 1\\ 1 & 1 & 1\\ \left ( 1+h\right ) ^{2} & \left ( 1+h\right ) & 1 \end{pmatrix}\begin{pmatrix} c\\ b\\ a \end{pmatrix} & =\begin{pmatrix} \cos \left ( 2\pi \left ( 1-\frac{h}{2}\right ) \right ) \\ \cos \left ( 2\pi \right ) \\ \cos \left ( 2\pi \left ( 1+h\right ) \right ) \end{pmatrix} \\ Av & =b \end{align*}

Solving the above Vandermonde system, we obtain\begin{align*} a & =\frac{1}{3h^{2}}\left ( 4\left ( 1+h\right ) \cos \left ( h\pi \right ) +\left ( h-2\right ) \left ( 3+3h-\cos \left ( 2\pi h\right ) \right ) \right ) \\ b & =\frac{-1}{3h^{2}}4\left ( \left ( h-4\right ) \cos \left ( \pi h\right ) -h-8\right ) \sin ^{2}\left ( \frac{\pi h}{2}\right ) \\ c & =\frac{2}{3h^{2}}\left ( 2\cos \left ( \pi h\right ) -3+\cos \left ( 2\pi h\right ) \right ) \end{align*}

Hence \begin{align} p\left ( x\right ) & =\left [ \frac{1}{3h^{2}}\left ( 4\left ( 1+h\right ) \cos \left ( h\pi \right ) +\left ( h-2\right ) \left ( 3+3h-\cos \left ( 2\pi h\right ) \right ) \right ) \right ] \tag{1}\\ & -\left [ \frac{1}{3h^{2}}4\left ( \left ( h-4\right ) \cos \left ( \pi h\right ) -h-8\right ) \sin ^{2}\left ( \frac{\pi h}{2}\right ) \right ] x\nonumber \\ & +\left [ \frac{2}{3h^{2}}\left ( 2\cos \left ( \pi h\right ) -3+\cos \left ( 2\pi h\right ) \right ) \right ] x^{2}\nonumber \end{align}

Recall that we found, for \(u=\cos \left ( 2\pi x\right ) \), the finite difference formula was \begin{equation} \tilde{u}^{\prime \prime }\left ( x\right ) =\left [ \frac{8}{3h^{2}}\cos \left ( 2\pi x-\pi h\right ) -\frac{4}{h^{2}}\cos \left ( 2\pi x\right ) +\frac{4}{3h^{2}}\cos \left ( 2\pi x+2\pi h\right ) \right ] \tag{2} \end{equation} Take the second derivative of \(p\left ( x\right ) \) shown in (1) above\begin{equation} p^{\prime \prime }\left ( x\right ) =\frac{4}{3h^{2}}\left ( 2\cos \left ( \pi h\right ) +\cos \left ( 2\pi h\right ) -3\right ) \tag{3} \end{equation} But we notice that \(\tilde{u}^{\prime \prime }\left ( x\right ) \) evaluated at \(x=1\) is\[ \tilde{u}^{\prime \prime }\left ( 1\right ) =\frac{4}{3h^{2}}\left ( 2\cos \left ( \pi h\right ) +\cos \left ( 2\pi h\right ) -3\right ) \] which is the same as \(p^{\prime \prime }\left ( x\right ) \).

Therefore, \(p^{\prime \prime }\left ( x\right ) \) is the same as the finite difference approximation evaluated at the center point of the 3 points used to generate \(p\).

In other words, given 3 points\[\begin{pmatrix} x_{0}-\frac{h}{2},u\left ( x_{0}-\frac{h}{2}\right ) \\ x_{0},u\left ( x_{0}\right ) \\ x_{0}+h,u\left ( x_{0}+h\right ) \end{pmatrix} \] Where \(u\left ( x\right ) \) is some function (here it was \(\cos \left ( 2\pi x\right ) \)), and we generate a quadratic interpolant polynomial \(p\left ( x\right ) \) using the above 3 points, then \(p^{\prime \prime }\left ( x\right ) \) will give the same value as the finite difference formula evaluated at \(x_{0}\). \[ \left . p^{\prime \prime }\left ( x\right ) \right \vert _{x=x_{0}}=\left . \tilde{u}^{\prime \prime }\left ( x\right ) \right \vert _{x=x_{0}}\] For this to be valid, \(p\left ( x\right ) \) must have been generated with the center point being \(x_{0}\). If we pick another center point \(x_{1}\), and therefore have the 3 points \(x_{1}-h/2,x_{1},x_{1}+h\), and then generate a polynomial \(q\left ( x\right ) \) as above, then we will find

\[ \left . q^{\prime \prime }\left ( x\right ) \right \vert _{x=x_{1}}=\left . \tilde{u}^{\prime \prime }\left ( x\right ) \right \vert _{x=x_{1}}\] This is illustrated by the following diagram

Figure 2.4: prob3 c
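This equality can also be checked numerically; the sketch below (illustrative only, with arbitrary \(x_{0}\) and \(h\)) fits the quadratic through the three points and compares its second derivative with the finite difference formula.

% Illustrative check: second derivative of the quadratic interpolant equals
% the finite-difference stencil value at the center point x0.
u   = @(z) cos(2*pi*z);
x0  = 1;  h = 0.05;
s   = [-h/2, 0, h];                    % offsets of the 3 nodes from x0
p   = polyfit(s, u(x0 + s), 2);        % quadratic in (x - x0)
pdd = 2*p(1);                          % p''(x), constant for a quadratic
fd  = 8/(3*h^2)*u(x0-h/2) - 4/h^2*u(x0) + 4/(3*h^2)*u(x0+h);
disp([pdd, fd])                        % the two values agree to roundoff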

2.2.5 Appendix (Source code)

   2.2.5.1 Matlab
   2.2.5.2 Mathematica

2.2.5.1 Matlab
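The original Matlab listing is not reproduced in this copy. The following is a hypothetical reconstruction of the part (b)/(c) plotting script (the starting spacing, number of refinements, and evaluation point are assumptions, not taken from the original):

% Hypothetical reconstruction of the refinement plot (not the original listing)
x   = 1;                               % change to 0.2 for the part (c) study
f   = @(z) cos(2*pi*z);
d2  = @(z) -4*pi^2*cos(2*pi*z);
hs  = 0.1 ./ 2.^(0:7);
tau = zeros(size(hs));
for k = 1:numel(hs)
    h = hs(k);
    tau(k) = abs(8/(3*h^2)*f(x-h/2) - 4/h^2*f(x) + 4/(3*h^2)*f(x+h) - d2(x));
end
subplot(1,2,1); plot(hs, tau, '-o');   xlabel('h'); ylabel('|\tau|');
subplot(1,2,2); loglog(hs, tau, '-o'); xlabel('h'); ylabel('|\tau|'); grid on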

2.2.5.2 Mathematica
