6 general notes

\(\blacksquare \) How to find \(\frac {\partial }{\partial y}\int \left ( F\left ( ay^{2}+x^{2}\right ) x-y\right ) dx\) (added May 26, 2025)

This came up when solving an exact first order ODE. To evaluate this, we do

\begin{align*} \frac {\partial }{\partial y}\int \left ( F\left ( ay^{2}+x^{2}\right ) x-y\right ) dx & =\int \frac {\partial }{\partial y}\left ( F\left ( ay^{2}+x^{2}\right ) x-y\right ) dx\\ & =\int \frac {\partial }{\partial y}F\left ( ay^{2}+x^{2}\right ) x\,dx-\int \frac {\partial }{\partial y}y\,dx\\ & =\int x\left ( 2ay\right ) F^{\prime }\,dx-\int dx\\ & =\int x\left ( 2ay\right ) F^{\prime }\,dx-x \end{align*}

Now let \(ay^{2}+x^{2}=u\) so the above becomes

\[ \frac {\partial }{\partial y}\int \left ( F\left ( ay^{2}+x^{2}\right ) x-y\right ) dx=\int x\left ( 2ay\right ) \frac {dF}{du}dx-x \]

And then we see that \(\frac {du}{dx}=2x\) or \(x=\frac {du}{2dx}\). Substituting this into the above gives

\begin{align*} \frac {\partial }{\partial y}\int \left ( F\left ( ay^{2}+x^{2}\right ) x-y\right ) dx & =\int \frac {du}{2dx}\left ( 2ay\right ) \frac {dF}{du}dx-x\\ & =\int \frac {1}{2}\left ( 2ay\right ) dF-x\\ & =ay\int dF-x\\ & =ayF-x \end{align*}

Hence

\[ \frac {\partial }{\partial y}\int \left ( F\left ( ay^{2}+x^{2}\right ) x-y\right ) dx=ayF\left ( ay^{2}+x^{2}\right ) -x \]
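As a quick check, take \(F\left ( u\right ) =u\) and choose the antiderivative in \(u\), so \(\int F\left ( ay^{2}+x^{2}\right ) x\,dx=\frac {1}{2}\int F\left ( u\right ) du=\frac {1}{4}\left ( ay^{2}+x^{2}\right ) ^{2}\). Then

\[ \frac {\partial }{\partial y}\left ( \frac {1}{4}\left ( ay^{2}+x^{2}\right ) ^{2}-yx\right ) =ay\left ( ay^{2}+x^{2}\right ) -x \]

which agrees with \(ayF\left ( ay^{2}+x^{2}\right ) -x\).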

\(\blacksquare \) Some rules to remember. These hold in the real domain.

  1. \(\sqrt {ab}=\sqrt {a}\sqrt {b}\) only for \(a\geq 0,b\geq 0\). In general \(\left ( ab\right ) ^{\frac {1}{n}}=a^{\frac {1}{n}}b^{\frac {1}{n}}\) for \(a\geq 0,b\geq 0\) where \(n\) is positive integer.
  2. \(\sqrt {y}=x\) implies \(y=x^{2}\) only when \(x>0\). So be careful when squaring both sides to get rid of a square root on one side. To see this, let \(\sqrt {y}=4\), then \(y=16\) because \(4\) is positive. But if we had \(\sqrt {y}=-4\) then we can’t say that \(y=16\), since \(\sqrt {16}\) is \(4\) and not \(-4\) (we always take the positive root). So each time we square both sides of an equation to get rid of a \(\sqrt {}\) on one side, note that this is valid only when the other side is not negative.
  3. Generalization of the above: given \(\left ( ab\right ) ^{\frac {n}{m}}\) where both \(n,m\) are integers, then \(\left ( ab\right ) ^{\frac {n}{m}}=a^{\frac {n}{m}}b^{\frac {n}{m}}\) only when \(a\geq 0,b\geq 0\). This applies if \(\frac {n}{m}<1\) such as \(\frac {2}{3}\) or when \(\frac {n}{m}>1\) such as \(\frac {3}{2}\). The only time we can write \(\left ( ab\right ) ^{n}=a^{n}b^{n}\) for any \(a,b\) is when \(n\) is an integer (positive or negative). When the power is a ratio of integers, then we can split it only under the condition that all terms are positive.
  4. \(\sqrt {\frac {1}{b}}=\frac {1}{\sqrt {b}}\) only for \(b>0\). This can be used for example to simplify \(\sqrt {\frac {1}{1-x^{2}}}\sqrt {1-x^{2}}\) to \(1\) under the condition \(1-x^{2}>0\) or \(-1<x<1\). Because in this case the expression becomes \(\frac {1}{\sqrt {1-x^{2}}}\sqrt {1-x^{2}}=1\).
  5. Generalization of the above:\(\sqrt {\frac {a}{b}}=\frac {\sqrt {a}}{\sqrt {b}}\) only for \(a\geq 0,b>0\)
  6. \(\sqrt {x^{2}}=x\) only for \(x\geq 0\)
  7. Generalization of the above: \(\left ( x^{n}\right ) ^{\frac {1}{n}}=x\) only when \(x\geq 0\) (assuming \(n\) is a positive integer).

\(\blacksquare \) Given \(u\equiv u\left ( x,y\right ) \) then total differential of \(u\) is

\[ du=\frac {\partial u}{\partial x}dx+\frac {\partial u}{\partial y}dy \]
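For example, if \(u=x^{2}y\) then

\[ du=2xy\,dx+x^{2}\,dy \]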

\(\blacksquare \) A Lyapunov function is used to determine stability of an equilibrium point. Taking this equilibrium point to be the origin, suppose we are given a set of differential equations \(\begin {pmatrix} x^{\prime }\left ( t\right ) \\ y^{\prime }\left ( t\right ) \\ z^{\prime }\left ( t\right ) \end {pmatrix} =\begin {pmatrix} f_{1}\left ( x,y,z,t\right ) \\ f_{2}\left ( x,y,z,t\right ) \\ f_{3}\left ( x,y,z,t\right ) \end {pmatrix} \) and assume \(\left ( 0,0,0\right ) \) is an equilibrium point. The question is, how to determine if it is stable or not? There are two main ways to do this. One is by linearization of the system around the origin. This means we find the Jacobian matrix, evaluate it at the origin, and check the sign of the real parts of the eigenvalues. This is the common way to do it. Another method, due to Lyapunov, is more direct. There is no linearization needed. But we need to do the following: find a function \(V\left ( x,y,z\right ) \), called a Lyapunov function for the system, which meets the following conditions

  1. \(V\left ( x,y,z\right ) \) is a continuously differentiable function in \(\mathbb {R} ^{3}\) and \(V\left ( x,y,z\right ) \geq 0\) (positive definite or positive semidefinite) for all \(x,y,z\) away from the origin, or everywhere inside some fixed region around the origin. This function represents the total energy of the system (for Hamiltonian systems). Hence \(V\left ( x,y,z\right ) \) can be zero away from the origin. But it can never be negative.
  2. \(V\left ( 0,0,0\right ) =0\). This says the system has no energy when it is at the equilibrium point. (rest state).
  3. The orbital derivative \(\frac {dV}{dt}\leq 0\) (i.e. negative definite or negative semi-definite) for all \(x,y,z\), or inside some fixed region around the origin. The orbital derivative is same as \(\frac {dV}{dt}\) along any solution trajectory. This condition says that the total energy is either constant in time (the zero case) or the total energy is decreasing in time (the negative definite case). Both of which indicate that the origin is a stable equilibrium point.

If \(\frac {dV}{dt}\) is negative semi-definite then the origin is stable in the Lyapunov sense. If \(\frac {dV}{dt}\) is negative definite then the origin is an asymptotically stable equilibrium. Negative semi-definite means that when the system is perturbed away from the origin, a trajectory will remain around the origin since its energy does not increase. So it is stable. But asymptotic stability is a stronger form of stability. It means when perturbed from the origin the solution will eventually return back to the origin since the energy is decreasing. Global stability means \(\frac {dV}{dt}\leq 0\) everywhere, and not just in some closed region around the origin. Local stability means \(\frac {dV}{dt}\leq 0\) in some closed region around the origin. Global stability is stronger than local stability.

The main difficulty with this method is to find \(V\left ( x,y,z\right ) \). If the system is Hamiltonian, then \(V\) is the same as the total energy. Otherwise, one has to guess. Typically a quadratic function such as \(V=ax^{2}+cxy+dy^{2}\) is used (for a system in \(x,y\)), then we try to find \(a,c,d\) which make it positive definite everywhere away from the origin, and also, more importantly, make \(\frac {dV}{dt}\leq 0\). If so, we say the origin is stable. Most of the problems we had start by giving us \(V\) and then ask to show it is a Lyapunov function and what kind of stability it implies.

To determine if \(V\) is positive definite or not, the common way is to find the Hessian and check the sign of the eigenvalues. Another way is to find the Hessian and check the signs of the leading minors. For a \(2\times 2\) matrix, this means the determinant is positive and the entry \(\left ( 1,1\right ) \) in the matrix is positive. A similar check applies to \(\frac {dV}{dt}\leq 0\): we find the Hessian of \(\frac {dV}{dt}\) and do the same thing, but now we check for negative eigenvalues instead.
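As a small worked example (a standard textbook case), consider the system \(x^{\prime }=-y-x^{3},y^{\prime }=x-y^{3}\) with the candidate \(V=x^{2}+y^{2}\). Then \(V\left ( 0,0\right ) =0\) and \(V>0\) away from the origin, and the orbital derivative is

\begin{align*} \frac {dV}{dt} & =2xx^{\prime }+2yy^{\prime }\\ & =2x\left ( -y-x^{3}\right ) +2y\left ( x-y^{3}\right ) \\ & =-2\left ( x^{4}+y^{4}\right ) \end{align*}

which is negative definite, so the origin is asymptotically stable (in fact globally, since this holds everywhere).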

\(\blacksquare \) Methods to find the Green function are

  1. Fredholm theory
  2. method of images
  3. separation of variables
  4. Laplace transform

Reference: Wikipedia. I need to make one example and apply each of the above methods to it.

\(\blacksquare \) In solving an ODE with constant coefficients, just use the characteristic equation to find the solution.

\(\blacksquare \) In solving an ODE with coefficients that are functions of the independent variable, as in \(y^{\prime \prime }\left ( x\right ) +q\left ( x\right ) y^{\prime }\left ( x\right ) +p\left ( x\right ) y\left ( x\right ) =0\), first classify the type of the point \(x_{0}\). This means checking how \(p\left ( x\right ) \) and \(q\left ( x\right ) \) behave at \(x_{0}\). We are talking about the ODE here, not the solution yet.

There are 3 kinds of points. \(x_{0}\) can be normal (ordinary), a regular singular point, or an irregular singular point. A normal point \(x_{0}\) means \(p\left ( x\right ) \) and \(q\left ( x\right ) \) are analytic at \(x_{0}\), i.e. each has a Taylor series expansion about \(x_{0}\) that converges to it there; the solution then also has a convergent Taylor series \(y\left ( x\right ) =\sum _{n=0}^{\infty }a_{n}\left ( x-x_{0}\right ) ^{n}\) at \(x_{0}\).
A regular singular point \(x_{0}\) means that the above test fails, but \(\left ( x-x_{0}\right ) q\left ( x\right ) \) has a convergent Taylor series at \(x_{0}\), and also \(\left ( x-x_{0}\right ) ^{2}p\left ( x\right ) \) has a convergent Taylor series at \(x_{0}\). In particular, the limits \(\lim _{x\rightarrow x_{0}}\left ( x-x_{0}\right ) q\left ( x\right ) \) and \(\lim _{x\rightarrow x_{0}}\left ( x-x_{0}\right ) ^{2}p\left ( x\right ) \) exist.

All this just means we can get rid of the singularity. i.e. \(x_{0}\) is a removable singularity. If this is the case, then the solution at \(x_{0}\) can be assumed to have a Frobenius series \(y\left ( x\right ) =\sum _{n=0}^{\infty }a_{n}\left ( x-x_{0}\right ) ^{n+\alpha }\) where \(a_{0}\neq 0\) and \(\alpha \) is the root of the Frobenius indicial equation. There are three cases to consider. See https://math.usask.ca/~cheviakov/courses/m338/text/Frobenius_Case3_ill.pdf for more discussion on this.

The third type of point is the hard one, called an irregular singular point. We can’t get rid of it using the above. So we also say the ODE has an essential singularity at \(x_{0}\) (another fancy name for an irregular singular point). What this means is that we can’t approximate the solution at \(x_{0}\) using either a Taylor or a Frobenius series.

If the point is an irregular singular point, then use asymptotic methods. See Advanced Mathematical Methods for Scientists and Engineers, chapter 3. For a normal point, use \(y\left ( x\right ) =\sum _{n=0}^{\infty }a_{n}x^{n}\); for a regular singular point use \(y\left ( x\right ) =\sum _{n=0}^{\infty }a_{n}x^{n+r}\). Remember to solve for \(r\) first. This should give two values. If you get one repeated root, then use reduction of order to find the second solution.
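For example, consider \(y^{\prime \prime }+\frac {1}{x}y^{\prime }+y=0\) (Bessel of order zero in standard form) at \(x_{0}=0\). Here \(q\left ( x\right ) =\frac {1}{x}\) is not analytic at \(0\), but \(\lim _{x\rightarrow 0}xq\left ( x\right ) =1\) and \(\lim _{x\rightarrow 0}x^{2}p\left ( x\right ) =0\) are finite, so \(x=0\) is a regular singular point. Substituting \(y=\sum _{n=0}^{\infty }a_{n}x^{n+r}\) gives the indicial equation \(r^{2}=0\), a repeated root, so one solution is a Frobenius series and the second one contains a \(\log x\) term (found by reduction of order).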

\(\blacksquare \) An asymptotic series \(S\left ( z\right ) =c_{0}+\frac {c_{1}}{z}+\frac {c_{2}}{z^{2}}+\cdots \) is a series expansion of \(f\left ( z\right ) \) which gives a good and rapid approximation for large \(z\), as long as we know when to truncate \(S\left ( z\right ) \) before it becomes divergent. This is the main difference between an asymptotic series expansion and a Taylor series expansion.

\(S\left ( z\right ) \) is used to approximate a function for large \(z\), while Taylor (or power) series are used for local approximation, i.e. for small distances away from the point of expansion. \(S\left ( z\right ) \) will eventually become divergent, hence it needs to be truncated at some \(n\) to be used, where \(n\) is the number of terms in \(S_{n}\left ( z\right ) \). It is optimally truncated when \(n\approx \left \vert z\right \vert ^{2}\).

\(S\left ( z\right ) \) has the following two important properties

  1. \(\lim _{\left \vert z\right \vert \rightarrow \infty }z^{n}\left ( f\left ( z\right ) -S_{n}\left ( z\right ) \right ) =0\) for fixed \(n\).
  2. \(\lim _{n\rightarrow \infty }z^{n}\left ( f\left ( z\right ) -S_{n}\left ( z\right ) \right ) =\infty \) for fixed \(z\).

We write \(S\left ( z\right ) \sim f\left ( z\right ) \) when \(S\left ( z\right ) \) is the asymptotic series expansion of \(f\left ( z\right ) \) for large \(z\). Most common method to find \(S\left ( z\right ) \) is by integration by parts. At least this is what we did in the class I took.
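For example, a standard case is \(f\left ( x\right ) =\int _{x}^{\infty }\frac {e^{-t}}{t}dt\). Repeated integration by parts gives

\[ f\left ( x\right ) \sim e^{-x}\left ( \frac {1}{x}-\frac {1}{x^{2}}+\frac {2!}{x^{3}}-\frac {3!}{x^{4}}+\cdots \right ) \]

This series diverges for every fixed \(x\), but truncated after a few terms it gives an excellent approximation for large \(x\).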

\(\blacksquare \) For a Taylor series, the leading behavior is \(a_{0}\) and there is no controlling factor. For a Frobenius series, the leading behavior term is \(a_{0}x^{\alpha }\) and the controlling factor is \(x^{\alpha }\). For an asymptotic series, the controlling factor is always assumed to be \(e^{S\left ( x\right ) }\), a form proposed by Carlini (1817).

\(\blacksquare \) The method to find the leading behavior of the solution \(y\left ( x\right ) \) near an irregular singular point using asymptotics is called the method of dominant balance.

\(\blacksquare \) When solving \(\epsilon y^{\prime \prime }+p\left ( x\right ) y^{\prime }+q\left ( x\right ) y=0\) for very small \(\epsilon \), use the WKB method if there is no boundary layer between the boundary conditions. If the ODE is non-linear, we can’t use WKB; we have to use boundary layer (B.L.) analysis. Example: \(\epsilon y^{\prime \prime }+yy^{\prime }-y=0\) with \(y\left ( 0\right ) =0,y\left ( 1\right ) =-2\); then use B.L.

\(\blacksquare \) A good exercise is to solve, say, \(\epsilon y^{\prime \prime }+(1+x)y^{\prime }+y=0\) with \(y\left ( 0\right ) =y\left ( 1\right ) =1\) using both B.L. and WKB and compare the solutions; they should come out the same: \(y\sim \frac {2}{1+x}-\exp \left ( \frac {-x}{\epsilon }-\frac {x^{2}}{2\epsilon }\right ) +O\left ( \epsilon \right ) \). With B.L. we had to do the matching between the outer and the inner solutions. WKB is easier, but it can’t be used for a non-linear ODE.

\(\blacksquare \) When there is rapid oscillation over the entire domain, WKB is better. Use WKB to solve the Schrödinger equation, where \(\epsilon \) becomes a function of \(\hslash \) (the reduced Planck constant \(h/2\pi \), with \(h=6.62606957\times 10^{-34}\) m\(^{2}\)kg/s).

\(\blacksquare \) In second order ODE with non constant coefficient, \(y^{\prime \prime }\left ( x\right ) +p\left ( x\right ) y^{\prime }\left ( x\right ) +q\left ( x\right ) y\left ( x\right ) =0\), if we know one solution \(y_{1}\left ( x\right ) \), then a method called the reduction of order can be used to find the second solution \(y_{2}\left ( x\right ) \). Write \(y_{2}\left ( x\right ) =u\left ( x\right ) y_{1}\left ( x\right ) \), plug this in the ODE, and solve for \(u\left ( x\right ) \). The final solution will be \(y\left ( x\right ) =c_{1}y_{1}\left ( x\right ) +c_{2}y_{2}\left ( x\right ) \). Now apply I.C.’s to find \(c_{1},c_{2}\).

\(\blacksquare \) To find a particular solution to \(y^{\prime \prime }\left ( x\right ) +p\left ( x\right ) y^{\prime }\left ( x\right ) +q\left ( x\right ) y\left ( x\right ) =f\left ( x\right ) \), we can use a method called undetermined coefficients. But a better method is called variation of parameters. In this method, assume \(y_{p}\left ( x\right ) =u_{1}\left ( x\right ) y_{1}\left ( x\right ) +u_{2}\left ( x\right ) y_{2}\left ( x\right ) \) where \(y_{1}\left ( x\right ) ,y_{2}\left ( x\right ) \) are the two linearly independent solutions of the homogeneous ODE and \(u_{1}\left ( x\right ) ,u_{2}\left ( x\right ) \) are to be determined. This ends up with \(u_{1}\left ( x\right ) =-\int \frac {y_{2}\left ( x\right ) f\left ( x\right ) }{W}dx\) and \(u_{2}\left ( x\right ) =\int \frac {y_{1}\left ( x\right ) f\left ( x\right ) }{W}dx\). Remember to put the ODE in standard form first, so \(a=1\) in \(ay^{\prime \prime }\left ( x\right ) +\cdots \). Here \(W\) is the Wronskian \(W=\begin {vmatrix} y_{1}\left ( x\right ) & y_{2}\left ( x\right ) \\ y_{1}^{\prime }\left ( x\right ) & y_{2}^{\prime }\left ( x\right ) \end {vmatrix} \)
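For example, for \(y^{\prime \prime }+y=\sec x\) the homogeneous solutions are \(y_{1}=\cos x,y_{2}=\sin x\) with \(W=1\). Then

\begin{align*} u_{1} & =-\int \sin x\sec x\,dx=\ln \left \vert \cos x\right \vert \\ u_{2} & =\int \cos x\sec x\,dx=x \end{align*}

Hence \(y_{p}=\cos x\ln \left \vert \cos x\right \vert +x\sin x\).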

\(\blacksquare \) Two solutions of \(y^{\prime \prime }\left ( x\right ) +p\left ( x\right ) y^{\prime }\left ( x\right ) +q\left ( x\right ) y\left ( x\right ) =0\) are linearly independent if \(W\left ( x\right ) \neq 0\), where \(W\) is the Wronskian.

\(\blacksquare \) For a second order linear ODE defined over the whole real line, the Wronskian is either identically zero or never zero. This comes from Abel’s formula for the Wronskian, which is \(W\left ( x\right ) =k\exp \left ( -\int \frac {B\left ( x\right ) }{A\left ( x\right ) }dx\right ) \) for an ODE of the form \(A\left ( x\right ) y^{\prime \prime }+B\left ( x\right ) y^{\prime }+C\left ( x\right ) y=0\). Since \(\exp \left ( -\int \frac {B\left ( x\right ) }{A\left ( x\right ) }dx\right ) >0\), everything is decided by \(k\), the constant of integration. If \(k=0\) then \(W\left ( x\right ) =0\) everywhere, else it is nonzero everywhere.

\(\blacksquare \) For a linear PDE, if the boundary conditions are time dependent, we cannot use separation of variables. Try a transform method (Laplace or Fourier) to solve the PDE.

\(\blacksquare \) If unable to invert the Laplace transform analytically, try numerical inversion or asymptotic methods. I need to find an example of this.

\(\blacksquare \) Green function takes the homogeneous solution and the forcing function and constructs a particular solution. For PDE’s, we always want a symmetric Green’s function.

\(\blacksquare \) To get a symmetric Green’s function given an ODE, start by converting the ODE to a Sturm-Liouville form first. This way the Green’s function comes out symmetric.

\(\blacksquare \) For numerical solutions of field problems, there are basically two different kinds of problems: those with closed boundaries, and those with open boundaries but with initial conditions. Closed boundary problems are elliptic problems which can be cast in the form \(Au=f\); the others are either hyperbolic or parabolic.

\(\blacksquare \) For the numerical solution of elliptic problems, the basic layout is something like this:

Always start with a trial solution \(u(x)\) such that \(u_{trial}(x)=\sum _{i=0}^{i=N}C_{i}\phi _{i}(x)\) where the \(C_{i}\) are the unknowns to be determined and the \(\phi _{i}\) are a set of linearly independent functions (polynomials) in \(x\).

How to determine those \(C_{i}\) comes next. Use either a residual method (Galerkin) or a variational method (Ritz). For the residual method, we form a function based on the error \(R=Au_{trial}-f\). It all comes down to solving \(\int f(R)=0\) over the domain. The family of methods is organized as follows:

  1. Residual methods: absolute error, collocation, subdomain, and orthogonality methods; the orthogonality methods include the method of moments, Galerkin, and least squares.
  2. Variational methods: substitute \(u_{trial}\) in \(I(u)\), where \(I(u)\) is the functional to minimize.

\(\blacksquare \) Geometric probability distribution. Use it when you want an answer to the question: what is the probability that you have to do the experiment \(N\) times to finally get the outcome you are looking for, given that the outcome has probability \(p\) of showing up in a single experiment?

For example: What is the probability one has to flip a fair coin \(N\) times to get the first head? The answer is \(P(X=N)=(1-p)^{N-1}p\). For a fair coin, \(p=\frac {1}{2}\) is the probability that a head shows up in one flip. So the probability we have to flip a coin \(10\) times to get the first head is \(P(X=10)=(1-0.5)^{9}(0.5)\approx 0.00098\), which is very low as expected.
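A minimal simulation sketch (Python; the helper name and number of trials are just illustrative choices) to check this number:

\begin{verbatim}
import random

def flips_until_first_head(p=0.5):
    """Count coin flips until the first head appears."""
    n = 1
    while random.random() >= p:   # with probability 1-p this flip is a tail
        n += 1
    return n

trials = 1_000_000
hits = sum(1 for _ in range(trials) if flips_until_first_head() == 10)
print("estimated P(X=10):", hits / trials)
print("exact     P(X=10):", (1 - 0.5) ** 9 * 0.5)   # about 0.00098
\end{verbatim}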

\(\blacksquare \) To generate a random variable drawn from some distribution different from the uniform distribution, using only the uniform distribution \(U(0,1)\), do this. Let’s say we want to generate a random number from the exponential distribution with mean \(\mu \).

This distribution has \(pdf(x)=\frac {1}{\mu }e^{\frac {-x}{\mu }}\). The first step is to find the cdf of the exponential distribution, which is known to be \(F(x)=P(X\leq x)=1-e^{\frac {-x}{\mu }}\).

Now find the inverse of this, which is \(F^{-1}(x)=-\mu \ln (1-x)\). Then generate a random number from the uniform distribution \(U(0,1)\). Let this value be called \(z\).

Now plug this value into \(F^{-1}(z)\); this gives a random number from the exponential distribution, which will be \(-\mu \ln (1-z)\) (the inverse was found by taking the natural log of both sides of \(F(x)=z\) and solving for \(x\)).

This method can be used to generate random variables from any other distribution using only \(U(0,1)\). But it requires knowing the CDF and the inverse of the CDF for the other distribution. This is called the inverse CDF method. Another method is called the rejection method.
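A minimal sketch of the inverse CDF method for this exponential case (Python; the mean value used is just an example):

\begin{verbatim}
import math
import random

def exponential_sample(mu):
    """Draw one sample from the exponential distribution with mean mu."""
    z = random.random()             # uniform on [0,1)
    return -mu * math.log(1 - z)    # F^{-1}(z) for the exponential cdf

mu = 2.0
samples = [exponential_sample(mu) for _ in range(100_000)]
print("sample mean:", sum(samples) / len(samples))   # should be close to mu
\end{verbatim}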

\(\blacksquare \) Given \(u\), a r.v. from uniform distribution over [0,1], then to obtain \(v\), a r.v. from uniform distribution over [A,B], then the relation is \(v=A+(B-A)u\).

\(\blacksquare \) When solving using F.E.M. it is best to do everything using isoparametric elements (natural coordinates), then find the Jacobian of the transformation between the natural and physical coordinates to evaluate the integrals needed. For the force function, use Gaussian quadrature.

\(\blacksquare \) A solution to a differential equation is a function that can be expressed as a convergent series. (Cauchy, Briot and Bouquet, Picard)

\(\blacksquare \) To solve a first order ODE using an integrating factor:

\[ x^{\prime }(t)+p(t)x(t)=f(t) \]

As long as it is linear and \(p(t),f(t)\) are integrable functions in \(t\), follow these steps

  1. Multiply the ODE by a function \(I(t)\), called the integrating factor.

    \[ I(t)x^{\prime }(t)+I(t)p(t)x(t)=I(t)f(t) \]
  2. We solve for \(I(t)\) such that the left side satisfies

    \[ \frac {d}{dt}\left ( I(t)x(t)\right ) =I(t)x^{\prime }(t)+I(t)p(t)x(t) \]
  3. Solving the above for \(I(t)\) gives

    \begin{align*} I^{\prime }(t)x(t)+I(t)x^{\prime }(t) & =I(t)x^{\prime }(t)+I(t)p(t)x(t)\\ I^{\prime }(t)x(t) & =I(t)p(t)x(t)\\ I^{\prime }(t) & =I(t)p(t)\\ \frac {dI}{I} & =p(t)dt \end{align*}

    Integrating both sides gives

    \begin{align*} \ln (I) & =\int {p(t)dt}\\ I(t) & =e^{\int {p(t)dt}}\end{align*}
  4. Now the ODE, multiplied by \(I(t)\) as in step 1, can be written as

    \[ \frac {d}{dt}\left ( I(t)x(t)\right ) =I(t)f(t) \]
    We now integrate the above to give
    \begin{align*} I(t)x(t) & =\int {I(t)f(t)\,dt}+C\\ x(t) & =\frac {\int {I(t)f(t)\,dt}+C}{I(t)}\end{align*}

    Where \(I(t)\) is given in step 3. Hence

    \[ x(t)=\frac {\int {e^{\int {p(t)dt}}f(t)\,dt}+C}{e^{\int {p(t)dt}}}\]
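For example, for \(x^{\prime }(t)+\frac {1}{t}x(t)=t\) with \(t>0\): here \(p(t)=\frac {1}{t}\), so \(I(t)=e^{\int \frac {dt}{t}}=t\) and

\begin{align*} \frac {d}{dt}\left ( tx\right ) & =t^{2}\\ tx & =\frac {t^{3}}{3}+C\\ x(t) & =\frac {t^{2}}{3}+\frac {C}{t}\end{align*}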
\(\blacksquare \) A polynomial is called ill-conditioned if we make a small change to one of its coefficients and this causes a large change in one of its roots.

\(\blacksquare \) To find the rank of a matrix \(A\) by hand, find the row echelon form, then count how many zero rows there are and subtract that from the number of rows \(n\).

\(\blacksquare \) To find the basis of the column space of \(A\), find the row echelon form and pick the columns with the pivots; these are the basis (the linearly independent columns of \(A\)).

\(\blacksquare \) For a symmetric matrix \(A\), its second norm (2-norm) is its spectral radius \(\rho (A)\), which is the largest eigenvalue of \(A\) in absolute value.

\(\blacksquare \) The eigenvalues of the inverse of a matrix \(A\) are the reciprocals of the eigenvalues of \(A\).

\(\blacksquare \) If matrix \(A\) is of order \(n\times n\) and it has \(n\) distinct eigenvalues, then it can be diagonalized as \(A=V\Lambda V^{-1}\), where

\[ \Lambda =\begin {pmatrix} \lambda _{1} & 0 & 0\\ 0 & \ddots & 0\\ 0 & 0 & \lambda _{n}\end {pmatrix} \]

and \(V\) is the matrix that has the \(n\) eigenvectors as its columns. (The form with \(e^{\lambda _{i}}\) on the diagonal arises for the matrix exponential \(e^{A}=Ve^{\Lambda }V^{-1}\), not for \(A\) itself.)
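A quick numerical check of this factorization (Python/NumPy sketch; the matrix is just an arbitrary example with distinct eigenvalues):

\begin{verbatim}
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])           # eigenvalues 5 and 2 (distinct)

eigenvalues, V = np.linalg.eig(A)    # columns of V are the eigenvectors
Lambda = np.diag(eigenvalues)

# reconstruct A = V Lambda V^{-1}
print(np.allclose(A, V @ Lambda @ np.linalg.inv(V)))   # True
\end{verbatim}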

\(\blacksquare \) \(\lim _{k\rightarrow \infty }\int _{x_{1}}^{x_{2}}f_{k}\left ( x\right ) dx=\int _{x_{1}}^{x_{2}}\lim _{k\rightarrow \infty }f_{k}\left ( x\right ) dx\) only if \(f_{k}\left ( x\right ) \) converges uniformly over \(\left [ x_{1},x_{2}\right ] \).

\(\blacksquare \) \(A^{3}=I\) has an infinite number of solutions \(A\). Think of \(A\) as a rotation by \(120^{\circ }\) about some straight line through the origin; three such rotations bring us back to where we started. Since the rotation axis can be any line, there are infinitely many solutions.

\(\blacksquare \) How to integrate \(I=\int \frac {\sqrt {x^{3}-1}}{x}\,dx\).

Let \(u=x^{3}-1\), then \(du=3x^{2}dx\) and the above becomes

\[ I=\int \frac {\sqrt {u}}{3x^{3}}\,du=\frac {1}{3}\int \frac {\sqrt {u}}{u+1}\,du \]

Now let \(u=\tan ^{2}v\) or \(\sqrt {u}=\tan v\), hence \(\frac {1}{2}\frac {1}{\sqrt {u}}du=\sec ^{2}v\,dv\) and the above becomes

\begin{align*} I & =\frac {1}{3}\int \frac {\sqrt {u}}{\tan ^{2}v+1}\left ( 2\sqrt {u}\sec ^{2}v\right ) \,dv\\ & =\frac {2}{3}\int \frac {u}{\tan ^{2}v+1}\sec ^{2}v\,dv\\ & =\frac {2}{3}\int \frac {\tan ^{2}v}{\tan ^{2}v+1}\sec ^{2}v\,dv \end{align*}

But \(\tan ^{2}v+1=\sec ^{2}v\), hence (using \(\tan ^{2}v=\sec ^{2}v-1\))

\begin{align*} I & =\frac {2}{3}\int \tan ^{2}v\,dv\\ & =\frac {2}{3}\left ( \tan v-v\right ) \end{align*}

Substituting back

\[ I=\frac {2}{3}\left ( \sqrt {u}-\arctan \left ( \sqrt {u}\right ) \right ) \]

Substituting back \(u=x^{3}-1\) gives

\[ I=\frac {2}{3}\left ( \sqrt {x^{3}-1}-\arctan \left ( \sqrt {x^{3}-1}\right ) \right ) \]
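As a check, differentiating the result recovers the integrand:

\begin{align*} \frac {d}{dx}\frac {2}{3}\left ( \sqrt {x^{3}-1}-\arctan \sqrt {x^{3}-1}\right ) & =\frac {2}{3}\left ( 1-\frac {1}{1+\left ( x^{3}-1\right ) }\right ) \frac {3x^{2}}{2\sqrt {x^{3}-1}}\\ & =\frac {x^{2}}{\sqrt {x^{3}-1}}\,\frac {x^{3}-1}{x^{3}}\\ & =\frac {\sqrt {x^{3}-1}}{x}\end{align*}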

\(\blacksquare \) (added Nov. 4, 2015) Made small diagram to help me remember long division terms used.

\(\blacksquare \) If a linear ODE is equidimensional, as in \(a_{n}x^{n}y^{(n)}+a_{n-1}x^{n-1}y^{(n-1)}+\dots \), for example \(x^{2}y^{\prime \prime }-2y=0\), then use the ansatz \(y=x^{r}\). This will give an equation in \(r\) only. Solve for \(r\) to obtain \(y_{1}=x^{r_{1}},y_{2}=x^{r_{2}}\) and the solution will be

\[ y=c_{1}y_{1}+c_{2}y_{2}\]

For example, for the above ODE the solution is \(c_{1}x^{2}+\frac {c_{2}}{x}\). This ansatz works only if the ODE is equidimensional, so we can’t use it on \(xy^{\prime \prime }+y=0\) for example.

If \(r\) is a repeated root, use \(x^{r},x^{r}\log (x),x^{r}(\log (x))^{2}\dots \) as solutions.

\(\blacksquare \) For \(x^{i}\), where \(i=\sqrt {-1}\), write \(x=e^{\log {x}}\) (for \(x>0\)), hence \(x^{i}=e^{i\,\log {x}}=\cos (\log {x})+i\,\sin (\log {x})\)

\(\blacksquare \) Some integral tricks: \(\int \sqrt {a^{2}-x^{2}}dx\) use \(x=a\sin \theta \). For \(\int \sqrt {a^{2}+x^{2}}dx\) use \(x=a\tan \theta \) and for \(\int \sqrt {x^{2}-a^{2}}dx\) use \(x=a\sec \theta \).

\(\blacksquare \) \(y^{\prime \prime }+x^{n}y=0\) is called Emden-Fowler form.

\(\blacksquare \) For second order ODE, boundary value problem, with eigenvalue (Sturm-Liouville), remember that having two boundary conditions is not enough to fully solve it.

One boundary condition is used to find the first constant of integration, and the second boundary condition is used to find the eigenvalues.

We still need another input to find the second constant of integration. This is normally done by giving an initial value. This situation arises when the eigenvalue problem is part of an initial value, boundary value problem. The point is, with boundary values and an eigenvalue also present, we need 3 inputs to fully solve it. Two boundary conditions are not enough.

\(\blacksquare \) If given the ODE \(y^{\prime \prime }\left ( x\right ) +p\left ( x\right ) y^{\prime }\left ( x\right ) +q\left ( x\right ) y\left ( x\right ) =0\) and we are asked to classify whether it is singular at \(x=\infty \), then let \(x=\frac {1}{t}\) and check what happens at \(t=0\). The \(\frac {d^{2}}{dx^{2}}\) operator becomes \(\left ( 2t^{3}\frac {d}{dt}+t^{4}\frac {d^{2}}{dt^{2}}\right ) \) and the \(\frac {d}{dx}\) operator becomes \(-t^{2}\frac {d}{dt}\). Now write the ODE with \(t\) as the independent variable and follow the standard procedure, i.e. look at \(\lim _{t\rightarrow 0}t\,p\left ( t\right ) \) and \(\lim _{t\rightarrow 0}t^{2}q\left ( t\right ) \), where \(p,q\) are now the coefficients of the transformed ODE, and see if these are finite or not. To see how the operators are mapped, always start with \(x=\frac {1}{t}\), then write \(\frac {d}{dx}=\frac {d}{dt}\frac {dt}{dx}\) and \(\frac {d^{2}}{dx^{2}}=\left ( \frac {d}{dx}\right ) \left ( \frac {d}{dx}\right ) \). For example, \(\frac {d}{dx}=-t^{2}\frac {d}{dt}\) and

\begin{align*} \frac {d^{2}}{dx^{2}} & =\left ( -t^{2}\frac {d}{dt}\right ) \left ( -t^{2}\frac {d}{dt}\right ) \\ & =-t^{2}\left ( -2t\frac {d}{dt}-t^{2}\frac {d^{2}}{dt^{2}}\right ) \\ & =\left ( 2t^{3}\frac {d}{dt}+t^{4}\frac {d^{2}}{dt^{2}}\right ) \end{align*}

Then the new ODE becomes

\begin{align*} \left ( 2t^{3}\frac {d}{dt}+t^{4}\frac {d^{2}}{dt^{2}}\right ) y\left ( t\right ) +p\left ( t\right ) \left ( -t^{2}\frac {d}{dt}y\left ( t\right ) \right ) +q\left ( t\right ) y\left ( t\right ) & =0\\ t^{4}\frac {d^{2}}{dt^{2}}y+\left ( -t^{2}p\left ( t\right ) +2t^{3}\right ) \frac {d}{dt}y+q\left ( t\right ) y & =0\\ \frac {d^{2}}{dt^{2}}y+\frac {\left ( -p\left ( t\right ) +2t\right ) }{t^{2}}\frac {d}{dt}y+\frac {q\left ( t\right ) }{t^{4}}y & =0 \end{align*}

The above is how the ODE will always look after the transformation. Remember to change \(p\left ( x\right ) \) to \(p\left ( t\right ) \) using \(x=\frac {1}{t}\), and the same for \(q\left ( x\right ) \). Now the new \(p\) is \(\frac {\left ( -p\left ( t\right ) +2t\right ) }{t^{2}}\) and the new \(q\) is \(\frac {q\left ( t\right ) }{t^{4}}\). Then check \(\lim _{t\rightarrow 0}t\frac {\left ( -p\left ( t\right ) +2t\right ) }{t^{2}}\) and \(\lim _{t\rightarrow 0}t^{2}\frac {q\left ( t\right ) }{t^{4}}\) as before.

\(\blacksquare \) If the ODE is \(a\left ( x\right ) y^{\prime \prime }+b\left ( x\right ) y^{\prime }+c\left ( x\right ) y=0\), with say \(0\leq x\leq 1\), and there is an essential singularity at either end, then use boundary layer or WKB. The boundary layer method works on non-linear ODE’s (and also on linear ODE’s), but only if the boundary layer is at an end of the domain, i.e. at \(x=0\) or \(x=1\).

The WKB method on the other hand works only on linear ODE’s, but the singularity can be anywhere (i.e. inside the domain). As a rule of thumb, if the ODE is linear, use WKB. If the ODE is non-linear, we must use boundary layer analysis.

Another difference is that with boundary layer analysis, we need to do a matching phase at the interface between the boundary layer and the outer layer in order to find the constants of integration. This can be tricky and is the hardest part of solving using boundary layers.

Using WKB, no matching phase is needed. We apply the boundary conditions to the whole solution obtained. See my HWs for NE 548 for problems solved from the Bender and Orszag textbook.

\(\blacksquare \) In numerical analysis, to find if a scheme will converge, check that it is stable and also that it is consistent.

It could also be conditionally stable, or unconditionally stable, or unstable.

Checking that it is consistent is the same as finding the LTE (local truncation error) and checking that as the time step and the space step both go to zero, the LTE goes to zero. What is the LTE? You take the scheme and plug the actual solution into it. An example is the best way to explain this part. Let's solve \(u_{t}=u_{xx}\). Using forward difference in time and centered difference in space, the numerical scheme (explicit) is

\[ U_{j}^{n+1}=U_{j}^{n}+\frac {k}{h^{2}}\left ( U_{j-1}^{n}-2U_{j}^{n}+U_{j+1}^{n}\right ) \]

The LTE is the difference between these two (error)

\[ LTE=U_{j}^{n+1}-\left ( U_{j}^{n}+\frac {k}{h^{2}}\left ( U_{j-1}^{n}-2U_{j}^{n}+U_{j+1}^{n}\right ) \right ) \]

Now plug in \(u\left ( t^{n},x_{j}\right ) \) in place of \(U_{j}^{n}\), \(u\left ( t^{n}+k,x_{j}\right ) \) in place of \(U_{j}^{n+1}\), \(u\left ( t^{n},x_{j}+h\right ) \) in place of \(U_{j+1}^{n}\), and \(u\left ( t^{n},x_{j}-h\right ) \) in place of \(U_{j-1}^{n}\) in the above. It becomes

\begin{equation} LTE=u\left ( t^{n}+k,x_{j}\right ) -\left ( u\left ( t^{n},x_{j}\right ) +\frac {k}{h^{2}}\left ( u\left ( t^{n},x_{j}-h\right ) -2u\left ( t^{n},x_{j}\right ) +u\left ( t^{n},x_{j}+h\right ) \right ) \right ) \tag {1}\end{equation}

Where in the above \(k\) is the time step (also written as \(\Delta t\)) and \(h\) is the space step size. Now comes the main trick. Expanding the term \(u\left ( t^{n}+k,x_{j}\right ) \) in Taylor,

\begin{equation} u\left ( t^{n}+k,x_{j}\right ) =u\left ( t^{n},x_{j}\right ) +k\left . \frac {\partial u}{\partial t}\right \vert _{t^{n}}+\frac {k^{2}}{2}\left . \frac {\partial ^{2}u}{\partial t^{2}}\right \vert _{t^{n}}+O\left ( k^{3}\right ) \tag {2}\end{equation}

And expanding

\begin{equation} u\left ( t^{n},x_{j}+h\right ) =u\left ( t^{n},x_{j}\right ) +h\left . \frac {\partial u}{\partial x}\right \vert _{x_{j}}+\frac {h^{2}}{2}\left . \frac {\partial ^{2}u}{\partial x^{2}}\right \vert _{x_{j}}+O\left ( h^{3}\right ) \tag {3}\end{equation}

And expanding

\begin{equation} u\left ( t^{n},x_{j}-h\right ) =u\left ( t^{n},x_{j}\right ) -h\left . \frac {\partial u}{\partial x}\right \vert _{x_{j}}+\frac {h^{2}}{2}\left . \frac {\partial ^{2}u}{\partial x^{2}}\right \vert _{x_{j}}-O\left ( h^{3}\right ) \tag {4}\end{equation}

Now plug (2,3,4) back into (1) and use \(u_{t}=u_{xx}\). After dividing by the time step \(k\) (the usual normalization of the LTE), many things drop out and we obtain

\[ LTE=O(k)+O\left ( h^{2}\right ) \]

Which says that \(LTE\rightarrow 0\) as \(h\rightarrow 0,k\rightarrow 0\). Hence the scheme is consistent.

To check that it is stable, use the Von Neumann method for stability. This checks that the solution at the next time step does not become larger than the solution at the current time step. There can be a condition for this, such as: the scheme is stable if \(k\leq \frac {h^{2}}{2}\). This says that using this scheme, it will be stable as long as the time step is no larger than \(\frac {h^{2}}{2}\). This makes the time step much smaller than the space step.
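A small numerical experiment (Python/NumPy sketch; the grid size and final time are arbitrary choices) showing the \(k\leq \frac {h^{2}}{2}\) condition for the explicit scheme above:

\begin{verbatim}
import numpy as np

def ftcs_heat(h, k, t_final=1.0):
    """Explicit FTCS scheme for u_t = u_xx on [0,1] with u=0 at both ends."""
    x = np.linspace(0.0, 1.0, int(round(1.0 / h)) + 1)
    u = np.sin(np.pi * x)                 # initial condition
    r = k / h**2
    for _ in range(int(t_final / k)):
        u[1:-1] += r * (u[:-2] - 2.0 * u[1:-1] + u[2:])
    return np.max(np.abs(u))

h = 0.05
print(ftcs_heat(h, k=0.4 * h**2))   # r < 1/2: stays bounded (decays)
print(ftcs_heat(h, k=0.6 * h**2))   # r > 1/2: grows without bound
\end{verbatim}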

\(\blacksquare \) For \(ax^{2}+bx+c=0\), with roots \(\alpha ,\beta \) then the relation between roots and coefficients is

\begin{align*} \alpha +\beta & =-\frac {b}{a}\\ \alpha \beta & =\frac {c}{a}\end{align*}

\(\blacksquare \) Leibniz rules for integration

\begin{align*} \frac {d}{dx}\int _{a\left ( x\right ) }^{b\left ( x\right ) }f\left ( t\right ) dt & =f\left ( b\left ( x\right ) \right ) b^{\prime }\left ( x\right ) -f\left ( a\left ( x\right ) \right ) a^{\prime }\left ( x\right ) \\ \frac {d}{dx}\int _{a\left ( x\right ) }^{b\left ( x\right ) }f\left ( t,x\right ) dt & =f\left ( b\left ( x\right ) ,x\right ) b^{\prime }\left ( x\right ) -f\left ( a\left ( x\right ) ,x\right ) a^{\prime }\left ( x\right ) +\int _{a\left ( x\right ) }^{b\left ( x\right ) }\frac {\partial }{\partial x}f\left ( t,x\right ) dt \end{align*}

\(\blacksquare \) \(\int _{a}^{b}f\left ( x\right ) dx=\int _{a}^{b}f\left ( a+b-x\right ) dx\)

\(\blacksquare \) Differentiable function implies continuous. But continuous does not imply differentiable. Example is \(\left \vert x\right \vert \) function.

\(\blacksquare \) Mean curvature being zero is a characteristic of minimal surfaces.

\(\blacksquare \) How to find the phase difference between 2 signals \(x_{1}(t),x_{2}(t)\)? One way is to find the DFT of both signals (in Mathematica this is Fourier, in Matlab fft()), then find the bin where the peak frequency is located (in either output), then find the phase difference between the 2 bins at that location. The value of the DFT at that bin is a complex number. Use Arg in Mathematica to find its phase. The difference gives the phase difference between the original signals in the time domain. See https://mathematica.stackexchange.com/questions/11046/how-to-find-the-phase-difference-of-two-sampled-sine-waves for an example.
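A minimal NumPy sketch of the same idea (the test frequency here lands exactly on an FFT bin, so there is no spectral leakage):

\begin{verbatim}
import numpy as np

fs = 1000.0                         # sampling frequency in Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
f0 = 50.0                           # common frequency of both signals
true_shift = np.pi / 3              # known phase difference to recover

x1 = np.sin(2 * np.pi * f0 * t)
x2 = np.sin(2 * np.pi * f0 * t + true_shift)

X1 = np.fft.rfft(x1)
X2 = np.fft.rfft(x2)

k = np.argmax(np.abs(X1))                       # bin of the peak frequency
phase_diff = np.angle(X2[k]) - np.angle(X1[k])  # difference of the two phases
print(phase_diff, "vs", true_shift)
\end{verbatim}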

\(\blacksquare \) Watch out when squaring both sides of an equation. For example, given \(y=\sqrt {x}\), squaring both sides gives \(y^{2}=x\). But this is only true for \(y\geq 0\). Why? Let us take the square root of this in order to get back to the original equation. This gives \(\sqrt {y^{2}}=\sqrt {x}\). And here is the problem: \(\sqrt {y^{2}}=y\) only for \(y\geq 0\). Why? Let us assume \(y=-1\). Then \(\sqrt {y^{2}}=\sqrt {\left ( -1\right ) ^{2}}=\sqrt {1}=1\), which is not \(-1\). So when squaring both sides of an equation, remember this condition.

\(\blacksquare \) Do not replace \(\sqrt {x^{2}}\) by \(x\), but by \(|x|\), since \(x=\sqrt {x^{2}}\) only for non-negative \(x\).

\(\blacksquare \) Given an equation, and we want to solve for \(x\). We can square both sides in order to get rid of a square root on one side if needed. But be careful: even though the new equation obtained after squaring both sides is still true, it can have extraneous solutions that do not satisfy the original equation. Here is an example I saw on the internet which illustrates this. Given \(\sqrt {x}=x-6\), and we want to solve for \(x\). Squaring both sides gives \(x=\left ( x-6\right ) ^{2}\). This has solutions \(x=9,x=4\). But only \(x=9\) is a valid solution of the original equation before squaring. The solution \(x=4\) is extraneous. So we need to check all solutions found after squaring against the original equation, and remove the extraneous ones. In summary, if \(a^{2}=b^{2}\) then this does not mean that \(a=b\). But if \(a=b\) then it means that \(a^{2}=b^{2}\). For example \(\left ( -5\right ) ^{2}=5^{2}\). But \(-5\neq 5\).

\(\blacksquare \) How to find Laplace transform of product of two functions?

There is no general formula for the Laplace transform of a product \(f\left ( t\right ) g\left ( t\right ) \). (But if this was a convolution, it would be a different story.) However, you can always try the definition and see if you can do the integration. Since \(\mathcal {L}\left ( f\left ( t\right ) \right ) =\int _{0}^{\infty }e^{-st}f\left ( t\right ) dt\) then \(\mathcal {L}\left ( f\left ( t\right ) g\left ( t\right ) \right ) =\int _{0}^{\infty }e^{-st}f\left ( t\right ) g\left ( t\right ) dt\). Hence for \(f\left ( t\right ) =e^{at},g\left ( t\right ) =t\) this becomes

\begin{align*}\mathcal {L}\left ( te^{at}\right ) & =\int _{0}^{\infty }e^{-st}te^{at}dt\\ & =\int _{0}^{\infty }te^{-t\left ( s-a\right ) }dt \end{align*}

Let \(s-a\equiv z\) then

\begin{align*}\mathcal {L}\left ( te^{at}\right ) & =\int _{0}^{\infty }te^{-tz}dt\\ & =\mathcal {L}_{z}\left ( t\right ) \\ & =\frac {1}{z^{2}}\\ & =\frac {1}{\left ( s-a\right ) ^{2}}\end{align*}

Similarly for \(f\left ( t\right ) =e^{at},g\left ( t\right ) =t^{2}\)

\begin{align*}\mathcal {L}\left ( t^{2}e^{at}\right ) & =\int _{0}^{\infty }e^{-st}t^{2}e^{at}dt\\ & =\int _{0}^{\infty }t^{2}e^{-t\left ( s-a\right ) }dt \end{align*}

Let \(s-a\equiv z\) then

\begin{align*}\mathcal {L}\left ( t^{2}e^{at}\right ) & =\int _{0}^{\infty }t^{2}e^{-tz}dt\\ & =\mathcal {L}_{z}\left ( t^{2}\right ) \\ & =\frac {2}{z^{3}}\\ & =\frac {2}{\left ( s-a\right ) ^{3}}\end{align*}

Similarly for \(f\left ( t\right ) =e^{at},g\left ( t\right ) =t^{3}\)

\begin{align*}\mathcal {L}\left ( t^{3}e^{at}\right ) & =\int _{0}^{\infty }e^{-st}t^{3}e^{at}dt\\ & =\int _{0}^{\infty }t^{3}e^{-t\left ( s-a\right ) }dt \end{align*}

Let \(s-a\equiv z\) then

\begin{align*}\mathcal {L}\left ( t^{3}e^{at}\right ) & =\int _{0}^{\infty }t^{3}e^{-tz}dt\\ & =\mathcal {L}_{z}\left ( t^{3}\right ) \\ & =\frac {6}{z^{4}}\\ & =\frac {6}{\left ( s-a\right ) ^{4}}\end{align*}

And so on. Hence we see that for \(f\left ( t\right ) =e^{at},g\left ( t\right ) =t^{n}\)

\[\mathcal {L}\left ( t^{n}e^{at}\right ) =\frac {n!}{\left ( s-a\right ) ^{n+1}}\]
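This result can be double checked with a CAS; for example, a quick SymPy sketch (assuming SymPy is available; shown here for \(n=3\)):

\begin{verbatim}
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)
n = 3
F = sp.laplace_transform(t**n * sp.exp(a*t), t, s, noconds=True)
print(sp.simplify(F))    # expect 6/(s - a)**4, i.e. n!/(s-a)^(n+1)
\end{verbatim}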