2.3 Math 121 A notes

  2.3.1 Chapter 1. Series
  2.3.2 Chapter 14. Complex functions
  2.3.3 Chapter 7. Fourier Series
  2.3.4 Chapter 15. Integral transforms (Laplace and Fourier transforms)
  2.3.5 Chapter 2. Complex Numbers
  2.3.6 Chapter 9. Calculus of variations

2.3.1 Chapter 1. Series

The sum of the first \(n\) terms is \(S_{n}=a+ar+ar^{2}+\cdots +ar^{n-1}=\frac {a\left (1-r^{n}\right ) }{1-r}\). Now, if \(\left \vert r\right \vert <1\), then as \(n\rightarrow \infty \) the series converges and the sum is \(S=\frac {a}{1-r}\). Always start by looking for a constant term, \(a\) here, and then the ratio that multiplies each term, \(r\) here.
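
A quick numerical check of this (my own sketch, not from the notes; \(a=3\), \(r=0.5\) are just example values):

import numpy as np

a, r = 3.0, 0.5                      # example values with |r| < 1
n = np.arange(0, 50)
partial_sums = np.cumsum(a * r**n)   # S_1, S_2, ..., S_50

print(partial_sums[-1])              # approaches a/(1-r)
print(a / (1 - r))                   # exact limit: 6.0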

2.3.2 Chapter 14. Complex functions

   2.3.2.1 How to find the residue?

2.3.2.1 How to find the residue?

For a simple pole at \(z_{0}\), the residue is \(\lim _{z\rightarrow z_{0}}\left (z-z_{0}\right ) f(z) \); for higher order poles and Laurent series methods, see book page 598.
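
A small sympy sketch (my own example, not from the book) for checking a residue, here for \(f(z)=\frac {1}{z(z-1)^{2}}\) at its poles:

import sympy as sp

z = sp.symbols('z')
f = 1 / (z * (z - 1)**2)

print(sp.residue(f, z, 0))    # residue at the simple pole z = 0  ->  1
print(sp.residue(f, z, 1))    # residue at the double pole z = 1  -> -1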

2.3.3 Chapter 7. Fourier Series

   2.3.3.1 Parseval’s theorem for fourier series

Expand a periodic function (must be periodic) in sin and cos functions.

Let the function's angular frequency (angular velocity) be \(\omega \), defined as the angle (in radians) swept per second, i.e. \(\omega =\frac {2\pi }{T}\) where \(T\) is the period in time, i.e. the time needed to sweep through an angle of \(2\pi \).

\begin {align*} f\relax (x) & =\frac {1}{2}a_{0}+a_{1}\cos \omega x+a_{2}\cos 2\omega x+\\ & \cdots +b_{1}\sin \omega x+b_{2}\sin 2\omega x+\cdots \end {align*}

So, for a function whose period is \(2\pi \), i.e. \(\omega =1\), the above can be written as

\begin {align*} f\relax (x) & =\frac {1}{2}a_{0}+a_{1}\cos x+a_{2}\cos 2x+\\ & \cdots +b_{1}\sin x+b_{2}\sin 2x+\cdots \end {align*}

Now, to find \(a_{n}\) and \(b_{n}\)

\begin {align*} a_{n} & =\frac {2}{T}\int _{T}f\relax (x) \cos \omega nx\ dx\\ b_{n} & =\frac {2}{T}\int _{T}f\relax (x) \sin \omega nx\ dx \end {align*}

So, I only need to remember ONE formula

note: Remember, when finding \(a_{n}\), handle \(a_{0}\) separately: set \(n=0\) in the integral first and then integrate; do not set \(n=0\) in the result obtained for \(n\neq 0\). For \(b_{n}\) we do not need to worry about this, since the \(\sin \) series starts at \(n=1\)

note: When will this expansion converge to \(f\left (x\right ) \)? When the function meets the Dirichlet conditions. Basically it needs to be periodic (of period \(2\pi \) here), single valued, and have a finite number of jumps. At a jump, the series converges to the average of the function there.

In these kinds of problems, we are given a function \(f(x) \) and asked to find its F. series. So we need to apply the above formulas to find the coefficients, and we need to know some tricks for quickly evaluating the integrals.
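
As a small sketch (my own example, not from the book), here is how sympy would grind out the coefficients for \(f(x)=x\) on \(\left (-\pi ,\pi \right ) \) (period \(T=2\pi \), so \(\omega =1\)):

import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', integer=True, positive=True)
f = x
T = 2*sp.pi

a0 = (2/T) * sp.integrate(f, (x, -sp.pi, sp.pi))                # = 0
an = (2/T) * sp.integrate(f*sp.cos(n*x), (x, -sp.pi, sp.pi))    # = 0 (f is odd)
bn = (2/T) * sp.integrate(f*sp.sin(n*x), (x, -sp.pi, sp.pi))    # = 2*(-1)**(n+1)/n, i.e. b1=2, b2=-1, ...

print(sp.simplify(a0), sp.simplify(an), sp.simplify(bn))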

Now there is a complex form of all the above equations.

\[ f(x) =c_{0}+c_{1}e^{ix}+c_{-1}e^{-ix}+c_{2}e^{2ix}+c_{-2}e^{-2ix}+\cdots ={\sum \limits _{n=-\infty }^{\infty }}c_{n}e^{inx}\]

\[ c_{n}=\frac {1}{T}\int _{T}f\relax (x) \ e^{-inx\omega }\ dx \]

Now, \(\omega \), is the angular velocity. i.e. \(\theta =\omega t\), so for ONE period \(T\), \(\theta =2\pi \), hence \(\omega =\frac {2\pi }{T}\), so \(c_{n}\) can be written as

\[ c_{n}=\frac {1}{T}\int _{T}f\relax (x) \ e^{-inx\frac {2\pi }{T}}\ dx \]

Notice that in this chapter we use distance for the period (i.e. the wavelength \(\lambda \)) instead of time as the period \(T\). It does not matter, they play the same role; choose one. i.e. we can say that the function repeats every \(\lambda \) in distance, or that it repeats every one period \(T\) in time.

When using distance for the period, over an interval say \(\left (-l,l\right ) \) or \(\left (-\pi ,\pi \right ) \), the above equation becomes

\[ c_{n}=\frac {1}{2l}\int _{-l}^{l}f\relax (x) \ e^{-inx\frac {2\pi }{2l}}\ dx=\frac {1}{2l}\int _{-l}^{l}f\relax (x) \ e^{-inx\frac {\pi }{l}}\ dx \]

 

note: The above integral for \(c_{n}\) is for negative \(n\) as well as positive \(n\). In the non-complex (sine/cosine) expansion, there is no negative \(n\), only positive.

note: \(c_{-n}=\bar {c}_{n}\) (when \(f(x) \) is real)

note: there is a relation between the \(a_{n},b_{n}\), and the \(c_{n}\) which is

\(a_{n}=c_{-n}+c_{n}\) and \(b_{n}=i\left (-c_{-n}+c_{n}\right ) \)

 

IF given \(f(x) \) defined over \(\left (0,L\right ) \), the algorithm to find the Fourier series is this (a small sketch follows the algorithm):

IF asked to find a(n) i.e. the COSINE series, THEN
extend f(x) so that it is EVEN (this makes b(n)=0)
and the period is now 2L
ELSE
IF asked to find b(n), i.e. the SINE series, THEN
extend f(x) to be ODD (this makes a(n)=0)
and the period is now 2L
ELSE we want the standard SINE/COSINE series
the period remains L, and use the c(n) formula
(and remember to do the c(0) separately for the DC term)
END IF
END IF
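
A small sketch of the odd-extension case (my own example, not from the notes): for \(f(x)=x\) on \(\left (0,1\right ) \), extend it to be odd so the period is \(2L=2\) and only the \(b_{n}\) survive:

import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', integer=True, positive=True)

# odd extension on (-1, 1): a(n) = 0, and b(n) = (2/L) * integral over the half range (0, L), L = 1
bn = 2 * sp.integrate(x*sp.sin(n*sp.pi*x), (x, 0, 1))

print(sp.simplify(bn))    # 2*(-1)**(n+1)/(n*pi)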

2.3.3.1 Parseval’s theorem for fourier series

This theorem gives a relation between the average of the square of \(f\left (x\right ) \) over a period and the Fourier coefficients. Physically, it says this:

the total energy of a wave is the sum of the energies of the individual harmonics it carries

Average of \(\left [ f\relax (x) \right ] ^{2}=\left (\frac {1}{2}a_{0}\right ) ^{2}+\frac {1}{2}\sum _{1}^{\infty }a_{n}^{2}+\frac {1}{2}\sum _{1}^{\infty }b_{n}^{2}\) over ONE period.

In complex form, the average of \(\left \vert f(x) \right \vert ^{2}\) over one period \(=\sum _{-\infty }^{\infty }\left \vert c_{n}\right \vert ^{2}\). Think of this like the Pythagorean theorem.

For example, given \(f(x) =x\) on \(\left (-1,1\right ) \), the average of \(\left [ f(x) \right ] ^{2}\) is \(\frac {1}{2}\int _{-1}^{1}x^{2}dx=\frac {1}{3}\), so \(\frac {1}{3}=\sum _{-\infty }^{\infty }\left \vert c_{n}\right \vert ^{2}\)

In the above we used the standard formula for average of a function, which is

average of \(f(x) =\frac {1}{T}\int _{T}f(x) \ dx\); here we need to square \(f(x) \) first.
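
A quick numerical check of the complex-form statement for the example above (my own sketch, not from the book):

import numpy as np

# f(x) = x on (-1, 1), so l = 1 and the period is 2l = 2
l = 1.0
x = np.linspace(-l, l, 40001)
f = x

def cn(n):
    # c_n = (1/2l) * integral of f(x) exp(-i n pi x / l) over one period
    return np.trapz(f*np.exp(-1j*n*np.pi*x/l), x) / (2*l)

parseval_sum = sum(abs(cn(n))**2 for n in range(-500, 501))
avg_f_squared = np.trapz(f**2, x) / (2*l)

print(parseval_sum, avg_f_squared)   # both close to 1/3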

2.3.4 Chapter 15. Integral transforms (Laplace and Fourier transforms)

   2.3.4.1 Laplace and Fourier transforms definitions
   2.3.4.2 Inverse Fourier and Laplace transform formulas
   2.3.4.3 Using Laplace transform to solve ODE
   2.3.4.4 Partial fraction decomposition
   2.3.4.5 convolution
   2.3.4.6 Parseval’s theorem
   2.3.4.7 Dirac delta and Green function for solving ODE

2.3.4.1 Laplace and Fourier transforms definitions

\begin {align*} L\ f(t) & =F(p) =\int _{0}^{\infty }f\left ( t\right ) \ e^{-pt}\ dt\text {\ \ \ \ \ \ \ \ \ \ }p>0\text {\ \ \ \ \ (Laplace)}\\ F\ f(x) & =g\left (\alpha \right ) =\frac {1}{2\pi }\int _{-\infty }^{\infty }f(x) \ e^{-i\alpha x}\ dx\text {\ \ \ \ \ (Fourier)} \end {align*}

Associate Fourier with \(\frac {1}{2\pi }\). (mind pic: Fourier=Fraction i.e.\(\rightarrow \frac {1}{2\pi }\)) and Fourier goes from \(-\infty \) to \(+\infty \) (mind pic: Fourier=whole Floor), Fourier imaginary exponent, Laplace real exponent.

Note: the Laplace transform is a linear operator, hence \(L\left [ f\left ( t\right ) +g(t) \right ] =Lf(t) +Lg\left ( t\right ) \) and \(L\left [ c\ f(t) \right ] =c\ Lf\left ( t\right ) \)

2.3.4.2 Inverse Fourier and Laplace transform formulas

(We do not really use the inverse Laplace formula directly (called Bromwich integral), we find inverse Laplace using other methods, see below)

\begin {align*} f(t) & =\frac {1}{2\pi i}\int _{c-i\ \infty }^{c+i\ \infty }F(z) \ e^{zt}\ dz\text {\ \ \ \ \ \ \ \ \ \ }t>0\ \ \ \ \ \ \ \text {Inverse Laplace}\\ \ f(x) & =\frac {1}{2\pi }\int _{-\infty }^{\infty }g\left ( \alpha \right ) \ e^{i\alpha x}\ d\alpha \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text {Inverse Fourier} \end {align*}

The Fourier transform has two other siblings (which Laplace does not): the sine and cosine transforms and their inverses. I'll add these later, but I do not think we will get these in the exam.

Note: To get the inverse Laplace transform the main methods are

  1. using partial fractions to break the expression to smaller ones we can lookup in tables
  2. Use Convolution. i.e. given \(Y=L\left (g\right ) \ L\left ( f\right ) \rightarrow y=\int _{0}^{t}g\left (t-\tau \right ) \ f\left ( \tau \right ) \ d\tau =g\otimes f\); use this as an alternative to partial fraction decomposition if easier. mind pic: \(t\) one time, \(\tau \) 2 times.
  3. Use the above integral (Bromwich) directly (hardly done)
  4. To find \(f\relax (t) \) from the Laplace transform, instead of using the above formula, we can write

    \(f(t) =\) sum of residues of \(F(z) e^{zt}\) at all poles. For example, given \(F(z) \), we multiply it by \(e^{zt}\), then find all the poles of the resulting function (i.e. the zeros of the denominator), and then add the residues at those poles (see the sketch after this list).
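
A small sympy sketch of method 4 (my own example, not from the book): take \(F(p)=\frac {1}{(p+1)(p+2)}\), multiply by \(e^{pt}\), and add the residues at the poles \(p=-1\) and \(p=-2\):

import sympy as sp

t = sp.symbols('t', positive=True)
p = sp.symbols('p')

F = 1 / ((p + 1)*(p + 2))

# f(t) = sum of residues of F(p) e^{p t} at all poles of F
f = sp.residue(F*sp.exp(p*t), p, -1) + sp.residue(F*sp.exp(p*t), p, -2)
print(sp.simplify(f))                          # exp(-t) - exp(-2*t)

# cross-check against sympy's own inverse transform
print(sp.inverse_laplace_transform(F, p, t))   # exp(-t) - exp(-2*t) (a Heaviside(t) factor may appear)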

Note: To find the Fourier transform \(g\left (\alpha \right ) \), we must carry out the integration (i.e. apply the integral directly; no tables like with Laplace).

Note: we use the Laplace transform as a technique to solve ODEs. Why do we need Fourier methods? To represent an arbitrary function as a sum of sine/cosine functions (for the Fourier series the function must be periodic, or be extended to be periodic if it is not). And why do we do this? To make it easier to analyze the function and find what frequency components it has. For a non-periodic function (a continuous range of frequencies), use the Fourier transform (integral).

note: The function must satisfy the Dirichlet conditions to have a Fourier transform or Fourier series.

note: The Fourier series expansion of a function fits the function more accurately as more terms are added. But at places where there is a jump, it converges to the average value of the function at the jump.

question: When do we use fourier series, and when to use fourier transform? Why do we need F. transform if we can use F. Series? We use F. transform for continuous frequencies. What does this really mean?

2.3.4.3 Using Laplace transform to solve ODE

Remember

\begin {align*} L\relax (y) & =Y\\ L\left (y^{\prime }\right ) & =pY-y_{0}\\ L\left (y^{\prime \prime }\right ) & =p^{2}Y-py_{0}-y_{0}^{\prime } \end {align*}

note: \(p\) has same power as order of derivative. do not mix up where the \(p\) goes in the \(y^{\prime \prime }\) equation. remember the \(y_{0}^{\prime }\) has no \(p\) with it. mind pic: think of the \(y_{0}\) as the senior guy since coming from before so it is the one who gets the \(p\).

note: if \(y_{0}=y_{0}^{\prime }=0\,\ \)(which most HW problem was of this sort), then the above simplifies to

\begin {align*} L\left (y^{\prime }\right ) & =pY\ \\ L\left (y^{\prime \prime }\right ) & =p^{2}Y\ \end {align*}

So given an ODE such as \(y^{\prime \prime }+4y^{\prime }+y=f(t) \rightarrow \left (p^{2}+4p+1\right ) Y=L(f(t) )\)

i.e. just replace \(y^{\prime \prime }\) by \(p^{2}\), etc. This saves lots of time in exams. Now we get an equation with \(Y\) in terms of \(p\); solve it for \(Y\), then find \(y(t) \) from \(Y\) using tables. Notice that solving the ODE this way gives a particular solution, since we used the initial conditions already.

For an ODE such as

\[ Ay^{\prime \prime }+By^{\prime }+Cy=h\relax (t) \]

its Laplace transform can be written immediately as

\begin {align*} Ap^{2}Y+BpY+CY & =L\ h\relax (t) \\ Y & =\frac {L\ h\relax (t) }{Ap^{2}+Bp+C} \end {align*}

whenever the B.C. are \(y_{0}^{\prime }=0\) and \(y_{0}=0\)
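
A small sympy sketch of this recipe (my own example, not from the notes): solve \(y^{\prime \prime }+3y^{\prime }+2y=e^{-3t}\) with \(y_{0}=y_{0}^{\prime }=0\).

import sympy as sp

t = sp.symbols('t', positive=True)
p = sp.symbols('p')

h = sp.exp(-3*t)                                   # forcing function h(t)
H = sp.laplace_transform(h, t, p, noconds=True)    # L h(t) = 1/(p+3)

Y = H / (p**2 + 3*p + 2)                           # (Ap^2 + Bp + C) Y = L h(t)
y = sp.inverse_laplace_transform(Y, p, t)

print(sp.simplify(y))   # exp(-t)/2 - exp(-2*t) + exp(-3*t)/2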

2.3.4.4 Partial fraction decomposition

When the denominator is linear times quadratic, or quadratic times quadratic, PFD is probably needed.

This is how to do PFD for common cases

\begin {align*} \frac {1}{\left (x+c\right ) \left (x^{2}+x+6\right ) } & =\frac {A}{\left ( x+c\right ) }+\frac {Bx+C}{\left (x^{2}+x+6\right ) }\text { \ \ (quadratic in denominator case)}\\ & \\ \frac {1}{\left (x^{2}+3x+4\right ) \left (x^{2}+x+6\right ) } & =\frac {Ax+B}{\left (x^{2}+3x+4\right ) }+\frac {Cx+D}{\left (x^{2}+x+6\right ) }\text { \ \ (quadratic in denominator case)}\\ & \\ \frac {1}{\left (x+c\right ) \left (x+d\right ) } & =\frac {A}{\left ( x+c\right ) }+\frac {B\ }{\left (x+d\right ) }\\ & \\ \frac {x^{2}+\ x+b}{\left (x+c\right ) \left (x-d\right ) ^{2}} & =\frac {A}{\left (x+c\right ) }+\frac {B\ }{\left (x-d\right ) }+\frac {C\ }{\left ( x-d\right ) ^{2}}\ \ \ \ \ \text {(repeated roots case)} \end {align*}

we get some equations which we solve for \(A,B,\) etc... This part can be time consuming in exam.
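
To check a decomposition quickly (or to see what form it should take), sympy's apart does PFD; a small sketch with a made-up example:

import sympy as sp

x = sp.symbols('x')

expr = 1 / ((x + 1)*(x**2 + x + 6))
print(sp.apart(expr, x))    # 1/(6*(x + 1)) - x/(6*(x**2 + x + 6)), i.e. the A/(x+c) + (Bx+C)/quadratic form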

2.3.4.5 convolution

The main use of convolution in this class is to find the inverse Laplace transform.

If we are given the transform itself (i.e. the frequency domain function) and asked to find the inverse, i.e. the time domain function, then look at the function given: if it is made of 2 functions multiplied by each other, there is a good chance we use convolution.


Example:

Given this equation

\[ Y\relax (p) =G\relax (p) \ \ H\relax (p) \]

We first find the inverse of \(G(p) \) and of \(H(p) \) separately, i.e. we find \(g(t) \) and \(h(t) \); we usually do this by looking up tables. Once we do this step, the next step is to take the convolution of these 2 time domain functions.

The result will be \(y(t) \), i.e. the inverse of \(Y\left ( p\right ) .\)

Notice that you can NOT just say \(y(t) =g\left ( t\right ) \ h(t) \), DO NOT DO THIS. Instead we must use convolution to find \(y(t) \):

\begin {align*} y\relax (t) & =\ g\relax (t) \circledast h\relax (t) \\ y\relax (t) & =\int _{0}^{t}g\left (\tau \right ) \ h\left ( t-\tau \right ) \ d\tau \\ & =\int _{0}^{t}g\left (t-\tau \right ) \ h\left (\tau \right ) \ d\tau \end {align*}

Notice, choose the simpler function to put the \(\left (t-\tau \right ) \) in. It does not matter if it is the \(g\) or the \(h\). Remember, the \(\tau \) occurs 2 times in the integral, the \(t\) one time.

The above means

\begin {align*} \mathcal {L}\ y & =\mathcal {L}\ g\relax (t) \ \mathcal {L}\ h\relax (t) =\mathcal {L}\left [ \ g\relax (t) \ \circledast h\left (t\ \right ) \right ] \ \ \\ y & =g\relax (t) \ \circledast h\left (t\ \right ) \end {align*}

The above comes up when we want to solve an ODE. Usually we know \(g\left ( t\right ) \), which comes from the transfer function, and \(h(t) \) is given (the forcing function of the ODE).
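
A small sympy sketch (my own example, not from the notes): take \(G(p)=\frac {1}{p}\) and \(H(p)=\frac {1}{p^{2}+1}\), so from the tables \(g(t)=1\) and \(h(t)=\sin t\), and do the convolution integral:

import sympy as sp

t, tau, p = sp.symbols('t tau p', positive=True)

# g(t) = 1 is the inverse of G(p) = 1/p, h(t) = sin(t) is the inverse of H(p) = 1/(p^2+1)
# y(t) = integral_0^t g(t - tau) h(tau) d tau, and g(t - tau) = 1 here
y = sp.integrate(sp.sin(tau), (tau, 0, t))
print(sp.simplify(y))                                          # 1 - cos(t)

# cross-check against the inverse of Y(p) = G(p) H(p) = 1/(p(p^2+1))
print(sp.inverse_laplace_transform(1/(p*(p**2 + 1)), p, t))    # 1 - cos(t)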

For Fourier transform, convolution can be used as well. it is very similar equation:

\[ F\ \left (g\relax (t) \right ) \ \ \ F\ \left (h\relax (t) \right ) =\frac {1}{2\pi }\ \mathcal {F}\left [ \ g\relax (t) \ \circledast h\left (t\ \right ) \right ] \ \]

So difference is the \(\frac {1}{2\pi }\)

2.3.4.6 Parseval’s theorem

(the total energy in a signal equals the sum of the energies in the harmonics that make up the signal).

\[ \int _{-\infty }^{\infty }\left \vert g\left (\alpha \right ) \right \vert ^{2}\ d\alpha =\frac {1}{2\pi }\int _{-\infty }^{\infty }\left \vert f\left ( x\right ) \right \vert ^{2}\ dx \]
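
A quick numerical check (my own sketch, not from the notes) using \(f(x)=e^{-x^{2}/2}\) and the \(\frac {1}{2\pi }\) transform convention defined above:

import numpy as np

x = np.linspace(-10, 10, 4001)
f = np.exp(-x**2/2)

alpha = np.linspace(-10, 10, 2001)
# g(alpha) = (1/2pi) * integral of f(x) exp(-i alpha x) dx
g = np.array([np.trapz(f*np.exp(-1j*a*x), x) for a in alpha]) / (2*np.pi)

lhs = np.trapz(np.abs(g)**2, alpha)            # integral of |g(alpha)|^2 d alpha
rhs = np.trapz(np.abs(f)**2, x) / (2*np.pi)    # (1/2pi) * integral of |f(x)|^2 dx

print(lhs, rhs)     # both approximately 0.2821 = sqrt(pi)/(2 pi)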

2.3.4.7 Dirac delta and Green function for solving ODE

The Dirac delta function \(\delta \left (t-t_{0}\right ) \) is a function of \(t\) which has an area of 1, zero width, and \(\infty \) value at \(t=t_{0}\) (not a real function). It is used to represent an impulse force applied at \(t_{0}.\)

When multiplied with any other function inside an integral, it gives the value of that other function at the time the impulse was applied, i.e. \(\int f\left ( t\right ) \ \delta \left (t-t_{0}\right ) \ dt=f\left (t_{0}\right ) \), where \(t_{0}\) is the time the impulse is applied.

note: Fourier transform of the delta function: \(g\left (\alpha \right ) =\frac {1}{2\pi }\int _{-\infty }^{\infty }\delta \left (x-x_{0}\right ) e^{-i\alpha x}\ dx=\frac {1}{2\pi }e^{-i\alpha x_{0}}\)

note: Green function \(G\left (t,t^{\prime }\right ) \) is the response of a system (solution of an ODE) when the force (input) is an impulse at time \(t=t^{\prime }\)

How to use the Green function to solve an ODE? Given \(G\left (t,t^{\prime }\right ) \), \(y(t) =\int _{0}^{\infty }G\left (t,t^{\prime }\right ) \ \ f\left (t^{\prime }\right ) \ dt^{\prime }\), where \(f\left ( t\right ) \) is the force on the system (the RHS of the ODE). Usually we are given the Green function and asked to solve the ODE, so we just need to apply the above integral.

Question: ask about if the above is correct for the finals or is it possible we need to find G as well?

Solving an ODE using the Green function method: here we are given an ODE with a forcing function (i.e. a nonhomogeneous ODE), given 2 solutions of the homogeneous equation, and asked to find the particular solution.

Example, \(y^{\prime \prime }-y=f(t) \) and the homogeneous solutions are \(y_{1},y_{2}\); then the particular solution is \(y_{p}=y_{2}\int \frac {y_{1}\ f(t) }{W}dt-y_{1}\int \frac {y_{2}\ f(t) }{W}dt\) where \(W=\begin {vmatrix} y_{1} & y_{2}\\ y_{1}^{\prime } & y_{2}^{\prime }\end {vmatrix} =y_{1}y_{2}^{\prime }-y_{2}y_{1}^{\prime }\)
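
A small sympy check of this formula (my own example, not from the notes): for \(y^{\prime \prime }-y=t\), take \(y_{1}=e^{t}\), \(y_{2}=e^{-t}\):

import sympy as sp

t = sp.symbols('t')

y1, y2 = sp.exp(t), sp.exp(-t)                # homogeneous solutions of y'' - y = 0
f = t                                         # forcing function
W = y1*sp.diff(y2, t) - y2*sp.diff(y1, t)     # Wronskian, here = -2

yp = y2*sp.integrate(y1*f/W, t) - y1*sp.integrate(y2*f/W, t)
yp = sp.simplify(yp)
print(yp)                                     # -t

print(sp.simplify(sp.diff(yp, t, 2) - yp))    # t, so yp really solves y'' - y = t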

2.3.5 Chapter 2. Complex Numbers

note: When given a problem such as evaluate \(\left (-2-2i\right ) ^{\frac {1}{5}}\), always start by finding the length of the complex number, then extract it out before converting to the \(re^{i\theta }\) form. For example, \(-2-2i=2\sqrt {2}\left (\frac {-1}{\sqrt {2}}-\frac {i}{\sqrt {2}}\right ) \,\), the reason is that now the stuff inside the brackets has length ONE. So we now get \(2\sqrt {2}\left ( \frac {-1}{\sqrt {2}}-\frac {i}{\sqrt {2}}\right ) =2\sqrt {2}e^{-\frac {3}{4}\pi i}\) and only now apply the raising to the power to get \(\left (2\sqrt {2}e^{-\frac {3}{4}\pi i}\right ) ^{\frac {1}{5}}=2^{\frac {3}{10}}e^{\frac {\left (-\frac {3}{4}\pi +2n\pi \right ) i}{5}}\) for \(n=0,1,2,3,4\). Make sure not to forget the \(2n\pi \), I seem to forget that.
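
A quick numerical check of this example (my own sketch): compute the five roots and verify that each one raised to the fifth power gives back \(-2-2i\):

import numpy as np

z = -2 - 2j
r = abs(z)                     # 2*sqrt(2)
theta = np.angle(z)            # -3*pi/4

n = np.arange(5)
roots = r**(1/5) * np.exp(1j*(theta + 2*np.pi*n)/5)

print(roots**5)                # each entry is (about) -2-2j
print(r**(1/5), 2**0.3)        # both 1.2311..., i.e. 2^(3/10)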

2.3.6 Chapter 9. Calculus of variations

   2.3.6.1 Euler equation
   2.3.6.2 Lagrange equations
   2.3.6.3 Solving Euler-Lagrange with constraints

2.3.6.1 Euler equation

How to construct Euler equation \(\frac {d}{dx}\left (\frac {\partial F}{\partial y^{\prime }}\right ) -\frac {\partial F}{\partial y}=0\). If integrand does not depend on \(x\) then change to \(y\). Example \(\int _{x_{2}}^{x_{1}}y^{\prime 2}\ y\ dx\rightarrow \int _{y_{2}}^{y_{1}}\frac {1}{x^{\prime 2}}\ y\ \left (x^{\prime }\ dy\right ) \rightarrow \int _{y_{2}}^{y_{1}}\frac {1}{x^{\prime }}\ y\ \ dy\)  this is done by making the substitution \(y^{\prime }=\frac {1}{x^{\prime }}\) and \(dx=x^{\prime }\ dy\). Now Euler equation changes from \(\frac {d}{dx}\left (\frac {\partial F}{\partial y^{\prime }}\right ) -\frac {\partial F}{\partial y}=0\) to \(\frac {d}{dy}\left ( \frac {\partial F}{\partial x^{\prime }}\right ) -\frac {\partial F}{\partial x}=0\).

Normally, \(\frac {\partial F}{\partial y}\) will be zero. Hence we end up with \(\frac {d}{dx}\left (\frac {\partial F}{\partial y^{\prime }}\right ) =0\,\) and this means \(\frac {\partial F}{\partial y^{\prime }}=c\), and so we only need to do ONE integral (i.e. solve a first order ODE). If I find myself with a second order ODE (for this course!), I have done something wrong, since all problems we had are of this sort.

2.3.6.2 Lagrange equations

are just Euler equations, but one for each dimension.

\(F\) is now called \(L\), where\(\ L=T-V\) with \(T=K.E.\) and \(V=P.E.\); \(\ T=\frac {1}{2}mv^{2}\), \(\ V=mgh\)

So given a problem, need to construct \(L\) ourselves. Then solve the Euler-Lagrange equations

\begin {align*} \frac {d}{dt}\left (\frac {\partial L}{\partial \dot {x}}\right ) -\frac {\partial L}{\partial x} & =0\\ \frac {d}{dt}\left (\frac {\partial L}{\partial \dot {y}}\right ) -\frac {\partial L}{\partial y} & =0\\ \frac {d}{dt}\left (\frac {\partial L}{\partial \dot {z}}\right ) -\frac {\partial L}{\partial z} & =0 \end {align*}

The tricky part is finding \(v^{2}\) for different coordinates. This is easy if you know \(ds^{2}\), so just remember those

\begin {align*} ds^{2} & =dr^{2}+r^{2}d\theta ^{2}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text {(polar)}\\ ds^{2} & =dr^{2}+r^{2}d\theta ^{2}+dz^{2}\ \ \ \ \ \ \ \ \ \ \text {(cylindrical)}\\ ds^{2} & =dr^{2}+r^{2}d\theta ^{2}+r^{2}\sin ^{2}\theta d\phi ^{2}\ \ \ \ \text {(spherical)} \end {align*}

So to find \(v^{2}\) just divide by \(dt^{2}\) and it follows right away the following

\begin {align*} v^{2} & =\dot {r}^{2}+r^{2}\dot {\theta }^{2}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text {(polar)}\\ v^{2} & =\dot {r}^{2}+r^{2}\dot {\theta }^{2}+\dot {z}^{2}\ \ \ \ \ \ \ \ \ \ \text {(cylindrical)}\\ v^{2} & =\dot {r}^{2}+r^{2}\dot {\theta }^{2}+r^{2}\sin ^{2}\theta \ \dot {\phi }^{2}\ \ \ \ \text {(spherical)} \end {align*}

To help remember these: note that \(ds^{2}\) starts with \(dr^{2}+r^{2}d\theta ^{2} \) for each coordinate system, so we just need to remember the third terms (think of polar as a subset of the other two). Also see that each variable is squared. So the only hard thing is to remember the last term for the spherical case.

Remember that in a system with several particles, we need to find the KE and PE of each particle and then sum these to get the KE and PE of the whole system; this gives one \(L\) for the whole system before we start using the Lagrange equations.
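
A small sympy sketch (my own example, not from the notes) of turning the crank on one Lagrange equation, for a simple pendulum of length \(l\) with generalized coordinate \(\theta \) (so \(v^{2}=l^{2}\dot {\theta }^{2}\) and \(h=-l\cos \theta \)):

import sympy as sp

t = sp.symbols('t')
m, g, l = sp.symbols('m g l', positive=True)
theta = sp.Function('theta')(t)
thetadot = theta.diff(t)

T = sp.Rational(1, 2)*m*l**2*thetadot**2      # kinetic energy
V = -m*g*l*sp.cos(theta)                      # potential energy, h = -l cos(theta)
L = T - V

# Euler-Lagrange equation: d/dt(dL/d(thetadot)) - dL/dtheta = 0
eq = sp.diff(sp.diff(L, thetadot), t) - sp.diff(L, theta)
print(sp.simplify(eq))    # m*l^2*theta'' + m*g*l*sin(theta), i.e. theta'' = -(g/l) sin(theta)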

2.3.6.3 Solving Euler-Lagrange with constraints

The last thing to know in this chapter is how to solve constraint problems. This is just like solving the Euler equation, except now we have an additional integral to deal with.

So in these problems we are given 2 integrals instead of one. One of these will be equal to some number say \(l\).

So we need to minimize \(I=\int _{x_{2}}^{x_{1}}F(x,y^{\prime },y)\ dx\)  subject to constraint that \(g=\int _{x_{2}}^{x_{1}}G(x,y^{\prime },y)\ dx=l\)

Follow the same method as Euler, but now we write

\[ \frac {d}{dx}\left (\frac {\partial }{\partial y^{\prime }}\left (F+\lambda G\right ) \right ) -\frac {\partial }{\partial y}\left (F+\lambda G\right ) =0 \]

So replace \(F\) by \(F+\lambda G\)

This will give us an equation with 3 unknowns: 2 integration constants and \(\lambda \). We solve for these given the boundary conditions and \(l\), but we do not have to do this; we just need to derive the equations themselves.

Some integrals useful to know in solving the final integrals for the Euler problems are these

\begin {align*} \int \frac {c}{\sqrt {y^{2}-c^{2}}}dy & =c\cosh ^{-1}\left (\frac {y}{c}\right ) +k\\ & \\ \int \frac {c}{\sqrt {1-c^{2}\ y^{2}}}dy & =\sin ^{-1}\left (c\ y\right ) +k\\ & \\ \int \frac {c}{y\sqrt {y^{2}-c^{2}}}dy & =\cos ^{-1}\left ( \frac {c\ }{y}\right ) +k \end {align*}
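
These constant factors are easy to get wrong, so here is a small sympy check by differentiating the right-hand sides (my own sketch, not from the notes):

import sympy as sp

y, c = sp.symbols('y c', positive=True)

for antiderivative in (c*sp.acosh(y/c), sp.asin(c*y), sp.acos(c/y)):
    print(sp.simplify(sp.diff(antiderivative, y)))
# the three results should match the integrands: c/sqrt(y**2 - c**2),
# c/sqrt(1 - c**2*y**2), and c/(y*sqrt(y**2 - c**2))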