Sometime in 2010 Compiled on January 29, 2024 at 2:47am

This note shows how to use the idea of eigenvalues and eigenfunctions to help guide finding a solution to a differential equation. There are many ways to solve this ODE, and this is a nicer, more general way of looking at solving it.

Given \begin {equation} \frac {d^{2}u}{dx^{2}}=f\left ( x\right ) \tag {1} \end {equation}

With some boundary conditions \(u\left ( 0\right ) =u_{0}\) and \(u\left ( L\right ) =u_{L}\). To keep what follows concrete, we take \(L=1\) with homogeneous conditions \(u\left ( 0\right ) =0\) and \(u\left ( 1\right ) =0\), since these are the boundary conditions that the eigenfunctions we find below actually satisfy.

We start by rewriting this ODE as \(Lu=f\) where \(L\) is an operator applied to \(u\). This is just a rewrite of the ODE; we did not do anything new here, but this way the equation looks more like \(Ax=b\), which helps later when we discretize it and apply FDM (finite difference method), since that is what we will end up with. Also, writing it as \(Lu=f\) is more cool, and makes one look like a real math person.

Now that we have \(Lu=f\), what to do next? The whole point is to now find the eigenfunctions and eigenvalues of the operator \(L\). (Recall that an operator has a matrix as a representation; \(L\) is a mapping after all, so it is not far-fetched to talk about eigenvalues and eigenfunctions of an operator.)

Let us now call the eigenfunctions of \(L\) \(g_{n}\) and the eigenvalues \(\lambda _{n}\). Now we can write \[ Lg_{n}=\lambda _{n}g_{n}\] But how to find these \(g_{n}\)? For the above ODE, it is done by inspection, as it is clear that \(g_{n}=\sin \left ( n\pi x\right ) \) is an eigenfunction. We can see that because if we apply \(L\) to it, we obtain\begin {align*} L\left ( \sin \left ( n\pi x\right ) \right ) & =\frac {d^{2}}{dx^{2}}\sin \left ( n\pi x\right ) \\ & =-n^{2}\pi ^{2}\sin \left ( n\pi x\right ) \end {align*}
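As a quick sanity check (not part of the note itself), we can verify symbolically that applying \(L\) to \(\sin \left ( n\pi x\right ) \) gives back a scalar multiple of it. This sketch uses sympy; the variable names are just illustrative choices:

```python
import sympy as sp

# Check that g_n = sin(n*pi*x) is an eigenfunction of L = d^2/dx^2
# with eigenvalue -n^2*pi^2.
x = sp.symbols('x')
n = sp.symbols('n', positive=True, integer=True)
g = sp.sin(n * sp.pi * x)

Lg = sp.diff(g, x, 2)             # apply the operator L to g_n
eigenvalue = sp.simplify(Lg / g)  # the scalar multiple, i.e. lambda_n
print(eigenvalue)                 # a scalar: -n^2*pi^2
```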

Hence it is now in the form \(Lg_{n}=\lambda _{n}g_{n}\), where \(\lambda _{n}\) is a scalar, in this case \[ \lambda _{n}=-n^{2}\pi ^{2}\] This is cool. We found the eigenfunctions and eigenvalues of \(L\). Now what to do with them? Well, since from (1) we have \(Lu=f\left ( x\right ) \), the function \(f\left ( x\right ) \) lives in the same function space that \(L\) acts on, and we just found the eigenfunctions of the operator. So this is like saying that we found the basis vectors of the space where \(f\left ( x\right ) \) lives, and we can use these basis vectors to represent \(f\left ( x\right ) \). In other words, \begin {equation} f\left ( x\right ) =\sum _{n=1}^{\infty }a_{n}g_{n}\tag {2} \end {equation} (The sum starts at \(n=1\), since \(g_{0}=\sin \left ( 0\right ) =0\) contributes nothing.) This is just like in normal Euclidean space, where we represent a vector as \[ \mathbf {v}=v_{x}\mathbf {i}+v_{y}\mathbf {j}+v_{z}\mathbf {k}\] The eigenfunctions \(g_{n}\) are like the basis vectors \(\mathbf {i},\mathbf {j},\mathbf {k}\) and \(a_{n}\) are like the coordinates of the vector \(\mathbf {v}\). And \(f\left ( x\right ) \) is like the vector \(\mathbf {v}\).

So far so good. We found the eigenfunctions of \(L\), and we rewrote \(f\left ( x\right ) \) in terms of these eigenfunctions. But wait a minute, we now have to find the \(a_{n}\). These are like the coordinates of \(f\left ( x\right ) \) when viewed in function space.

Here, something new comes to the rescue that we need in order to make more progress. These eigenfunctions are not just some random things we pulled out of the sky. They are special functions and must satisfy certain conditions. This is mathematics after all, and we must have some order.

These eigenfunctions must be orthogonal to each other, and we define them on the space of square integrable functions \(L^{2}\left [ 0,1\right ] \). We just made this restriction of the space to be able to make more headway in solving this problem.

What all this means is that the \(g_{n}\) must be orthogonal to each other (just like \(\mathbf {i},\mathbf {j},\mathbf {k}\) are, as a special case, in Euclidean space). Being in this space, we need to define an inner product on it. We need to know how to perform an inner product between \(g_{n}\) and \(g_{m}\).

You might feel tricked now, because we did not say any of this stuff about the eigenfunctions \(g_{n}=\sin \left ( n\pi x\right ) \) when we found them above by inspection. But it is OK; luckily for us, \(g_{n}=\sin \left ( n\pi x\right ) \) does meet these requirements. How? Because if we define the inner product between \(g_{n}\) and \(g_{m}\) on \(\left [ 0,1\right ] \) using\[ \left \langle g_{n},g_{m}\right \rangle ={\displaystyle \int \limits _{0}^{1}} g_{n}g_{m}dx={\displaystyle \int \limits _{0}^{1}} \sin \left ( n\pi x\right ) \sin \left ( m\pi x\right ) dx \] then the above becomes\[ \left \langle g_{n},g_{m}\right \rangle =\left \{ \begin {array} [c]{cc}0 & n\neq m\\ \frac {1}{2} & n=m \end {array} \right . \] So \(g_{n},g_{m}\) are orthogonal to each other. This is what orthogonal means: if we take the inner product of any two different eigenfunctions, we get zero, but if we take the inner product of an eigenfunction with itself, we do not get zero.
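The orthogonality relation is easy to check numerically. This is an illustrative sketch using scipy quadrature; the helper name `inner` is just a choice made here:

```python
import numpy as np
from scipy.integrate import quad

# Numerically evaluate <g_n, g_m> = int_0^1 sin(n*pi*x) sin(m*pi*x) dx
def inner(n, m):
    val, _ = quad(lambda x: np.sin(n * np.pi * x) * np.sin(m * np.pi * x),
                  0.0, 1.0)
    return val

print(inner(2, 3))  # two different eigenfunctions: essentially zero
print(inner(2, 2))  # an eigenfunction with itself: essentially 1/2
```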

Now we are really happy. We found that the eigenfunctions \(g_{n}=\sin \left ( n\pi x\right ) \) are orthogonal to each other, and we can express \(f\left ( x\right ) \) in terms of them. We use this inner product property to find \(a_{n}\). We go back to (2) above, and multiply each side by \(g_{m}\) for some fixed \(m\), obtaining\begin {align*} f\left ( x\right ) g_{m} & =g_{m}\sum _{n=1}^{\infty }a_{n}g_{n}\\ & =\sum _{n=1}^{\infty }a_{n}g_{m}g_{n} \end {align*}

Integrating each side gives\begin {align*} {\displaystyle \int \limits _{0}^{1}} f\left ( x\right ) g_{m}dx & ={\displaystyle \int \limits _{0}^{1}} \sum _{n=1}^{\infty }a_{n}g_{m}g_{n}dx\\ & =\sum _{n=1}^{\infty }a_{n}{\displaystyle \int \limits _{0}^{1}} g_{m}g_{n}dx \end {align*}

But now we see that \({\displaystyle \int \limits _{0}^{1}} g_{m}g_{n}dx=\frac {1}{2}\) for \(n=m\) and zero for all other terms, so the above reduces to\[{\displaystyle \int \limits _{0}^{1}} f\left ( x\right ) g_{m}dx=\frac {a_{m}}{2}\] Hence we just found \begin {equation} a_{n}=2{\displaystyle \int \limits _{0}^{1}} f\left ( x\right ) g_{n}dx\tag {3} \end {equation} We take this \(a_{n}\) and use it in (2). So we have just found an expansion of \(f\left ( x\right ) \) in terms of the eigenfunctions \(g_{n}\); i.e. we have found a complete representation of \(f\left ( x\right ) \) as a function in the space on which \(L\) acts, with its basis vectors and the coordinates \(a_{n}\).
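Equation (3) can be evaluated numerically for any concrete forcing function. As a sketch, take \(f\left ( x\right ) =1\) (chosen here purely for illustration); for that \(f\), the coefficients are known in closed form, \(a_{n}=2\left ( 1-\cos n\pi \right ) /\left ( n\pi \right ) \), so we can check the quadrature against it:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: 1.0  # an illustrative forcing function

# Equation (3): a_n = 2 * int_0^1 f(x) sin(n*pi*x) dx
def a(n):
    val, _ = quad(lambda x: f(x) * np.sin(n * np.pi * x), 0.0, 1.0)
    return 2.0 * val

# For f = 1: a_n = 4/(n*pi) for odd n, and 0 for even n
print([round(a(n), 4) for n in range(1, 5)])
```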

This is all so wonderful. But how does this help us find the solution to \(\frac {d^{2}u}{dx^{2}}=f\left ( x\right ) \)? Well, if we now write \(Lu=f\left ( x\right ) =\sum _{n=1}^{\infty }a_{n}g_{n}\), then we have\[ Lu=\sum _{n=1}^{\infty }a_{n}g_{n}\] Now look for a solution expanded in the same basis, \(u=\sum _{n=1}^{\infty }b_{n}g_{n}\). Since \(Lg_{n}=\lambda _{n}g_{n}\), applying \(L\) gives \(Lu=\sum _{n=1}^{\infty }b_{n}\lambda _{n}g_{n}\). Matching coefficients term by term with the expansion of \(f\left ( x\right ) \) gives \(b_{n}\lambda _{n}=a_{n}\), i.e. \(b_{n}=\frac {a_{n}}{\lambda _{n}}\). Hence

\[ u=\sum _{n=1}^{\infty }\frac {a_{n}}{\lambda _{n}}g_{n}\] And this is the solution to the ODE. (Note that the sum starting at \(n=1\) matters here: \(\lambda _{n}=-n^{2}\pi ^{2}\) is never zero for \(n\geq 1\), so the division is safe.)

Hence given a differential operator \(L\), once we know its eigenfunctions and its eigenvalues, the problem is solved.

We just have to express the forcing function in terms of the eigenfunctions, and once this is done, the problem is solved. In real life, we obtain a matrix representation of \(L\), and we work on that matrix to find the eigenvalues and eigenvectors. So solving this ODE becomes a problem of finding eigenvalues and eigenfunctions. But we need to remember that this all worked only because we were able to represent \(f\left ( x\right ) \) in terms of the eigenfunctions. If somehow we could not represent \(f\left ( x\right ) \) this way, then this whole approach falls apart.
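The "matrix representation" route can be sketched too. Under the assumptions used throughout (interval \(\left [ 0,1\right ] \), zero boundary conditions), the standard second-difference FDM matrix represents \(L=\frac {d^{2}}{dx^{2}}\), and its eigenvalues should approximate \(-n^{2}\pi ^{2}\); the grid size below is an arbitrary choice:

```python
import numpy as np

# Build the standard centered second-difference matrix for d^2/dx^2
# on [0, 1] with zero boundary conditions.
N = 200                  # number of interior grid points (a choice)
h = 1.0 / (N + 1)        # grid spacing
A = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2

# Sort eigenvalues from least negative to most negative and compare
# the first few against the continuous eigenvalues -n^2*pi^2.
eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
for n in range(1, 4):
    print(eigs[n - 1], -(n * np.pi) ** 2)  # discrete vs continuous
```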