\(\blacksquare \) If we have a function \(f\left ( x\right ) \) represented as a series (say a power series or a Fourier series), then we say the series converges to \(f\left ( x\right ) \) uniformly in a region \(D\) if, given \(\varepsilon >0\), we can find a number \(N\) which depends only on \(\varepsilon \), such that \(\left \vert f\left ( x\right ) -S_{N}\left ( x\right ) \right \vert <\varepsilon \) for every \(x\) in \(D\).
Here \(S_{N}\left ( x\right ) \) is the partial sum of the series using \(N\) terms. The difference between uniform and non-uniform convergence is that with uniform convergence the number \(N\) depends only on \(\varepsilon \) and not on which \(x\) we are approximating \(f\left ( x\right ) \) at. In non-uniform convergence, the number \(N\) depends on both \(\varepsilon \) and \(x\). This means that at some locations in \(D\) we need a much larger \(N\) than at other locations to converge to \(f\left ( x\right ) \) with the same accuracy.
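As a numerical sketch of this dependence (a hypothetical example, not from the text), take the geometric series \(\sum _{n=0}^{\infty }x^{n}=\frac {1}{1-x}\) on \(D=\left ( 0,1\right ) \). The tail after \(N\) terms is \(\frac {x^{N}}{1-x}\), so the smallest \(N\) achieving a given \(\varepsilon \) blows up as \(x\rightarrow 1\): the convergence is pointwise but not uniform on \(\left ( 0,1\right ) \).

```python
# Minimal N such that |f(x) - S_N(x)| < eps for the geometric series
# f(x) = sum_{n>=0} x**n = 1/(1-x) on 0 < x < 1.  The tail after N terms
# is x**N / (1-x), so the required N grows without bound as x -> 1:
# convergence is pointwise but not uniform on (0, 1).
import math

def n_needed(x, eps=1e-3):
    """Smallest N with x**N / (1 - x) < eps, i.e. N > log(eps*(1-x)) / log(x)."""
    return math.ceil(math.log(eps * (1 - x)) / math.log(x))

for x in (0.5, 0.9, 0.99, 0.999):
    print(x, n_needed(x))   # N climbs rapidly as x approaches 1
```

On a smaller region such as \(\left [ 0,\frac {1}{2}\right ] \) one \(N\) works for all \(x\), which is why the same series can be uniformly convergent on a closed subinterval.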
Uniform convergence is the better kind. Whether it holds depends on the basis functions used to approximate \(f\left ( x\right ) \) in the series.
If the function \(f\left ( x\right ) \) is discontinuous at some point, then it is not possible to obtain uniform convergence there. As we get closer and closer to the discontinuity, more and more terms are needed to achieve the same accuracy as away from the discontinuity, hence the convergence is not uniform. For example, the Fourier series approximation of a step function cannot be uniformly convergent, due to the discontinuity in the step function.
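This can be seen numerically (a hypothetical illustration, not from the text) with the square wave that equals \(1\) on \(\left ( 0,\pi \right ) \), whose Fourier partial sums are \(S_{N}\left ( x\right ) =\frac {4}{\pi }\sum _{k=0}^{N-1}\frac {\sin \left ( 2k+1\right ) x}{2k+1}\). At a fixed interior point the error shrinks as \(N\) grows, but the worst error over \(\left ( 0,\pi \right ) \) stays order one near the jump, no matter how large \(N\) is.

```python
# Fourier partial sums of the square wave f(x) = 1 on (0, pi) (the odd
# extension of a step).  Pointwise error at a fixed x shrinks with N,
# but the sup of the error over (0, pi) stays large near the jump at
# x = 0: pointwise convergence, not uniform convergence.
import math

def S_partial(x, N):
    return (4 / math.pi) * sum(math.sin((2*k + 1) * x) / (2*k + 1)
                               for k in range(N))

def worst_error(N, points=5000):
    return max(abs(1.0 - S_partial(math.pi * i / points, N))
               for i in range(1, points))

print(abs(1.0 - S_partial(1.0, 20)), abs(1.0 - S_partial(1.0, 200)))  # shrinks
w20, w200 = worst_error(20), worst_error(200)
print(w20, w200)   # stays order one near the jump
```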
This works for positive and negative \(n\), rational or not. The sum converges only for \(\left \vert x\right \vert <1\). From this, we can also derive the above sums for the geometric series. For example, for \(n=-1\) the above becomes
And now, since \(\left \vert \frac {1}{x}\right \vert <1\), we can use the binomial expansion on the term \(\left ( 1+\frac {1}{x}\right ) ^{n}\) in the above and obtain a convergent series. This gives the following expansion
So everything is the same; we just replace \(x\) with \(\frac {1}{x}\) and remember to multiply the whole expansion by \(x^{n}\). For example, for \(n=-1\)
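The \(n=-1\) case of this substitution can be checked numerically (a sketch, not from the text): for \(\left \vert x\right \vert >1\), \(\frac {1}{1+x}=\frac {1}{x}\left ( 1+\frac {1}{x}\right ) ^{-1}=\frac {1}{x}-\frac {1}{x^{2}}+\frac {1}{x^{3}}-\cdots \), a convergent series in the small quantity \(\frac {1}{x}\).

```python
# For |x| > 1, write (1+x)**n = x**n * (1 + 1/x)**n and expand the second
# factor with the binomial series in 1/x.  For n = -1 this gives
# 1/(1+x) = 1/x - 1/x**2 + 1/x**3 - ..., convergent since |1/x| < 1.
def inv_one_plus_x(x, terms=30):
    """Truncated expansion of 1/(1+x), valid for |x| > 1."""
    return sum((-1)**k / x**(k + 1) for k in range(terms))

x = 3.0
print(inv_one_plus_x(x), 1 / (1 + x))  # both close to 0.25
```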
where \(R_{n}\) is the remainder \(R_{n}=\frac {\left ( x-a\right ) ^{n+1}}{\left ( n+1\right ) !}f^{\left ( n+1\right ) }\left ( x_{0}\right ) \), with \(x_{0}\) some point between \(x\) and \(a\).
\(\blacksquare \) \(\ \)Maclaurin series: just the Taylor series expanded around zero, i.e. \(a=0\).
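As a concrete check of the remainder formula (a hypothetical example, not from the text), take the Maclaurin series of \(e^{x}\). Since every derivative of \(e^{x}\) is \(e^{x}\), and \(x_{0}\) lies between \(0\) and \(x\), the remainder is bounded by \(\left \vert R_{n}\right \vert \leq e^{\left \vert x\right \vert }\frac {\left \vert x\right \vert ^{n+1}}{\left ( n+1\right ) !}\).

```python
# Maclaurin series (Taylor about a = 0) of exp(x), together with the
# Lagrange remainder bound |R_n| <= e**|x| * |x|**(n+1) / (n+1)!
# obtained from R_n = (x-a)**(n+1) f^(n+1)(x0) / (n+1)!.
import math

def exp_maclaurin(x, n):
    """Partial sum sum_{k=0}^{n} x**k / k!."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x, n = 1.5, 10
approx = exp_maclaurin(x, n)
bound = math.exp(abs(x)) * abs(x)**(n + 1) / math.factorial(n + 1)
print(abs(math.exp(x) - approx) <= bound)   # the actual error obeys the bound
```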
\(\blacksquare \) \(\ \)This diagram shows the different types of series convergence and the relations between them
The above shows that an absolutely convergent series (\(B\)) is also convergent, and a uniformly convergent series (\(D\)) is also convergent. But the series \(B\) is absolutely convergent and not uniformly convergent, while \(D\) is uniformly convergent and not absolutely convergent.
The series \(C\) is both absolutely and uniformly convergent. Finally, the series \(A\) is convergent but not absolutely convergent (called conditionally convergent). An example of \(B\) (converges absolutely but not uniformly) is
For uniform convergence, we really need to have an \(x\) in the series and not just numbers, since the idea behind uniform convergence is whether the series converges to within an error tolerance \(\varepsilon \) using the same number of terms, independent of the point \(x\) in the region.
\(\blacksquare \) The series \(\sum _{n=1}^{\infty }\frac {1}{n^{a}}\) converges for \(a>1\) and diverges for \(a\leq 1\). So \(a=1\) is the flip value. For example
This diverges, since \(a=1\). Also \(1+\frac {1}{\sqrt {2}}+\frac {1}{\sqrt {3}}+\frac {1}{\sqrt {4}}+\cdots \) diverges, since \(a=\frac {1}{2}\leq 1\). But \(1+\frac {1}{4}+\frac {1}{9}+\frac {1}{16}+\cdots \) converges; here \(a=2\) and the sum is \(\frac {\pi ^{2}}{6}\).
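A quick numerical sketch of the flip at \(a=1\) (partial sums only suggest, not prove, the behavior): for \(a=2\) the partial sums level off near \(\frac {\pi ^{2}}{6}\), while for \(a=1\) they keep growing like \(\ln N\).

```python
# Partial sums of the p-series sum 1/n**a: bounded and approaching
# pi**2/6 for a = 2, but still growing (like ln N) for a = 1.
import math

def partial(a, N):
    return sum(1 / n**a for n in range(1, N + 1))

p2 = partial(2, 10**5)
p1 = partial(1, 10**5)
print(p2, math.pi**2 / 6)   # close to pi**2/6 = 1.64493...
print(p1)                   # roughly ln(1e5) + 0.577, keeps growing
```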
\(\blacksquare \) Using partial sums. Let \(\sum _{n=0}^{\infty }a_{n}\) be some series. The partial sum is \(S_{N}=\sum _{n=0}^{N}a_{n}\). Then
If \(\lim _{N\rightarrow \infty }S_{N}\) exists and is finite, then we can say that \(\sum _{n=0}^{\infty }a_{n}\) converges. So here we set up a sequence whose terms are the partial sums, and then look at what happens to such a term in the limit as \(N\rightarrow \infty \). Need to find an example where this method is easier to use to test for convergence than the other methods below.
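One candidate (a hypothetical example, not from the text) where the partial-sum method is immediate while the ratio test is inconclusive (\(\frac {a_{n+1}}{a_{n}}=\frac {n}{n+2}\rightarrow 1\)): the telescoping series \(\sum _{n=1}^{\infty }\frac {1}{n\left ( n+1\right ) }\), where \(\frac {1}{n\left ( n+1\right ) }=\frac {1}{n}-\frac {1}{n+1}\), so \(S_{N}=1-\frac {1}{N+1}\rightarrow 1\).

```python
# A series where the partial-sum method is immediate: the terms telescope,
# 1/(n(n+1)) = 1/n - 1/(n+1), so S_N = 1 - 1/(N+1), whose limit is 1.
def S(N):
    return sum(1 / (n * (n + 1)) for n in range(1, N + 1))

print(S(10), 1 - 1/11)   # identical: the sum telescopes
print(S(10**6))          # approaches the limit 1
```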
\(\blacksquare \) Given a series, we are allowed to rearrange the order of terms only when the series is absolutely convergent. Therefore, for the alternating series \(1-\frac {1}{2}+\frac {1}{3}-\frac {1}{4}+\cdots \), do not rearrange terms, since it is not absolutely convergent. This means the series sum is independent of the order in which terms are added only when the series is absolutely convergent.
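A numerical sketch of why this matters (a hypothetical illustration, not from the text): in the natural order the alternating harmonic series tends to \(\ln 2\), but taking two positive terms for every negative term gives a different limit, \(\frac {3}{2}\ln 2\).

```python
# Rearranging the conditionally convergent series 1 - 1/2 + 1/3 - ...
# changes its sum: natural order -> ln 2, while two positive terms per
# negative term -> (3/2) ln 2.
import math

def natural(N):
    return sum((-1)**(n + 1) / n for n in range(1, N + 1))

def two_pos_one_neg(blocks):
    """blocks repetitions of (+, +, -): 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 ..."""
    s, p, q = 0.0, 1, 2          # p: next odd denominator, q: next even one
    for _ in range(blocks):
        s += 1 / p + 1 / (p + 2) - 1 / q
        p += 4
        q += 2
    return s

nat = natural(10**6)
re3 = two_pos_one_neg(10**6)
print(nat, math.log(2))            # ~ 0.6931
print(re3, 1.5 * math.log(2))      # ~ 1.0397
```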
\(\blacksquare \) An infinite series of complex numbers converges if the real part of the series and the imaginary part of the series each converge on their own.
\(\blacksquare \) Power series: \(f\left ( z\right ) =\sum _{n=0}^{\infty }a_{n}\left ( z-z_{0}\right ) ^{n}\). This series is centered at \(z_{0}\), or expanded around \(z_{0}\). It has radius of convergence \(R\) if the series converges for \(\left \vert z-z_{0}\right \vert <R\) and diverges for \(\left \vert z-z_{0}\right \vert >R\).
\(\blacksquare \) Tests for convergence.
Always start with the preliminary test: if \(\lim _{n\rightarrow \infty }a_{n}\) does not go to zero, then no need to do anything else; the series \(\sum _{n=0}^{\infty }a_{n}\) diverges. But if \(\lim _{n\rightarrow \infty }a_{n}=0\), the series can still diverge. So this is a necessary but not sufficient condition for convergence. An example is \(\sum \frac {1}{n}\): here \(a_{n}\rightarrow 0\) in the limit, but we know that this series does not converge.
For uniform convergence, there is a test called the Weierstrass M-test, which can be used to check whether the series is uniformly convergent. But if this test fails, that does not necessarily mean the series is not uniformly convergent; it can still be uniformly convergent. (Need an example.)
To test for absolute convergence, use the ratio test. If \(L=\lim _{n\rightarrow \infty }\left \vert \frac {a_{n+1}}{a_{n}}\right \vert <1\) then the series is absolutely convergent. If \(L=1\) the test is inconclusive; try the integral test. If \(L>1\) then it is not absolutely convergent. There is also the root test, \(L=\lim _{n\rightarrow \infty }\sqrt [n]{\left \vert a_{n}\right \vert }=\lim _{n\rightarrow \infty }\left \vert a_{n}\right \vert ^{\frac {1}{n}}\), with the same interpretation of \(L\).
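A small numerical probe of the ratio test (a sketch, not a proof — it evaluates the ratio at one large \(n\) rather than taking a limit): note how both \(\sum \frac {1}{n}\) (divergent) and \(\sum \frac {1}{n^{2}}\) (convergent) give \(L=1\), which is exactly why that case is inconclusive.

```python
# Probe L = lim |a_{n+1}/a_n| by evaluating the ratio at a large n.
# L < 1 -> absolutely convergent; L = 1 -> inconclusive.
def ratio_at(a, n=100):
    return abs(a(n + 1) / a(n))

print(ratio_at(lambda n: 0.5**n))     # 0.5  -> converges absolutely
print(ratio_at(lambda n: 1 / n))      # ~1   -> inconclusive (series diverges)
print(ratio_at(lambda n: 1 / n**2))   # ~1   -> inconclusive (series converges)
```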
The integral test: use it when the ratio test is inconclusive. Let \(L=\lim _{N\rightarrow \infty }\int ^{N}f\left ( x\right ) dx\), where \(a_{n}\) becomes \(f\left ( x\right ) \). Remember to use this only if the terms of the series are monotonically decreasing and all positive. For example, for \(\sum _{n=1}^{\infty }\ln \left ( 1+\frac {1}{n}\right ) \), use \(L=\lim _{N\rightarrow \infty }\int ^{N}\ln \left ( 1+\frac {1}{x}\right ) dx=\lim _{N\rightarrow \infty }\left [ \left ( 1+x\right ) \ln \left ( 1+x\right ) -x\ln x\right ] ^{N}\). Notice, we only use the upper limit in the integral. Now \(\left ( 1+N\right ) \ln \left ( 1+N\right ) -N\ln N=\ln \left ( 1+N\right ) +N\ln \left ( 1+\frac {1}{N}\right ) \rightarrow \infty \) as \(N\rightarrow \infty \) (it grows like \(\ln N\)), so the limit \(L\) is infinite and the series diverges. Indeed, the terms telescope, \(\ln \left ( 1+\frac {1}{n}\right ) =\ln \left ( n+1\right ) -\ln n\), so the partial sum is \(\ln \left ( N+1\right ) \), which is unbounded.
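A quick numerical check of this example, using the telescoping identity \(\ln \left ( 1+\frac {1}{n}\right ) =\ln \left ( n+1\right ) -\ln n\):

```python
# Check on sum ln(1 + 1/n): the terms telescope, ln(1 + 1/n) =
# ln(n+1) - ln(n), so S_N = ln(N + 1), which grows without bound --
# the series diverges, matching the divergent integral.
import math

def S(N):
    return sum(math.log(1 + 1/n) for n in range(1, N + 1))

for N in (10, 1000, 100000):
    print(N, S(N), math.log(N + 1))   # the two columns agree, and grow
```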
The radius of convergence is \(R=\frac {1}{L}\), where \(L\) is from (3) above.
Comparison test. Compare the series with one we already know converges. Let \(\sum b_{n}\) be a series which we know is convergent (for example \(\sum \frac {1}{n^{2}}\)), and suppose we want to find out whether \(\sum a_{n}\) converges. If all terms of both series are positive and \(a_{n}\leq b_{n}\) for each \(n\), then we conclude that \(\sum a_{n}\) converges as well.
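A sketch of the comparison test in action (a hypothetical example, not from the text): \(0<\frac {1}{n^{2}+1}\leq \frac {1}{n^{2}}\) for every \(n\geq 1\), and \(\sum \frac {1}{n^{2}}=\frac {\pi ^{2}}{6}\), so \(\sum \frac {1}{n^{2}+1}\) converges; its increasing partial sums stay below the bound \(\frac {\pi ^{2}}{6}\).

```python
# Comparison test sketch: 1/(n**2 + 1) <= 1/n**2 termwise, and the
# dominating series sums to pi**2/6, so the partial sums of the smaller
# series are increasing and bounded above -- hence convergent.
import math

a = lambda n: 1 / (n**2 + 1)
b = lambda n: 1 / n**2

assert all(a(n) <= b(n) for n in range(1, 1001))   # termwise domination
bound = math.pi**2 / 6                             # known sum of 1/n**2
partial = sum(a(n) for n in range(1, 10**6))
print(partial, bound)                              # partial sum stays below the bound
```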
\(\blacksquare \) For Laurent series, let us say the singularities are at \(z=0\) and \(z=1\). To expand about \(z=0\), get \(f\left ( z\right ) \) to look like \(\frac {1}{1-z}\) and use the geometric series for \(\left \vert z\right \vert <1\). To expand about \(z=1\), there are two choices, to the inside and to the outside. For the outside, i.e. \(\left \vert z\right \vert >1\), get \(f\left ( z\right ) \) to have the form \(\frac {1}{1-\frac {1}{z}}\), since this is now valid for \(\left \vert z\right \vert >1\).
\(\blacksquare \) We can only use a power series \(\sum a_{n}\left ( z-z_{0}\right ) ^{n}\) to expand \(f\left ( z\right ) \) around \(z_{0}\) if \(f\left ( z\right ) \) is analytic at \(z_{0}\). If \(f\left ( z\right ) \) is not analytic at \(z_{0}\), we need to use a Laurent series. Think of the Laurent series as an extension of the power series to handle singularities.
Let us find the Laurent series for \(f\left ( z\right ) =\frac {5z-2}{z\left ( z-1\right ) }\). There is a singularity of order \(1\) (a simple pole) at each of \(z=0\) and \(z=1\).
This makes \(g\left ( z\right ) \) analytic around \(z=0\): since \(g\left ( z\right ) \) does not have a pole at \(z=0\), it is analytic there and therefore has a power series expansion around \(z=0\) given by
The residue is \(2\). The above expansion is valid around \(z=0\) up to but not including the next singularity, which is at \(z=1\). Now we find the expansion of \(f\left ( z\right ) \) around \(z=1\). Let
This makes \(g\left ( z\right ) \) analytic around \(z=1\), since \(g\left ( z\right ) \) does not have a pole at \(z=1\). Therefore it has a power series expansion about \(z=1\) given by
The residue is \(3\). The above expansion is valid around \(z=1\) up to but not including the next singularity at \(z=0\), i.e. inside a circle of radius \(1\) centered at \(z=1\).
Putting the above two regions together, we see there is a series expansion of \(f\left ( z\right ) \) that is shared between the two regions, in the shaded region below.
Let us check that both series give the same values in the shared region. Using the series expansion around \(z=0\) to find \(f\left ( z\right ) \) at the point \(z=\frac {1}{2}\) gives \(-2\) when using \(10\) terms in the series. Using the series expansion around \(z=1\) to find \(f\left ( \frac {1}{2}\right ) \) with \(10\) terms also gives \(-2\). So both series are valid and produce the same result.
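This check can be reproduced numerically. The sketch below assumes the partial-fraction form \(f\left ( z\right ) =\frac {2}{z}+\frac {3}{z-1}\) (consistent with the residues \(2\) and \(3\) found above), which gives the two expansions \(f\left ( z\right ) =\frac {2}{z}-3\sum _{n\geq 0}z^{n}\) for \(0<\left \vert z\right \vert <1\) and \(f\left ( z\right ) =\frac {3}{z-1}+2\sum _{n\geq 0}\left ( -1\right ) ^{n}\left ( z-1\right ) ^{n}\) for \(0<\left \vert z-1\right \vert <1\).

```python
# Numerical check that both Laurent expansions of
# f(z) = (5z-2)/(z(z-1)) = 2/z + 3/(z-1) agree at z = 1/2, which lies
# in the overlap of their two annuli.  Exact value: f(1/2) = -2.
def f(z):
    return (5*z - 2) / (z * (z - 1))

def laurent_about_0(z, terms=10):
    """2/z - 3 * sum z**n, valid for 0 < |z| < 1."""
    return 2/z - 3 * sum(z**n for n in range(terms))

def laurent_about_1(z, terms=10):
    """3/(z-1) + 2 * sum (-1)**n (z-1)**n, valid for 0 < |z-1| < 1."""
    w = z - 1
    return 3/w + 2 * sum((-1)**n * w**n for n in range(terms))

z = 0.5
print(f(z), laurent_about_0(z), laurent_about_1(z))  # all close to -2
```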
15.2.2 Method Two
This method is simpler than the above, but it results in different regions. It is based on converting the expression so that the geometric series expansion can be used on it.
The above is valid for \(0<\left \vert z\right \vert <1\), which agrees with the result of method 1.
Now, to find the expansion for \(\left \vert z\right \vert >1\), we need a term that looks like \(\frac {1}{1-\frac {1}{z}}\), since that can be expanded for \(\left \vert \frac {1}{z}\right \vert <1\), i.e. \(\left \vert z\right \vert >1\), which is what we want. Therefore, writing \(f\left ( z\right ) \) as
With residue \(5\). The above is valid for \(\left \vert z\right \vert >1\). The following diagram illustrates the result obtained from method 2.
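The \(\left \vert z\right \vert >1\) expansion can also be checked numerically. The sketch below assumes the partial-fraction form \(f\left ( z\right ) =\frac {2}{z}+\frac {3}{z-1}\) (consistent with the residues found in method one), which for \(\left \vert z\right \vert >1\) gives \(f\left ( z\right ) =\frac {5}{z}+\frac {3}{z^{2}}+\frac {3}{z^{3}}+\cdots \), whose \(\frac {1}{z}\) coefficient is the residue \(5\) quoted above.

```python
# Check of the |z| > 1 expansion of f(z) = (5z-2)/(z(z-1)) = 2/z + 3/(z-1):
# expanding 3/(z-1) = (3/z) * 1/(1 - 1/z) in powers of 1/z gives
# f(z) = 5/z + 3/z**2 + 3/z**3 + ..., with 1/z coefficient (residue) 5.
def f(z):
    return (5*z - 2) / (z * (z - 1))

def laurent_outside(z, terms=30):
    """Truncated expansion 5/z + 3 * sum_{n>=2} z**(-n), valid for |z| > 1."""
    return 5/z + 3 * sum(z**(-n) for n in range(2, terms))

print(f(2.0), laurent_outside(2.0))  # both 4.0, up to truncation error
```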
15.2.3 Method Three
For the expansion about \(z=0\), this uses the same method as above, giving the same series, valid for \(\left \vert z\right \vert <1\). This method differs a little for points other than zero. The idea is to replace \(z\) by \(z-z_{0}\), where \(z_{0}\) is the point we want to expand about, making this replacement in \(f\left ( z\right ) \) itself. So for \(z=1\) in this example, we let \(\xi =z-1\), hence \(z=\xi +1\). Then \(f\left ( z\right ) \) becomes
The above is valid for \(\left \vert \xi \right \vert <1\), i.e. \(\left \vert z-1\right \vert <1\), a disk of radius \(1\) centered at \(z=1\) (on the real axis this is \(0<z<2\)). This gives the same series, for the same region, as method one. But it is a little faster, as it uses the binomial series shortcut to find the expansion instead of calculating derivatives as in method one.
15.2.4 Conclusion
Method one and method three give the same series for the same regions. Method three uses the binomial expansion as a shortcut and requires converting \(f\left ( z\right ) \) to a form that allows using the binomial expansion. Method one does not use the binomial expansion, but requires computing many derivatives to evaluate the terms of the power series. It is the more direct method.
Method two also uses the binomial expansion, but gives different regions than methods one and three.
If one is good at differentiation, method one seems the most direct. Otherwise, the choice is between method two and method three, as they both use the binomial expansion. Method two seems a little more direct than method three. It also depends on what the problem is asking for: whether it asks to expand around \(z_{0}\), or, for example, to find the expansion in \(\left \vert z\right \vert >1\), decides which method to use.