Cauchy Criterion for Numerical Sequences
Cauchy Criterion for Numerical Series
Uniformly Convergent Series Continuity
Uniformly Convergent Series Integration
Weierstrass Convergence Theorem
Let’s consider series of complex numbers, that is, series of the form:
S = \sum_{k=1}^\infty a_k
where \{a_k\} is a sequence of complex numbers. Such a series is convergent if the sequence \{S_n\} of partial sums:
S_n = \sum_{k=1}^n a_k
is convergent, and the series:
r_n = \sum_{k=n+1}^\infty a_k
is called the remainder of the series. If the series converges, then:
S = S_n + r_n
and, for every \varepsilon > 0, it is possible to find an index N such that |r_n| < \varepsilon for n \ge N.
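As a concrete numerical illustration (the geometric series below is chosen only for this sketch, not taken from the discussion above), we can tabulate the partial sums and remainders of a convergent complex series and watch |r_n| shrink below any prescribed \varepsilon:

```python
# A minimal numerical sketch: partial sums S_n and remainders r_n = S - S_n for
# the complex geometric series sum_{k>=1} z^k, whose exact sum for |z| < 1 is
# S = z / (1 - z). (Example series chosen here.)
z = 0.5 + 0.3j                 # any point with |z| < 1
S_exact = z / (1 - z)

S_n = 0
for n in range(1, 21):
    S_n += z ** n              # S_n = sum_{k=1}^{n} z^k
    r_n = S_exact - S_n        # remainder r_n = sum_{k=n+1}^{infinity} z^k
    print(f"n={n:2d}  |r_n| = {abs(r_n):.3e}")
# |r_n| decreases toward 0: for every eps > 0 there is an index N with
# |r_n| < eps whenever n >= N.
```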
With this definition, a necessary and sufficient condition for the convergence of the series is that its sequence of partial sums satisfies the Cauchy condition for numerical sequences.
A sequence of numbers \{ z_n \} in the complex plane \mathbb{C} is said to be a convergent sequence if there exists a number L \in \mathbb{C} such that for every real number \varepsilon > 0, there exists a natural number N such that for all n > N, the inequality holds:
|z_n - L| < \varepsilon
An alternative and equivalent condition for convergence is the Cauchy criterion. A sequence \{ z_n \} converges if and only if for every \varepsilon > 0, there exists a natural number N such that for all n > N and for every positive integer m, the following condition is satisfied:
| z_n - z_{n+m} | < \varepsilon
Proof: it is presented in two parts, necessity and sufficiency.
First, we assume the sequence \{ z_n \} is convergent and show that it must be a Cauchy sequence.
Let the sequence \{ z_n \} converge to a limit L \in \mathbb{C}. According to the definition of convergence, for any arbitrarily small positive value, for instance \varepsilon/2, there exists a corresponding integer N such that for all integers n > N, we have:
|z_n - L| < \frac{\varepsilon}{2}
Now, consider two indices n and n+m, where n > N and m is any positive integer.
Since n > N, it follows that n+m > n > N.
Consequently, the condition of convergence applies to both the n^{th} and the (n+m)^{th} term of the sequence:
\begin{aligned} & |z_n - L| < \frac{\varepsilon}{2} \\ & |z_{n+m} - L| < \frac{\varepsilon}{2} \end{aligned}
We are interested in the magnitude of the difference |z_n - z_{n+m}|.
By adding and subtracting the limit L inside the absolute value and applying the triangle inequality, we obtain:
\begin{aligned} |z_n - z_{n+m}| &= |(z_n - L) + (L - z_{n+m})| \\ &\le |z_n - L| + |L - z_{n+m}| \\ &= |z_n - L| + |z_{n+m} - L| \end{aligned}
Substituting the bounds from our convergence assumption yields the desired result:
|z_n - z_{n+m}| < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon
This inequality holds for all n > N and all m \in \mathbb{N}. We have shown that if a sequence converges, it must satisfy the Cauchy criterion.
Next, we assume that \{ z_n \} is a Cauchy sequence and demonstrate that it must converge to a limit in \mathbb{C}.
This direction of the proof relies on the Bolzano-Weierstrass theorem.
Our first step is to establish that any Cauchy sequence is bounded. From the definition of a Cauchy sequence, let us choose a specific \varepsilon = 1.
There must exist an integer N_1 such that for all n > N_1 and for all m \ge 1:
|z_n - z_{n+m}| < 1
In particular, let us fix one index greater than N_1, for example N_1 + 1. Then for any n > N_1, we have:
|z_n - z_{N_1+1}| < 1
Using the reverse triangle inequality, |a| - |b| \le |a-b|, we find:
|z_n| - |z_{N_1+1}| \le |z_n - z_{N_1+1}| < 1
This implies that for all n > N_1, the terms are bounded by |z_n| < 1 + |z_{N_1+1}|.
The entire sequence is therefore bounded, as we can define a bound M for all terms of the sequence by taking the maximum of the absolute values of the first N_1 terms and the bound for the tail of the sequence:
M = \max \left\{ |z_1|, |z_2|, \ldots, |z_{N_1}|, 1 + |z_{N_1+1}| \right\}
Every term z_n in the sequence satisfies |z_n| \le M.
With the sequence \{ z_n \} established as bounded, we can invoke the Bolzano-Weierstrass theorem. This theorem states that every bounded sequence in \mathbb{R}^k (and \mathbb{C} can be identified with \mathbb{R}^2) has a convergent subsequence.
Therefore, there exists a subsequence \{ z_{n_k} \} and a point L \in \mathbb{C} such that:
\lim_{k \to \infty} z_{n_k} = L
The final part of the proof is to demonstrate that the entire sequence \{ z_n \} converges to this same limit L.
We need to show that for a given \varepsilon > 0, we can find an N such that for all n > N, |z_n - L| < \varepsilon.
We use the triangle inequality again:
|z_n - L| = |z_n - z_{n_k} + z_{n_k} - L| \le |z_n - z_{n_k}| + |z_{n_k} - L|
We can make both terms on the right-hand side arbitrarily small.
Since \{ z_n \} is a Cauchy sequence, for any \varepsilon/2 > 0 there is an integer N_C such that if n, j > N_C, then |z_n - z_j| < \varepsilon/2. Since the subsequence \{ z_{n_k} \} converges to L, for that same \varepsilon/2 there is an integer K such that if k > K, then |z_{n_k} - L| < \varepsilon/2.
Now, we can choose an integer k large enough to satisfy two conditions simultaneously. We choose k such that k > K (to satisfy the subsequence convergence) and also such that the index n_k > N_C (which is possible because the indices n_k increase without bound). Let’s fix such a k.
For any n > N_C, both n and our chosen n_k are greater than N_C. From the Cauchy property of the main sequence, it follows that:
|z_n - z_{n_k}| < \frac{\varepsilon}{2}
From the convergence of the subsequence, since we chose k > K, we have:
|z_{n_k} - L| < \frac{\varepsilon}{2}
Combining these results into our triangle inequality, we find that for any n > N_C:
|z_n - L| \le |z_n - z_{n_k}| + |z_{n_k} - L| < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon
We have shown that for any \varepsilon > 0, there exists an integer N (namely, N_C) such that for all n > N, |z_n - L| < \varepsilon. This is the definition of convergence for the sequence \{ z_n \}.
The convergence of an infinite series is defined through the convergence of its associated sequence of partial sums. This connection allows us to translate the Cauchy criterion for sequences into a tool for analyzing the convergence of series.
We can apply the Cauchy criterion for sequences directly to the sequence \{ S_n \}. The sequence \{ S_n \} converges if and only if for every \varepsilon > 0, there exists a natural number N such that for all integers n > N and for any positive integer p, the following inequality is satisfied:
| S_{n+p} - S_n | < \varepsilon
We can express the difference S_{n+p} - S_n in terms of the elements of the original sequence \{ a_k \}. The difference is the sum of terms from index n+1 to n+p:
\begin{aligned} S_{n+p} - S_n &= \left( \sum_{k=1}^{n+p} a_k \right) - \left( \sum_{k=1}^{n} a_k \right) \\ &= a_{n+1} + a_{n+2} + \dots + a_{n+p} \\ &= \sum_{k=n+1}^{n+p} a_k \end{aligned}
By substituting this result into the Cauchy condition for the sequence \{S_n\}, we arrive at the Cauchy criterion for numerical series.
A series converges if and only if for every \varepsilon > 0, there exists an integer N such that for all n > N and for all positive integers p, the following condition holds:
\left| \sum_{k=n+1}^{n+p} a_k \right| < \varepsilon
This criterion is a test for convergence that does not require knowledge of the series’ sum. It expresses the intuitive idea that for a series to converge, the sum of any block of consecutive terms far down the series must be arbitrarily close to zero.
A primary consequence derived from this criterion is a necessary condition for the convergence of any series. By setting p=1 in the Cauchy criterion, the condition becomes: for every \varepsilon > 0, there exists an N such that for all n > N,
|a_{n+1}| = \left| S_{n+1} - S_n \right| < \varepsilon
This is the formal definition for the limit of the sequence \{a_k\} being zero.
\lim_{k \to \infty} a_k = 0
Therefore, for any series to converge, its terms must necessarily approach zero. It is important to recognize that this condition is not sufficient for convergence.
The classic example is the harmonic series, \sum_{k=1}^\infty \frac{1}{k}. The terms a_k = 1/k approach zero as k \to \infty. However, the series diverges. We can demonstrate this divergence using the Cauchy criterion. Let us consider the block of terms from n+1 to 2n by choosing p=n. The sum is:
\sum_{k=n+1}^{2n} \frac{1}{k} = \frac{1}{n+1} + \frac{1}{n+2} + \dots + \frac{1}{2n}
There are n terms in this sum, and the smallest term is \frac{1}{2n}. We can establish a lower bound for the sum:
\sum_{k=n+1}^{2n} \frac{1}{k} > n \cdot \left( \frac{1}{2n} \right) = \frac{1}{2}
This inequality shows that the sum of this block of terms is always greater than 1/2, regardless of how large n becomes. Therefore, for any \varepsilon \le 1/2, the Cauchy criterion cannot be satisfied. The harmonic series fails the Cauchy test and is divergent.
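A quick numerical check of this bound (purely illustrative, not part of the proof) confirms that the block sums never drop below 1/2:

```python
# The block sum sum_{k=n+1}^{2n} 1/k of the harmonic series stays above 1/2 for
# every n, so the Cauchy criterion fails for any eps <= 1/2.
for n in (10, 100, 1000, 10000):
    block = sum(1.0 / k for k in range(n + 1, 2 * n + 1))
    print(f"n = {n:6d}   block sum = {block:.6f}")
# Each value exceeds 0.5 and tends to ln 2 ~ 0.693 as n grows.
```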
Theorem: if the series \sum_{k=1}^\infty |a_k| converges, then the series \sum_{k=1}^\infty a_k also converges.
A series \sum a_k for which \sum |a_k| converges is said to be absolutely convergent. This theorem states that absolute convergence is a sufficient condition for convergence.
Proof: we can use the Cauchy criterion for series that we have established.
We assume that the series \sum_{k=1}^\infty |a_k| is convergent. Since it is a series of non-negative real numbers, its convergence implies that it satisfies the Cauchy criterion.
Therefore, for any given \varepsilon > 0, there exists an integer N such that for all n > N and for all positive integers p, we have:
\sum_{k=n+1}^{n+p} |a_k| = \left| \sum_{k=n+1}^{n+p} |a_k| \right| < \varepsilon
Our objective is to show that the original series, \sum a_k, also satisfies the Cauchy criterion.
Let’s examine the absolute value of the sum of a corresponding block of terms from the original series. Using the triangle inequality for a finite sum, we can state:
\left| \sum_{k=n+1}^{n+p} a_k \right| \le \sum_{k=n+1}^{n+p} |a_k|
By combining these two inequalities, we find that for any n > N and any positive integer p:
\left| \sum_{k=n+1}^{n+p} a_k \right| \le \sum_{k=n+1}^{n+p} |a_k| < \varepsilon
This is the Cauchy criterion for the series \sum a_k. Since the space of complex numbers \mathbb{C} is complete, any sequence (or series) that satisfies the Cauchy criterion must converge.
We have therefore proven that the convergence of \sum |a_k| guarantees the convergence of \sum a_k.
This theorem is what allows us to use convergence tests designed for non-negative real series, such as D’Alembert’s ratio test or Cauchy’s root test, to determine the convergence of a series with complex or alternating terms.
The procedure is as follows: to test the convergence of a series \sum a_k, we instead investigate the convergence of the associated series of non-negative real numbers, \sum |a_k|.
D’Alembert’s ratio test
We compute the limit of the ratio of the absolute values of successive terms:
L = \lim_{k \to \infty} \frac{|a_{k+1}|}{|a_k|}
If L < 1, the series \sum |a_k| converges, so \sum a_k converges absolutely; if L > 1, the series diverges; if L = 1, the test is inconclusive.
Cauchy’s root test
We compute the limit superior of the k^{th} root of the absolute value of the terms:
L = \limsup_{k \to \infty} \sqrt[k]{|a_k|}
The same conclusions apply: L < 1 gives absolute convergence, L > 1 gives divergence, and L = 1 is inconclusive.
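As a hedged numerical sketch (the series a_k = z^k / k^2 below is an example chosen here), we can estimate both limits at a fixed complex point and verify that they agree and fall below 1:

```python
import cmath

# Estimating the D'Alembert and Cauchy limits for a_k = z^k / k^2 at a fixed
# complex z. Both limits equal |z|; since |z| = 0.9 < 1, the series of absolute
# values converges, hence the series converges absolutely. (Example chosen here.)
z = 0.9 * cmath.exp(0.5j)          # |z| = 0.9

def a(k):
    return z ** k / k ** 2

for k in (10, 100, 1000):
    ratio = abs(a(k + 1)) / abs(a(k))   # D'Alembert ratio -> |z|
    root = abs(a(k)) ** (1.0 / k)       # Cauchy root -> |z|
    print(f"k = {k:5d}   ratio = {ratio:.4f}   root = {root:.4f}")
```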
In summary, the concept of absolute convergence allows us to analyze the convergence of a series \sum a_k by studying a related series \sum |a_k| composed of non-negative real terms.
If this simpler series converges, the convergence of the original series is guaranteed.
The converse, however, is not true. A series can converge without converging absolutely; such a series is termed conditionally convergent. The alternating harmonic series \sum_{k=1}^\infty \frac{(-1)^{k+1}}{k} is the classic example: it converges, while the series of its absolute values is the divergent harmonic series.
We now transition from numerical series to functional series, where the terms of the series are functions of a complex variable. Consider an infinite sequence of functions \{u_n(z)\}_{n=1}^\infty, where each function is defined on a common domain \mathcal{D} \subseteq \mathbb{C}. The expression:
\sum_{n=1}^\infty u_n(z)
is called a functional series. For any specific point z_0 \in \mathcal{D}, this expression becomes a numerical series, \sum_{n=1}^\infty u_n(z_0), which may either converge or diverge.
The functional series is said to be convergent in the domain \mathcal{D} if the corresponding numerical series converges for every point z \in \mathcal{D}. When this condition is met, we can define a sum function, f(z), whose value at each point is the sum of the series at that point. We write:
f(z) = \sum_{n=1}^\infty u_n(z)
Let S_n(z) = \sum_{k=1}^n u_k(z) be the n^{th} partial sum of the series. The convergence of the series to f(z) in \mathcal{D} means that for every z \in \mathcal{D} and for every \varepsilon > 0, it is possible to find an integer N such that for all n > N:
|f(z) - S_n(z)| < \varepsilon
In this general case, known as pointwise convergence, the choice of N may depend on both \varepsilon and the specific point z. We denote this dependency by writing N(\varepsilon, z).
A stronger and more useful mode of convergence is uniform convergence. This notion is important because it preserves certain analytic properties of the functions u_n(z), most notably continuity.
A pointwise convergent series of continuous functions can converge to a discontinuous function. However, if a series of continuous functions converges uniformly, its sum function is guaranteed to be continuous.
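The classic real-variable example illustrates this (added here for concreteness): the telescoping series whose partial sums are S_n(x) = x^n on [0, 1] has continuous partial sums but a discontinuous pointwise limit, so its convergence cannot be uniform there:

```python
# S_n(x) = x^n on [0, 1]: each S_n is continuous, but the pointwise limit is
# 0 for x < 1 and 1 at x = 1, hence discontinuous. (Classic example.)
for n in (10, 100, 1000, 10000):
    print(f"n = {n:6d}   S_n(0.999) = {0.999 ** n:.4f}   S_n(1.0) = {1.0 ** n:.1f}")
# S_n(0.999) only approaches 0 for n far beyond 1000, while S_n(1) stays at 1:
# no single N works for all x simultaneously, which is the failure of uniformity.
```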
A functional series is said to converge uniformly to f(z) in the domain \mathcal{D} if for every \varepsilon > 0, it is possible to find an integer N, depending only on \varepsilon, such that for all n > N the inequality |f(z) - S_n(z)| < \varepsilon holds for every point z \in \mathcal{D}. The independence of N from z is the defining characteristic of uniformity.
Let us define the remainder term of the series as:
r_n(z) = f(z) - S_n(z) = \sum_{k=n+1}^\infty u_k(z)
The condition for uniform convergence can then be expressed compactly as: for every \varepsilon > 0, there exists an N(\varepsilon) such that for all n > N(\varepsilon),
|r_n(z)| < \varepsilon
for all z \in \mathcal{D}.
This is equivalent to stating that the sequence of functions \{r_n(z)\} converges uniformly to zero, or \sup_{z \in \mathcal{D}} |r_n(z)| \to 0 as n \to \infty.
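A short numerical sketch (the geometric series below is an example chosen here) shows this supremum criterion at work: on a closed disk strictly inside the unit disk the supremum of the remainder decays, and it decays ever more slowly as the disk approaches the unit circle:

```python
import cmath

# For f(z) = sum_{k>=0} z^k = 1/(1 - z), the remainder is
# r_n(z) = z^(n+1) / (1 - z). On a closed disk |z| <= r with r < 1 the supremum
# of |r_n(z)| tends to 0, which is the condition sup_z |r_n(z)| -> 0.
def sup_remainder(n, r, samples=720):
    # the supremum over |z| <= r is attained on the boundary circle |z| = r
    sup = 0.0
    for t in range(samples):
        z = r * cmath.exp(2j * cmath.pi * t / samples)
        sup = max(sup, abs(z ** (n + 1) / (1 - z)))
    return sup

for n in (5, 10, 20, 40):
    print(f"n = {n:3d}   sup on |z|<=0.5: {sup_remainder(n, 0.5):.2e}   "
          f"sup on |z|<=0.95: {sup_remainder(n, 0.95):.2e}")
# The supremum decays rapidly on |z| <= 0.5 and much more slowly on |z| <= 0.95;
# on the full open disk |z| < 1 it does not decay at all, so the convergence
# there is pointwise but not uniform.
```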
A practical sufficient condition for establishing uniform convergence is provided by the Weierstrass M-test.
Theorem: let \{u_n(z)\} be a sequence of functions defined on a domain \mathcal{D}. If there exists a sequence of non-negative real numbers \{M_n\} such that |u_n(z)| \le M_n for all z \in \mathcal{D} and the numerical series \sum_{n=1}^\infty M_n converges, then the functional series \sum_{n=1}^\infty u_n(z) converges uniformly and absolutely in \mathcal{D}.
Proof: the hypothesis states that the series of real numbers \sum_{n=1}^\infty M_n converges. By the Cauchy criterion for numerical series, for any given \varepsilon > 0, there exists an integer N such that for all n > N and for all positive integers p:
\sum_{k=n+1}^{n+p} M_k < \varepsilon
Now, let us examine the partial sums of the functional series \sum u_k(z). For any n > N, any p \ge 1, and any z \in \mathcal{D}, we can bound the absolute value of a block of terms using the triangle inequality and the hypothesis:
\begin{aligned} |S_{n+p}(z) - S_n(z)| &= \left| \sum_{k=n+1}^{n+p} u_k(z) \right| \\ &\le \sum_{k=n+1}^{n+p} |u_k(z)| \\ &\le \sum_{k=n+1}^{n+p} M_k \end{aligned}
Combining our results, we find that for any n > N and for all p \ge 1:
|S_{n+p}(z) - S_n(z)| < \varepsilon
for all z \in \mathcal{D}.
The choice of N was determined by the convergent series \sum M_n and is therefore independent of z. The inequality demonstrates that the series \sum u_n(z) satisfies the Cauchy criterion for uniform convergence, which we will prove below is a necessary and sufficient condition. Therefore, the series converges uniformly in \mathcal{D}. The condition |u_n(z)| \le M_n and the convergence of \sum M_n also implies that \sum|u_n(z)| converges for all z \in \mathcal{D}, establishing absolute convergence.
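A minimal sketch of the M-test on a concrete example (chosen here, not discussed above) is the series \sum_{n\ge 1} z^n/n^2 on the closed disk |z| \le 1, with majorants M_n = 1/n^2:

```python
import math

# On |z| <= 1 we have |u_n(z)| = |z|^n / n^2 <= 1/n^2 = M_n, and sum M_n
# converges (to pi^2/6), so the functional series converges uniformly and
# absolutely on the disk. The tail of sum M_n bounds sup_z |r_n(z)|.
def tail(n, terms=200_000):
    # numerical approximation of sum_{k=n+1}^{infinity} 1/k^2
    return sum(1.0 / k ** 2 for k in range(n + 1, n + terms + 1))

print(f"sum M_n ~ {sum(1.0 / k ** 2 for k in range(1, 200_000)):.6f}  "
      f"(pi^2/6 = {math.pi ** 2 / 6:.6f})")
for n in (10, 100, 1000):
    print(f"n = {n:5d}   uniform bound on |r_n(z)|: {tail(n):.3e}")
```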
The following theorem provides a condition that is both necessary and sufficient for a functional series to converge uniformly.
Theorem: the functional series \sum_{n=1}^\infty u_n(z) converges uniformly in a domain \mathcal{D} if and only if for every \varepsilon > 0, there exists an integer N, depending only on \varepsilon, such that for all n > N, all positive integers p, and all z \in \mathcal{D}, the following inequality holds:
|S_{n+p}(z) - S_n(z)| = \left| \sum_{k=n+1}^{n+p} u_k(z) \right| < \varepsilon
Proof: it is presented in two parts, necessity and sufficiency.
First, we assume the series \sum u_n(z) converges uniformly to a function f(z) in \mathcal{D}.
By the definition of uniform convergence, for any given \varepsilon/2 > 0, there must exist an integer N(\varepsilon) such that for any integer k > N, the inequality |f(z) - S_k(z)| < \varepsilon/2 holds for all z \in \mathcal{D}.
Let n > N and p be any positive integer. It follows that n+p > n > N. We can apply the uniform convergence condition to both S_n(z) and S_{n+p}(z):
\begin{aligned} & |f(z) - S_n(z)| < \frac{\varepsilon}{2} \\ & |f(z) - S_{n+p}(z)| < \frac{\varepsilon}{2} \end{aligned}
Using the triangle inequality, we examine the difference between these partial sums:
\begin{aligned} |S_{n+p}(z) - S_n(z)| &= |(S_{n+p}(z) - f(z)) + (f(z) - S_n(z))| \\ &\le |S_{n+p}(z) - f(z)| + |f(z) - S_n(z)| \\ &< \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon \end{aligned}
Since N is independent of z, this inequality holds for all n > N, all p \ge 1, and all z \in \mathcal{D}. This establishes the necessity of the Cauchy condition.
Then, we assume the uniform Cauchy criterion holds. For any fixed point z_0 \in \mathcal{D}, the condition implies that the sequence of numerical partial sums \{S_n(z_0)\} is a Cauchy sequence. Since the space of complex numbers \mathbb{C} is complete, every Cauchy sequence converges.
This guarantees the pointwise convergence of the series for every z \in \mathcal{D} to some limit function, which we denote by f(z):
f(z) = \lim_{n \to \infty} S_n(z)
We now must show that this convergence is uniform. Our starting point is the hypothesis: for every \varepsilon > 0, there exists an N(\varepsilon) such that for all n > N and all p \ge 1:
|S_{n+p}(z) - S_n(z)| < \varepsilon
for all z \in \mathcal{D}.
Let us fix an integer n > N. In the inequality above, we can take the limit as p \to \infty. For a fixed z, the term S_{n+p}(z) approaches f(z). The term S_n(z) remains constant with respect to p. Since the absolute value function is continuous, we can pass the limit inside:
\lim_{p \to \infty} |S_{n+p}(z) - S_n(z)| = |\lim_{p \to \infty} S_{n+p}(z) - S_n(z)| = |f(z) - S_n(z)|
A strict inequality may become non-strict upon taking a limit. Applying the limit to our hypothesis yields:
|f(z) - S_n(z)| \le \varepsilon
This inequality holds for every n > N and for all z \in \mathcal{D}. The integer N was chosen based on \varepsilon alone. This demonstrates that for any \varepsilon > 0, we can find an N such that for all n>N, |f(z) - S_n(z)| \le \varepsilon for all z \in \mathcal{D}. This is the definition of uniform convergence, completing the proof.
Theorem: If \{u_n(z)\}_{n=1}^\infty is a sequence of functions that are continuous in a domain \mathcal{D}, and the series \sum_{n=1}^\infty u_n(z) converges uniformly to a function f(z) in \mathcal{D}, then the sum function f(z) is also continuous in \mathcal{D}.
Proof: we must show that f(z) is continuous in \mathcal{D}, that is, for any point z \in \mathcal{D} and for any \varepsilon > 0, there exists a \delta > 0 such that for any point z + \Delta z \in \mathcal{D} with |\Delta z| < \delta, the inequality |f(z + \Delta z) - f(z)| < \varepsilon is satisfied.
We begin by decomposing the expression |f(z + \Delta z) - f(z)| by introducing the N^{th} partial sum, S_N(z) = \sum_{k=1}^N u_k(z). Using the triangle inequality, we can write:
\begin{aligned} |f(z + \Delta z) - f(z)| &= |(f(z + \Delta z) - S_N(z + \Delta z)) + (S_N(z + \Delta z) - S_N(z)) + (S_N(z) - f(z))| \\ &\le |f(z + \Delta z) - S_N(z + \Delta z)| + |S_N(z + \Delta z) - S_N(z)| + |f(z) - S_N(z)| \end{aligned}
Our strategy is to show that each of the three terms on the right-hand side can be made smaller than \varepsilon/3.
From the uniform convergence of the series, for any given value, such as \varepsilon/3, we can find an integer N (which depends only on \varepsilon) such that for all n > N, the remainder term |f(w) - S_n(w)| < \varepsilon/3 for all w \in \mathcal{D}.
By choosing such an N, we ensure the first and third terms are bounded:
\begin{aligned} & |f(z + \Delta z) - S_N(z + \Delta z)| < \frac{\varepsilon}{3} \\ & |f(z) - S_N(z)| < \frac{\varepsilon}{3} \end{aligned}
Now we consider the middle term, |S_N(z + \Delta z) - S_N(z)|. The partial sum S_N(z) is a finite sum of functions u_k(z), each of which is continuous in \mathcal{D} by hypothesis.
A finite sum of continuous functions is itself continuous. Therefore, for our chosen N, the function S_N(z) is continuous at the point z.
By the definition of continuity for S_N(z), for the value \varepsilon/3, there exists a \delta > 0 such that for |\Delta z| < \delta:
|S_N(z + \Delta z) - S_N(z)| < \frac{\varepsilon}{3}
We have now determined the necessary N from uniform convergence and the corresponding \delta from the continuity of the finite partial sum. By substituting these bounds back into our main inequality, we find that for |\Delta z| < \delta:
|f(z + \Delta z) - f(z)| < \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3} = \varepsilon
Since for any \varepsilon > 0 we can find such a \delta > 0, the function f(z) is continuous at z. As z was an arbitrary point in \mathcal{D}, the function f(z) is continuous throughout the domain.
Another significant property of uniform convergence is that it permits the interchange of the summation and integration operators.
Theorem: Let \{u_n(z)\} be a sequence of continuous functions on a domain \mathcal{D}. If the series \sum_{n=1}^\infty u_n(z) converges uniformly to a function f(z) in \mathcal{D}, then for any piecewise smooth curve C contained entirely within \mathcal{D}, the integral of f(z) along C is the sum of the integrals of the individual terms:
\int_C f(z) \, \mathrm dz = \sum_{n=1}^\infty \int_C u_n(z) \, \mathrm dz
Proof: The goal is to demonstrate that the sequence of partial sums of the integrals converges to the integral of the sum function. We need to show that for any \varepsilon > 0, there exists an integer N such that for all n > N:
\left| \int_C f(z) \, \mathrm dz - \sum_{k=1}^n \int_C u_k(z) \, \mathrm dz \right| < \varepsilon
By the linearity of the integral operator, the finite sum of integrals is equal to the integral of the finite sum:
\sum_{k=1}^n \int_C u_k(z) \, \mathrm dz = \int_C \sum_{k=1}^n u_k(z) \, \mathrm dz = \int_C S_n(z) \, \mathrm dz
Substituting this into our expression, we analyze the absolute value of the difference:
\left| \int_C f(z) \, \mathrm dz - \int_C S_n(z) \, \mathrm dz \right| = \left| \int_C (f(z) - S_n(z)) \, \mathrm dz \right| = \left| \int_C r_n(z) \, \mathrm dz \right|
Here, r_n(z) = f(z) - S_n(z) is the remainder term. We can bound this integral using the standard estimation theorem for complex integrals (the ML-inequality). Let L be the length of the curve C. The theorem states:
\left| \int_C r_n(z) \, \mathrm dz \right| \le L \cdot \sup_{z \in C} |r_n(z)|
The series converges uniformly on \mathcal{D}, which implies it converges uniformly on the curve C \subset \mathcal{D}. By the definition of uniform convergence, for any positive real number, we can find a suitable N.
Let us choose the positive number \varepsilon/L (assuming L>0; the case L=0 is trivial). There exists an integer N, dependent on \varepsilon/L, such that for all n > N:
|r_n(z)| = |f(z) - S_n(z)| < \frac{\varepsilon}{L}
for all z \in C. This implies that:
\sup_{z \in C} |r_n(z)| \le \varepsilon/L
Combining these results, we find that for all n > N:
\left| \int_C f(z) \, \mathrm dz - \sum_{k=1}^n \int_C u_k(z) \, \mathrm dz \right| \le L \cdot \sup_{z \in C} |r_n(z)| \le L \cdot \frac{\varepsilon}{L} = \varepsilon
This confirms that the sequence of sums of integrals converges to the integral of the sum function, which completes the proof.
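A numerical check of this interchange (the geometric series and the segment below are an example chosen here) integrates f(z) = 1/(1-z) along a segment inside the unit disk and compares the result with the sum of the term-wise integrals:

```python
import cmath

# Term-by-term integration of f(z) = sum_{n>=0} z^n = 1/(1 - z) along the
# straight segment C from 0 to w, which lies in a closed disk inside the unit
# disk where the convergence is uniform.
w = 0.4 + 0.3j

# left-hand side: integral of f along C, from the antiderivative -log(1 - z)
lhs = -cmath.log(1 - w)

# right-hand side: sum of the term-wise integrals, int_C z^n dz = w^(n+1)/(n+1)
rhs = sum(w ** (n + 1) / (n + 1) for n in range(200))

print(lhs, rhs, abs(lhs - rhs))    # the difference is at machine-precision level
```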
The Weierstrass convergence theorem states that if a sequence of analytic functions converges uniformly on every compact subset of a domain, the limit function itself must be analytic within that domain.
This preservation of analyticity under uniform limits is a property not generally true for real-differentiable functions.
Under the same conditions, the sequence of derivatives of the functions also converges uniformly on compact subsets to the derivative of the limit function. This means one can interchange the operations of differentiation and taking the limit.
I demonstrate it here.
For the evaluation of parameter-dependent integrals (here) we considered only curves C of finite length. The Weierstrass convergence theorem allows us to generalize to the case of improper integrals: the function F(z) defined by the integral over the unbounded contour C:
F(z) = \int_C g(z, \zeta) \mathrm{d}\zeta = \lim_{n \to \infty} \int_{C_n} g(z, \zeta) \mathrm{d}\zeta
is analytic in the domain \mathcal{D}. Furthermore, its derivative can be obtained by differentiating under the integral sign:
F^\prime(z) = \int_C \frac{\partial g}{\partial z}(z, \zeta) \mathrm{d}\zeta
Indeed, if for any choice of a sequence of finite curves C_n with C_n \to C the sequence of functions:
F_n(z) = \int_{C_n} g(z, \zeta) \mathrm{d}\zeta
converges uniformly to F(z) in every closed subdomain \overline{\mathcal D}^\prime \subset \mathcal D, then, by the Weierstrass convergence theorem, the function F(z) is analytic in \mathcal D and, exchanging the derivative and the integral:
F^\prime(z) = \int_C \frac{\partial g}{\partial z}(z, \zeta) \mathrm{d}\zeta
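As a hedged numerical sketch of differentiation under the integral sign over an unbounded contour (the integrand below is an example chosen here, not the g of the discussion above), consider F(z) = \int_0^\infty e^{-z\zeta}\,\mathrm{d}\zeta = 1/z for \mathrm{Re}\, z > 0:

```python
import cmath

# For Re z > 0, F(z) = int_0^inf e^(-z t) dt = 1/z, and differentiating under
# the integral sign gives F'(z) = -int_0^inf t e^(-z t) dt = -1/z^2.
# A crude quadrature over a truncated contour reproduces both values.
z = 2.0 + 1.0j
dt, T = 1e-4, 40.0
ts = [k * dt for k in range(int(T / dt))]

F = sum(cmath.exp(-z * t) * dt for t in ts)
dF = sum(-t * cmath.exp(-z * t) * dt for t in ts)

print(F, 1 / z)            # both ~ (0.4 - 0.2j)
print(dF, -1 / z ** 2)     # both ~ (-0.12 + 0.16j)
```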
The general theory of functional series finds one of its most important applications in the study of power series.
These are functional series of a particular structured form. Given a point z_0 \in \mathbb{C}, called the center of the series, and a sequence of complex coefficients \{c_n\}_{n=0}^\infty, a power series is an expression of the form:
\sum_{n=0}^\infty c_n(z - z_0)^n
Each term of this series, u_n(z) = c_n(z - z_0)^n, is an entire function, meaning it is analytic throughout the entire complex plane.
As a result, the theorems established for functional series of analytic functions, such as the Weierstrass theorems on continuity and differentiability, are directly applicable wherever the series can be shown to converge uniformly.
One of the main tasks in the study of a power series is to determine its domain of convergence. This domain is dictated entirely by the behavior of the sequence of coefficients \{c_n\}. The nature of this dependence is illustrated by considering two examples. Let’s consider the series:
\sum_{n=0}^\infty n!(z - z_0)^n
To investigate its convergence, we apply the ratio test to the series of absolute values. Let a_n = n!(z-z_0)^n. The ratio of the moduli of consecutive terms is:
\frac{|a_{n+1}|}{|a_n|} = \frac{|(n+1)!(z - z_0)^{n+1}|}{|n!(z - z_0)^n|} = \frac{(n+1)!}{n!} \frac{|z - z_0|^{n+1}}{|z - z_0|^n} = (n+1)|z - z_0|
The limit as n \to \infty is:
L = \lim_{n \to \infty} (n+1)|z - z_0|
If z = z_0, the series becomes c_0 + 0 + 0 + \dots and converges trivially. If z \neq z_0, then |z-z_0| is a positive constant, and the limit L is infinite. Since L > 1, the series diverges for all z \neq z_0. The domain of convergence for this series is the single point \{z_0\}.
Conversely, let’s consider the series that defines the exponential function:
\sum_{n=0}^\infty \frac{(z - z_0)^n}{n!}
Applying the ratio test to this series gives:
\frac{|a_{n+1}|}{|a_n|} = \frac{|(z-z_0)^{n+1}/(n+1)!|}{|(z-z_0)^n/n!|} = \frac{n!}{(n+1)!} \frac{|z - z_0|^{n+1}}{|z - z_0|^n} = \frac{|z - z_0|}{n+1}
The limit of this ratio as n \to \infty is:
L = \lim_{n \to \infty} \frac{|z - z_0|}{n+1} = 0
Since L = 0 < 1 for every value of z \in \mathbb{C}, this series converges absolutely for all complex numbers. Its domain of convergence is the entire complex plane.
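A small numerical confirmation (the values of z_0 and z are chosen here) shows the partial sums approaching \exp(z - z_0) even far from the center:

```python
import cmath

# The partial sums of sum_{n>=0} (z - z0)^n / n! approach exp(z - z0) even when
# |z - z0| is large, consistent with convergence on the whole complex plane.
z0 = 1.0
z = 5.0 + 7.0j                       # |z - z0| ~ 8.06, far from the center
exact = cmath.exp(z - z0)

S, term = 0.0, 1.0
for n in range(80):
    S += term
    term *= (z - z0) / (n + 1)       # next term, (z - z0)^(n+1) / (n+1)!
print(abs(S - exact))                # ~ 0: the series converges at this z
```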
These two cases represent the extreme behaviors. To systematically establish the domain of convergence for any power series, we employ the following theorem.
The geometric structure of the convergence domain of a power series is not arbitrary. Abel’s theorem establishes that for any power series, there exists a unique extended real number R \in [0, \infty], known as the radius of convergence, that characterizes its convergence.
The series converges absolutely within an open disk of radius R centered at z_0, and diverges at all points outside the corresponding closed disk. The convergence behavior on the boundary circle itself, |z-z_0|=R, requires separate investigation. This theorem provides the framework for analyzing power series in the complex plane.
I demonstrate it here.
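A brief numerical illustration of the dichotomy (the series below is an example chosen here) uses \sum_{n=0}^\infty (z - z_0)^n / 2^n, whose radius of convergence is R = 2:

```python
# Inside the disk (|z - z0| = 1.5) the moduli of the terms shrink geometrically,
# consistent with absolute convergence; outside it (|z - z0| = 2.5) they grow
# without bound, so the terms do not tend to zero and the series diverges.
for radius in (1.5, 2.5):
    moduli = [(radius / 2.0) ** n for n in (10, 30, 60)]
    print(f"|z - z0| = {radius}:", ["%.3e" % m for m in moduli])
```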
Taylor’s theorem in complex analysis provides a fundamental link between the property of a function being analytic in a region and its representation as a power series.
It asserts that if a function is analytic within a disk, it can be expressed as a convergent power series centered within that disk.
This result establishes that the local behavior of any analytic function is that of a polynomial of potentially infinite degree.
I demonstrate it here.