Analytic Functions


A complex function f(z) is said to be analytic (or holomorphic) in an open set \mathcal{D} \subseteq \mathbb{C} if f(z) is single-valued and possesses a finite complex derivative f^\prime(z) at every point z \in \mathcal{D}. The derivative is defined by the limit:

f^\prime(z) = \lim_{\Delta z \to 0} \frac{f(z + \Delta z) - f(z)}{\Delta z}

The limit must exist and be unique, regardless of the path along which \Delta z approaches zero.

A theorem in complex analysis establishes that a function is analytic in an open set \mathcal{D} if and only if it is infinitely differentiable in \mathcal{D} and, for every z_0 \in \mathcal{D}, f(z) can be represented by its Taylor series expansion around z_0:

f(z) = \sum_{n=0}^{\infty} a_n (z - z_0)^n = \sum_{n=0}^{\infty} \frac{f^{(n)}(z_0)}{n!} (z - z_0)^n

This series converges to f(z) for all z in some open disk (a neighborhood) centered at z_0 and contained within \mathcal{D}. The coefficients a_n = \frac{f^{(n)}(z_0)}{n!} are complex numbers, where f^{(n)}(z_0) is the n^{th} complex derivative of f at z_0.

Because of this equivalence, the terms “analytic” and “holomorphic” are used interchangeably. The set of all analytic functions on a given open set D is often denoted by \mathcal{C}^{\omega}(D) or H(D).

If a function is analytic on the entire complex plane \mathbb{C}, it is called an entire function.

(Figure: a function domain \mathcal{D} in the complex plane.)

Some examples of analytic functions are the following.

Polynomials, functions of the form:

P(z) = a_m z^m + \dots + a_1 z + a_0

where a_k \in \mathbb{C}, are entire functions. For example, f(z) = z^n for n \in \mathbb{N}_0.

Power series with infinite radius of convergence: for instance, the complex exponential f(z) = e^z, and the complex trigonometric functions f(z) = \sin(z) and f(z) = \cos(z), are entire functions.

Principal branch of the complex logarithm: f(z) = \operatorname{Log}(z) is analytic on its principal branch, typically defined on the domain \mathbb{C} \setminus (-\infty, 0] (the complex plane excluding the non-positive real axis, which serves as the branch cut).

Rational functions, functions of the form:

R(z) = \frac{P(z)}{Q(z)}

where P(z) and Q(z) are polynomials, are analytic in any domain where Q(z) \neq 0.

Some examples of functions that are not analytic (or not everywhere analytic) are the following.

Functions that are not complex differentiable: for example, the complex conjugate function f(z) = \bar{z}, which is nowhere analytic. Similarly, f(z) = \Re(z), f(z) = \Im(z), and f(z) = |z| are nowhere analytic.

Functions with singularities: For example, f(z) = 1/z is analytic on \mathbb{C} \setminus \{0\} but is not analytic at the singularity z=0. Therefore, it is not an entire function.

Functions with discontinuities: since a function must be continuous in order to be differentiable, a function that is discontinuous at a point cannot be analytic at that point.

Multivalued functions: Functions like f(z) = \sqrt{z} or z^a (for non-integer a), when considered as mappings from a single z to multiple w values, are not single-valued and therefore not analytic in this sense. However, specific branches of these multivalued functions can be defined (by introducing branch cuts) which are single-valued and analytic on their respective domains.

Let’s consider for example an analytic function:

f(z) = z^2

The derivative is:

\begin{aligned} f^\prime(z) & = \lim_{\Delta z \rightarrow 0} \frac{f (z + \Delta z) - f (z)}{\Delta z} = \lim_{\Delta z \rightarrow 0} \frac{(z + \Delta z)^2 - z^2}{\Delta z} \\ & = \lim_{\Delta z \rightarrow 0} \frac{z^2 + 2 z \Delta z + (\Delta z)^2 - z^2}{\Delta z} = \lim_{\Delta z \rightarrow 0} \frac{2 z \Delta z + (\Delta z)^2 }{\Delta z} \\ & = \lim_{\Delta z \rightarrow 0} (2z + \Delta z) = 2z \end{aligned}

The limit exists and is uniquely 2z, regardless of how \Delta z \to 0. It is possible to illustrate path independence explicitly.

Let z = x+iy and \Delta z = \Delta x + i\Delta y.

\lim_{\Delta z \rightarrow 0} (2(x+iy) + (\Delta x + i \Delta y)) = 2(x+iy)

If we approach along the real axis (\Delta y = 0, \Delta z = \Delta x \to 0):

\lim_{\Delta x \rightarrow 0} (2z + \Delta x) = 2z

If we approach along the imaginary axis (\Delta x = 0, \Delta z = i\Delta y \to 0):

\lim_{\Delta y \rightarrow 0} (2z + i\Delta y) = 2z

The result is 2z in all cases, confirming f(z)=z^2 is analytic (it is, in fact, an entire function).
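The path independence can also be checked numerically. The short sketch below (the helper name `diff_quotient` is ours, purely illustrative) evaluates the difference quotient of f(z) = z^2 along several directions of approach:

```python
import cmath

def diff_quotient(f, z, dz):
    """Difference quotient (f(z + dz) - f(z)) / dz."""
    return (f(z + dz) - f(z)) / dz

f = lambda z: z**2
z0 = 1 + 2j
h = 1e-6
# approach along the real axis, the imaginary axis, and a diagonal direction
for direction in (1, 1j, cmath.exp(1j * cmath.pi / 4)):
    print(diff_quotient(f, z0, h * direction))  # each value is close to 2*z0 = 2+4j
```

All three quotients agree to within about |Δz|, consistent with f′(z) = 2z.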

Let’s consider for example a function which is not analytic:

f(z) = \bar{z} = x - iy

The derivative is:

\begin{aligned} f^\prime(z) & = \lim_{\Delta z \rightarrow 0} \frac{f (z + \Delta z) - f (z)}{\Delta z} = \lim_{\Delta z \rightarrow 0} \frac{\overline{z + \Delta z} - \bar{z}}{\Delta z} = \lim_{\Delta z \rightarrow 0} \frac{\bar{z} + \overline{\Delta z} - \bar{z}}{\Delta z} \\ & = \lim_{\Delta z \rightarrow 0} \frac{\overline{\Delta z}}{\Delta z} \end{aligned}

Let \Delta z = \Delta x + i\Delta y. Then \overline{\Delta z} = \Delta x - i\Delta y.

f^\prime(z) = \lim_{\substack{\Delta x \to 0 \\ \Delta y \to 0}} \frac{\Delta x - i \Delta y}{\Delta x + i \Delta y}

This limit depends on the path of approach:

Let \Delta z \to 0 along the real axis (so \Delta y = 0, \Delta z = \Delta x):

\lim_{\Delta x \rightarrow 0} \frac{\Delta x - i(0)}{\Delta x + i(0)} = \lim_{\Delta x \rightarrow 0} \frac{\Delta x}{\Delta x} = 1

Let \Delta z \to 0 along the imaginary axis (so \Delta x = 0, \Delta z = i\Delta y):

\lim_{\Delta y \rightarrow 0} \frac{0 - i \Delta y}{0 + i \Delta y} = \lim_{\Delta y \rightarrow 0} \frac{-i \Delta y}{i \Delta y} = -1

Since the limit yields different values depending on the path of approach (1 \neq -1), the derivative f^\prime(z) does not exist for any z, and therefore f(z) = \bar{z} is nowhere analytic.
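The path dependence is easy to reproduce numerically; the tiny sketch below (helper name ours) evaluates the quotient \overline{\Delta z}/\Delta z for a small \Delta z along each axis:

```python
# The quotient conj(dz)/dz for a shrinking dz along two different directions.
def conj_quotient(dz):
    return dz.conjugate() / dz

along_real = conj_quotient(1e-8 + 0j)  # approach along the real axis
along_imag = conj_quotient(1e-8j)      # approach along the imaginary axis
print(along_real, along_imag)          # 1 vs -1: the limit is path dependent
```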

Cauchy-Riemann conditions

The Cauchy-Riemann conditions provide a test to check whether a complex function is analytic by relating its real and imaginary parts.

Cartesian coordinates

Let a complex function f(z) be defined as f(z) = u(x,y) + i v(x,y), where z = x + iy, and u(x,y) = \Re(f(z)) and v(x,y) = \Im(f(z)) are real-valued functions of two real variables x and y.

For f(z) to be analytic at a point z, its derivative f^\prime(z) must exist and be unique:

f^\prime(z)= \lim_{\Delta z \to 0} \frac{f(z + \Delta z) - f(z)}{\Delta z}

The existence of this limit implies that it must be the same regardless of the path along which \Delta z = \Delta x + i \Delta y approaches zero.

If we assume the partial derivatives of u and v exist, the increment \Delta f = f(z+\Delta z) - f(z) can be written as:

\Delta f = (u(x+\Delta x, y+\Delta y) - u(x,y)) + i(v(x+\Delta x, y+\Delta y) - v(x,y))

Assuming u and v are differentiable as functions of two real variables, we have:

\begin{aligned} u(x+\Delta x, y+\Delta y) - u(x,y) & = \frac{\partial u}{\partial x}\Delta x + \frac{\partial u}{\partial y}\Delta y + \epsilon_1 |\Delta z| \\ v(x+\Delta x, y+\Delta y) - v(x,y) & = \frac{\partial v}{\partial x}\Delta x + \frac{\partial v}{\partial y}\Delta y + \epsilon_2 |\Delta z| \end{aligned}

where \epsilon_1, \epsilon_2 \to 0 as \Delta z \to 0. Then,

\frac{\Delta f}{\Delta z} = \frac{(\frac{\partial u}{\partial x}\Delta x + \frac{\partial u}{\partial y}\Delta y) + i(\frac{\partial v}{\partial x}\Delta x + \frac{\partial v}{\partial y}\Delta y)}{\Delta x + i\Delta y} + \frac{(\epsilon_1 + i\epsilon_2)|\Delta z|}{\Delta z}

The term involving \epsilon_1, \epsilon_2 tends to zero as \Delta z \to 0. For the limit \frac{\mathrm df(z)}{\mathrm dz} to exist, the main fraction must approach a unique value.

Taking path 1 and approaching along the real axis (\Delta y = 0, so \Delta z = \Delta x \to 0):

f^\prime(z)= \lim_{\Delta x \to 0} \frac{(\frac{\partial u}{\partial x}\Delta x) + i(\frac{\partial v}{\partial x}\Delta x)}{\Delta x} = \frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x}

Taking path 2 and approaching along the imaginary axis (\Delta x = 0, so \Delta z = i\Delta y \to 0):

f^\prime(z)= \lim_{\Delta y \to 0} \frac{(\frac{\partial u}{\partial y}\Delta y) + i(\frac{\partial v}{\partial y}\Delta y)}{i\Delta y} = \frac{1}{i}\left(\frac{\partial u}{\partial y} + i \frac{\partial v}{\partial y}\right) = \frac{\partial v}{\partial y} - i \frac{\partial u}{\partial y}

For \frac{\mathrm df(z)}{\mathrm dz} to be uniquely defined, these two expressions must be equal. Equating the real and imaginary parts:

\begin{aligned} & \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \\ & \frac{\partial u}{\partial y} = - \frac{\partial v}{\partial x} \end{aligned}

These are the Cauchy-Riemann conditions (or equations). They are necessary conditions for a function f(z) to be analytic.

If the first-order partial derivatives of u and v are continuous and satisfy the Cauchy-Riemann conditions at a point, then this is also a sufficient condition for f(z) to be differentiable (and therefore analytic) at that point.
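The conditions lend themselves to a quick numerical check via central finite differences; the sketch below (the helper `cr_residuals` is ours, not from the text) reports the residuals u_x - v_y and u_y + v_x, which should both vanish where f is analytic:

```python
# Illustrative finite-difference check of the Cauchy-Riemann conditions.
def cr_residuals(f, x, y, h=1e-6):
    """Central-difference approximations of u_x - v_y and u_y + v_x at z = x + iy;
    both residuals should be ~0 where f is analytic."""
    u = lambda a, b: f(complex(a, b)).real
    v = lambda a, b: f(complex(a, b)).imag
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    vx = (v(x + h, y) - v(x - h, y)) / (2 * h)
    vy = (v(x, y + h) - v(x, y - h)) / (2 * h)
    return ux - vy, uy + vx

print(cr_residuals(lambda z: z**2, 1.0, 2.0))           # ~ (0, 0): analytic
print(cr_residuals(lambda z: z.conjugate(), 1.0, 2.0))  # ~ (2, 0): not analytic
```

For \bar{z}, with u = x and v = -y, the first residual is u_x - v_y = 1 - (-1) = 2, matching the numerical output.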

For example, considering again f(z) = z^2:

\begin{aligned} & z^2 = (x+iy)^2 = (x^2-y^2) +i(2xy) \\ & u = x^2-y^2 \\ & v = 2xy \end{aligned}

Computing the partial derivatives:

\begin{aligned} & \frac{\partial u}{\partial x} = 2x \\ & \frac{\partial u}{\partial y} = -2y \\ & \frac{\partial v}{\partial x} = 2y \\ & \frac{\partial v}{\partial y} = 2x \end{aligned}

Checking the Cauchy-Riemann conditions. The first one:

\begin{aligned} & \frac{\partial u}{\partial x} = 2x \\ & \frac{\partial v}{\partial y} = 2x \end{aligned}

So:

\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} = 2x

The second one:

\begin{aligned} & \frac{\partial u}{\partial y} = -2y\\ & \frac{\partial v}{\partial x} = 2y \end{aligned}

So:

\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x} = -2y

Both conditions are satisfied for all (x,y). Since the partial derivatives are continuous everywhere, f(z)=z^2 is analytic everywhere (it is an entire function).

The derivative calculated via path 1 is:

\frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x} = 2x + i(2y) = 2(x+iy) = 2z

The derivative calculated via path 2:

\frac{\partial v}{\partial y} - i \frac{\partial u}{\partial y} = 2x - i(-2y) = 2x + i(2y) = 2(x+iy) = 2z

Both yield the same result:

\frac{\mathrm df(z)}{\mathrm dz}=2z

Polar coordinates

When working with complex functions, it is often convenient to use polar coordinates z = re^{i\theta}, where x = r\cos\theta and y = r\sin\theta. If f(z) = u(r, \theta) + i v(r, \theta), the Cauchy-Riemann conditions in polar form are:

\begin{aligned} & \frac{\partial u}{\partial r} = \frac{1}{r} \frac{\partial v}{\partial \theta} \\ & \frac{\partial v}{\partial r} = - \frac{1}{r} \frac{\partial u}{\partial \theta} \end{aligned}

These can be denoted as u_r = \frac{1}{r}v_\theta and v_r = -\frac{1}{r}u_\theta. These equations hold for r \neq 0. The partial derivatives u_r, u_\theta, v_r, v_\theta can be related to u_x, u_y, v_x, v_y using the chain rule:

\begin{aligned} & u_r = u_x x_r + u_y y_r = u_x \cos\theta + u_y \sin\theta \\ & v_\theta = v_x x_\theta + v_y y_\theta = v_x (-r\sin\theta) + v_y (r\cos\theta) \end{aligned}

Using the Cartesian version of the Cauchy-Riemann conditions (u_x = v_y, u_y = -v_x):

\begin{aligned} & u_r = v_y \cos\theta - v_x \sin\theta \\ & \frac{1}{r}v_\theta = -v_x \sin\theta + v_y \cos\theta \\ & u_r = \frac{1}{r}v_\theta \end{aligned}

Similarly,

\begin{aligned} & v_r = v_x x_r + v_y y_r = v_x \cos\theta + v_y \sin\theta \\ & u_\theta = u_x x_\theta + u_y y_\theta = u_x (-r\sin\theta) + u_y (r\cos\theta) \end{aligned}

Using again the Cartesian version of the Cauchy-Riemann conditions:

\begin{aligned} & v_r = -u_y \cos\theta + u_x \sin\theta\\ & -\frac{1}{r}u_\theta = - (u_x (-\sin\theta) + u_y \cos\theta) = u_x \sin\theta - u_y \cos\theta \\ & v_r = -\frac{1}{r}u_\theta \end{aligned}

The derivative \frac{\mathrm df(z)}{\mathrm dz} can be expressed in two ways following different paths.

Taking path 1 and approaching along the radial direction (\Delta \theta = 0, \Delta r \to 0), \Delta z = (r+\Delta r)e^{i\theta} - re^{i\theta} = \Delta r e^{i\theta}:

f^\prime(z)= \lim_{\Delta r \to 0} \frac{f( (r+\Delta r)e^{i\theta} ) - f(re^{i\theta})}{\Delta r e^{i\theta}} = e^{-i\theta} \left( \frac{\partial u}{\partial r} + i \frac{\partial v}{\partial r} \right)

Taking path 2 and approaching along the angular direction (\Delta r = 0, \Delta \theta \to 0), \Delta z = re^{i(\theta+\Delta\theta)} - re^{i\theta} = re^{i\theta}(e^{i\Delta\theta} - 1). For small \Delta\theta, e^{i\Delta\theta} - 1 \approx i\Delta\theta, so \Delta z \approx i r e^{i\theta} \Delta\theta:

f^\prime(z)= \lim_{\Delta \theta \to 0} \frac{f( re^{i(\theta+\Delta\theta)} ) - f(re^{i\theta})}{i r e^{i\theta} \Delta\theta} = \frac{1}{ire^{i\theta}} \left( \frac{\partial u}{\partial \theta} + i \frac{\partial v}{\partial \theta} \right) = e^{-i\theta} \left( \frac{1}{r}\frac{\partial v}{\partial \theta} - \frac{i}{r}\frac{\partial u}{\partial \theta} \right)

Equating the two expressions for \frac{\mathrm df(z)}{\mathrm dz}:

e^{-i\theta} \left( \frac{\partial u}{\partial r} + i \frac{\partial v}{\partial r} \right) = e^{-i\theta} \left( \frac{1}{r}\frac{\partial v}{\partial \theta} - \frac{i}{r}\frac{\partial u}{\partial \theta} \right)

Comparing the real and imaginary parts inside the parentheses gives the polar Cauchy-Riemann conditions.

For example, let’s consider the principal branch of the complex logarithm:

\operatorname{Log } z = \ln r + i\theta

where z = re^{i\theta} and \theta = \operatorname{Arg}(z) \in (-\pi, \pi]. Here, u(r,\theta) = \ln r and v(r,\theta) = \theta.

The partial derivatives are:

\begin{aligned} & \frac{\partial u}{\partial r} = \frac{1}{r} \\ & \frac{\partial u}{\partial \theta} = 0 \\ & \frac{\partial v}{\partial r} = 0 \\ & \frac{\partial v}{\partial \theta} = 1 \end{aligned}

Checking the Cauchy-Riemann conditions for r \neq 0. The first one:

\begin{aligned} & \frac{\partial u}{\partial r} = \frac{1}{r}\\ & \frac{1}{r}\frac{\partial v}{\partial \theta} = \frac{1}{r}(1) = \frac{1}{r} \end{aligned}

So,

\frac{\partial u}{\partial r} = \frac{1}{r}\frac{\partial v}{\partial \theta}

The second one:

\begin{aligned} & \frac{\partial v}{\partial r} = 0\\ & -\frac{1}{r}\frac{\partial u}{\partial \theta} = -\frac{1}{r}(0) = 0 \end{aligned}

So,

\frac{\partial v}{\partial r} = -\frac{1}{r}\frac{\partial u}{\partial \theta}

Both conditions are satisfied. Since the partial derivatives are continuous and satisfy the polar Cauchy-Riemann conditions for r > 0 and \theta \in (-\pi, \pi), the principal branch of \operatorname{Log} z is analytic in the domain \mathbb{C} \setminus (-\infty, 0] (the complex plane excluding the non-positive real axis, which is the branch cut).
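The polar conditions can be cross-checked numerically for the principal logarithm; in the sketch below (helper name ours) both residuals u_r - \frac{1}{r}v_\theta and v_r + \frac{1}{r}u_\theta come out near zero:

```python
import cmath

# Finite-difference check of the polar Cauchy-Riemann conditions for Log z,
# where u(r, theta) = ln r and v(r, theta) = theta (illustrative sketch).
def polar_cr_residuals(f, r, theta, h=1e-6):
    u = lambda a, b: f(cmath.rect(a, b)).real
    v = lambda a, b: f(cmath.rect(a, b)).imag
    ur = (u(r + h, theta) - u(r - h, theta)) / (2 * h)
    ut = (u(r, theta + h) - u(r, theta - h)) / (2 * h)
    vr = (v(r + h, theta) - v(r - h, theta)) / (2 * h)
    vt = (v(r, theta + h) - v(r, theta - h)) / (2 * h)
    return ur - vt / r, vr + ut / r  # both ~ 0 if the polar CR conditions hold

print(polar_cr_residuals(cmath.log, 2.0, 0.5))
```

Here `cmath.rect(r, theta)` builds re^{i\theta}, and `cmath.log` is the principal branch, so the test point must stay away from the branch cut.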

Properties of analytic functions

Analytic functions possess several properties:

Infinite differentiability: if a function is analytic in an open set, it is infinitely differentiable in that set.

Power Series Representation: Every analytic function can be represented locally by a convergent power series (its Taylor series).

Closure Properties:

  • The sum, difference, and product of analytic functions are analytic.
  • The quotient of two analytic functions f(z)/g(z) is analytic in any domain where g(z) \neq 0.
  • The composition of analytic functions is analytic.

Cauchy’s Integral Theorem: If f(z) is analytic in a simply connected domain \mathcal{D}, then for any simple closed contour (loop) \mathcal{C} entirely within \mathcal{D}, the contour integral of f(z) around \mathcal{C} is zero:

\oint_{\mathcal{C}} f(z)\mathrm{d}z = 0

Cauchy’s Integral Formula: This formula expresses the value of an analytic function at any point inside a contour in terms of its values on the contour:

f(a) = \frac{1}{2\pi i}\oint_C \frac{f(z)}{z-a}\, \mathrm{d}z
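Both results can be verified numerically with a simple quadrature over a circle; the sketch below (the helper `contour_integral` and the choice f(z) = e^z are ours) shows the integral of an entire function vanishing and the integral formula recovering f(a) at an interior point:

```python
import cmath

# Riemann-sum quadrature of a contour integral over the circle
# z = center + radius*exp(i*t), t in [0, 2*pi) (illustrative sketch).
def contour_integral(g, center=0j, radius=1.0, n=20000):
    total = 0j
    for k in range(n):
        t = 2 * cmath.pi * k / n
        z = center + radius * cmath.exp(1j * t)
        dz = 1j * radius * cmath.exp(1j * t) * (2 * cmath.pi / n)
        total += g(z) * dz
    return total

f = cmath.exp
a = 0.3 + 0.2j  # a point inside the unit circle
theorem = contour_integral(f)                                    # ~ 0 (Cauchy's theorem)
formula = contour_integral(lambda z: f(z) / (z - a)) / (2j * cmath.pi)
print(abs(theorem), formula, f(a))                               # formula ~ f(a)
```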

Estimation inequality (ML-bound)

The estimation inequality, frequently referred to as the ML-bound or ML-inequality, provides a method for establishing an upper bound on the modulus of a complex contour integral.

Let C be a rectifiable curve (a contour with a well-defined finite length) in the complex plane, and let f(z) be a complex-valued function that is continuous on C. The fundamental inequality, which forms the basis of the ML-bound, relates the modulus of the integral to the integral of the modulus:

\left| \int_C f(z) \mathrm{d}z \right| \le \int_C |f(z)| |\mathrm{d}z| = \int_C |f(z)| \mathrm{d}s

Here, \mathrm{d}s = |\mathrm{d}z| represents the element of arc length along the contour C.

This inequality can be proven by considering the definition of the contour integral as the limit of a Riemann sum. Let the contour C be partitioned by points z_0, z_1, \dots, z_n, and let z_k^* be a sample point on the arc between z_{k-1} and z_k. The integral is defined as:

\int_C f(z) \mathrm{d}z = \lim_{n \to \infty} \sum_{k=1}^n f(z_k^*) (z_k - z_{k-1})

By applying the triangle inequality to the finite sum, we obtain:

\left| \sum_{k=1}^n f(z_k^*) (z_k - z_{k-1}) \right| \le \sum_{k=1}^n |f(z_k^*)| |z_k - z_{k-1}|

Taking the limit as the mesh of the partition goes to zero, the left-hand side becomes the modulus of the integral. On the right-hand side, |z_k - z_{k-1}| becomes the differential arc length element \mathrm{d}s = |\mathrm{d}z|, and the sum converges to the line integral of the scalar function |f(z)| with respect to arc length:

\left| \int_C f(z) \mathrm{d}z \right| \le \int_C |f(z)| \mathrm{d}s

Now, suppose we can establish an upper bound for the modulus of the function along the contour. If there exists a non-negative real constant M such that |f(z)| \le M for all z \in C, we can further bound the integral. Let L be the total arc length of the contour C, defined as L = \int_C \mathrm{d}s. Then:

\left| \int_C f(z) \mathrm{d}z \right| \le \int_C |f(z)| \mathrm{d}s \le \int_C M \mathrm{d}s = M \int_C \mathrm{d}s = M L

This yields the ML-bound formula:

\left| \int_C f(z) \mathrm{d}z \right| \le M L

This inequality asserts that the magnitude of the integral is no larger than the product of the maximum modulus of the integrand on the contour and the length of the contour.
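As a concrete numerical illustration (our own example, not from the text): for f(z) = 1/(z^2 + 4) on the unit circle, the poles \pm 2i lie outside, so the integral is essentially zero, comfortably below the ML-bound M L = \frac{1}{3} \cdot 2\pi:

```python
import cmath

# Verify |contour integral| <= M*L for f(z) = 1/(z**2 + 4) on |z| = 1,
# where M = max |f| on the contour and L = 2*pi (illustrative sketch).
n = 4000
pts = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
f = lambda z: 1 / (z**2 + 4)
integral = sum(f(z) * (1j * z * 2 * cmath.pi / n) for z in pts)
M = max(abs(f(z)) for z in pts)   # ~ 1/3, attained where z**2 = -1
L = 2 * cmath.pi                  # arc length of the unit circle
print(abs(integral), M * L)       # |integral| <= M*L
```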

One application of the ML-bound is to justify the vanishing of certain parts of a contour integral in a limiting process. This is a standard technique when using the Residue Theorem to compute real improper integrals.

Consider the evaluation of the integral:

\int_{-\infty}^{\infty} \frac{\cos(x)}{x^2 + a^2} \mathrm{d}x

for a > 0. We can evaluate the complex integral:

\oint_C \frac{e^{iz}}{z^2 + a^2} \mathrm{d}z

and then take the real part. The contour C is typically chosen to be a “D-shape” in the upper half-plane, consisting of the real line segment from -R to R and a large semicircular arc \Gamma_R of radius R centered at the origin.

The integral over the closed contour C is given by the Residue Theorem:

\oint_C \frac{e^{iz}}{z^2 + a^2} \mathrm{d}z = \int_{-R}^{R} \frac{e^{ix}}{x^2 + a^2} \mathrm{d}x + \int_{\Gamma_R} \frac{e^{iz}}{z^2 + a^2} \mathrm{d}z = 2\pi i \sum \text{Res}

The only pole inside the contour for sufficiently large R is at z = ia. The residue is:

\text{Res}_{z=ia} \left(\frac{e^{iz}}{(z-ia)(z+ia)}\right) = \frac{e^{i(ia)}}{2ia} = \frac{e^{-a}}{2ia}

So:

\oint_C f(z) \mathrm{d}z = 2\pi i \left(\frac{e^{-a}}{2ia}\right) = \frac{\pi e^{-a}}{a}

We can show that the integral over the semicircular arc \Gamma_R vanishes as R \to \infty. Here, the ML-bound is the appropriate tool.

The length of the contour (L) is the arc length of the semicircle \Gamma_R:

L = \pi R

We need to find an upper bound M for:

|f(z)| = \left|\frac{e^{iz}}{z^2 + a^2}\right|

on \Gamma_R.

On this arc:

z = R e^{i\theta} = R(\cos\theta + i\sin\theta)

for \theta \in [0, \pi].

The modulus of the numerator is:

|e^{iz}| = |e^{iR(\cos\theta + i\sin\theta)}| = |e^{-R\sin\theta} e^{iR\cos\theta}| = e^{-R\sin\theta}

Since \sin\theta \ge 0 for \theta \in [0, \pi], we have e^{-R\sin\theta} \le 1.

For the denominator, we use the reverse triangle inequality:

|z^2 + a^2| \ge ||z|^2 - |a^2|| = |R^2 - a^2|

For R > a, this is R^2 - a^2.

Combining these, we get a bound for |f(z)| on \Gamma_R:

|f(z)| \le \frac{1}{R^2 - a^2} \equiv M

We can then apply the ML-bound:

\left| \int_{\Gamma_R} \frac{e^{iz}}{z^2 + a^2} \mathrm{d}z \right| \le M L = \frac{1}{R^2 - a^2} \cdot \pi R = \frac{\pi R}{R^2 - a^2}

As R \to \infty, the bound goes to zero:

\lim_{R \to \infty} \frac{\pi R}{R^2 - a^2} = 0

This demonstrates that the contribution from the arc integral vanishes. Therefore, in the limit, we have:

\lim_{R \to \infty} \int_{-R}^{R} \frac{e^{ix}}{x^2 + a^2} \mathrm{d}x = \int_{-\infty}^{\infty} \frac{e^{ix}}{x^2 + a^2} \mathrm{d}x = \frac{\pi e^{-a}}{a}

Taking the real part of both sides gives the final result:

\int_{-\infty}^{\infty} \frac{\cos(x)}{x^2 + a^2} \mathrm{d}x = \frac{\pi e^{-a}}{a}
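The closed-form answer can be sanity-checked against direct numerical integration; the sketch below (midpoint rule on a finite interval, with a = 1, both choices ours) agrees with \pi e^{-a}/a to a few decimal places, the residual discrepancy coming from the truncated tails and the quadrature step:

```python
import math

# Midpoint-rule approximation of the real integral for a = 1 on [-R, R],
# compared with the residue-theorem result pi*exp(-a)/a (illustrative sketch).
a = 1.0
R, n = 200.0, 400_000
h = 2 * R / n
approx = sum(math.cos(-R + (k + 0.5) * h) / ((-R + (k + 0.5) * h)**2 + a**2) * h
             for k in range(n))
exact = math.pi * math.exp(-a) / a
print(approx, exact)
```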

For integrands of this form, a sharper version of this arc estimate is known as Jordan’s Lemma, which is itself proven using the estimation inequality.

Average value property

Let f(z) be a function analytic on a connected open set \mathcal{D} \subset \mathbb{C}. Let z_0 be a point in \mathcal{D} and let \overline{B(z_0, R_0)} be a closed disk centered at z_0 with radius R_0 > 0 that is entirely contained within \mathcal{D}.

Cauchy’s Integral Formula states that the value of the function at the center of the disk can be recovered by an integral over its boundary circle, C = \partial B(z_0, R_0):

f(z_0) = \frac{1}{2\pi i}\oint_C \frac{f(z)}{z-z_0}\, \mathrm{d}z

To reveal the average value property, we parameterize the circular path C. Let a point z on the circle be represented by z = z_0 + R_0 e^{i\theta}, where \theta ranges from 0 to 2\pi. The differential element \mathrm{d}z is then given by \mathrm{d}z = i R_0 e^{i\theta} \mathrm{d}\theta.

Substituting this parameterization into the integral formula yields a profound simplification.

The term z - z_0 in the denominator becomes R_0 e^{i\theta}. The expression transforms as follows:

\begin{aligned} f(z_0) & = \frac{1}{2\pi i}\int_0^{2\pi} \frac{f(z_0 + R_0 e^{i\theta})}{R_0 e^{i\theta}}\, (i R_0 e^{i\theta} \mathrm{d}\theta) \\ & = \frac{1}{2\pi}\int_0^{2\pi} f(z_0 + R_0 e^{i\theta}) \mathrm{d}\theta \end{aligned}

This result, known as Gauss’s mean value theorem, asserts that the value of an analytic function at a point z_0 is the arithmetic mean of its values on any circle centered at z_0 contained within its domain of analyticity.

This property can also be expressed in terms of the arc length element \mathrm{d}s. Since z = z_0 + R_0 e^{i\theta}, we have |\mathrm{d}z| = |i R_0 e^{i\theta}| \mathrm{d}\theta = R_0 \mathrm{d}\theta. The arc length is s = R_0 \theta, so \mathrm{d}s = R_0 \mathrm{d}\theta.

The total arc length of the circle is 2\pi R_0. The formula can then be written as:

f(z_0) = \frac{1}{2\pi R_0}\oint_C f(z) \mathrm{d}s

This form highlights that f(z_0) is the integral average of f(z) over the circumference of the circle.
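Gauss’s mean value theorem is easy to confirm numerically; the sketch below (the choices f(z) = e^z, z_0, and R_0 are ours) averages f over equally spaced points on a circle and recovers f(z_0):

```python
import cmath

# Numerical check of Gauss's mean value theorem for f(z) = exp(z):
# the average of f over a circle of radius R0 about z0 equals f(z0).
z0, R0, n = 0.5 + 0.5j, 2.0, 10000
avg = sum(cmath.exp(z0 + R0 * cmath.exp(2j * cmath.pi * k / n))
          for k in range(n)) / n
print(avg, cmath.exp(z0))  # the two agree
```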

Maximum modulus principle

A consequence of the average value property is the maximum modulus principle.

Let a function f(z) be analytic on a bounded, connected domain \mathcal{D} and continuous on its closure \overline{\mathcal{D}}.

The principle states that the maximum value of |f(z)| over \overline{\mathcal{D}} must be attained on the boundary \partial\mathcal{D}, unless f(z) is a constant function.

Proof: the modulus of our function, |f(z)| = \sqrt{u(x,y)^2 + v(x,y)^2}, is a continuous real-valued function on the compact set \overline{\mathcal{D}}. By the extreme value theorem, |f(z)| must attain its maximum value M at some point z_0 \in \overline{\mathcal{D}}. Then, |f(z_0)| = M and |f(z)| \leq M for all z \in \overline{\mathcal{D}}.

If this maximum is attained at an interior point z_0 \in \mathcal{D}, then the function must be constant throughout \mathcal{D}.

Suppose the maximum M is attained at an interior point z_0. Since z_0 is in the open set \mathcal{D}, we can construct a small circle C_R centered at z_0 with radius R, such that the disk \overline{B(z_0, R)} is entirely within \mathcal{D}.

From the average value property at z_0:

f(z_0) = \frac{1}{2\pi}\int_0^{2\pi} f(z_0 + R e^{i\theta}) \mathrm{d}\theta

Taking the modulus of both sides and applying the integral inequality:

\left|\int g(\theta) \mathrm{d}\theta \right| \leq \int \left|g(\theta)\right| \mathrm{d}\theta

we have:

M = |f(z_0)| = \left| \frac{1}{2\pi}\int_0^{2\pi} f(z_0 + R e^{i\theta}) \mathrm{d}\theta \right| \leq \frac{1}{2\pi}\int_0^{2\pi} |f(z_0 + R e^{i\theta})| \mathrm{d}\theta

By definition of M, we know that for any point z on the circle C_R, |f(z)| \leq M. Therefore:

\frac{1}{2\pi}\int_0^{2\pi} |f(z_0 + R e^{i\theta})| \mathrm{d}\theta \leq \frac{1}{2\pi}\int_0^{2\pi} M \mathrm{d}\theta = \frac{1}{2\pi} (2\pi M) = M

Combining these inequalities gives:

M \leq \frac{1}{2\pi}\int_0^{2\pi} |f(z_0 + R e^{i\theta})| \mathrm{d}\theta \leq M

This forces the equality:

\frac{1}{2\pi}\int_0^{2\pi} |f(z_0 + R e^{i\theta})| \mathrm{d}\theta = M

This can be rewritten as:

\int_0^{2\pi} (M - |f(z_0 + R e^{i\theta})|) \mathrm{d}\theta = 0

The integrand, g(\theta) = M - |f(z_0 + R e^{i\theta})|, is non-negative for all \theta \in [0, 2\pi] due to M being the maximum modulus. Since |f(z)| is continuous, g(\theta) is a continuous function of \theta. A continuous, non-negative function whose integral is zero must be identically zero.

Suppose, for the sake of contradiction, that for some \theta_0, we have |f(z_0 + R e^{i\theta_0})| < M. Then M - |f(z_0 + R e^{i\theta_0})| > 0.

By continuity, this inequality must hold in a small interval (\theta_1, \theta_2) containing \theta_0. The integral would then be strictly positive, a contradiction.

\int_0^{2\pi} (M - |f(z_0 + R e^{i\theta})|) \mathrm{d}\theta \geq \int_{\theta_1}^{\theta_2} (M - |f(z_0 + R e^{i\theta})|) \mathrm{d}\theta > 0

Therefore, the integrand must be zero for all \theta. This implies that |f(z)| = M for all points z on the circle C_R.

We have shown that if |f(z_0)| = M at an interior point z_0, then |f(z)| must be equal to M on any circle centered at z_0 within \mathcal{D}. This implies that the set of points where |f(z)|=M is an open set.

To show that |f(z)| = M for all z \in \mathcal{D}, we can use a connectedness argument.

Let A = \{z \in \mathcal{D} : |f(z)| = M\} and B = \{z \in \mathcal{D} : |f(z)| < M\}. The set B is open by the continuity of |f|. The previous argument shows that for any point z_a \in A, we can find a disk around z_a that is also in A.

Therefore, A is also an open set. Since \mathcal{D} is a connected set, it cannot be partitioned into two disjoint non-empty open sets. As we assumed z_0 \in A, A is non-empty. Therefore, we must have B = \emptyset and A = \mathcal{D}. This means |f(z)| = M for all z \in \mathcal{D}.

A function whose modulus is constant on a domain must itself be a constant function (this can be shown via the Cauchy-Riemann equations). Therefore, if a non-constant analytic function attains its maximum modulus in the interior of its domain, it leads to a contradiction. The maximum must be located on the boundary \partial\mathcal{D}.

Minimum modulus principle

A parallel result, the minimum modulus principle, can be derived under an additional condition. If f(z) is analytic in a domain \mathcal{D}, continuous on \overline{\mathcal{D}}, and importantly, f(z) \neq 0 for any z \in \mathcal{D}, then |f(z)| must attain its minimum value on the boundary \partial\mathcal{D}, unless f(z) is constant.

The proof is direct. Since f(z) is non-vanishing in \mathcal{D}, the function g(z) = 1/f(z) is analytic in \mathcal{D}. It is also continuous on \overline{\mathcal{D}}. By the Maximum Modulus Principle, |g(z)| must attain its maximum on the boundary \partial\mathcal{D}.

\max_{z \in \overline{\mathcal{D}}} |g(z)| = \max_{z \in \partial\mathcal{D}} |g(z)|

Substituting back g(z) = 1/f(z), we get:

\max_{z \in \overline{\mathcal{D}}} \frac{1}{|f(z)|} = \max_{z \in \partial\mathcal{D}} \frac{1}{|f(z)|}

This is equivalent to:

\frac{1}{\min_{z \in \overline{\mathcal{D}}} |f(z)|} = \frac{1}{\min_{z \in \partial\mathcal{D}} |f(z)|}

Which implies:

\min_{z \in \overline{\mathcal{D}}} |f(z)| = \min_{z \in \partial\mathcal{D}} |f(z)|

The condition f(z) \neq 0 is indispensable. Consider f(z) = z on the disk |z| \leq 1. The minimum modulus is |f(0)|=0, which occurs in the interior, not on the boundary where the minimum value is 1. This is permitted because the function has a zero at an interior point.
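Both principles, and the role of the non-vanishing hypothesis, can be illustrated by sampling |f| on a grid over the closed unit disk (the helper `extrema_radii` is ours; grid sampling only suggests, not proves, where the extrema lie):

```python
import cmath

# Sample |f| on a grid over the closed unit disk and record the modulus and
# the radius |z| at which the sampled max and min occur (illustrative sketch).
def extrema_radii(f, n=150):
    best_max = best_min = None
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            z = complex(i / n, j / n)
            if abs(z) <= 1:
                m = abs(f(z))
                if best_max is None or m > best_max[0]:
                    best_max = (m, abs(z))
                if best_min is None or m < best_min[0]:
                    best_min = (m, abs(z))
    return best_max, best_min  # (modulus, |z| where attained)

print(extrema_radii(cmath.exp))    # exp is non-vanishing: both extrema on |z| = 1
print(extrema_radii(lambda z: z))  # f(0) = 0: the minimum sits at the interior point 0
```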

Parameter-dependent integrals

Considering the Cauchy integral formula, notice that the integrand depends on two complex variables, the dummy integration variable z and a fixed value z_0; it is therefore an integral depending on a parameter.

Let \mathcal{D} be a domain in the complex plane and let C be a piecewise smooth contour. Consider a function of two complex variables, g(z, \zeta), defined for z \in \mathcal{D} and \zeta \in C. We impose the following conditions on this function:

  1. for each fixed \zeta \in C, the function g(z, \zeta) is an analytic function of z for all z \in \mathcal{D},
  2. the function g(z, \zeta) and its partial derivative with respect to z, \frac{\partial g}{\partial z}(z, \zeta), are continuous in both variables for z \in \mathcal{D} and \zeta \in C.

Under these conditions, the function F(z) defined by the integral over the contour C:

F(z) = \int_C g(z, \zeta) \mathrm{d}\zeta

is analytic in the domain \mathcal{D}. Furthermore, its derivative can be obtained by differentiating under the integral sign:

F^\prime(z) = \int_C \frac{\partial g}{\partial z}(z, \zeta) \mathrm{d}\zeta

Proof: we can establish the analyticity of F(z) by demonstrating that its real and imaginary parts satisfy the Cauchy-Riemann equations and have continuous partial derivatives.

Let z = x+iy and \zeta = \xi + i\eta. The function g(z, \zeta) has real and imaginary parts u(x, y, \xi, \eta) and v(x, y, \xi, \eta), respectively. The differential along the curve is \mathrm{d}\zeta = \mathrm{d}\xi + i\mathrm{d}\eta. We can express F(z) in terms of its real and imaginary parts, U(x,y) and V(x,y):

\begin{aligned} F(z) & = \int_C (u + iv)(\mathrm{d}\xi + i\mathrm{d}\eta) \\ & = \int_C (u \mathrm{d}\xi - v \mathrm{d}\eta) + i \int_C (v \mathrm{d}\xi + u \mathrm{d}\eta) \end{aligned}

This gives us the explicit forms for U and V:

\begin{aligned} U(x,y) & = \int_C [u(x, y, \xi, \eta)\mathrm{d}\xi - v(x, y, \xi, \eta)\mathrm{d}\eta] \\ V(x,y) & = \int_C [v(x, y, \xi, \eta)\mathrm{d}\xi + u(x, y, \xi, \eta)\mathrm{d}\eta] \end{aligned}

The continuity hypothesis on g and its derivative ensures that we can interchange the order of differentiation and integration (Leibniz’s rule for real line integrals). We compute the partial derivative of U with respect to x:

\frac{\partial U}{\partial x} = \int_C \left(\frac{\partial u}{\partial x}\mathrm{d}\xi - \frac{\partial v}{\partial x}\mathrm{d}\eta\right)

Next, we compute the partial derivative of V with respect to y:

\frac{\partial V}{\partial y} = \int_C \left(\frac{\partial v}{\partial y}\mathrm{d}\xi + \frac{\partial u}{\partial y}\mathrm{d}\eta\right)

By our first hypothesis, g(z, \zeta) is analytic in z for any fixed \zeta. Therefore, its real and imaginary parts, u and v, must satisfy the Cauchy-Riemann equations with respect to the variables x and y:

\begin{aligned} & \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \\ & \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x} \end{aligned}

Substituting these relations into the expression for \frac{\partial V}{\partial y}:

\frac{\partial V}{\partial y} = \int_C \left(\frac{\partial u}{\partial x}\mathrm{d}\xi + \left(-\frac{\partial v}{\partial x}\right)\mathrm{d}\eta\right) = \int_C \left(\frac{\partial u}{\partial x}\mathrm{d}\xi - \frac{\partial v}{\partial x}\mathrm{d}\eta\right)

Comparing this with the expression for \frac{\partial U}{\partial x}, we find the first Cauchy-Riemann equation for F(z):

\frac{\partial U}{\partial x} = \frac{\partial V}{\partial y}

A similar procedure for the remaining partial derivatives confirms the second equation:

\frac{\partial U}{\partial y} = -\frac{\partial V}{\partial x}

Since \frac{\partial g}{\partial z} is continuous, the partial derivatives of U and V are also continuous. The satisfaction of the Cauchy-Riemann equations with continuous partials is a sufficient condition for F(z) to be analytic in \mathcal{D}.

The derivative of F(z) is given by:

F^\prime(z) = \frac{\partial U}{\partial x} + i\frac{\partial V}{\partial x}

Using the expression for \frac{\partial U}{\partial x} and a similar one for \frac{\partial V}{\partial x}:

\frac{\partial V}{\partial x} = \int_C \left(\frac{\partial v}{\partial x}\mathrm{d}\xi + \frac{\partial u}{\partial x}\mathrm{d}\eta\right)

We assemble the derivative:

\begin{aligned} F^\prime(z) &= \int_C \left(\frac{\partial u}{\partial x}\mathrm{d}\xi - \frac{\partial v}{\partial x}\mathrm{d}\eta\right) + i \int_C \left(\frac{\partial v}{\partial x}\mathrm{d}\xi + \frac{\partial u}{\partial x}\mathrm{d}\eta\right) \\ &= \int_C \left( \frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x} \right) \mathrm{d}\xi + \left( \frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x} \right) i \mathrm{d}\eta \\ &= \int_C \frac{\partial g}{\partial x} \mathrm{d}\xi + i \frac{\partial g}{\partial x} \mathrm{d}\eta = \int_C \frac{\partial g}{\partial x} (\mathrm{d}\xi + i \mathrm{d}\eta) \end{aligned}

Since g is analytic in z, \frac{\partial g}{\partial x} = \frac{\partial g}{\partial z}, and this simplifies to the desired result:

F^\prime(z) = \int_C \frac{\partial g}{\partial z}(z, \zeta) \mathrm{d}\zeta

Therefore, the derivative of a parameter-dependent integral can be computed by differentiating the integrand with respect to the parameter.
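As a numerical check of this result (an illustrative sketch, not from the text), take g(z, \zeta) = e^{z\zeta} on the real segment C = [0, 1], so that F(z) = (e^z - 1)/z for z \neq 0; differentiating under the integral sign must reproduce the closed-form F^\prime(z). The helper `simpson` below is a hypothetical quadrature routine introduced only for this check.

```python
import cmath

def simpson(f, a, b, n=400):
    """Composite Simpson rule on the real segment [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

z = 1.5 + 0.7j

# Differentiating F(z) = ∫_0^1 e^{z·ζ} dζ under the integral sign gives
# F'(z) = ∫_0^1 ζ e^{z·ζ} dζ
F_prime_num = simpson(lambda t: t * cmath.exp(z * t), 0.0, 1.0)

# Closed form: F(z) = (e^z - 1)/z, hence F'(z) = (z e^z - e^z + 1)/z^2
F_prime_exact = (z * cmath.exp(z) - cmath.exp(z) + 1) / z ** 2

assert abs(F_prime_num - F_prime_exact) < 1e-9
```

The two values agree to quadrature accuracy, as the theorem predicts.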

Higher-order derivatives

A consequence of the analyticity of parameter-dependent integrals is that the existence of a single complex derivative implies the existence of derivatives of all orders.

This property establishes a contrast between complex analysis and real analysis.

We can demonstrate this by iteratively applying the theorem for differentiating under the integral sign to Cauchy’s Integral Formula.

Let f(z) be an analytic function in a domain \mathcal{D} bounded by a simple closed contour C.

For any point z in the interior of \mathcal{D}, its value is given by Cauchy’s integral formula:

f(z) = \frac{1}{2\pi i}\oint_C \frac{f(\zeta)}{\zeta-z}\, \mathrm{d}\zeta

This formula represents f(z) as a parameter-dependent integral where z is the parameter.

To apply the theorem on differentiating such integrals, we must verify its conditions. Let us consider the integrand:

g(z, \zeta) = \frac{f(\zeta)}{\zeta-z}

For any interior point z \in \mathcal{D}, we can always find a subdomain \mathcal{D}_1 \subset \mathcal{D} containing z such that there is a minimum distance d > 0 between any point in the closure \overline{\mathcal{D}_1} and any point \zeta on the boundary contour C. That is, for any z^\prime \in \overline{\mathcal{D}_1} and \zeta \in C, we have |\zeta - z^\prime| \ge d > 0.

Within this subdomain \mathcal{D}_1, the function g(z, \zeta) satisfies the required conditions, as for any fixed \zeta \in C, g(z, \zeta) is an analytic function of z in \mathcal{D}_1 (as it is a simple rational function whose pole is at \zeta, which is not in \mathcal{D}_1); and the function g(z, \zeta) and its partial derivative with respect to z,

\frac{\partial g}{\partial z}(z, \zeta) = \frac{f(\zeta)}{(\zeta-z)^2}

are continuous for all z \in \mathcal{D}_1 and \zeta \in C. The denominator is never zero, and f(\zeta) is continuous on the compact set C, hence bounded.

Therefore, the function f(z) is analytic in \mathcal{D}_1, and its derivative is found by differentiating under the integral sign:

f^\prime(z) = \frac{1}{2\pi i}\oint_C \frac{\partial}{\partial z}\left(\frac{f(\zeta)}{\zeta-z}\right)\, \mathrm{d}\zeta = \frac{1}{2\pi i}\oint_C \frac{f(\zeta)}{(\zeta-z)^2}\, \mathrm{d}\zeta

This result not only provides a formula for the first derivative but also proves its existence throughout the interior of \mathcal{D}.

Noting that the integral representing f^\prime(z) is itself a parameter-dependent integral, the integrand, g_1(z, \zeta) = \frac{f(\zeta)}{(\zeta-z)^2}, also satisfies the conditions of the theorem on the same subdomain \mathcal{D}_1.

Consequently, f^\prime(z) must also be analytic. We can repeat the process to find the second derivative:

f^{\prime\prime}(z) = \frac{\mathrm{d}}{\mathrm{d}z}f^\prime(z) = \frac{1}{2\pi i}\oint_C \frac{\partial}{\partial z}\left(\frac{f(\zeta)}{(\zeta-z)^2}\right)\, \mathrm{d}\zeta = \frac{2}{2\pi i}\oint_C \frac{f(\zeta)}{(\zeta-z)^3}\, \mathrm{d}\zeta

Since for any interior point z \in \mathcal{D}, we can construct a suitable subdomain \mathcal{D}_1, these formulas are valid throughout the entire domain \mathcal{D}.

This iterative process can be formalized by mathematical induction. Assume that the formula for the n^{th} derivative holds and that f^{(n)}(z) is analytic:

f^{(n)}(z) = \frac{n!}{2\pi i}\oint_C \frac{f(\zeta)}{(\zeta-z)^{n+1}}\, \mathrm{d}\zeta

To find the (n+1)^{th} derivative, we differentiate this expression with respect to z, again moving the derivative inside the integral:

\begin{aligned} f^{(n+1)}(z) &= \frac{n!}{2\pi i}\oint_C \frac{\partial}{\partial z}\left(\frac{f(\zeta)}{(\zeta-z)^{n+1}}\right)\, \mathrm{d}\zeta \\ &= \frac{n!}{2\pi i}\oint_C f(\zeta) \left( -(n+1)(\zeta-z)^{-(n+2)}(-1) \right) \, \mathrm{d}\zeta \\ &= \frac{(n+1)n!}{2\pi i}\oint_C \frac{f(\zeta)}{(\zeta-z)^{n+2}}\, \mathrm{d}\zeta \\ &= \frac{(n+1)!}{2\pi i}\oint_C \frac{f(\zeta)}{(\zeta-z)^{n+2}}\, \mathrm{d}\zeta \end{aligned}

The formula holds for n+1, completing the induction. This leads to the following theorem.

Theorem: let a function f(z) be analytic on a bounded, connected domain \mathcal{D} and continuous on its closure \overline{\mathcal{D}}, which is bounded by a simple closed contour C.

Then, at all interior points z \in \mathcal{D}, f(z) is infinitely differentiable.

The formula for the n^{th} derivative is given by:

f^{(n)}(z) = \frac{n!}{2\pi i}\oint_C \frac{f(\zeta)}{(\zeta-z)^{n+1}}\, \mathrm{d}\zeta
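As a numerical sketch of this formula (not part of the original text), the hypothetical helper `nth_derivative` below approximates the contour integral with the trapezoid rule on a circle around z_0; for f(z) = e^z every derivative equals e^{z_0}, so each order must return the same value.

```python
import cmath
import math

def nth_derivative(f, z0, n, radius=1.0, m=1024):
    """Approximate f^(n)(z0) = n!/(2*pi*i) * ∮ f(ζ)/(ζ - z0)^(n+1) dζ on the
    circle ζ(θ) = z0 + radius·e^{iθ}. With dζ = i(ζ - z0)dθ the integrand
    reduces to f(z0 + w)/w^n, and the trapezoid rule is spectrally accurate
    for smooth periodic integrands."""
    total = 0j
    for k in range(m):
        w = radius * cmath.exp(2j * math.pi * k / m)   # ζ - z0
        total += f(z0 + w) / w ** n
    return math.factorial(n) * total / m

z0 = 0.3 + 0.2j
for n in range(5):
    # every derivative of exp equals exp, so each approximation must match e^{z0}
    assert abs(nth_derivative(cmath.exp, z0, n) - cmath.exp(z0)) < 1e-10
```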

This property marks a departure from the theory of functions of a real variable.

A real-valued function can possess a continuous first derivative in an interval without its second derivative necessarily existing.

For example, the function h(x) = x|x| is differentiable everywhere, with h^\prime(x) = 2|x|, but h^{\prime\prime}(0) does not exist. In complex analysis, such a situation is impossible: differentiability in an open disk of the complex plane is a far more restrictive condition than differentiability on an open interval of the real line.
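A short numerical sketch of the real counterexample (illustrative only): the one-sided difference quotients of h^\prime(x) = 2|x| at the origin converge to different limits, so h^{\prime\prime}(0) cannot exist.

```python
def h_prime(x):
    # derivative of h(x) = x*abs(x), namely 2*abs(x)
    return 2 * abs(x)

eps = 1e-6
right = (h_prime(eps) - h_prime(0.0)) / eps    # one-sided quotient from the right
left = (h_prime(0.0) - h_prime(-eps)) / eps    # one-sided quotient from the left

assert abs(right - 2.0) < 1e-9
assert abs(left - (-2.0)) < 1e-9   # the two limits differ: h''(0) does not exist
```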

The integral formula reveals the origin of this requirement: the value of an analytic function and all its derivatives at a point z are determined by the values of the function on a boundary curve C that encloses z. This “action at a distance” enforces a global smoothness that is not present in real analysis.

Morera’s theorem

Morera's theorem is a converse of Cauchy's integral theorem: if f(z) is continuous in a domain \mathcal D and \oint_C f(z)\, \mathrm{d}z = 0 for every closed contour C contained in \mathcal D, then f(z) is analytic in \mathcal D. I demonstrate it here.

Liouville’s theorem

Liouville's theorem states that every entire function that is bounded on the whole complex plane is constant. I demonstrate it here.

Uniqueness

The structural properties of analytic functions lead to a powerful principle of rigidity. The behavior of an analytic function in a small portion of its domain of analyticity determines its behavior throughout the entire domain.

For instance, the values of an analytic function on the boundary of a domain completely specify its values at all interior points via Cauchy’s Integral Formula.

This indicates that an analytic function is uniquely determined by what might seem to be incomplete information. This concept is formalized by the Uniqueness Theorem, which is best understood by first examining the nature of the zeros of an analytic function.

Zeros

Let f(z) be a function analytic in a domain \mathcal D. A point z_0 \in \mathcal D is called a zero of f(z) if f(z_0) = 0.

From Taylor’s theorem, we can expand f(z) in a power series in a neighborhood of z_0:

f(z) = \sum_{n=0}^\infty c_n (z - z_0)^n

The condition f(z_0) = 0 immediately implies that the constant term c_0 is zero. If not only c_0 but also the subsequent coefficients c_1, \dots, c_{k-1} are zero, while c_k \neq 0, then z_0 is called a zero of order k. This is equivalent to the conditions f(z_0) = f^\prime(z_0) = \dots = f^{(k-1)}(z_0) = 0 and f^{(k)}(z_0) \neq 0.

In a neighborhood of a zero of order k, the function can be written as:

\begin{aligned} f(z) &= \sum_{n=k}^\infty c_n (z - z_0)^n \\ &= (z - z_0)^k \sum_{n=k}^\infty c_n (z - z_0)^{n-k} \\ &= (z - z_0)^k \sum_{j=0}^\infty c_{j+k} (z - z_0)^j \end{aligned}

We can express this as:

f(z) = (z - z_0)^k g(z)

where the function g(z) is defined by the power series:

g(z) = \sum_{j=0}^\infty c_{j+k} (z - z_0)^j

This function g(z) is analytic in the same disk as f(z) and, importantly, g(z_0) = c_k \neq 0. Because g(z) is continuous and non-zero at z_0, there exists a small neighborhood around z_0 where g(z) \neq 0. This implies that the zeros of a non-identically-zero analytic function are isolated.
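The order of a zero can be detected numerically from the Taylor coefficients c_n = \frac{1}{2\pi i}\oint \frac{f(\zeta)}{(\zeta - z_0)^{n+1}}\, \mathrm{d}\zeta. The sketch below (illustrative only; `taylor_coeff` is a hypothetical helper) applies this to f(z) = \sin z - z = -z^3/6 + z^5/120 - \dots, whose first nonvanishing coefficient is c_3, so z_0 = 0 is a zero of order 3.

```python
import cmath
import math

def taylor_coeff(f, z0, n, radius=0.5, m=1024):
    """c_n = 1/(2*pi*i) * ∮ f(ζ)/(ζ - z0)^(n+1) dζ, evaluated with the
    trapezoid rule on the circle |ζ - z0| = radius (dζ = i(ζ - z0)dθ
    reduces the integrand to f(z0 + w)/w^n)."""
    total = 0j
    for k in range(m):
        w = radius * cmath.exp(2j * math.pi * k / m)   # ζ - z0
        total += f(z0 + w) / w ** n
    return total / m

# f(z) = sin(z) - z = -z^3/6 + z^5/120 - ...  has a zero of order 3 at z0 = 0
f = lambda z: cmath.sin(z) - z
order = next(n for n in range(10) if abs(taylor_coeff(f, 0.0, n)) > 1e-8)
assert order == 3
assert abs(taylor_coeff(f, 0.0, 3) - (-1 / 6)) < 1e-10
```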

Identity theorem

Theorem: let a function f(z) be analytic in a domain \mathcal D. If the set of zeros of f(z) has a limit point in \mathcal D, then f(z) is identically zero throughout \mathcal D.

Proof: let the sequence of zeros \{z_n\} be contained in \mathcal D and converge to a limit a \in \mathcal D.

Since f(z) is analytic, it is continuous. Therefore, the value of the function at the limit point is the limit of the values:

f(a) = \lim_{n\to\infty} f(z_n) = \lim_{n\to\infty} 0 = 0

This shows that a is also a zero of f(z). We expand f(z) in a Taylor series around the point a:

f(z) = \sum_{k=0}^\infty c_k (z - a)^k

The radius of convergence R_0 of this series is at least the distance from a to the boundary of \mathcal D. Since f(a)=0, we know that c_0=0. We can write:

f(z) = (z - a) \sum_{k=1}^\infty c_k (z - a)^{k-1} = (z - a)f_1(z)

where f_1(z) is an analytic function in the same disk |z-a|<R_0.

For each point z_n of the sequence (assuming z_n \neq a), we have f(z_n) = 0. This gives (z_n - a)f_1(z_n) = 0, which implies f_1(z_n) = 0. Since \{z_n\} converges to a, by the continuity of f_1(z), we must have f_1(a) = 0. The value of f_1(a) corresponds to the coefficient c_1 in the original expansion of f(z), so c_1=0.

We can repeat this argument. Since c_1=0, we can write f_1(z)=(z-a)f_2(z), which means f(z)=(z-a)^2f_2(z). The condition f_1(z_n)=0 implies f_2(z_n)=0, and thus f_2(a)=c_2=0. By induction, we find that all coefficients c_k of the Taylor series for f(z) around a are zero. This means f(z) \equiv 0 in the disk of convergence |z-a| < R_0.

Now we must show that f(z) is zero throughout the entire domain \mathcal D. Let z_1 be any other point in \mathcal D. Since a domain is a connected open set, we can connect a and z_1 with a path L that lies entirely within \mathcal D.

We can cover this path with a finite chain of overlapping open disks, all contained within \mathcal D. The first disk is |z-a|<R_0, where we know f(z) \equiv 0.

Since this disk overlaps with the next disk in the chain, the set of zeros in the second disk has a limit point (any point in the intersection). By the argument just given, f(z) must be identically zero in the second disk as well.

We can proceed along the chain of disks, showing that f(z) is identically zero in each one. After a finite number of steps, we reach the disk containing z_1, proving that f(z_1)=0. Since z_1 was an arbitrary point in \mathcal D, we conclude that f(z) \equiv 0 throughout \mathcal D.

Corollary 1: a function f(z) \not\equiv 0, analytic in a domain \mathcal D, can have only a finite number of zeros in any closed and bounded subdomain \overline{\mathcal D^\prime} \subset \mathcal D.

Proof: suppose, for the sake of contradiction, that f(z) has an infinite number of zeros in a compact set \overline{\mathcal D^\prime}. By the Bolzano-Weierstrass theorem, this infinite set of zeros must have a limit point a \in \overline{\mathcal D^\prime}. Since \overline{\mathcal D^\prime} \subset \mathcal D, the limit point a is in the domain \mathcal D. By the main theorem, this implies f(z) \equiv 0 in \mathcal D, which contradicts the hypothesis.

Corollary 2: if a point z_0 \in \mathcal D is a zero of infinite order for an analytic function f(z), then f(z) \equiv 0 throughout \mathcal D.

Proof: a zero of infinite order means that f^{(n)}(z_0) = 0 for all n \ge 0. This implies all coefficients c_n in the Taylor expansion around z_0 are zero. Consequently, f(z) \equiv 0 in a neighborhood of z_0. This neighborhood contains a sequence of zeros converging to z_0. By the main theorem, f(z) \equiv 0 in all of \mathcal D.

Corollary 3: an analytic function that is not identically zero can have an infinite number of zeros in an open domain \mathcal D only if the limit points of the set of zeros lie on the boundary of \mathcal D or at infinity.

Uniqueness theorem

Theorem: let two functions, f(z) and g(z), be analytic in a domain \mathcal D. If f(z_n) = g(z_n) on a sequence of distinct points \{z_n\} that converges to a point a \in \mathcal D, then f(z) \equiv g(z) throughout \mathcal D.

Proof: we define a new function h(z) = f(z) - g(z). This function is analytic in \mathcal D because it is the difference of two analytic functions.

By hypothesis, h(z_n) = f(z_n) - g(z_n) = 0 for all points in the sequence \{z_n\}. This sequence of zeros for h(z) converges to the point a \in \mathcal D.

Applying the preceding theorem to h(z), we conclude that h(z) \equiv 0 throughout \mathcal D. This implies that f(z) = g(z) for all z \in \mathcal D.

This theorem is often called the uniqueness theorem for analytic functions. It shows that an analytic function is uniquely determined by its values on any set of points containing a limit point within its domain of analyticity.
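As an illustration of this rigidity (a sketch, not from the text): the identity \sin^2 z + \cos^2 z = 1 holds at every real point, and the real axis has limit points in \mathbb{C}; by the uniqueness theorem the identity therefore holds at every complex point, which can be spot-checked numerically.

```python
import cmath

# sin^2 + cos^2 - 1 is entire and vanishes on the real axis, a set with
# limit points in C; by the uniqueness theorem it vanishes on all of C.
for z in (2 + 3j, -1.5 + 0.25j, 4j):
    assert abs(cmath.sin(z) ** 2 + cmath.cos(z) ** 2 - 1) < 1e-9
```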

Corollary 1: if two functions f(z) and g(z), analytic in a domain \mathcal D, coincide on a curve segment L \subset \mathcal D, then they are identical throughout \mathcal D.

Corollary 2: if a function f(z) is analytic in a domain \mathcal D_1 and a function g(z) is analytic in a domain \mathcal D_2, and if f(z)=g(z) in the non-empty open set \mathcal D_1 \cap \mathcal D_2, then there exists a unique analytic function h(z) defined on the domain \mathcal D_1 \cup \mathcal D_2 such that:

h(z) = \begin{cases} f(z), & \quad z \in \mathcal D_1 \\ g(z), & \quad z \in \mathcal D_2 \end{cases}

This principle forms the basis of analytic continuation.
