Properties of Analytic Functions
A complex function f(z) is said to be analytic (or holomorphic) in an open set \mathcal{D} \subseteq \mathbb{C} if f(z) is single-valued and possesses a finite complex derivative f^\prime(z) at every point z \in \mathcal{D}. The derivative is defined by the limit:
f^\prime(z) = \lim_{\Delta z \to 0} \frac{f(z + \Delta z) - f(z)}{\Delta z}
The limit must exist and be unique, regardless of the path along which \Delta z approaches zero.
A theorem in complex analysis establishes that a function is analytic in an open set \mathcal{D} if and only if it is infinitely differentiable in \mathcal{D} and, for every z_0 \in \mathcal{D}, f(z) can be represented by its Taylor series expansion around z_0:
f(z) = \sum_{n=0}^{\infty} a_n (z - z_0)^n = \sum_{n=0}^{\infty} \frac{f^{(n)}(z_0)}{n!} (z - z_0)^n
This series converges to f(z) for all z in some open disk (a neighborhood) centered at z_0 and contained within \mathcal{D}. The coefficients a_n = \frac{f^{(n)}(z_0)}{n!} are complex numbers, where f^{(n)}(z_0) is the n^{th} complex derivative of f at z_0.
The terms “analytic” and “holomorphic” are therefore used interchangeably. The set of all analytic functions on a given open set D is often denoted by \mathcal{C}^{\omega}(D) or H(D).
If a function is analytic on the entire complex plane \mathbb{C}, it is called an entire function.
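As a quick illustration (a minimal sketch, assuming SymPy is available; the choice of e^z and of the expansion point z_0 = 0 is arbitrary), the Taylor expansion of an entire function can be computed symbolically and its coefficients compared against f^{(n)}(z_0)/n!:

```python
# Minimal sketch (assumes SymPy is installed): Taylor expansion of the entire
# function f(z) = exp(z) around z0 = 0, with coefficients a_n = f^(n)(0)/n!.
import sympy as sp

z = sp.symbols('z')
f = sp.exp(z)

# Taylor polynomial of order 5 around z0 = 0 (the O(z**6) remainder is dropped).
print(sp.series(f, z, 0, 6).removeO())

# The coefficients agree with f^(n)(0)/n! = 1/n!.
for n in range(6):
    print(n, sp.diff(f, z, n).subs(z, 0) / sp.factorial(n))
```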
Some examples of analytic functions are the following.
Polynomials, functions of the form:
P(z) = a_m z^m + \dots + a_1 z + a_0
where a_k \in \mathbb{C}, are entire functions. For example, f(z) = z^n for n \in \mathbb{N}_0.
Power series with infinite radius of convergence: for instance, the complex exponential f(z) = e^z, and the complex trigonometric functions f(z) = \sin(z) and f(z) = \cos(z), are entire functions.
Principal branch of the complex logarithm: the principal branch f(z) = \operatorname{Log}(z), typically defined on the domain \mathbb{C} \setminus (-\infty, 0] (the complex plane excluding the non-positive real axis, which serves as the branch cut), is analytic throughout that domain.
Rational functions, functions of the form:
R(z) = \frac{P(z)}{Q(z)}
where P(z) and Q(z) are polynomials, are analytic in any domain where Q(z) \neq 0.
Some examples of functions that are not analytic (or not everywhere analytic) are the following.
Functions that are not complex differentiable: for example, the complex conjugate function f(z) = \bar{z}, which is nowhere analytic. Similarly, f(z) = \Re(z), f(z) = \Im(z), and f(z) = |z| are nowhere analytic.
Functions with singularities: For example, f(z) = 1/z is analytic on \mathbb{C} \setminus \{0\} but is not analytic at the singularity z=0. Therefore, it is not an entire function.
Functions with discontinuities: Since a function must be continuous in order to be differentiable, a function that is discontinuous at a point cannot be analytic at that point.
Multivalued functions: Functions like f(z) = \sqrt{z} or z^a (for non-integer a), when considered as mappings from a single z to multiple w values, are not single-valued and therefore not analytic in this sense. However, specific branches of these multivalued functions can be defined (by introducing branch cuts) which are single-valued and analytic on their respective domains.
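As a quick numerical aside (a sketch using Python’s standard cmath module; the sample value -4 is an arbitrary choice), the principal branch of the square root is single-valued, and its values jump across the branch cut along the negative real axis, which is why the cut must be excluded from the domain of analyticity:

```python
# Sketch using the standard library: cmath.sqrt implements the principal
# (single-valued) branch of the square root, whose branch cut lies along the
# negative real axis.
import cmath

print(cmath.sqrt(-4 + 0j))        # principal value: 2j
print(-cmath.sqrt(-4 + 0j))       # the other square root of -4: -2j

# Approaching the cut from above and from below gives different limits.
print(cmath.sqrt(-4 + 1e-12j))    # approximately +2j
print(cmath.sqrt(-4 - 1e-12j))    # approximately -2j
```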
Let’s consider, for example, an analytic function:
f(z) = z^2
The derivative is:
\begin{aligned} f^\prime(z) & = \lim_{\Delta z \rightarrow 0} \frac{f (z + \Delta z) - f (z)}{\Delta z} = \lim_{\Delta z \rightarrow 0} \frac{(z + \Delta z)^2 - z^2}{\Delta z} \\ & = \lim_{\Delta z \rightarrow 0} \frac{z^2 + 2 z \Delta z + (\Delta z)^2 - z^2}{\Delta z} = \lim_{\Delta z \rightarrow 0} \frac{2 z \Delta z + (\Delta z)^2 }{\Delta z} \\ & = \lim_{\Delta z \rightarrow 0} (2z + \Delta z) = 2z \end{aligned}
The limit exists and is uniquely 2z, regardless of how \Delta z \to 0. It is possible to illustrate path independence explicitly.
Let z = x+iy and \Delta z = \Delta x + i\Delta y.
\lim_{\Delta z \rightarrow 0} (2(x+iy) + (\Delta x + i \Delta y)) = 2(x+iy)
If we approach along the real axis (\Delta y = 0, \Delta z = \Delta x \to 0):
\lim_{\Delta x \rightarrow 0} (2z + \Delta x) = 2z
If we approach along the imaginary axis (\Delta x = 0, \Delta z = i\Delta y \to 0):
\lim_{i\Delta y \rightarrow 0} (2z + i\Delta y) = 2z
The result is 2z in all cases, confirming f(z)=z^2 is analytic (it is, in fact, an entire function).
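The same path independence can be checked numerically (a minimal sketch in plain Python; the sample point z_0 and the step sizes h are arbitrary choices):

```python
# Numerical sketch of path independence for f(z) = z**2: the difference
# quotient tends to 2*z0 regardless of the direction in which dz -> 0.
f = lambda z: z**2
z0 = 1.0 + 2.0j   # expected derivative: 2*z0 = 2+4j

for h in (1e-2, 1e-4, 1e-6):
    along_real = (f(z0 + h) - f(z0)) / h                      # dz = dx
    along_imag = (f(z0 + 1j*h) - f(z0)) / (1j*h)              # dz = i*dy
    along_diag = (f(z0 + (1 + 1j)*h) - f(z0)) / ((1 + 1j)*h)  # dz = dx + i*dx
    print(h, along_real, along_imag, along_diag)
```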
Let’s consider, for example, a function that is not analytic:
f(z) = \bar{z} = x - iy
The derivative is:
\begin{aligned} f^\prime(z) & = \lim_{\Delta z \rightarrow 0} \frac{f (z + \Delta z) - f (z)}{\Delta z} = \lim_{\Delta z \rightarrow 0} \frac{\overline{z + \Delta z} - \bar{z}}{\Delta z} = \lim_{\Delta z \rightarrow 0} \frac{\bar{z} + \overline{\Delta z} - \bar{z}}{\Delta z} \\ & = \lim_{\Delta z \rightarrow 0} \frac{\overline{\Delta z}}{\Delta z} \end{aligned}
Let \Delta z = \Delta x + i\Delta y. Then \overline{\Delta z} = \Delta x - i\Delta y.
f^\prime(z) = \lim_{\substack{\Delta x \to 0 \\ \Delta y \to 0}} \frac{\Delta x - i \Delta y}{\Delta x + i \Delta y}
This limit depends on the path of approach:
Let \Delta z \to 0 along the real axis (so \Delta y = 0, \Delta z = \Delta x):
\lim_{\Delta x \rightarrow 0} \frac{\Delta x - i(0)}{\Delta x + i(0)} = \lim_{\Delta x \rightarrow 0} \frac{\Delta x}{\Delta x} = 1
Let \Delta z \to 0 along the imaginary axis (so \Delta x = 0, \Delta z = i\Delta y):
\lim_{i\Delta y \rightarrow 0} \frac{0 - i \Delta y}{0 + i \Delta y} = \lim_{\Delta y \rightarrow 0} \frac{-i \Delta y}{i \Delta y} = -1
Since the limit yields different values depending on the path of approach (1 \neq -1), the derivative f^\prime(z) does not exist for any z, and therefore f(z) = \bar{z} is nowhere analytic.
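The path dependence can also be seen numerically (the same kind of sketch as before; the sample point z_0 is an arbitrary choice, and the result does not depend on it):

```python
# Numerical sketch for f(z) = conj(z): the difference quotient equals
# conj(dz)/dz, which depends only on the direction of dz, so no limit exists.
f = lambda z: z.conjugate()
z0 = 1.0 + 2.0j
h = 1e-6

print((f(z0 + h) - f(z0)) / h)                      # along the real axis: 1
print((f(z0 + 1j*h) - f(z0)) / (1j*h))              # along the imaginary axis: -1
print((f(z0 + (1 + 1j)*h) - f(z0)) / ((1 + 1j)*h))  # along dx = dy: -1j
```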
Analytic functions possess several remarkable properties that distinguish them significantly from differentiable functions of a real variable:
Infinite differentiability: If a function is analytic in an open set, it is infinitely differentiable in that set.
Power Series Representation: Every analytic function can be represented locally by a convergent power series (its Taylor series).
Closure Properties: Sums, products, and compositions of analytic functions are analytic, and the quotient of two analytic functions is analytic wherever the denominator is nonzero.
Cauchy’s Integral Theorem: If f(z) is analytic in a simply connected domain \mathcal{D}, then for any simple closed contour (loop) \mathcal{C} entirely within \mathcal{D}, the contour integral of f(z) around \mathcal{C} is zero (see the numerical sketch after this list):
\oint_{\mathcal{C}} f(z)\mathrm{d}z = 0
Cauchy’s Integral Formula: This formula expresses the value of an analytic function at any point inside a contour in terms of its values on the contour.
Liouville’s Theorem: A bounded entire function (analytic everywhere in \mathbb{C}) must be constant.
Maximum Modulus Principle: If f(z) is a non-constant analytic function on a bounded domain (and continuous up to its boundary), the maximum of the modulus |f(z)| is attained on the boundary of the domain, never in its interior.
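As a rough numerical illustration of Cauchy’s Integral Theorem (a sketch, not a proof; the entire function f(z) = z^2, the unit circle, and the discretization are arbitrary choices, and NumPy is assumed to be available):

```python
# Numerical sketch of Cauchy's Integral Theorem: integrate f(z) dz around the
# unit circle with a simple Riemann-sum discretization. For an entire function
# the result is ~0; for f(z) = 1/z, which has a singularity inside the
# contour, it is ~2*pi*i instead.
import numpy as np

def contour_integral(f, n=2000):
    t = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    z = np.exp(1j*t)                      # points on the unit circle
    dz = 1j*np.exp(1j*t) * (2*np.pi/n)    # z'(t) dt
    return np.sum(f(z) * dz)

print(contour_integral(lambda z: z**2))   # ~ 0
print(contour_integral(lambda z: 1/z))    # ~ 2*pi*1j (about 6.2832j)
```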
The Cauchy-Riemann conditions provide a test of whether a complex function is analytic by relating its real and imaginary parts.
Let a complex function f(z) be defined as f(z) = u(x,y) + i v(x,y), where z = x + iy, and u(x,y) = \Re(f(z)) and v(x,y) = \Im(f(z)) are real-valued functions of two real variables x and y.
For f(z) to be analytic at a point z, its derivative f^\prime(z) must exist and be unique:
f^\prime(z)= \lim_{\Delta z \to 0} \frac{f(z + \Delta z) - f(z)}{\Delta z}
The existence of this limit implies that it must be the same regardless of the path along which \Delta z = \Delta x + i \Delta y approaches zero.
If we assume the partial derivatives of u and v exist, the increment \Delta f = f(z+\Delta z) - f(z) can be written as:
\Delta f = (u(x+\Delta x, y+\Delta y) - u(x,y)) + i(v(x+\Delta x, y+\Delta y) - v(x,y))
Assuming u and v are differentiable as functions of two real variables, we have:
\begin{aligned} u(x+\Delta x, y+\Delta y) - u(x,y) & = \frac{\partial u}{\partial x}\Delta x + \frac{\partial u}{\partial y}\Delta y + \epsilon_1 |\Delta z| \\ v(x+\Delta x, y+\Delta y) - v(x,y) & = \frac{\partial v}{\partial x}\Delta x + \frac{\partial v}{\partial y}\Delta y + \epsilon_2 |\Delta z| \end{aligned}
where \epsilon_1, \epsilon_2 \to 0 as \Delta z \to 0. Then,
\frac{\Delta f}{\Delta z} = \frac{(\frac{\partial u}{\partial x}\Delta x + \frac{\partial u}{\partial y}\Delta y) + i(\frac{\partial v}{\partial x}\Delta x + \frac{\partial v}{\partial y}\Delta y)}{\Delta x + i\Delta y} + \frac{(\epsilon_1 + i\epsilon_2)|\Delta z|}{\Delta z}
The term involving \epsilon_1, \epsilon_2 tends to zero as \Delta z \to 0. For the limit \frac{\mathrm df(z)}{\mathrm dz} to exist, the main fraction must approach a unique value.
Taking path 1 and approaching along the real axis (\Delta y = 0, so \Delta z = \Delta x \to 0):
f^\prime(z)= \lim_{\Delta x \to 0} \frac{(\frac{\partial u}{\partial x}\Delta x) + i(\frac{\partial v}{\partial x}\Delta x)}{\Delta x} = \frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x}
Taking path 2 and approaching along the imaginary axis (\Delta x = 0, so \Delta z = i\Delta y \to 0):
f^\prime(z)= \lim_{\Delta y \to 0} \frac{(\frac{\partial u}{\partial y}\Delta y) + i(\frac{\partial v}{\partial y}\Delta y)}{i\Delta y} = \frac{1}{i}\left(\frac{\partial u}{\partial y} + i \frac{\partial v}{\partial y}\right) = \frac{\partial v}{\partial y} - i \frac{\partial u}{\partial y}
For \frac{\mathrm df(z)}{\mathrm dz} to be uniquely defined, these two expressions must be equal. Equating the real and imaginary parts:
\begin{aligned} & \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \\ & \frac{\partial u}{\partial y} = - \frac{\partial v}{\partial x} \end{aligned}
These are the Cauchy-Riemann conditions (or equations). They are necessary conditions for a function f(z) to be analytic.
If the first-order partial derivatives of u and v are continuous and satisfy the Cauchy-Riemann conditions at a point, then this is also a sufficient condition for f(z) to be differentiable (and thus analytic) at that point.
For example, considering again f(z) = z^2:
\begin{aligned} & z^2 = (x+iy)^2 = (x^2-y^2) +i(2xy) \\ & u = x^2-y^2 \\ & v = 2xy \end{aligned}
Computing the partial derivatives:
\begin{aligned} & \frac{\partial u}{\partial x} = 2x \\ & \frac{\partial u}{\partial y} = -2y \\ & \frac{\partial v}{\partial x} = 2y \\ & \frac{\partial v}{\partial y} = 2x \end{aligned}
Checking the Cauchy-Riemann conditions. The first one:
\begin{aligned} & \frac{\partial u}{\partial x} = 2x \\ & \frac{\partial v}{\partial y} = 2x \end{aligned}
So:
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} = 2x
The second one:
\begin{aligned} & \frac{\partial u}{\partial y} = -2y\\ & \frac{\partial v}{\partial x} = 2y \end{aligned}
So:
\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x} = -2y
Both conditions are satisfied for all (x,y). Since the partial derivatives are continuous everywhere, f(z)=z^2 is analytic everywhere (it is an entire function).
The derivative calculated via path 1 is:
\frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x} = 2x + i(2y) = 2(x+iy) = 2z
The derivative calculated via path 2:
\frac{\partial v}{\partial y} - i \frac{\partial u}{\partial y} = 2x - i(-2y) = 2x + i(2y) = 2(x+iy) = 2z
Both yield the same result:
\frac{\mathrm df(z)}{\mathrm dz}=2z
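The same check can be carried out symbolically (a minimal SymPy sketch for the function f(z) = z^2 treated above):

```python
# SymPy sketch: verify the Cauchy-Riemann conditions for f(z) = z**2,
# with u = x**2 - y**2 and v = 2*x*y, and recover f'(z) = 2z.
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = x**2 - y**2   # Re f
v = 2*x*y         # Im f

print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))  # 0, i.e. u_x = v_y
print(sp.simplify(sp.diff(u, y) + sp.diff(v, x)))  # 0, i.e. u_y = -v_x

# Derivative via path 1: f'(z) = u_x + i*v_x = 2x + 2iy = 2z
fprime = sp.diff(u, x) + sp.I*sp.diff(v, x)
print(sp.simplify(fprime - 2*(x + sp.I*y)))        # 0
```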
When working with complex functions, it is often convenient to use polar coordinates z = re^{i\theta}, where x = r\cos\theta and y = r\sin\theta. If f(z) = u(r, \theta) + i v(r, \theta), the Cauchy-Riemann conditions in polar form are:
\begin{aligned} & \frac{\partial u}{\partial r} = \frac{1}{r} \frac{\partial v}{\partial \theta} \\ & \frac{\partial v}{\partial r} = - \frac{1}{r} \frac{\partial u}{\partial \theta} \end{aligned}
These can be denoted as u_r = \frac{1}{r}v_\theta and v_r = -\frac{1}{r}u_\theta. These equations hold for r \neq 0. The partial derivatives u_r, u_\theta, v_r, v_\theta can be related to u_x, u_y, v_x, v_y using the chain rule:
\begin{aligned} & u_r = u_x x_r + u_y y_r = u_x \cos\theta + u_y \sin\theta \\ & v_\theta = v_x x_\theta + v_y y_\theta = v_x (-r\sin\theta) + v_y (r\cos\theta) \end{aligned}
Using the Cartesian version of the Cauchy-Riemann conditions (u_x = v_y, u_y = -v_x):
\begin{aligned} & u_r = v_y \cos\theta - v_x \sin\theta \\ & \frac{1}{r}v_\theta = -v_x \sin\theta + v_y \cos\theta \\ & u_r = \frac{1}{r}v_\theta \end{aligned}
Similarly,
\begin{aligned} & v_r = v_x x_r + v_y y_r = v_x \cos\theta + v_y \sin\theta \\ & u_\theta = u_x x_\theta + u_y y_\theta = u_x (-r\sin\theta) + u_y (r\cos\theta) \end{aligned}
Using again the Cartesian version of the Cauchy-Riemann conditions:
\begin{aligned} & v_r = -u_y \cos\theta + u_x \sin\theta\\ & -\frac{1}{r}u_\theta = - (u_x (-\sin\theta) + u_y \cos\theta) = u_x \sin\theta - u_y \cos\theta \\ & v_r = -\frac{1}{r}u_\theta \end{aligned}
The derivative \frac{\mathrm df(z)}{\mathrm dz} can be expressed in two ways by following different paths.
Taking path 1 and approaching along the radial direction (\Delta \theta = 0, \Delta r \to 0), \Delta z = (r+\Delta r)e^{i\theta} - re^{i\theta} = \Delta r e^{i\theta}:
f^\prime(z)= \lim_{\Delta r \to 0} \frac{f( (r+\Delta r)e^{i\theta} ) - f(re^{i\theta})}{\Delta r e^{i\theta}} = e^{-i\theta} \left( \frac{\partial u}{\partial r} + i \frac{\partial v}{\partial r} \right)
Taking path 2 and approaching along the angular direction (\Delta r = 0, \Delta \theta \to 0), \Delta z = re^{i(\theta+\Delta\theta)} - re^{i\theta} = re^{i\theta}(e^{i\Delta\theta} - 1); for small \Delta\theta, e^{i\Delta\theta} - 1 \approx i\Delta\theta, so \Delta z \approx i r e^{i\theta} \Delta\theta:
f^\prime(z)= \lim_{\Delta \theta \to 0} \frac{f( re^{i(\theta+\Delta\theta)} ) - f(re^{i\theta})}{i r e^{i\theta} \Delta\theta} = \frac{1}{ire^{i\theta}} \left( \frac{\partial u}{\partial \theta} + i \frac{\partial v}{\partial \theta} \right) = e^{-i\theta} \left( \frac{1}{r}\frac{\partial v}{\partial \theta} - \frac{i}{r}\frac{\partial u}{\partial \theta} \right)
Equating the two expressions for \frac{\mathrm df(z)}{\mathrm dz}:
e^{-i\theta} \left( \frac{\partial u}{\partial r} + i \frac{\partial v}{\partial r} \right) = e^{-i\theta} \left( \frac{1}{r}\frac{\partial v}{\partial \theta} - \frac{i}{r}\frac{\partial u}{\partial \theta} \right)
Comparing the real and imaginary parts inside the parentheses gives the polar Cauchy-Riemann conditions.
For example, let’s consider the principal branch of the complex logarithm:
\operatorname{Log } z = \ln r + i\theta
where z = re^{i\theta} and \theta = \operatorname{Arg}(z) \in (-\pi, \pi]. Here, u(r,\theta) = \ln r and v(r,\theta) = \theta.
The partial derivatives are:
\begin{aligned} & \frac{\partial u}{\partial r} = \frac{1}{r} \\ & \frac{\partial u}{\partial \theta} = 0 \\ & \frac{\partial v}{\partial r} = 0 \\ & \frac{\partial v}{\partial \theta} = 1 \end{aligned}
Checking the Cauchy-Riemann conditions for r \neq 0. The first one:
\begin{aligned} & \frac{\partial u}{\partial r} = \frac{1}{r}\\ & \frac{1}{r}\frac{\partial v}{\partial \theta} = \frac{1}{r}(1) = \frac{1}{r} \end{aligned}
So,
\frac{\partial u}{\partial r} = \frac{1}{r}\frac{\partial v}{\partial \theta}
The second one:
\begin{aligned} & \frac{\partial v}{\partial r} = 0\\ & -\frac{1}{r}\frac{\partial u}{\partial \theta} = -\frac{1}{r}(0) = 0 \end{aligned}
So,
\frac{\partial v}{\partial r} = -\frac{1}{r}\frac{\partial u}{\partial \theta}
Both conditions are satisfied. Since the partial derivatives are continuous and satisfy the polar Cauchy-Riemann conditions for r > 0 and \theta \in (-\pi, \pi), the principal branch of \operatorname{Log } z is analytic in the domain \mathbb{C} \setminus (-\infty, 0] (the complex plane excluding the non-positive real axis, which is the branch cut).
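The polar check can likewise be done symbolically (a minimal SymPy sketch, restricted to r > 0):

```python
# SymPy sketch of the polar Cauchy-Riemann check for Log z = ln r + i*theta,
# with u = ln r and v = theta (r > 0 assumed).
import sympy as sp

r = sp.symbols('r', positive=True)
theta = sp.symbols('theta', real=True)
u = sp.log(r)
v = theta

print(sp.simplify(sp.diff(u, r) - sp.diff(v, theta)/r))  # 0, i.e. u_r = v_theta/r
print(sp.simplify(sp.diff(v, r) + sp.diff(u, theta)/r))  # 0, i.e. v_r = -u_theta/r

# The radial-path derivative e^{-i*theta} (u_r + i*v_r) reduces to 1/z:
dfdz = sp.exp(-sp.I*theta) * (sp.diff(u, r) + sp.I*sp.diff(v, r))
print(sp.simplify(dfdz - 1/(r*sp.exp(sp.I*theta))))      # 0
```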
The real and imaginary parts of an analytic function are harmonic functions. A real-valued function \phi(x,y) is harmonic if it satisfies Laplace’s equation:
\nabla^2 \phi = \frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} = 0
If f(z) = u(x,y) + iv(x,y) is analytic, and assuming u and v have continuous second-order partial derivatives (which is true for analytic functions), we can show u and v are harmonic using the Cauchy-Riemann conditions:
\begin{aligned} & \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \\ & \frac{\partial u}{\partial y} = - \frac{\partial v}{\partial x} \end{aligned}
Differentiate the first one with respect to x and the second one with respect to y:
\begin{aligned} & \frac{\partial^2 u}{\partial x^2} = \frac{\partial^2 v}{\partial x \partial y} \\ & \frac{\partial^2 u}{\partial y^2} = -\frac{\partial^2 v}{\partial y \partial x} \end{aligned}
Assuming continuity of mixed partials (Clairaut’s theorem: \frac{\partial^2 v}{\partial x \partial y} = \frac{\partial^2 v}{\partial y \partial x}), summing these two equations gives:
\begin{aligned} & \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = \frac{\partial^2 v}{\partial x \partial y} - \frac{\partial^2 v}{\partial y \partial x} = 0 \\ & \nabla^2 u = 0 \end{aligned}
Similarly, differentiate the first one with respect to y and the second with respect to x:
\begin{aligned} & \frac{\partial^2 u}{\partial y \partial x} = \frac{\partial^2 v}{\partial y^2} \\ & \frac{\partial^2 u}{\partial x \partial y} = -\frac{\partial^2 v}{\partial x^2} \end{aligned}
Using \frac{\partial^2 v}{\partial x^2} = -\frac{\partial^2 u}{\partial x \partial y} and \frac{\partial^2 v}{\partial y^2} = \frac{\partial^2 u}{\partial y \partial x}, and summing:
\begin{aligned} & \frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2} = -\frac{\partial^2 u}{\partial x \partial y} + \frac{\partial^2 u}{\partial y \partial x} = 0 \\ & \quad \nabla^2 v = 0 \end{aligned}
Both u and v are harmonic functions. v is called a harmonic conjugate of u (and u is a harmonic conjugate of -v).
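This can be verified symbolically for a concrete analytic function (a minimal SymPy sketch; f(z) = z^3 is an arbitrary choice):

```python
# SymPy sketch: the real and imaginary parts of the analytic function
# f(z) = z**3 satisfy Laplace's equation.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.expand((x + sp.I*y)**3)
u, v = sp.re(f), sp.im(f)   # u = x**3 - 3*x*y**2, v = 3*x**2*y - y**3

laplacian = lambda w: sp.diff(w, x, 2) + sp.diff(w, y, 2)
print(sp.simplify(laplacian(u)), sp.simplify(laplacian(v)))  # 0 0
```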