Gradient Theorem

Fundamental Theorem For Line Integrals
In vector calculus, the gradient theorem, also referred to as the fundamental theorem of calculus for line integrals, establishes a connection between the line integral of a gradient field and the values of the underlying scalar potential function at the endpoints of the curve of integration.

This theorem is a generalization of the fundamental theorem of calculus to higher dimensions. It asserts that for gradient vector fields, the line integral is independent of the path taken, depending only on the start and end points.

Theorem: Let U be an open subset of \mathbb{R}^n, and let f : U \to \mathbb{R} be a continuously differentiable scalar function, meaning its gradient, \mathbf{\nabla} f, is continuous throughout U.

Let \gamma be a piecewise smooth curve contained entirely within U, parameterized by a vector function \mathbf{r}(t) for t \in [a, b]. The curve begins at point \mathbf{p} = \mathbf{r}(a) and terminates at point \mathbf{q} = \mathbf{r}(b).

[Figure: a piecewise smooth curve \gamma from \mathbf{p} to \mathbf{q}]

The gradient theorem states that the line integral of the gradient of f along the curve \gamma is given by the difference in the values of f at the endpoints of the curve:

\int_{\gamma} \mathbf{\nabla} f(\mathbf{v}) \cdot \mathrm{d}\mathbf{v} = f(\mathbf{q}) - f(\mathbf{p})

Here, \mathbf{\nabla} f represents the gradient vector field of the scalar function f. A vector field that can be expressed as the gradient of a scalar function is known as a conservative vector field, and the function f is called the scalar potential.
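Path independence can be checked numerically. The sketch below integrates \mathbf{\nabla} f along two different paths between the same endpoints; the potential f(x, y) = x^2 y + \sin x and both parametrizations are illustrative choices, not part of the theorem itself.

```python
import math

# Illustrative potential: f(x, y) = x^2 * y + sin(x)
def f(x, y):
    return x * x * y + math.sin(x)

def grad_f(x, y):
    # grad f = (df/dx, df/dy) = (2*x*y + cos(x), x^2)
    return (2 * x * y + math.cos(x), x * x)

def line_integral(r, r_prime, a=0.0, b=1.0, n=100_000):
    """Midpoint-rule approximation of the integral of grad f(r(t)) . r'(t) dt."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        gx, gy = grad_f(*r(t))
        dx, dy = r_prime(t)
        total += (gx * dx + gy * dy) * h
    return total

# Two different paths from p = (0, 0) to q = (1, 2):
straight = lambda t: (t, 2 * t)
straight_prime = lambda t: (1.0, 2.0)
parabola = lambda t: (t, 2 * t * t)
parabola_prime = lambda t: (1.0, 4 * t)

I1 = line_integral(straight, straight_prime)
I2 = line_integral(parabola, parabola_prime)
expected = f(1, 2) - f(0, 0)   # f(q) - f(p)
```

Both integrals agree with f(\mathbf{q}) - f(\mathbf{p}) to within the quadrature error, even though the two paths are different.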

Proof: The proof of the gradient theorem combines the definition of a line integral, the multivariable chain rule, and the fundamental theorem of calculus from single-variable calculus.

Let the curve \gamma be parametrized by the vector function \mathbf{r}(t) = (x_1(t), x_2(t), \dots, x_n(t)) for t in the interval [a, b]. The endpoints of this curve are \mathbf{p} = \mathbf{r}(a) and \mathbf{q} = \mathbf{r}(b).

We begin by expressing the line integral of the vector field \mathbf{\nabla} f along \gamma according to its definition:

\int_{\gamma} \mathbf{\nabla} f(\mathbf{v}) \cdot \mathrm{d}\mathbf{v} = \int_{a}^{b} \mathbf{\nabla} f(\mathbf{r}(t)) \cdot \mathbf{r}^\prime(t) \, \mathrm{d}t

The gradient of the scalar function f(x_1, x_2, \dots, x_n) is the vector field whose components are the partial derivatives of f:

\mathbf{\nabla} f = \left( \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \dots, \frac{\partial f}{\partial x_n} \right)

The derivative of the parametrization of the curve, \mathbf{r}^\prime(t), is given by:

\mathbf{r}^\prime(t) = \left( \frac{\mathrm{d}x_1}{\mathrm{d}t}, \frac{\mathrm{d}x_2}{\mathrm{d}t}, \dots, \frac{\mathrm{d}x_n}{\mathrm{d}t} \right)

We can now expand the dot product within the integral:

\mathbf{\nabla} f(\mathbf{r}(t)) \cdot \mathbf{r}^\prime(t) = \frac{\partial f}{\partial x_1} \frac{\mathrm{d}x_1}{\mathrm{d}t} + \frac{\partial f}{\partial x_2} \frac{\mathrm{d}x_2}{\mathrm{d}t} + \dots + \frac{\partial f}{\partial x_n} \frac{\mathrm{d}x_n}{\mathrm{d}t}

The expression on the right-hand side is the total derivative of the composite function f(\mathbf{r}(t)) with respect to t, as given by the multivariable chain rule.

Let us define a new function g(t) = f(\mathbf{r}(t)). The chain rule states:

\frac{\mathrm{d}g}{\mathrm{d}t} = \frac{\mathrm{d}}{\mathrm{d}t} f(\mathbf{r}(t)) = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}(\mathbf{r}(t)) \frac{\mathrm{d}x_i}{\mathrm{d}t} = \mathbf{\nabla} f(\mathbf{r}(t)) \cdot \mathbf{r}^\prime(t)
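This chain-rule identity can be verified numerically by comparing a finite-difference approximation of g^\prime(t) with the dot product \mathbf{\nabla} f(\mathbf{r}(t)) \cdot \mathbf{r}^\prime(t). The function f(x, y) = x y^2 and the circular parametrization below are illustrative choices.

```python
import math

# Check of g'(t) = grad f(r(t)) . r'(t) for the illustrative choices
# f(x, y) = x * y^2 along r(t) = (cos t, sin t).
def f(x, y):
    return x * y * y

def grad_f(x, y):
    return (y * y, 2 * x * y)   # (df/dx, df/dy)

def r(t):
    return (math.cos(t), math.sin(t))

def r_prime(t):
    return (-math.sin(t), math.cos(t))

def g(t):
    return f(*r(t))

t0 = 0.7
h = 1e-6
finite_diff = (g(t0 + h) - g(t0 - h)) / (2 * h)   # central-difference g'(t0)
gx, gy = grad_f(*r(t0))
dx, dy = r_prime(t0)
chain_rule = gx * dx + gy * dy                    # grad f(r(t0)) . r'(t0)
```

The two values agree to within the finite-difference error, as the chain rule guarantees.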

Substituting this result back into our integral, we obtain:

\int_{a}^{b} \mathbf{\nabla} f(\mathbf{r}(t)) \cdot \mathbf{r}^\prime(t) \, \mathrm{d}t = \int_{a}^{b} \frac{\mathrm{d}}{\mathrm{d}t} f(\mathbf{r}(t)) \, \mathrm{d}t

We now have an integral of a derivative with respect to a single variable, t.

At this point, we can invoke the fundamental theorem of calculus from single-variable calculus, which states that for a continuously differentiable function G(t):

\int_{a}^{b} G^\prime(t) \, \mathrm{d}t = G(b) - G(a)

Applying this to our context, where G(t) = f(\mathbf{r}(t)):

\int_{a}^{b} \frac{\mathrm{d}}{\mathrm{d}t} f(\mathbf{r}(t)) \, \mathrm{d}t = f(\mathbf{r}(b)) - f(\mathbf{r}(a))

Recalling that \mathbf{p} = \mathbf{r}(a) and \mathbf{q} = \mathbf{r}(b), we arrive at the final statement of the theorem:

\int_{\gamma} \mathbf{\nabla} f(\mathbf{v}) \cdot \mathrm{d}\mathbf{v} = f(\mathbf{q}) - f(\mathbf{p})

This completes the proof. It is important to note that the result holds for any piecewise smooth curve because such a curve can be decomposed into a finite number of smooth segments, and the integral over the entire curve is the sum of the integrals over these segments.

Summing the differences of the potential over consecutive segments produces a telescoping sum: the values of f at the interior junction points cancel, leaving f(\mathbf{q}) - f(\mathbf{p}), the same final result.
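The decomposition into segments can be sketched numerically. Below, a broken-line path from (0, 0) to (1, 1) is split into two smooth segments; the potential f(x, y) = e^x y is an illustrative choice. The values of f at the corner point (1, 0) cancel, so the segment contributions sum to f(\mathbf{q}) - f(\mathbf{p}).

```python
import math

# Illustrative potential: f(x, y) = exp(x) * y, so grad f = (exp(x)*y, exp(x)).
def grad_f(x, y):
    return (math.exp(x) * y, math.exp(x))

def segment_integral(r, r_prime, n=100_000):
    """Midpoint-rule approximation of the integral of grad f(r(t)) . r'(t) over [0, 1]."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        gx, gy = grad_f(*r(t))
        dx, dy = r_prime(t)
        total += (gx * dx + gy * dy) * h
    return total

# Segment 1: (0, 0) -> (1, 0).  Segment 2: (1, 0) -> (1, 1).
I1 = segment_integral(lambda t: (t, 0.0), lambda t: (1.0, 0.0))
I2 = segment_integral(lambda t: (1.0, t), lambda t: (0.0, 1.0))

f = lambda x, y: math.exp(x) * y
total = I1 + I2   # should match f(1, 1) - f(0, 0) = e - 0
```

Each segment integral equals the difference of f at that segment's endpoints, so their sum telescopes to the difference at the overall endpoints.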

The gradient theorem has implications in physics and engineering. For instance, in mechanics, if a force field is conservative (the gradient of a potential energy function), the work done by the force in moving a particle between two points is independent of the path taken. This principle is fundamental to the concept of conservation of energy.
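As a physical illustration, consider the assumed uniform-gravity potential U(x, y, z) = m g z, whose associated conservative force is \mathbf{F} = -\mathbf{\nabla} U = (0, 0, -mg). The sketch below computes the work done along a winding path and checks that it equals -(U(\mathbf{q}) - U(\mathbf{p})), independent of the route; the helical path and the values of m and g are illustrative choices.

```python
import math

# Assumed uniform-gravity potential U(x, y, z) = m*g*z; F = -grad U = (0, 0, -m*g).
m, g = 2.0, 9.81

def force(x, y, z):
    return (0.0, 0.0, -m * g)

def work(r, r_prime, n=100_000):
    """Midpoint-rule approximation of W = integral of F(r(t)) . r'(t) over [0, 1]."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        fx, fy, fz = force(*r(t))
        dx, dy, dz = r_prime(t)
        total += (fx * dx + fy * dy + fz * dz) * h
    return total

# A helical path from (1, 0, 0) up to (1, 0, 3):
helix = lambda t: (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t), 3 * t)
helix_prime = lambda t: (-2 * math.pi * math.sin(2 * math.pi * t),
                         2 * math.pi * math.cos(2 * math.pi * t), 3.0)

W = work(helix, helix_prime)
U = lambda z: m * g * z
# W = -(U(3) - U(0)) = -3*m*g, regardless of the horizontal winding.
```

The work depends only on the change in height, exactly as the gradient theorem predicts for a conservative force.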
