Linear Systems of ODEs with Forcing


Method of Undetermined Coefficients

Method of Variation of Parameters

System of Linear ODEs with Forcing

Dirac Delta Function and Impulse Response

General Solution Via Convolution

The analysis of linear dynamical systems often extends to scenarios where the system is subject to an external influence or forcing function. Such systems are described by non-homogeneous linear ordinary differential equations.

The general solution to these equations retains a structure that separates the system’s intrinsic dynamics from its response to the external forcing.

A second-order linear non-homogeneous ODE with constant coefficients can be written in the form:

\frac{\mathrm{d}^2x}{\mathrm{d}t^2} + a_1 \frac{\mathrm{d}x}{\mathrm{d}t} + a_0 x = f(t)

The function f(t) is the non-homogeneous term, or forcing function. The theory of linear operators dictates that the general solution x(t) to this equation is the sum of two components: the complementary function x_h(t), which is the general solution to the corresponding homogeneous equation (f(t)=0), and a particular integral x_p(t), which is any specific solution to the non-homogeneous equation.

x(t) = x_h(t) + x_p(t)

The complementary function captures the transient behavior of the system, while the particular integral describes its steady-state response to the forcing. We will explore two primary methods for finding the particular integral.

Method of undetermined coefficients

This method is an efficient approach applicable when the forcing function f(t) and its derivatives span a finite-dimensional vector space. This is true for functions such as polynomials, exponentials, sines, and cosines, as well as their products and linear combinations.

The procedure consists of proposing a particular solution x_p(t) that is a linear combination of the functional forms present in f(t) and its derivatives, with coefficients that are to be determined.

Let us consider the initial value problem:

\begin{aligned} & \frac{\mathrm{d}^2x}{\mathrm{d}t^2} + 3\frac{\mathrm{d}x}{\mathrm{d}t} + 2x = e^{-3t} \\ & x(0) = 2\\ & \dot{x}(0) = 4 \end{aligned}

First, we solve the homogeneous equation to find the complementary function. The characteristic equation is:

\lambda^2 + 3\lambda + 2 = (\lambda+1)(\lambda+2) = 0

The eigenvalues are \lambda_1=-1 and \lambda_2=-2. The homogeneous solution is therefore:

x_h(t) = c_1 e^{-t} + c_2 e^{-2t}

Next, we propose a particular solution based on the form of the forcing term f(t) = e^{-3t}.

We assume a solution of the form:

x_p(t) = K e^{-3t}

We compute its derivatives and substitute them into the non-homogeneous equation:

\begin{aligned} & \dot{x}_p = -3K e^{-3t} \\ & \ddot{x}_p = 9K e^{-3t} \\ & (9K e^{-3t}) + 3(-3K e^{-3t}) + 2(K e^{-3t}) = e^{-3t} \end{aligned}

Factoring out the exponential term yields an algebraic equation for the coefficient K:

\begin{aligned} & (9 - 9 + 2)K = 1 \\ & 2K = 1 \\ & K = \frac{1}{2} \end{aligned}

The particular integral is:

x_p(t) = \frac{1}{2}e^{-3t}

The general solution is the sum:

x(t) = c_1 e^{-t} + c_2 e^{-2t} + \frac{1}{2}e^{-3t}

Finally, the constants c_1 and c_2 are determined by applying the initial conditions:

\begin{aligned} x(0) &= c_1 + c_2 + \frac{1}{2} = 2 \\ \dot{x}(0) &= -c_1 - 2c_2 - \frac{3}{2} = 4 \end{aligned}

Solving this linear system gives c_1 = \frac{17}{2} and c_2 = -7.
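As a quick check, the whole initial value problem can be solved symbolically. The following is a minimal sketch assuming SymPy is available; dsolve accepts the initial conditions directly through its ics argument:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')

# x'' + 3x' + 2x = e^{-3t}, with x(0) = 2 and x'(0) = 4
ode = sp.Eq(x(t).diff(t, 2) + 3*x(t).diff(t) + 2*x(t), sp.exp(-3*t))
sol = sp.dsolve(ode, x(t), ics={x(0): 2, x(t).diff(t).subs(t, 0): 4})
print(sol)  # x(t) = 17*exp(-t)/2 - 7*exp(-2*t) + exp(-3*t)/2
```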

A special case arises when the forcing term is itself a solution of the homogeneous equation. The standard guess then fails: substituting it into the left-hand side yields zero, so no choice of coefficient can match the forcing. The method must be modified by multiplying the trial solution by t.

For example, if the forcing term were e^{-t}, the correct starting solution would be:

x_p(t) = K t e^{-t}
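A minimal sketch of this resonant case, again assuming SymPy; for the operator above with forcing e^{-t}, the particular solution comes out proportional to t e^{-t} (with coefficient 1, as direct substitution confirms):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')

# Forcing e^{-t} coincides with a homogeneous solution, so the
# particular solution picks up a factor of t.
ode = sp.Eq(x(t).diff(t, 2) + 3*x(t).diff(t) + 2*x(t), sp.exp(-t))
print(sp.dsolve(ode, x(t)))  # contains a t*exp(-t) term with coefficient 1
```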

Method of variation of parameters

This method provides a more general algorithm for finding a particular solution that is valid for any continuous forcing function f(t).

The core idea is to start with the homogeneous solution and allow the parameters (the constants of integration) to vary as functions of t.

Let the basis of solutions for the homogeneous equation be y_1(t) and y_2(t).

The homogeneous solution is:

x_h(t) = c_1 y_1(t) + c_2 y_2(t)

We seek a particular solution of the form:

x_p(t) = u_1(t) y_1(t) + u_2(t) y_2(t)

The functions u_1(t) and u_2(t) are to be determined. We compute the first derivative:

\dot{x}_p = (\dot{u}_1 y_1 + \dot{u}_2 y_2) + (u_1 \dot{y}_1 + u_2 \dot{y}_2)

Since we have introduced two unknown functions but only need to satisfy one equation, we have a degree of freedom to impose an additional constraint. We choose a constraint that simplifies the subsequent algebra by setting the first parenthetical term to zero:

\dot{u}_1 y_1 + \dot{u}_2 y_2 = 0

This simplifies the first derivative to \dot{x}_p = u_1 \dot{y}_1 + u_2 \dot{y}_2. The second derivative is then:

\ddot{x}_p = (\dot{u}_1 \dot{y}_1 + \dot{u}_2 \dot{y}_2) + (u_1 \ddot{y}_1 + u_2 \ddot{y}_2)

Substituting x_p, \dot{x}_p, and \ddot{x}_p into the original non-homogeneous equation and rearranging gives:

u_1(\ddot{y}_1 + a_1 \dot{y}_1 + a_0 y_1) + u_2(\ddot{y}_2 + a_1 \dot{y}_2 + a_0 y_2) + \dot{u}_1 \dot{y}_1 + \dot{u}_2 \dot{y}_2 = f(t)

The first two terms vanish because y_1 and y_2 are, by definition, solutions to the homogeneous equation. This leaves us with a second equation for the derivatives of u_1 and u_2. We now have a system of two linear equations for \dot{u}_1 and \dot{u}_2:

\begin{aligned} & \dot{u}_1 y_1 + \dot{u}_2 y_2 = 0 \\ & \dot{u}_1 \dot{y}_1 + \dot{u}_2 \dot{y}_2 = f(t) \end{aligned}

Using our previous example, y_1 = e^{-t} and y_2 = e^{-2t}. The system becomes:

\begin{aligned} & \dot{u}_1 e^{-t} + \dot{u}_2 e^{-2t} = 0 \\ & -\dot{u}_1 e^{-t} - 2\dot{u}_2 e^{-2t} = e^{-3t} \end{aligned}

Solving this system yields:

\begin{aligned} & \dot{u}_1 = e^{-2t} \\ & \dot{u}_2 = -e^{-t} \end{aligned}

Integrating with respect to t gives the functions u_1 and u_2:

\begin{aligned} & u_1(t) = \int e^{-2t} \, \mathrm{d}t = -\frac{1}{2}e^{-2t} + K_1 \\ & u_2(t) = \int -e^{-t} \, \mathrm{d}t = e^{-t} + K_2 \end{aligned}

Substituting these back into the expression for x(t):

\begin{aligned} x(t) &= \left(-\frac{1}{2}e^{-2t} + K_1\right)e^{-t} + \left(e^{-t} + K_2\right)e^{-2t} \\ &= -\frac{1}{2}e^{-3t} + K_1 e^{-t} + e^{-3t} + K_2 e^{-2t} \\ &= K_1 e^{-t} + K_2 e^{-2t} + \frac{1}{2}e^{-3t} \end{aligned}

This result matches the one obtained by the method of undetermined coefficients, demonstrating that the approaches are consistent.

The constants of integration K_1 and K_2 are equivalent to the constants c_1 and c_2 in the general solution, and are determined by the initial conditions.

While more computationally intensive, the method of variation of parameters is applicable to any forcing function and provides a formal integral representation for the particular solution.
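The same computation can be scripted. Below is a minimal sketch, assuming SymPy: solve the linear system for \dot{u}_1 and \dot{u}_2, integrate, and assemble the particular solution:

```python
import sympy as sp

t = sp.symbols('t')
y1, y2, f = sp.exp(-t), sp.exp(-2*t), sp.exp(-3*t)

# Solve the 2x2 system for the derivatives u1', u2'
du1, du2 = sp.symbols('du1 du2')
sol = sp.solve([sp.Eq(du1*y1 + du2*y2, 0),
                sp.Eq(du1*y1.diff(t) + du2*y2.diff(t), f)],
               [du1, du2])

u1 = sp.integrate(sol[du1], t)      # -exp(-2*t)/2
u2 = sp.integrate(sol[du2], t)      #  exp(-t)
print(sp.simplify(u1*y1 + u2*y2))   #  exp(-3*t)/2
```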

System of linear ODEs with forcing

The analysis of linear dynamical systems is extended to include external influences through the introduction of a forcing term.

This leads to non-homogeneous systems of first-order ordinary differential equations, which are fundamental in fields such as control theory, electrical engineering, and mechanics.

The general solution to such systems can be constructed by combining the system’s intrinsic response to its initial state with its response to the external input.

The state-space representation of a forced linear system is given by:

\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u}(t)

Here, \mathbf{x}(t) \in \mathbb{R}^n is the state vector, \mathbf{A} is the n \times n system matrix that governs the internal dynamics, \mathbf{u}(t) \in \mathbb{R}^m is the vector of external inputs or controls, and \mathbf{B} is the n \times m input matrix that maps the inputs to the state dynamics.
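Such a system is straightforward to integrate numerically. The following is a minimal sketch assuming NumPy and SciPy; the matrices \mathbf{A}, \mathbf{B} and the input u(t) are illustrative placeholders, not tied to any particular example in the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # illustrative system matrix
B = np.array([[0.0], [1.0]])              # illustrative input matrix
u = lambda t: np.array([np.sin(t)])       # illustrative scalar input

# dx/dt = A x + B u(t)
sol = solve_ivp(lambda t, x: A @ x + B @ u(t), (0.0, 10.0), [1.0, 0.0])
print(sol.y[:, -1])  # state at t = 10
```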

Consider the dynamics of an inverted pendulum, a classic problem in control theory. The system’s state can be described by its angle \theta and angular velocity \omega. A control torque \tau is applied to stabilize the pendulum in its upright, unstable equilibrium position.

Figure: the inverted pendulum.

The nonlinear equation of motion, neglecting friction and measuring the angle \theta from the downward position (so that \theta = \pi is the upright position), is:

\ddot{\theta} = -\sin(\theta) + \tau

This can be rewritten as a system of first-order equations:

\begin{aligned} & \dot{\theta} = \omega \\ & \dot{\omega} = -\sin(\theta) + \tau \\ & \frac {\mathrm{d}}{\mathrm{d} t} \begin{bmatrix} \theta \\ \omega \end{bmatrix} =\begin{bmatrix} \omega \\ -\sin(\theta) \end{bmatrix} + \begin{bmatrix} 0 \\ \tau \end{bmatrix} \end{aligned}

Considering first the unforced case \tau = 0, the system can be linearized around \theta = \pi:

\begin{aligned} & \frac{D\mathbf{f}}{D\mathbf{x}} = \begin{bmatrix} 0 & 1 \\ -\cos(\theta) & 0 \end{bmatrix} \\ & \frac{D\mathbf{f}}{D\mathbf{x}} \bigg|_{\theta = \pi} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \\ & \lambda_1 = 1 \\ & \lambda_2 = -1 \end{aligned}

The equilibrium at \theta = \pi is a saddle, so the upright position is unstable.

A state feedback control law can be designed to stabilize this unstable equilibrium.

This involves defining the control input \tau as a linear function of the state variables:

\mathbf{u} = -\mathbf{K}\mathbf{x}

The matrix \mathbf{K} is the feedback gain matrix. The system equation becomes:

\dot{\mathbf{x}} = \mathbf{A}\mathbf{x} + \mathbf{B}(-\mathbf{K}\mathbf{x}) = (\mathbf{A}-\mathbf{B}\mathbf{K})\mathbf{x}

The feedback control effectively creates a new closed-loop system with a modified system matrix, \mathbf{A}_{cl} = \mathbf{A}-\mathbf{B}\mathbf{K}. By choosing the gain matrix \mathbf{K} appropriately, it is possible (if the system is controllable) to place the eigenvalues of \mathbf{A}_{cl} anywhere in the complex plane, thereby altering the system’s stability and dynamic response.
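A minimal sketch of this design step, assuming SciPy, using the pendulum linearization derived above with input matrix \mathbf{B} = [0, 1]^T. The target poles -1 and -2 are illustrative; note that place_poles only accepts poles whose multiplicity does not exceed the rank of \mathbf{B}:

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [1.0, 0.0]])  # linearization at theta = pi
B = np.array([[0.0], [1.0]])

# Choose K so that the eigenvalues of A - B K are -1 and -2
res = place_poles(A, B, [-1.0, -2.0])
print(res.gain_matrix)                             # [[3. 3.]]
print(np.linalg.eigvals(A - B @ res.gain_matrix))  # [-1. -2.]
```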

Returning to the pendulum, consider now \tau \neq 0. With \theta denoting the deviation from \pi, the linearized system takes the form:

\begin{aligned} & \frac {\mathrm{d}}{\mathrm{d} t} \begin{bmatrix} \theta \\ \omega \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \theta \\ \omega \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} \tau \\ & \dot{\mathbf{x}} = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u} \end{aligned}

We can choose \tau = -2\theta - 2\omega, which stabilizes the system. In matrix form:

\begin{bmatrix} 0 \\ 1 \end{bmatrix} \tau = \begin{bmatrix} 0 & 0 \\ -2 & -2 \end{bmatrix} \begin{bmatrix} \theta \\ \omega \end{bmatrix}

It is possible to verify that the resulting closed-loop system is stable:

\begin{aligned} \frac {\mathrm{d}}{\mathrm{d} t} \begin{bmatrix} \theta \\ \omega \end{bmatrix} & = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \theta \\ \omega \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} \tau \\ & = \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix} \begin{bmatrix} \theta \\ \omega \end{bmatrix} \end{aligned}

This closed-loop matrix has a repeated eigenvalue \lambda_1 = \lambda_2 = -1, so the system is stable.
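A quick numerical check of this claim, assuming NumPy:

```python
import numpy as np

# Closed-loop matrix from the text: both eigenvalues should be -1
A_cl = np.array([[0.0, 1.0], [-1.0, -2.0]])
print(np.linalg.eigvals(A_cl))  # [-1. -1.]
```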

Dirac delta function and impulse response

To derive the general solution for the forced system, it is instructive to first consider the system’s response to an idealized, instantaneous input. This is modeled using the Dirac delta function, \delta(t). This is a generalized function with the defining property that for any continuous function g(t):

\int_{-\infty}^{\infty} g(\tau) \delta(t-\tau) \, \mathrm{d}\tau = g(t)

The integral of the Dirac delta function is the Heaviside step function, H(t).

Figure: the integral of \delta(t), giving the Heaviside step function H(t).

We first find the impulse response of the system. This is the solution \mathbf{x}(t) for zero initial conditions, \mathbf{x}(0)=\mathbf{0}, when subjected to an impulsive input \mathbf{u}(t) = \delta(t)\mathbf{v}, where \mathbf{v} is a constant vector. Integrating the system equation over an infinitesimal interval around t=0:

\begin{aligned} & \int_{-\epsilon}^{\epsilon} \frac{\mathrm{d}\mathbf{x}}{\mathrm{d}\tau} \, \mathrm{d}\tau = \int_{-\epsilon}^{\epsilon} \left( \mathbf{A}\mathbf{x}(\tau) + \mathbf{B}\delta(\tau)\mathbf{v} \right) \mathrm{d}\tau \\ & \mathbf{x}(\epsilon) - \mathbf{x}(-\epsilon) = \int_{-\epsilon}^{\epsilon} \mathbf{A}\mathbf{x}(\tau) \, \mathrm{d}\tau + \mathbf{B}\mathbf{v} \end{aligned}

As \epsilon \to 0, the integral of the bounded term \mathbf{A}\mathbf{x} vanishes. Since \mathbf{x}(-\epsilon)=\mathbf{0}, we find that the state at t=0^+ is \mathbf{x}(0^+) = \mathbf{B}\mathbf{v}.

Figure: one-dimensional impulse response with a negative eigenvalue \lambda, impulse at t = 0.

For all t>0, the input is zero, so the system evolves as a homogeneous system from this new initial state. The solution for t \ge 0 is:

\mathbf{x}(t) = e^{\mathbf{A}t} \mathbf{x}(0^+) = e^{\mathbf{A}t}\mathbf{B}\mathbf{v}

If the impulse occurs at time \tau_0, the response for t \ge \tau_0 is \mathbf{x}(t) = e^{\mathbf{A}(t-\tau_0)}\mathbf{B}\mathbf{v}.

Figure: one-dimensional impulse response with a negative eigenvalue \lambda, impulse at t = 3.
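A minimal sketch of this impulse response, assuming NumPy and SciPy, with an illustrative stable \mathbf{A} and an impulse at \tau_0 = 3 as in the figure above:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # illustrative stable matrix
B = np.array([[0.0], [1.0]])
v = np.array([1.0])                       # impulse strength
tau0 = 3.0                                # impulse time

def impulse_response(t):
    """x(t) = e^{A (t - tau0)} B v for t >= tau0, zero before the impulse."""
    if t < tau0:
        return np.zeros(2)
    return expm(A * (t - tau0)) @ (B @ v)

print(impulse_response(4.0))
```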

We can also consider a case with two impulses and analyze the response. Since the system is linear, the superposition principle holds: the total response is the sum of the homogeneous response to the initial conditions and the individual impulse responses.

Figure: combined response to initial conditions and a delta impulse.

General solution via convolution

We can now construct the solution for an arbitrary input function \mathbf{u}(t) by representing it as a continuum of impulses. The input over an infinitesimal time interval [\tau, \tau+\mathrm{d}\tau] can be viewed as an impulse of strength \mathbf{u}(\tau)\mathrm{d}\tau.

The differential response \mathrm{d}\mathbf{x}(t) at time t due to this impulse at time \tau is given by the impulse response formula:

\mathrm{d}\mathbf{x}(t) = e^{\mathbf{A}(t-\tau)} \mathbf{B} \mathbf{u}(\tau) \mathrm{d}\tau

Since the system is linear, the total response at time t due to the forcing function is the superposition (integral) of all these infinitesimal responses for all past times \tau from 0 to t.

The complete solution is the sum of the response to the initial conditions (the homogeneous solution) and the response to the forcing function (the particular solution).

\mathbf{x}(t) = e^{\mathbf{A}t}\mathbf{x}_0 + \int_0^t e^{\mathbf{A}(t-\tau)} \mathbf{B} \mathbf{u}(\tau) \, \mathrm{d}\tau

This is the variation of parameters formula for systems.

The integral term is the convolution of the system’s impulse response function, e^{\mathbf{A}t}\mathbf{B}, with the input signal \mathbf{u}(t).

Figure: the convolution integral.

It represents the accumulated effect of the entire history of the input on the current state of the system, with past inputs being propagated to the present time t by the state transition matrix e^{\mathbf{A}(t-\tau)}.
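To close, here is a minimal numerical sketch, assuming NumPy and SciPy with illustrative matrices and input, that evaluates the convolution formula directly and compares it against straightforward numerical integration of the ODE:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, trapezoid

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # illustrative system matrix
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
u = lambda t: np.array([np.sin(t)])       # illustrative input

def x_convolution(t, n=2000):
    """x(t) = e^{At} x0 + integral_0^t e^{A(t - tau)} B u(tau) dtau."""
    taus = np.linspace(0.0, t, n)
    vals = np.array([expm(A * (t - tau)) @ (B @ u(tau)) for tau in taus])
    return expm(A * t) @ x0 + trapezoid(vals, taus, axis=0)

sol = solve_ivp(lambda t, x: A @ x + B @ u(t), (0.0, 5.0), x0, rtol=1e-9)
print(x_convolution(5.0))  # should closely match the line below
print(sol.y[:, -1])
```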
