Higher Order Linear ODEs


Coupled oscillator system

General form

Eigenvalues and eigenvectors

Eigenvalues and eigenvectors calculation

Matrix exponential

Phase plane analysis

General criterion for asymptotic stability

The analysis of coupled dynamical systems frequently leads to systems of differential equations. While often approached from a vector-space perspective, it is instructive to understand their equivalence to single, higher-order differential equations.

We will explore this relationship using the canonical example of a coupled mass-spring system, which serves as a prototype for many phenomena in physics and engineering.

This analysis will culminate in a general treatment of n^{th} order linear homogeneous ODEs, revealing the connection between their characteristic equations and the eigenvalue problem of their corresponding state-space representation.

Coupled oscillator system

Consider a one-dimensional system of two masses, m_1 and m_2, connected by a spring of stiffness k_2, with the first mass also attached to a fixed support by a spring of stiffness k_1. The displacement of each mass from its equilibrium position is denoted by x_1(t) and x_2(t), respectively.

Harmonic oscillator with two masses

The forces acting on each mass are determined by Hooke’s law. Mass m_1 experiences a restoring force from the first spring, -k_1 x_1, and a force from the second spring, k_2(x_2 - x_1). Mass m_2 experiences only the force from the second spring, -k_2(x_2 - x_1). Applying Newton’s second law, F=ma, to each mass independently yields a system of two coupled second-order linear ODEs:

\begin{aligned} & m_1 \frac{\mathrm{d}^2 x_1}{\mathrm{d}t^2} = -k_1 x_1 + k_2 (x_2 - x_1) \\ & m_2 \frac{\mathrm{d}^2 x_2}{\mathrm{d}t^2} = -k_2 (x_2 - x_1) \end{aligned}

Rearranging these equations into a more standard form highlights the coupling between the variables:

\begin{aligned} & m_1 \ddot{x}_1 + (k_1 + k_2) x_1 - k_2 x_2 = 0 \\ & m_2 \ddot{x}_2 - k_2 x_1 + k_2 x_2 = 0 \end{aligned}

To obtain a unique solution, we must specify four initial conditions: x_1(0), \dot{x}_1(0), x_2(0), and \dot{x}_2(0).

This system can be analyzed through two primary, yet equivalent, mathematical frameworks.

One approach is to eliminate one of the dependent variables to obtain a single higher-order equation for the other. We can solve the first equation for x_2:

x_2 = \frac{m_1}{k_2} \ddot{x}_1 + \frac{k_1 + k_2}{k_2} x_1

To substitute this into the second equation, we must compute its second time derivative:

\ddot{x}_2 = \frac{m_1}{k_2} x_1^{(4)} + \frac{k_1 + k_2}{k_2} \ddot{x}_1

Substituting the expressions for x_2 and \ddot{x}_2 into the second equation of motion, m_2 \ddot{x}_2 + k_2 x_2 - k_2 x_1 = 0, yields:

m_2 \left( \frac{m_1}{k_2} x_1^{(4)} + \frac{k_1 + k_2}{k_2} \ddot{x}_1 \right) + k_2 \left( \frac{m_1}{k_2} \ddot{x}_1 + \frac{k_1 + k_2}{k_2} x_1 \right) - k_2 x_1 = 0

Combining terms based on the order of the derivative of x_1 gives a single fourth-order linear homogeneous ODE with constant coefficients:

(m_1 m_2) x_1^{(4)} + (m_1 k_2 + m_2(k_1+k_2)) \ddot{x}_1 + k_1 k_2 x_1 = 0

This demonstrates that the dynamics of one component of the coupled system can be described by a single higher-order differential equation.
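As a quick check, the elimination can be reproduced symbolically. The following SymPy sketch (illustrative only, not part of the derivation) substitutes x_2 and its second derivative into the second equation of motion and expands the result, which should match the fourth-order equation above.

```python
# Symbolic check of the elimination step (a sketch, not part of the derivation).
import sympy as sp

t = sp.symbols("t")
m1, m2, k1, k2 = sp.symbols("m1 m2 k1 k2", positive=True)
x1 = sp.Function("x1")(t)

# x2 expressed from the first equation of motion
x2 = (m1 / k2) * x1.diff(t, 2) + (k1 + k2) / k2 * x1

# Second equation of motion, m2*x2'' + k2*x2 - k2*x1 = 0, multiplied through by k2
residual = sp.expand(k2 * (m2 * x2.diff(t, 2) + k2 * x2 - k2 * x1))
print(residual)   # m1*m2*x1'''' + (m1*k2 + m2*(k1 + k2))*x1'' + k1*k2*x1
```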

Alternatively, we can convert the system of two second-order equations into a system of four first-order equations.

This is accomplished by introducing the velocities as new state variables. Let us define a state vector \mathbf{z}(t) \in \mathbb{R}^4:

\mathbf{z}(t) = \begin{bmatrix} z_1(t) \\ z_2(t) \\ z_3(t) \\ z_4(t) \end{bmatrix} = \begin{bmatrix} x_1(t) \\ x_2(t) \\ \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix}

The time evolution of this state vector is given by \dot{\mathbf{z}} = \mathbf{A} \mathbf{z}, where the matrix \mathbf{A} encodes the dynamics of the system:

\frac{\mathrm{d}}{\mathrm{d}t} \begin{bmatrix} x_1 \\ x_2 \\ \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -\frac{k_1+k_2}{m_1} & \frac{k_2}{m_1} & 0 & 0 \\ \frac{k_2}{m_2} & -\frac{k_2}{m_2} & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \dot{x}_1 \\ \dot{x}_2 \end{bmatrix}

This state-space representation is central to modern control theory and numerical analysis.
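As a minimal sketch of how this form is used in practice, the system ż = Az can be handed directly to a numerical integrator. The parameter values and initial condition below are purely illustrative.

```python
# Numerical integration of the coupled-oscillator state-space model (illustrative values).
import numpy as np
from scipy.integrate import solve_ivp

m1, m2, k1, k2 = 1.0, 1.0, 1.0, 1.0   # assumed example parameters

A = np.array([
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [-(k1 + k2) / m1, k2 / m1, 0.0, 0.0],
    [k2 / m2, -k2 / m2, 0.0, 0.0],
])

z0 = np.array([1.0, 0.0, 0.0, 0.0])   # x1(0) = 1, everything else at rest
sol = solve_ivp(lambda t, z: A @ z, (0.0, 20.0), z0, dense_output=True)
print(sol.sol(20.0)[:2])              # displacements x1, x2 at t = 20
```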

General form

Both paths lead to the study of higher-order linear equations. Let us consider the general case of an n^{th} order linear homogeneous ODE with constant coefficients a_i \in \mathbb{R}:

a_n \frac{\mathrm{d}^n x}{\mathrm{d}t^n} + a_{n-1} \frac{\mathrm{d}^{n-1} x}{\mathrm{d}t^{n-1}} + \cdots + a_1 \frac{\mathrm{d}x}{\mathrm{d}t} + a_0 x = 0

We posit a solution of the form x(t) = e^{\lambda t}. Substituting this form into the equation and noting that \frac{\mathrm{d}^k}{\mathrm{d}t^k} e^{\lambda t} = \lambda^k e^{\lambda t}, we obtain:

\left( a_n \lambda^n + a_{n-1} \lambda^{n-1} + \cdots + a_1 \lambda + a_0 \right) e^{\lambda t} = 0

Since e^{\lambda t} \neq 0, the expression in the parenthesis must be zero. This yields the characteristic equation, a polynomial of degree n in \lambda:

P(\lambda) = a_n \lambda^n + a_{n-1} \lambda^{n-1} + \cdots + a_1 \lambda + a_0 = 0

By the fundamental theorem of algebra, this polynomial has n roots \lambda_1, \lambda_2, \ldots, \lambda_n in the complex plane.

If these roots are distinct, the general solution to the ODE is a linear combination of the fundamental solutions e^{\lambda_i t}:

x(t) = \sum_{i=1}^n c_i e^{\lambda_i t}

The n constants c_i are determined by n initial conditions. If a root \lambda_j has multiplicity m, it contributes m linearly independent solutions of the form e^{\lambda_j t}, t e^{\lambda_j t}, \ldots, t^{m-1}e^{\lambda_j t}.

Given n initial conditions, x(0), \dot{x}(0), \ldots, x^{(n-1)}(0), we can determine the coefficients c_i. For the case of distinct roots, this leads to a system of linear equations:

\begin{aligned} & x(0) = c_1 + c_2 + \cdots + c_n \\ & \dot{x}(0) = c_1\lambda_1 + c_2\lambda_2 + \cdots + c_n\lambda_n \\ & \ddot{x}(0) = c_1\lambda_1^2 + c_2\lambda_2^2 + \cdots + c_n\lambda_n^2 \\ &\vdots \\ & x^{(n-1)}(0) = c_1\lambda_1^{n-1} + c_2\lambda_2^{n-1} + \cdots + c_n\lambda_n^{n-1} \end{aligned}

This system can be expressed in matrix form, \mathbf{d} = \mathbf{V} \mathbf{c}:

\begin{bmatrix} x(0) \\ \dot{x}(0) \\ \vdots \\ x^{(n-1)}(0) \end{bmatrix} = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ \lambda_1 & \lambda_2 & \cdots & \lambda_n \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_1^{n-1} & \lambda_2^{n-1} & \cdots & \lambda_n^{n-1} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}

The matrix \mathbf{V} is a Vandermonde matrix. Its determinant is non-zero if and only if all \lambda_i are distinct, which guarantees the existence of a unique set of coefficients \mathbf{c} = \mathbf{V}^{-1} \mathbf{d} for the given initial conditions.
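A short NumPy sketch of this step, with an assumed set of distinct roots and illustrative initial conditions, builds the Vandermonde matrix and solves for the coefficients:

```python
# Solving d = V c for the coefficients c_i (roots and initial data are illustrative).
import numpy as np

lam = np.array([-1.0, -2.0, -3.0])   # assumed distinct roots lambda_i
d = np.array([1.0, 0.0, 0.0])        # x(0), x'(0), x''(0)

# V[k, i] = lam_i**k, matching the Vandermonde matrix above
V = np.vander(lam, increasing=True).T
c = np.linalg.solve(V, d)
print(c)                             # coefficients of x(t) = sum_i c_i exp(lam_i t)
```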

We can rewrite the general n^{th} order ODE (assuming a_n=1 without loss of generality) as a first-order system.

Let’s define the state vector \mathbf{x} = [x, \dot{x}, \ldots, x^{(n-1)}]^T. The system dynamics \dot{\mathbf{x}} = \mathbf{A} \mathbf{x} are governed by the companion matrix \mathbf{A}:

\frac{\mathrm{d}}{\mathrm{d}t} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{bmatrix}

The eigenvalues of this matrix \mathbf{A} are given by the solutions to its characteristic equation, \det(\mathbf{A} - \lambda\mathbf{I}) = 0. Let us compute this for the n=3 case:

\det \begin{bmatrix} -\lambda & 1 & 0 \\ 0 & -\lambda & 1 \\ -a_0 & -a_1 & -a_2-\lambda \end{bmatrix} = 0

Expanding the determinant along the first column gives:

\begin{aligned} &(-\lambda) \det \begin{bmatrix} -\lambda & 1 \\ -a_1 & -a_2-\lambda \end{bmatrix} - (0) + (-a_0) \det \begin{bmatrix} 1 & 0 \\ -\lambda & 1 \end{bmatrix} \\ &= -\lambda(-\lambda(-a_2-\lambda) - (-a_1)) - a_0(1) \\ &= -\lambda(\lambda^2 + a_2\lambda + a_1) - a_0 \\ &= -(\lambda^3 + a_2\lambda^2 + a_1\lambda + a_0) = 0 \end{aligned}

This is the characteristic polynomial of the original third-order ODE.

This result holds in general: the characteristic polynomial of the companion matrix \mathbf{A} is exactly the characteristic polynomial of the scalar n^{th} order ODE.

Therefore, the eigenvalues of the state-space matrix \mathbf{A} are identical to the roots \lambda_i that determine the fundamental solutions e^{\lambda_i t}. This establishes a complete correspondence between the two representations.
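This correspondence is easy to verify numerically: build the companion matrix for an illustrative polynomial and compare its eigenvalues with the polynomial roots.

```python
# Companion-matrix eigenvalues vs. polynomial roots (illustrative coefficients).
import numpy as np

a = np.array([8.0, 14.0, 7.0])       # a_0, a_1, a_2 of lambda^3 + 7*lambda^2 + 14*lambda + 8

n = len(a)
A = np.zeros((n, n))
A[:-1, 1:] = np.eye(n - 1)           # ones on the superdiagonal
A[-1, :] = -a                        # last row: -a_0, -a_1, ..., -a_{n-1}

eigs = np.sort(np.linalg.eigvals(A))
roots = np.sort(np.roots([1.0, *a[::-1]]))
print(eigs)                          # approximately [-4, -2, -1]
print(roots)                         # the same values
```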

Eigenvalues and eigenvectors

The analysis of higher-order linear ordinary differential equations is unified through their transformation into first-order systems of the form:

\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} = \mathbf{A} \mathbf{x}

where \mathbf{x}(t) is a state vector in \mathbb{R}^n and \mathbf{A} is an n \times n matrix of constant coefficients. The objective is to find a general solution for \mathbf{x}(t).

It is possible to achieve this by decoupling the system through a coordinate transformation, a procedure rooted in the concepts of eigenvalues and eigenvectors.

Consider the ideal scenario where the system is completely decoupled, meaning the matrix \mathbf{A} is diagonal, \mathbf{A} = \mathbf{D} = \text{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n). The system of equations is then:

\frac{\mathrm{d}}{\mathrm{d}t} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}

Each equation, \dot{x}_i = \lambda_i x_i, is independent of the others and has the straightforward solution x_i(t) = x_i(0) e^{\lambda_i t}. In matrix form, the solution is expressed using the matrix exponential:

\mathbf{x}(t) = \begin{bmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{bmatrix} \mathbf{x}(0) = e^{\mathbf{D}t} \mathbf{x}(0)

This suggests that if we can transform a general coupled system into a decoupled one, the solution becomes readily accessible.

For a general, non-diagonal matrix \mathbf{A}, our goal is to find an invertible linear transformation, represented by a matrix \mathbf{T}, that maps the original coordinates \mathbf{x} to new coordinates \mathbf{z} via \mathbf{x} = \mathbf{T}\mathbf{z}.

In this new coordinate system, we desire the dynamics to be diagonal. Substituting \mathbf{x} = \mathbf{T}\mathbf{z} into the original equation:

\begin{aligned} & \frac{\mathrm{d}}{\mathrm{d}t}(\mathbf{T}\mathbf{z}) = \mathbf{A}(\mathbf{T}\mathbf{z}) \\ & \mathbf{T}\frac{\mathrm{d}\mathbf{z}}{\mathrm{d}t} = \mathbf{A}\mathbf{T}\mathbf{z} \\ & \frac{\mathrm{d}\mathbf{z}}{\mathrm{d}t} = (\mathbf{T}^{-1}\mathbf{A}\mathbf{T})\mathbf{z} \end{aligned}

For the system to be decoupled in the \mathbf{z} coordinates, the matrix \mathbf{D} = \mathbf{T}^{-1}\mathbf{A}\mathbf{T} must be diagonal. This is the classic diagonalization problem of linear algebra.

Rearranging this gives:

\mathbf{A}\mathbf{T} = \mathbf{T}\mathbf{D}

Let the columns of \mathbf{T} be the vectors \boldsymbol{\xi}_1, \boldsymbol{\xi}_2, \ldots, \boldsymbol{\xi}_n and the diagonal entries of \mathbf{D} be the scalars \lambda_1, \lambda_2, \ldots, \lambda_n. The matrix equation \mathbf{A}\mathbf{T} = \mathbf{T}\mathbf{D} can be examined column by column:

\mathbf{A} \begin{bmatrix} | & & | \\ \boldsymbol{\xi}_1 & \cdots & \boldsymbol{\xi}_n \\ | & & | \end{bmatrix} = \begin{bmatrix} | & & | \\ \boldsymbol{\xi}_1 & \cdots & \boldsymbol{\xi}_n \\ | & & | \end{bmatrix} \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{bmatrix}

This equality implies that for each column i, \mathbf{A}\boldsymbol{\xi}_i = \lambda_i \boldsymbol{\xi}_i. This is the fundamental eigenvalue equation. The scalars \lambda_i are the eigenvalues of \mathbf{A}, and the corresponding non-zero vectors \boldsymbol{\xi}_i are its eigenvectors.

Geometrically, eigenvectors of a matrix \mathbf{A} are those special vectors whose direction is invariant under the linear transformation represented by \mathbf{A}; they are only scaled by the corresponding eigenvalue.

As an example, take a 2 × 2 matrix and test a few candidate vectors:

\begin{aligned} & \mathbf{A} = \begin{bmatrix} 3 & -1 \\ -1 & 3 \end{bmatrix} \\ & \mathbf{A} \,\mathbf{x} = \lambda \, \mathbf{x} \\ & \begin{bmatrix} 3 & -1 \\ -1 & 3 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 3 \\ -1 \end{bmatrix} \ne \lambda \begin{bmatrix} 1 \\ 0 \end{bmatrix} \\ & \begin{bmatrix} 3 & -1 \\ -1 & 3 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 \\ 3 \end{bmatrix} \ne \lambda \begin{bmatrix} 0 \\ 1 \end{bmatrix} \end{aligned}

That can be represented graphically:

Sample non-eigenvectors

In general, an arbitrary vector is not simply scaled by \mathbf{A}. However, there are two vectors \xi_{1,2} for which the relation does hold:

\begin{aligned} & \mathbf{A} = \begin{bmatrix} 3 & -1 \\ -1 & 3 \end{bmatrix} \\ & \mathbf{A} \,\mathbf{x} = \lambda \, \mathbf{x} \\ & \begin{bmatrix} 3 & -1 \\ -1 & 3 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 2 \end{bmatrix} = 2 \begin{bmatrix} 1 \\ 1 \end{bmatrix} \\ & \begin{bmatrix} 3 & -1 \\ -1 & 3 \end{bmatrix} \begin{bmatrix} 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 4 \\ -4 \end{bmatrix} = 4 \begin{bmatrix} 1 \\ -1 \end{bmatrix} \end{aligned}

Therefore, the two eigenvectors and eigenvalues are:

\begin{aligned} & \xi_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \lambda_1 = 2 \\ & \xi_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \lambda_2 = 4 \end{aligned}

This can also be seen graphically:

Eigenvectors

Eigenvalues and eigenvectors calculation

To find the eigenvalues and eigenvectors, we rewrite the equation as:

\begin{aligned} & \mathbf{A}\mathbf{x} - \lambda\mathbf{x} = \mathbf{0} \\ & (\mathbf{A} - \lambda\mathbf{I})\mathbf{x} = \mathbf{0} \end{aligned}

For this homogeneous system to have a non-trivial solution \mathbf{x} \neq \mathbf{0}, the matrix (\mathbf{A} - \lambda\mathbf{I}) must be singular. This condition is equivalent to its determinant being zero:

\det(\mathbf{A} - \lambda\mathbf{I}) = 0

This equation is the characteristic polynomial of \mathbf{A}, and its roots are the eigenvalues. For each eigenvalue \lambda_i, the corresponding eigenvectors are found by solving the linear system (\mathbf{A} - \lambda_i\mathbf{I})\mathbf{x} = \mathbf{0} to find the basis vectors for its null space.

\begin{aligned} & \mathbf{A} = \begin{bmatrix} 3 & -1 \\ -1 & 3 \end{bmatrix} \\ & \det(\mathbf{A} - \lambda \, \mathbf{I}) \\ & \begin{vmatrix} 3 -\lambda & -1 \\ -1 & 3 - \lambda \end{vmatrix} \\ & = (3 - \lambda)^2 - 1 = \lambda^2 - 6 \, \lambda + 8 \\ & = (\lambda - 2)(\lambda - 4) \\ & \lambda_1 = 2, \, \lambda_2 = 4 \end{aligned}

\lambda_1 = 2 and \lambda_2 = 4 are the eigenvalues. It is now possible to compute the eigenvectors. For \lambda_1 = 2:

\begin{aligned} & \left(\mathbf{A} - \lambda \, \mathbf{I}\right) \, \mathbf{x} = 0 \\ & \left( \begin{bmatrix} 3 & -1 \\ -1 & 3 \end{bmatrix} - \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} \right) \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ & \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ & \Rightarrow x_1 = x_2 \\ & \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \end{aligned}

for \lambda_2 = 4:

\begin{aligned} & \left(\mathbf{A} - \lambda \, \mathbf{I}\right) \, \mathbf{x} = 0 \\ & \left( \begin{bmatrix} 3 & -1 \\ -1 & 3 \end{bmatrix} - \begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix} \right) \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ & \begin{bmatrix} -1 & -1 \\ -1 & -1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ & \Rightarrow x_1 = -x_2 \\ & \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \end{aligned}
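These hand calculations can be cross-checked numerically; note that NumPy may return the eigenvalues in a different order and the eigenvectors normalized to unit length.

```python
# Numerical cross-check of the eigenvalue/eigenvector computation above.
import numpy as np

A = np.array([[3.0, -1.0], [-1.0, 3.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)     # 2 and 4 (possibly in a different order)
print(eigenvectors)    # columns proportional to [1, 1] and [1, -1]
```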

Matrix exponential

The formal solution to \dot{\mathbf{x}} = \mathbf{A}\mathbf{x} is \mathbf{x}(t) = e^{\mathbf{A}t}\mathbf{x}(0), where the matrix exponential e^{\mathbf{A}t} is defined by its Taylor series expansion:

e^{\mathbf{A}t} = \sum_{n=0}^{\infty} \frac{(\mathbf{A}t)^n}{n!} = \mathbf{I} + \mathbf{A}t + \frac{1}{2!}\mathbf{A}^2 t^2 + \cdots

If \mathbf{A} is diagonalizable, we can substitute \mathbf{A} = \mathbf{T}\mathbf{D}\mathbf{T}^{-1}. A key property is that \mathbf{A}^n = (\mathbf{T}\mathbf{D}\mathbf{T}^{-1})^n = \mathbf{T}\mathbf{D}^n\mathbf{T}^{-1}. The series becomes:

\begin{aligned} e^{\mathbf{A}t} &= \sum_{n=0}^{\infty} \frac{t^n}{n!} (\mathbf{T}\mathbf{D}^n\mathbf{T}^{-1}) \\ &= \mathbf{T} \left( \sum_{n=0}^{\infty} \frac{(\mathbf{D}t)^n}{n!} \right) \mathbf{T}^{-1} \\ &= \mathbf{T}e^{\mathbf{D}t}\mathbf{T}^{-1} \end{aligned}

The general solution for a diagonalizable system is therefore:

\mathbf{x}(t) = \mathbf{T} e^{\mathbf{D}t} \mathbf{T}^{-1} \mathbf{x}(0)

This provides a complete recipe for solving the system: first, find the eigenvalues and eigenvectors of \mathbf{A} to construct \mathbf{D} and \mathbf{T}; then, use this decomposition to construct the solution.

As an example, take the matrix used previously:

\begin{aligned} & \mathbf{A} = \begin{bmatrix} 3 & -1 \\ -1 & 3 \end{bmatrix} \\ & \mathbf{T} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \\ & \mathbf{D} = \begin{bmatrix} 2 & 0 \\ 0 & 4 \end{bmatrix} \\ & \mathbf{T}^{-1} = \begin{bmatrix} 0.5 & 0.5 \\ 0.5 & -0.5 \end{bmatrix} \end{aligned}

And the solution becomes:

\begin{aligned} & \mathbf{x}(t) = \mathbf{T} e^{\mathbf{D}\,t} \mathbf{T}^{-1} \, \mathbf{x}_0 \\ & \mathbf{x}(t) = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} e^{2t} & 0 \\ 0 & e^{4t} \end{bmatrix} \begin{bmatrix} 0.5 & 0.5 \\ 0.5 & -0.5 \end{bmatrix} \, \mathbf{x}(0) \\ & \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \frac{1}{2} \begin{bmatrix} e^{2t} + e^{4t} & e^{2t} - e^{4t} \\ e^{2t} - e^{4t} & e^{2t} + e^{4t} \end{bmatrix} \begin{bmatrix} x_1(0) \\ x_2(0) \end{bmatrix} \end{aligned}
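As a numerical sanity check (a sketch with an arbitrary time value), the diagonalization-based propagator T e^{Dt} T^{-1} can be compared against a general-purpose matrix exponential:

```python
# Diagonalization-based solution vs. scipy.linalg.expm for the example matrix.
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, -1.0], [-1.0, 3.0]])
T = np.array([[1.0, 1.0], [1.0, -1.0]])
D = np.diag([2.0, 4.0])

t = 0.3                                             # arbitrary test time
via_diag = T @ np.diag(np.exp(np.diag(D) * t)) @ np.linalg.inv(T)
print(np.allclose(via_diag, expm(A * t)))           # True
```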

Phase plane analysis

The behavior of the solutions can be visualized in the phase plane (for n=2). The nature of the trajectories is entirely determined by the eigenvalues of \mathbf{A}.

Real eigenvalues

If the eigenvalues \lambda_1, \lambda_2 are real and distinct, the eigenvectors define two invariant lines. If \lambda_1, \lambda_2 > 0, all trajectories diverge from the origin. This fixed point is an unstable node or source. If \lambda_1, \lambda_2 < 0, all trajectories converge to the origin. This is a stable node or sink.

If \lambda_1 > 0 and \lambda_2 < 0, trajectories are repelled along the direction of the first eigenvector and attracted along the direction of the second. This is a saddle point, which is inherently unstable.

Complex eigenvalues

If the eigenvalues are a complex conjugate pair, \lambda_{1,2} = \alpha \pm i\beta, the solutions involve oscillatory behavior. If \alpha = 0 (purely imaginary eigenvalues), the trajectories are closed orbits (ellipses) around the origin. The fixed point is a center, and the system is stable but not asymptotically stable.

If \alpha < 0, the term e^{\alpha t} introduces a decay. Trajectories spiral inwards towards the origin. This is a stable spiral or spiral sink.

If \alpha > 0, the trajectories spiral outwards. This is an unstable spiral or spiral source.
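The case analysis above can be collected into a small helper that classifies the fixed point of a 2 × 2 system from the eigenvalues of A; the function name and tolerance below are choices made for this sketch, not a standard API.

```python
# Classify the fixed point of x' = A x from the eigenvalues of A (2x2 case).
import numpy as np

def classify_fixed_point(A, tol=1e-12):
    lam = np.linalg.eigvals(A)
    re, im = lam.real, lam.imag
    if np.any(np.abs(im) > tol):                    # complex conjugate pair
        if np.all(np.abs(re) <= tol):
            return "center"
        return "stable spiral" if np.all(re < 0) else "unstable spiral"
    if np.all(re < -tol):
        return "stable node (sink)"
    if np.all(re > tol):
        return "unstable node (source)"
    if re.min() < -tol and re.max() > tol:
        return "saddle"
    return "degenerate (zero or repeated eigenvalue)"

print(classify_fixed_point(np.array([[3.0, -1.0], [-1.0, 3.0]])))   # unstable node (source)
print(classify_fixed_point(np.array([[1.0, 0.0], [0.0, -1.0]])))    # saddle
```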

Examples

The previous matrix:

\mathbf{A} = \begin{bmatrix} 3 & -1 \\ -1 & 3 \end{bmatrix}

With its two real, distinct, and positive eigenvalues \lambda_1 = 2 and \lambda_2 = 4, this is an example of a source.

Source

Let’s consider a different system which has a matrix of the form:

\mathbf{A} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}

As the matrix is already diagonal, no calculation is needed: the eigenvalues are its diagonal entries, \lambda_{1,2} = \pm 1, and the eigenvectors are the standard basis vectors.

The solution is:

\begin{aligned} & \frac {\mathrm{d}}{\mathrm{d} t} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \\ & \begin{bmatrix} x(t) \\ y(t) \end{bmatrix} = \begin{bmatrix} e^{t} & 0 \\ 0 & e^{-t} \end{bmatrix} \begin{bmatrix} x(0) \\ y(0) \end{bmatrix} \end{aligned}

Since the signs of the eigenvalues are opposite, the fixed point is an example of a saddle.

Saddle

This system is unstable.

Let’s now consider a system whose matrix has purely imaginary eigenvalues:

\mathbf{A} = \begin{bmatrix} 0 & 2 \\ -2 & 0 \end{bmatrix}

The first step is to compute the eigenvalues:

\begin{aligned} & \frac {\mathrm{d}}{\mathrm{d} t} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 & 2 \\ -2 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \\ & | \mathbf{A} - \lambda \mathbf{I} | = 0 \\ & \left\vert \begin{bmatrix} - \lambda & 2 \\ -2 & - \lambda \end{bmatrix} \right\vert = 0 \\ & \lambda^2 + 4 = 0 \\ & \lambda_{1,2} = \pm\sqrt{-4} = \pm 2 \, i \\ \end{aligned}

Once the eigenvalues are computed, we can find the eigenvectors:

\begin{aligned} & \mathbf{A} = \begin{bmatrix} 0 & 2 \\ -2 & 0 \end{bmatrix} \\ & (\mathbf{A} - \lambda_1 \mathbf{I}) \mathbf{x}_1 = \mathbf{0} \\ & \begin{bmatrix} -2\,i & 2 \\ -2 & -2\,i \end{bmatrix} \begin{bmatrix} x_{11} \\ x_{12} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ & \Rightarrow \begin{bmatrix} x_{11} \\ x_{12} \end{bmatrix} = \begin{bmatrix} 1 \\ i \end{bmatrix} \\ & (\mathbf{A} - \lambda_2 \mathbf{I}) \mathbf{x}_2 = \mathbf{0} \\ & \begin{bmatrix} 2\,i & 2 \\ -2 & 2\,i \end{bmatrix} \begin{bmatrix} x_{21} \\ x_{22} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ & \Rightarrow \begin{bmatrix} x_{21} \\ x_{22} \end{bmatrix} = \begin{bmatrix} 1 \\ -i \end{bmatrix} \\ \end{aligned}

Now we can build the matrices:

\begin{aligned} & \mathbf{A} = \begin{bmatrix} 0 & 2 \\ -2 & 0 \end{bmatrix} \\ & \mathbf{T} = \begin{bmatrix} 1 & 1 \\ i & -i \end{bmatrix} \\ & \mathbf{D} = \begin{bmatrix} 2i & 0 \\ 0 & -2i \end{bmatrix} \\ & \mathbf{T}^{-1} = \begin{bmatrix} 0.5 & -0.5i \\ 0.5 & 0.5i \end{bmatrix} \end{aligned}

And the solution is:

\begin{aligned} & \mathbf{x}(t) = \mathbf{T} e^{\mathbf{D}\,t} \mathbf{T}^{-1} \, \mathbf{x}_0 \\ & \mathbf{x}(t) = \begin{bmatrix} 1 & 1 \\ i & -i \end{bmatrix} \begin{bmatrix} e^{2i\,t} & 0 \\ 0 & e^{-2i\,t} \end{bmatrix} \begin{bmatrix} 0.5 & -0.5i \\ 0.5 & 0.5i \end{bmatrix} \, \mathbf{x}(0) \\ & \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \frac{1}{2} \begin{bmatrix} e^{2i\,t} + e^{-2i\,t} & -i\,e^{2i\,t} +i\, e^{-2i\,t} \\ i\,e^{2i\,t} -i\, e^{-2i\,t} & e^{2i\,t} + e^{-2i\,t} \end{bmatrix} \begin{bmatrix} x_1(0) \\ x_2(0) \end{bmatrix} \\ & \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \begin{bmatrix} \cos(2t) & \sin(2t) \\ -\sin(2t) & \cos(2t) \end{bmatrix} \begin{bmatrix} x_1(0) \\ x_2(0) \end{bmatrix} \end{aligned}

This solution is periodic.

Center
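A brief numerical check (with an arbitrary time value) confirms that e^{At} for this matrix is indeed the rotation matrix derived above:

```python
# Verify that exp(A t) is a pure rotation for the purely imaginary eigenvalue case.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 2.0], [-2.0, 0.0]])
t = 0.7
R = np.array([[np.cos(2 * t), np.sin(2 * t)],
              [-np.sin(2 * t), np.cos(2 * t)]])
print(np.allclose(expm(A * t), R))   # True
```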

Finally, let’s consider a matrix with complex eigenvalues having a negative real part:

\mathbf{A} = \begin{bmatrix} -1 & 2 \\ -2 & -1 \end{bmatrix}

The first step is to compute the eigenvalues:

\begin{aligned} & \frac {\mathrm{d}}{\mathrm{d} t} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -1 & 2 \\ -2 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \\ & | \mathbf{A} - \lambda \mathbf{I} | = 0 \\ & \left\vert \begin{bmatrix} -1 - \lambda & 2 \\ -2 & -1 - \lambda \end{bmatrix} \right\vert = 0 \\ & (-1 -\lambda)^2 + 4 = \lambda^2 + 2 \lambda + 5 = 0 \\ & \lambda_{1,2} = \frac{-2 \pm \sqrt{4-20}}{2} = \frac{-2 \pm \sqrt{-16}}{2} = \frac{-2 \pm 4 \, i }{2} = -1 \pm 2\,i \\ \end{aligned}

Once the eigenvalues are computed, we can find the eigenvectors:

\begin{aligned} & \mathbf{A} = \begin{bmatrix} -1 & 2 \\ -2 & -1 \end{bmatrix} \\ & (\mathbf{A} - \lambda_1 \mathbf{I}) \mathbf{x}_1 = \mathbf{0} \\ & \begin{bmatrix} -2\,i & 2 \\ -2 & -2\,i \end{bmatrix} \begin{bmatrix} x_{11} \\ x_{12} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ & \Rightarrow \begin{bmatrix} x_{11} \\ x_{12} \end{bmatrix} = \begin{bmatrix} 1 \\ i \end{bmatrix} \\ & (\mathbf{A} - \lambda_2 \mathbf{I}) \mathbf{x}_2 = \mathbf{0} \\ & \begin{bmatrix} 2\,i & 2 \\ -2 & 2\,i \end{bmatrix} \begin{bmatrix} x_{21} \\ x_{22} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ & \Rightarrow \begin{bmatrix} x_{21} \\ x_{22} \end{bmatrix} = \begin{bmatrix} 1 \\ -i \end{bmatrix} \end{aligned}

Now we can build the matrices:

\begin{aligned} & \mathbf{A} = \begin{bmatrix} -1 & 2 \\ -2 & -1 \end{bmatrix} \\ & \mathbf{T} = \begin{bmatrix} 1 & 1 \\ i & -i \end{bmatrix} \\ & \mathbf{D} = \begin{bmatrix} -1 + 2i & 0 \\ 0 & -1 -2i \end{bmatrix} \\ & \mathbf{T}^{-1} = \begin{bmatrix} 0.5 & -0.5i \\ 0.5 & 0.5i \end{bmatrix} \end{aligned}

And the solution is:

\begin{aligned} & \mathbf{x}(t) = \mathbf{T} e^{\mathbf{D}\,t} \mathbf{T}^{-1} \, \mathbf{x}_0 \\ & \mathbf{x}(t) = \begin{bmatrix} 1 & 1 \\ i & -i \end{bmatrix} \begin{bmatrix} e^{-t} e^{2i\,t} & 0 \\ 0 & e^{-t} e^{-2i\,t} \end{bmatrix} \begin{bmatrix} 0.5 & -0.5i \\ 0.5 & 0.5i \end{bmatrix} \, \mathbf{x}(0) \\ & \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \frac{1}{2} e^{-t} \begin{bmatrix} e^{2i\,t} + e^{-2i\,t} & -i\,e^{2i\,t} +i\, e^{-2i\,t} \\ i\,e^{2i\,t} -i\, e^{-2i\,t} & e^{2i\,t} + e^{-2i\,t} \end{bmatrix} \begin{bmatrix} x_1(0) \\ x_2(0) \end{bmatrix} \\ & \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = e^{-t} \begin{bmatrix} \cos(2t) & \sin(2t) \\ -\sin(2t) & \cos(2t) \end{bmatrix} \begin{bmatrix} x_1(0) \\ x_2(0) \end{bmatrix} \end{aligned}

Compared to the previous case, there is a damping effect due to the factor e^{-t}, and the solution converges to the origin.

Spiral
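The same kind of check (again with an arbitrary time value) confirms the damping factor for the spiral case: e^{At} should equal e^{-t} times the same rotation matrix.

```python
# Verify the damped-rotation form exp(A t) = exp(-t) * R(2t) for the spiral example.
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 2.0], [-2.0, -1.0]])
t = 0.7
R = np.array([[np.cos(2 * t), np.sin(2 * t)],
              [-np.sin(2 * t), np.cos(2 * t)]])
print(np.allclose(expm(A * t), np.exp(-t) * R))   # True
```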

General criterion for asymptotic stability

A linear system \dot{\mathbf{x}} = \mathbf{A}\mathbf{x} is defined as asymptotically stable if all solutions converge to the origin as t \to \infty, i.e., \lim_{t \to \infty} \mathbf{x}(t) = \mathbf{0} for any initial condition \mathbf{x}(0).

Using the eigenvalue and eigenvector decomposition, the solution becomes:

\begin{aligned} & \mathbf{A} \,\mathbf{T} = \mathbf{T} \, \mathbf{D} \\ & \mathbf{x}(t) = \mathbf{T} e^{\mathbf{D}\,t} \mathbf{T}^{-1} \, \mathbf{x}_0 \end{aligned}

As e^{\mathbf{D}\,t} is a diagonal matrix of the form:

e^{\mathbf{D}\,t} = \begin{bmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{bmatrix}

The solution \mathbf{x}(t) is therefore a linear combination of terms of the form e^{\lambda_i t}. For complex eigenvalues \lambda = \alpha + i\beta, the corresponding solution term behaves as:

e^{\lambda t} = e^{(\alpha+i\beta)t} = e^{\alpha t} e^{i\beta t} = e^{\alpha t}(\cos(\beta t) + i\sin(\beta t))

The magnitude of this term is |e^{\lambda t}| = |e^{\alpha t}| |e^{i\beta t}| = e^{\alpha t}. The solution component associated with this eigenvalue will decay to zero only if e^{\alpha t} \to 0, which requires \alpha < 0. This leads to a powerful and universal criterion for stability:

A linear system \dot{\mathbf{x}} = \mathbf{A}\mathbf{x} is asymptotically stable if and only if all eigenvalues \lambda_i of the matrix \mathbf{A} have a strictly negative real part.

\Re(\lambda_i) < 0, \quad \forall i=1, \ldots, n
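In code, this criterion amounts to inspecting the real parts of the eigenvalues of A. A minimal sketch (the function name is a choice made here):

```python
# Asymptotic stability check: all eigenvalues must have strictly negative real part.
import numpy as np

def is_asymptotically_stable(A):
    return bool(np.all(np.linalg.eigvals(A).real < 0))

print(is_asymptotically_stable(np.array([[-1.0, 2.0], [-2.0, -1.0]])))   # True  (spiral sink)
print(is_asymptotically_stable(np.array([[0.0, 2.0], [-2.0, 0.0]])))     # False (center)
```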
