Ordinary Differential Equations

Section 4.1 Introduction to linear systems

Definition 4.1.1. Matrices.

A matrix \(A=\{a_{ij}\}\) is any rectangular array of numbers. The size of \(A\) is written as \(m \times n\text{,}\) where \(m\) is the number of rows and \(n\) is the number of columns.

Example 4.1.3.

Let \(A=\begin{bmatrix} 1&2&3 \\ 4&5&6 \end{bmatrix}\) and \(B=\begin{bmatrix} 1&0 \\ -1&1 \\ 0&1 \end{bmatrix}\text{.}\)
  1. Find \(AB\text{.}\)
  2. Find \(BA\text{.}\)
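As a quick numerical check of both products (a sketch using NumPy, which is not part of the text), note that \(AB\) is \(2 \times 2\) while \(BA\) is \(3 \times 3\):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[1, 0],
              [-1, 1],
              [0, 1]])

AB = A @ B   # (2 x 3)(3 x 2) -> 2 x 2
BA = B @ A   # (3 x 2)(2 x 3) -> 3 x 3
print(AB)    # [[-1  5] [-1 11]]
print(BA)    # [[1 2 3] [3 3 3] [4 5 6]]
```

The differing sizes already show that matrix multiplication is not commutative.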

Definition 4.1.4. Matrix transpose.

If \(A=\{a_{ij}\}\) is an \(m \times n\) matrix, its transpose \(A^T=\{a_{ji}\}\) is an \(n \times m\) matrix.

Definition 4.1.5. Square matrices.

A matrix \(A\) is called square if the number of its rows equals the number of its columns, i.e., \(A\) is an \(n \times n\) matrix.

Definition 4.1.6. Identity matrix.

An identity matrix \(I_n\) is an \(n \times n\) matrix with entries of \(1\) along the main diagonal (top left to bottom right) and \(0\) everywhere else.

Definition 4.1.7. Trace.

The trace of an \(n \times n\) matrix \(A\) is the sum of all the entries on its main diagonal, i.e.
\begin{equation*} \mathrm{tr}(A) = \displaystyle\sum_{i=1}^n a_{ii}. \end{equation*}
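For instance (a NumPy sketch; the matrix is the one that reappears in later examples of this section):

```python
import numpy as np

A = np.array([[5, -1],
              [3, 1]])
tr = np.trace(A)   # sum of the main-diagonal entries: 5 + 1
print(tr)          # 6
```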

Definition 4.1.8. Determinant of \(2 \times 2\) matrices.

Let \(A=\begin{bmatrix}a&b \\ c&d\end{bmatrix}\text{.}\) The determinant of \(A\) is
\begin{equation*} \mathrm{det}(A)=ad-bc. \end{equation*}
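The formula can be checked against a library routine (a NumPy sketch; the matrix is taken from later examples in this section):

```python
import numpy as np

A = np.array([[5., -1.],
              [3., 1.]])
det_formula = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]  # ad - bc = 5*1 - (-1)*3
print(det_formula)           # 8.0
print(np.linalg.det(A))      # agrees (up to floating-point error)
```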

Definition 4.1.9. Invertibility.

Let \(A\) be an \(n \times n\) matrix. If \(\mathrm{det}(A) \neq 0\text{,}\) then \(A\) is called invertible (also called nonsingular), which implies the rows of \(A\) are linearly independent. If \(\mathrm{det}(A)=0\text{,}\) then \(A\) is called noninvertible (also called singular), which implies the rows of \(A\) are linearly dependent so that at least one row can be written as a linear combination of the other rows.

Definition 4.1.10. Matrix inverse.

Let \(A\) be an \(n \times n\) matrix with \(\mathrm{det}(A) \neq 0\text{.}\) Then there exists a unique \(n \times n\) matrix \(A^{-1}\) such that \(A^{-1}A=AA^{-1}=I_n\text{.}\) The matrix \(A^{-1}\) is the multiplicative inverse of \(A\text{.}\)
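The defining property \(A^{-1}A=AA^{-1}=I_n\) is easy to verify numerically (a NumPy sketch, not part of the text):

```python
import numpy as np

A = np.array([[5., -1.],
              [3., 1.]])
Ainv = np.linalg.inv(A)   # exists since det(A) = 8 is nonzero
# both products recover the 2 x 2 identity matrix
print(np.allclose(Ainv @ A, np.eye(2)))   # True
print(np.allclose(A @ Ainv, np.eye(2)))   # True
```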

Example 4.1.12.

Consider the linear system
\begin{equation*} \left\{ \begin{array}{ll} ax+by=s \\ cx+dy=t. \end{array}\right. \end{equation*}
This system can be written as a vector equation of the form \(A\vec{x}=\vec{b}\text{,}\) where \(A=\begin{bmatrix} a&b \\ c&d \end{bmatrix}\text{,}\) \(\vec{x}=\begin{bmatrix} x \\ y \end{bmatrix}\text{,}\) and \(\vec{b}=\begin{bmatrix} s \\ t \end{bmatrix}\text{.}\)
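Once the system is in the form \(A\vec{x}=\vec{b}\text{,}\) it can be solved in one step (a NumPy sketch; the coefficient values below are illustrative placeholders, since \(a,b,c,d,s,t\) are left general in the text):

```python
import numpy as np

# illustrative system: 2x + y = 5, x + 3y = 10
A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([5., 10.])
x = np.linalg.solve(A, b)   # solves A x = b
print(x)                    # [1. 3.]
```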

Example 4.1.15.

Let \(\vec{x}\) be a vector with two components and let \(c \in \mathbb{R}\text{.}\) Then \(c\vec{x}\) scales the length of the vector by \(|c|\) but preserves its direction (reversing it when \(c<0\)).

Definition 4.1.16. Eigenvalues and eigenvectors.

Let \(A\) be an \(n \times n\) matrix. A constant \(\lambda\) is called an eigenvalue of \(A\) if there is a nonzero vector \(\vec{v}\) such that
\begin{equation*} A\vec{v}=\lambda \vec{v}. \end{equation*}
The vector \(\vec{v}\) is called an eigenvector corresponding to the eigenvalue \(\lambda\text{.}\) The pair \(\left\{ \lambda, \vec{v}\right\}\) is called an eigenpair of \(A\text{.}\)
To find eigenvectors, we rewrite \(A\vec{v}=\lambda \vec{v}\) as \((A-\lambda I)\vec{v}=\vec{0}\text{,}\) which has a nonzero solution \(\vec{v}\) exactly when \(\mathrm{det}(A-\lambda I)=0\text{.}\) Solving this characteristic equation yields the eigenvalues \(\lambda\text{.}\)

Example 4.1.18.

Find the eigenpairs of \(A=\begin{bmatrix} 5&-1 \\ 3&1\end{bmatrix}\text{.}\)
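By hand, the characteristic equation is \(\lambda^2-6\lambda+8=(\lambda-2)(\lambda-4)=0\text{,}\) giving \(\lambda=2\) and \(\lambda=4\text{.}\) A numerical cross-check (a NumPy sketch, not part of the text):

```python
import numpy as np

A = np.array([[5., -1.],
              [3., 1.]])
vals, vecs = np.linalg.eig(A)   # eigenvalues, eigenvectors (as columns)
print(np.sort(vals))            # [2. 4.]
for i in range(2):
    # each column of vecs satisfies A v = lambda v
    print(np.allclose(A @ vecs[:, i], vals[i] * vecs[:, i]))   # True, True
```

The columns returned by `eig` are scalar multiples of the hand-computed eigenvectors \((1,3)\) and \((1,1)\text{.}\)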

Example 4.1.19.

We rewrite the spring-mass problem as a system of linear ODEs using a substitution. Consider \(my''+\beta y'+ky=f(t)\text{,}\) or in standard form \(y''+2\lambda y'+\omega^2 y=u_2(t)\text{.}\)
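The substitution can be sketched as follows (a standard reduction to first order; the state variables \(x_1, x_2\) are introduced here for illustration). Let \(x_1=y\) and \(x_2=y'\text{.}\) Then
\begin{equation*} x_1' = x_2, \qquad x_2' = -\omega^2 x_1 - 2\lambda x_2 + u_2(t), \end{equation*}
or in vector form
\begin{equation*} \vec{x}\,' = \begin{bmatrix} 0 & 1 \\ -\omega^2 & -2\lambda \end{bmatrix}\vec{x} + \begin{bmatrix} 0 \\ u_2(t) \end{bmatrix}. \end{equation*}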

Example 4.1.20.

Consider two brine tanks: Tank 1 contains \(x(t)\) pounds of salt in \(100\) gallons of brine, and Tank 2 contains \(y(t)\) pounds of salt in \(200\) gallons of brine. Each tank is well stirred, and brine is pumped between the two tanks. In addition, fresh water flows into Tank 1 at a rate of \(20\) gallons per minute, and brine flows out of Tank 2 at a rate of \(20\) gallons per minute, so that the total volume of brine in each tank remains constant. Find the vector equation of the given system.
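As a sketch of the setup (the inter-tank pump rates are not specified above, so write \(r\) gallons per minute for the rate pumped from Tank 2 to Tank 1; keeping both volumes constant then forces the Tank 1 to Tank 2 rate to be \(r+20\)): tracking salt as rate in minus rate out, with concentrations \(x/100\) and \(y/200\text{,}\) gives
\begin{equation*} \begin{bmatrix} x \\ y \end{bmatrix}' = \begin{bmatrix} -\dfrac{r+20}{100} & \dfrac{r}{200} \\ \dfrac{r+20}{100} & -\dfrac{r+20}{200} \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}. \end{equation*}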

Definition 4.1.21. Homogeneous systems.

A linear homogeneous system is of the form
\begin{equation} \vec{x}'=A\vec{x},\tag{4.1.1} \end{equation}
where \(A\) is an \(n \times n\) matrix.

Definition 4.1.22. Homogeneous solutions.

A vector-valued function \(\vec{w}\colon I \rightarrow \mathbb{R}^n\text{,}\) where \(I\) is the interval of existence, is called a solution to (4.1.1) provided that \(\vec{w}'(t)=A\vec{w}(t)\) for all \(t \in I\text{.}\) Note that the zero vector is always a solution to (4.1.1), called the trivial solution.

Example 4.1.23.

Let \(\vec{x}'=\begin{bmatrix} 5&-1 \\ 3&1 \end{bmatrix}\vec{x}\text{.}\) Determine if the following are solutions on \((-\infty,\infty)\text{.}\)
  1. \(\displaystyle \vec{x}=\begin{bmatrix} 1\\ 1 \end{bmatrix} e^{-t}\)
  2. \(\displaystyle \vec{x}=\begin{bmatrix} 1\\3 \end{bmatrix}e^{2t}\)
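A candidate of the form \(\vec{x}=\vec{v}e^{rt}\) satisfies \(\vec{x}'=A\vec{x}\) exactly when \(A\vec{v}=r\vec{v}\text{,}\) since \(\vec{x}'=r\vec{v}e^{rt}\) and \(e^{rt}\neq 0\text{.}\) That condition is easy to check numerically (a NumPy sketch, not part of the text):

```python
import numpy as np

A = np.array([[5., -1.],
              [3., 1.]])
# x(t) = v e^{rt} solves x' = A x exactly when A v = r v
v1, r1 = np.array([1., 1.]), -1.0   # candidate 1
v2, r2 = np.array([1., 3.]),  2.0   # candidate 2
print(np.allclose(A @ v1, r1 * v1))  # False: not a solution
print(np.allclose(A @ v2, r2 * v2))  # True: a solution
```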

In order to ensure that solutions to (4.1.1) are "distinct," we need a notion of linear independence in this setting.

Definition 4.1.28. General solution.

Let the vector-valued functions \(\vec{x}_1,\ldots,\vec{x}_n\) form a fundamental set of solutions to (4.1.1) on \(I\text{.}\) Then
\begin{equation*} \vec{x}=c_1\vec{x}_1+c_2\vec{x}_2+\ldots+c_n\vec{x}_n \end{equation*}
is a general solution to (4.1.1) on \(I\text{.}\)

Example 4.1.29.

Verify \(\vec{x}_1(t)=\begin{bmatrix} 1\\ 3 \end{bmatrix}e^{2t}\) and \(\vec{x}_2(t)=\begin{bmatrix} 1\\ 1\end{bmatrix}e^{4t}\) form a fundamental set of solutions to \(\vec{x}'=\begin{bmatrix} 5&-1 \\ 3& 1 \end{bmatrix}\vec{x}\) on \((-\infty,\infty)\text{.}\)
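One way to organize the verification is through the Wronskian: place \(\vec{x}_1(t)\) and \(\vec{x}_2(t)\) as columns of a matrix and check that its determinant is nonzero at some point of \(I\text{.}\) At \(t=0\) (a NumPy sketch, not part of the text):

```python
import numpy as np

# Columns are x1(0) = (1, 3) and x2(0) = (1, 1); a nonzero Wronskian at
# one point (for solutions of x' = A x) gives linear independence on I
X0 = np.array([[1., 1.],
               [3., 1.]])
W0 = np.linalg.det(X0)
print(W0)   # -2.0, nonzero
```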

Example 4.1.30.

Solve \(\vec{x}'=\begin{bmatrix} 5&-1 \\ 3& 1 \end{bmatrix}\vec{x}\text{,}\) \(\vec{x}(0)=\begin{bmatrix} 2 \\ -1 \end{bmatrix}\text{.}\)
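Using the fundamental set from Example 4.1.29, the general solution is \(\vec{x}=c_1\begin{bmatrix}1\\3\end{bmatrix}e^{2t}+c_2\begin{bmatrix}1\\1\end{bmatrix}e^{4t}\text{,}\) and imposing \(\vec{x}(0)=\begin{bmatrix}2\\-1\end{bmatrix}\) reduces to a linear system for \(c_1,c_2\) (a NumPy sketch of that last step, not part of the text):

```python
import numpy as np

# at t = 0 the exponentials equal 1, so x(0) = c1*(1,3) + c2*(1,1)
V = np.array([[1., 1.],
              [3., 1.]])          # columns: eigenvectors for lambda = 2 and 4
c = np.linalg.solve(V, np.array([2., -1.]))
print(c)   # [-1.5  3.5], i.e. c1 = -3/2, c2 = 7/2
```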