16. Eigenvalues and Eigenvectors#

16.1. Overview#

Eigenvalues and eigenvectors are a relatively advanced topic in linear algebra.

At the same time, these concepts are extremely useful for

  • economic modeling (especially dynamics!)

  • statistics

  • some parts of applied mathematics

  • machine learning

  • and many other fields of science.

In this lecture we explain the basics of eigenvalues and eigenvectors and introduce the Neumann Series Lemma.

We assume in this lecture that students are familiar with matrices and understand the basics of matrix algebra.

We will use the following imports:

import matplotlib.pyplot as plt
import numpy as np
from numpy.linalg import matrix_power
from matplotlib.lines import Line2D
from matplotlib.patches import FancyArrowPatch
from mpl_toolkits.mplot3d import proj3d

16.2. Matrices as transformations#

Let’s start by discussing an important concept concerning matrices.

16.2.1. Mapping vectors to vectors#

One way to think about a matrix is as a rectangular collection of numbers.

Another way to think about a matrix is as a map (i.e., as a function) that transforms vectors to new vectors.

To understand the second point of view, suppose we multiply an \(n \times m\) matrix \(A\) with an \(m \times 1\) column vector \(x\) to obtain an \(n \times 1\) column vector \(y\):

\[ Ax = y \]

If we fix \(A\) and consider different choices of \(x\), we can understand \(A\) as a map transforming \(x\) to \(Ax\).

Because \(A\) is \(n \times m\), it transforms \(m\)-vectors to \(n\)-vectors.

We can write this formally as \(A \colon \mathbb{R}^m \rightarrow \mathbb{R}^n\).

You might argue that if \(A\) is a function, we should write \(A(x) = y\) rather than \(Ax = y\), but the second notation is more conventional.
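For instance, here is a quick check (with an illustrative \(2 \times 3\) matrix of our own choosing) that a \(2 \times 3\) matrix maps 3-vectors to 2-vectors:

A = np.array([[1, 0, 2],
              [0, 1, 1]])   # 2 x 3, so A maps R^3 to R^2
x = np.array([1, 2, 3])     # a vector in R^3
print(A @ x)                # [7 5], a vector in R^2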

16.2.2. Square matrices#

Let’s restrict our discussion to square matrices.

In the above discussion, this means that \(m=n\) and \(A\) maps \(\mathbb R^n\) to itself.

This means \(A\) is an \(n \times n\) matrix that maps (or “transforms”) a vector \(x\) in \(\mathbb{R}^n\) to a new vector \(y=Ax\) also in \(\mathbb{R}^n\).

Example 16.1

\[\begin{split} \begin{bmatrix} 2 & 1 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 3 \end{bmatrix} = \begin{bmatrix} 5 \\ 2 \end{bmatrix} \end{split}\]

Here, the matrix

\[\begin{split} A = \begin{bmatrix} 2 & 1 \\ -1 & 1 \end{bmatrix} \end{split}\]

transforms the vector \(x = \begin{bmatrix} 1 \\ 3 \end{bmatrix}\) to the vector \(y = \begin{bmatrix} 5 \\ 2 \end{bmatrix}\).
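We can confirm this multiplication numerically:

A = np.array([[2, 1],
              [-1, 1]])
x = np.array([1, 3])
print(A @ x)   # [5 2]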

Let’s visualize this using Python:

from math import sqrt

A = np.array([[2,  1],
              [-1, 1]])

fig, ax = plt.subplots()
# Set the axes through the origin

for spine in ['left', 'bottom']:
    ax.spines[spine].set_position('zero')
for spine in ['right', 'top']:
    ax.spines[spine].set_color('none')

ax.set(xlim=(-2, 6), ylim=(-2, 4), aspect=1)

vecs = ((1, 3), (5, 2))
c = ['r', 'black']
for i, v in enumerate(vecs):
    ax.annotate('', xy=v, xytext=(0, 0),
                arrowprops=dict(color=c[i],
                shrink=0,
                alpha=0.7,
                width=0.5))

ax.text(0.2 + 1, 0.2 + 3, 'x=$(1,3)$')
ax.text(0.2 + 5, 0.2 + 2, 'Ax=$(5,2)$')

# draw x rotated by θ (same length as x), i.e., the rotation before scaling
ax.annotate('', xy=(sqrt(10/29) * 5, sqrt(10/29) * 2), xytext=(0, 0),
            arrowprops=dict(color='purple',
                            shrink=0,
                            alpha=0.7,
                            width=0.5))

# mark the angle θ between x and Ax
ax.annotate('', xy=(1, 2/5), xytext=(1/3, 1),
            arrowprops={'arrowstyle': '->',
                        'connectionstyle': 'arc3,rad=-0.3'},
            horizontalalignment='center')
ax.text(0.8, 0.8, 'θ', fontsize=14)

plt.show()

One way to understand this transformation is that \(A\)

  • first rotates \(x\) by some angle \(\theta\) and

  • then scales it by some scalar \(\gamma\) to obtain the image \(y\) of \(x\).
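Both quantities are easy to recover numerically for this example (a small sketch, reusing the \(A\) and \(x\) from the figure above; the angle comes out negative because the rotation is clockwise):

x = np.array([1, 3])
y = A @ x   # the image (5, 2)

γ = np.linalg.norm(y) / np.linalg.norm(x)            # scaling factor
θ = np.arctan2(y[1], y[0]) - np.arctan2(x[1], x[0])  # rotation angle
print(γ, np.degrees(θ))   # ≈ 1.70, ≈ -49.8 degrees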

16.3. Types of transformations#

Let’s examine some standard transformations we can perform with matrices.

Below we visualize transformations by thinking of vectors as points instead of arrows.

We consider how a given matrix transforms

  • a grid of points and

  • a set of points located on the unit circle in \(\mathbb{R}^2\).

To build the transformations we will use two functions, called grid_transform and circle_transform.

Each of these functions visualizes the actions of a given \(2 \times 2\) matrix \(A\).

def colorizer(x, y):
    r = min(1, 1-y/3)
    g = min(1, 1+y/3)
    b = 1/4 + x/16
    return (r, g, b)


def grid_transform(A=np.array([[1, -1], [1, 1]])):
    xvals = np.linspace(-4, 4, 9)
    yvals = np.linspace(-3, 3, 7)
    xygrid = np.column_stack([[x, y] for x in xvals for y in yvals])
    uvgrid = A @ xygrid

    colors = list(map(colorizer, xygrid[0], xygrid[1]))

    fig, ax = plt.subplots(1, 2, figsize=(10, 5))

    for axes in ax:
        axes.set(xlim=(-11, 11), ylim=(-11, 11))
        axes.set_xticks([])
        axes.set_yticks([])
        for spine in ['left', 'bottom']:
            axes.spines[spine].set_position('zero')
        for spine in ['right', 'top']:
            axes.spines[spine].set_color('none')

    # Plot x-y grid points
    ax[0].scatter(xygrid[0], xygrid[1], s=36, c=colors, edgecolor="none")
    # ax[0].grid(True)
    # ax[0].axis("equal")
    ax[0].set_title("points $x_1, x_2, \cdots, x_k$")

    # Plot transformed grid points
    ax[1].scatter(uvgrid[0], uvgrid[1], s=36, c=colors, edgecolor="none")
    # ax[1].grid(True)
    # ax[1].axis("equal")
    ax[1].set_title("points $Ax_1, Ax_2, \cdots, Ax_k$")

    plt.show()


def circle_transform(A=np.array([[-1, 2], [0, 1]])):

    fig, ax = plt.subplots(1, 2, figsize=(10, 5))

    for axes in ax:
        axes.set(xlim=(-4, 4), ylim=(-4, 4))
        axes.set_xticks([])
        axes.set_yticks([])
        for spine in ['left', 'bottom']:
            axes.spines[spine].set_position('zero')
        for spine in ['right', 'top']:
            axes.spines[spine].set_color('none')

    θ = np.linspace(0, 2 * np.pi, 150)
    r = 1

    θ_1 = np.empty(12)
    for i in range(12):
        θ_1[i] = 2 * np.pi * (i/12)

    x = r * np.cos(θ)
    y = r * np.sin(θ)
    a = r * np.cos(θ_1)
    b = r * np.sin(θ_1)
    a_1 = a.reshape(1, -1)
    b_1 = b.reshape(1, -1)
    colors = list(map(colorizer, a, b))
    ax[0].plot(x, y, color='black', zorder=1)
    ax[0].scatter(a_1, b_1, c=colors, alpha=1, s=60,
                  edgecolors='black', zorder=2)
    ax[0].set_title("unit circle in $\mathbb{R}^2$")

    x1 = x.reshape(1, -1)
    y1 = y.reshape(1, -1)
    ab = np.concatenate((a_1, b_1), axis=0)
    transformed_ab = A @ ab
    transformed_circle_input = np.concatenate((x1, y1), axis=0)
    transformed_circle = A @ transformed_circle_input
    ax[1].plot(transformed_circle[0, :],
               transformed_circle[1, :], color='black', zorder=1)
    ax[1].scatter(transformed_ab[0, :], transformed_ab[1, :],
                  color=colors, alpha=1, s=60, edgecolors='black', zorder=2)
    ax[1].set_title("transformed circle")

    plt.show()

16.3.1. Scaling#

A matrix of the form

\[\begin{split} \begin{bmatrix} \alpha & 0 \\ 0 & \beta \end{bmatrix} \end{split}\]

scales vectors along the x-axis by a factor \(\alpha\) and along the y-axis by a factor \(\beta\).

Here we illustrate a simple example where \(\alpha = \beta = 3\).

A = np.array([[3, 0],  # scaling by 3 in both directions
              [0, 3]])
grid_transform(A)
circle_transform(A)

16.3.2. Shearing#

A “shear” matrix of the form

\[\begin{split} \begin{bmatrix} 1 & \lambda \\ 0 & 1 \end{bmatrix} \end{split}\]

stretches vectors along the x-axis by an amount proportional to the y-coordinate of a point.

A = np.array([[1, 2],     # shear along x-axis
              [0, 1]])
grid_transform(A)
circle_transform(A)

16.3.3. Rotation#

A matrix of the form

\[\begin{split} \begin{bmatrix} \cos \theta & \sin \theta \\ - \sin \theta & \cos \theta \end{bmatrix} \end{split}\]

is called a rotation matrix.

This matrix rotates vectors clockwise by an angle \(\theta\).

θ = np.pi/4  # 45 degree clockwise rotation
A = np.array([[np.cos(θ), np.sin(θ)],
              [-np.sin(θ), np.cos(θ)]])
grid_transform(A)

16.3.4. Permutation#

The permutation matrix

\[\begin{split} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \end{split}\]

interchanges the coordinates of a vector.

A = np.column_stack([[0, 1], [1, 0]])
grid_transform(A)

More examples of common transformation matrices can be found here.

16.4. Matrix multiplication as composition#

Since matrices act as functions that transform one vector to another, we can apply the concept of function composition to matrices as well.

16.4.1. Linear compositions#

Consider the two matrices

\[\begin{split} A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} \end{split}\]

What will the output be when we try to obtain \(ABx\) for some \(2 \times 1\) vector \(x\)?

\[\begin{split} \color{red}{\underbrace{ \color{black}{\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}} }_{\textstyle A} } \color{red}{\underbrace{ \color{black}{\begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}} }_{\textstyle B}} \color{red}{\overbrace{ \color{black}{\begin{bmatrix} 1 \\ 3 \end{bmatrix}} }^{\textstyle x}} \rightarrow \color{red}{\underbrace{ \color{black}{\begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix}} }_{\textstyle AB}} \color{red}{\overbrace{ \color{black}{\begin{bmatrix} 1 \\ 3 \end{bmatrix}} }^{\textstyle x}} \rightarrow \color{red}{\overbrace{ \color{black}{\begin{bmatrix} 3 \\ -7 \end{bmatrix}} }^{\textstyle y}} \end{split}\]
\[\begin{split} \color{red}{\underbrace{ \color{black}{\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}} }_{\textstyle A} } \color{red}{\underbrace{ \color{black}{\begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}} }_{\textstyle B}} \color{red}{\overbrace{ \color{black}{\begin{bmatrix} 1 \\ 3 \end{bmatrix}} }^{\textstyle x}} \rightarrow \color{red}{\underbrace{ \color{black}{\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}} }_{\textstyle A}} \color{red}{\overbrace{ \color{black}{\begin{bmatrix} 7 \\ 3 \end{bmatrix}} }^{\textstyle Bx}} \rightarrow \color{red}{\overbrace{ \color{black}{\begin{bmatrix} 3 \\ -7 \end{bmatrix}} }^{\textstyle y}} \end{split}\]

We can observe that applying the transformation \(AB\) on the vector \(x\) is the same as first applying \(B\) on \(x\) and then applying \(A\) on the vector \(Bx\).

Thus the matrix product \(AB\) is the composition of the matrix transformations \(A\) and \(B\): first apply transformation \(B\) and then transformation \(A\).

When we matrix multiply an \(n \times m\) matrix \(A\) with an \(m \times k\) matrix \(B\) the obtained matrix product is an \(n \times k\) matrix \(AB\).

Thus, if \(A\) and \(B\) are transformations such that \(A \colon \mathbb{R}^m \to \mathbb{R}^n\) and \(B \colon \mathbb{R}^k \to \mathbb{R}^m\), then \(AB\) transforms \(\mathbb{R}^k\) to \(\mathbb{R}^n\).
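For example, here is a quick shape check (with randomly generated matrices of our own choosing):

A = np.random.rand(2, 3)   # A maps R^3 to R^2
B = np.random.rand(3, 4)   # B maps R^4 to R^3
print((A @ B).shape)       # (2, 4): AB maps R^4 to R^2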

Viewing matrix multiplication as composition of maps helps us understand why, under matrix multiplication, \(AB\) is generally not equal to \(BA\).

(After all, when we compose functions, the order usually matters.)

16.4.2. Examples#

Let \(A\) be the \(90^{\circ}\) clockwise rotation matrix given by \(\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\) and let \(B\) be a shear matrix along the x-axis given by \(\begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}\).

We will visualize how a grid of points changes when we apply the transformation \(AB\) and then compare it with the transformation \(BA\).

def grid_composition_transform(A=np.array([[1, -1], [1, 1]]),
                               B=np.array([[1, -1], [1, 1]])):
    xvals = np.linspace(-4, 4, 9)
    yvals = np.linspace(-3, 3, 7)
    xygrid = np.column_stack([[x, y] for x in xvals for y in yvals])
    uvgrid = B @ xygrid
    abgrid = A @ uvgrid

    colors = list(map(colorizer, xygrid[0], xygrid[1]))

    fig, ax = plt.subplots(1, 3, figsize=(15, 5))

    for axes in ax:
        axes.set(xlim=(-12, 12), ylim=(-12, 12))
        axes.set_xticks([])
        axes.set_yticks([])
        for spine in ['left', 'bottom']:
            axes.spines[spine].set_position('zero')
        for spine in ['right', 'top']:
            axes.spines[spine].set_color('none')

    # Plot grid points
    ax[0].scatter(xygrid[0], xygrid[1], s=36, c=colors, edgecolor="none")
    ax[0].set_title("points $x_1, x_2, \cdots, x_k$")

    # Plot intermediate grid points
    ax[1].scatter(uvgrid[0], uvgrid[1], s=36, c=colors, edgecolor="none")
    ax[1].set_title("points $Bx_1, Bx_2, \cdots, Bx_k$")

    # Plot transformed grid points
    ax[2].scatter(abgrid[0], abgrid[1], s=36, c=colors, edgecolor="none")
    ax[2].set_title("points $ABx_1, ABx_2, \cdots, ABx_k$")

    plt.show()
A = np.array([[0, 1],     # 90 degree clockwise rotation
              [-1, 0]])
B = np.array([[1, 2],     # shear along x-axis
              [0, 1]])

16.4.2.1. Shear then rotate#

grid_composition_transform(A, B)  # transformation AB

16.4.2.2. Rotate then shear#

grid_composition_transform(B, A)  # transformation BA

It is evident that the transformation \(AB\) is not the same as the transformation \(BA\).
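We can confirm this numerically with the \(A\) and \(B\) defined above:

print(A @ B)                       # [[ 0  1] [-1 -2]]
print(B @ A)                       # [[-2  1] [-1  0]]
print(np.allclose(A @ B, B @ A))   # False: the order of composition matters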

16.5. Iterating on a fixed map#

In economics (and especially in dynamic modeling), we are often interested in analyzing behavior where we repeatedly apply a fixed matrix.

For example, given a vector \(v\) and a matrix \(A\), we are interested in studying the sequence

\[ v, \quad Av, \quad AAv = A^2v, \quad \ldots \]

Let’s first see examples of a sequence of iterates \((A^k v)_{k \geq 0}\) under different maps \(A\).

def plot_series(A, v, n):

    # the dashed reference ellipse is the image of the circle of radius r under B
    B = np.array([[1, -1],
                  [1, 0]])

    fig, ax = plt.subplots()

    ax.set(xlim=(-4, 4), ylim=(-4, 4))
    ax.set_xticks([])
    ax.set_yticks([])
    for spine in ['left', 'bottom']:
        ax.spines[spine].set_position('zero')
    for spine in ['right', 'top']:
        ax.spines[spine].set_color('none')

    θ = np.linspace(0, 2 * np.pi, 150)
    r = 2.5
    x = r * np.cos(θ)
    y = r * np.sin(θ)
    x1 = x.reshape(1, -1)
    y1 = y.reshape(1, -1)
    xy = np.concatenate((x1, y1), axis=0)

    ellipse = B @ xy
    ax.plot(ellipse[0, :], ellipse[1, :], color='black',
            linestyle=(0, (5, 10)), linewidth=0.5)

    # Initialize holder for trajectories
    colors = plt.cm.rainbow(np.linspace(0, 1, 20))

    for i in range(n):
        iteration = matrix_power(A, i) @ v
        v1 = iteration[0]
        v2 = iteration[1]
        ax.scatter(v1, v2, color=colors[i])
        if i == 0:
            ax.text(v1+0.25, v2, '$v$')
        elif i == 1:
            ax.text(v1+0.25, v2, '$Av$')
        elif 1 < i < 4:
            ax.text(v1+0.25, v2, f'$A^{i}v$')
    plt.show()
A = np.array([[sqrt(3) + 1, -2],
              [1, sqrt(3) - 1]])
A = (1/(2*sqrt(2))) * A
v = (-3, -3)
n = 12

plot_series(A, v, n)

Here with each iteration the vectors get shorter, i.e., move closer to the origin.

In this case, repeatedly multiplying a vector by \(A\) makes the vector “spiral in”.

B = np.array([[sqrt(3) + 1, -2],
              [1, sqrt(3) - 1]])
B = (1/2) * B
v = (2.5, 0)
n = 12

plot_series(B, v, n)

Here with each iteration vectors do not tend to get longer or shorter.

In this case, repeatedly multiplying a vector by \(B\) simply “rotates it around an ellipse”.

B = np.array([[sqrt(3) + 1, -2],
              [1, sqrt(3) - 1]])
B = (1/sqrt(2)) * B
v = (-1, -0.25)
n = 6

plot_series(B, v, n)

Here with each iteration vectors tend to get longer, i.e., farther from the origin.

In this case, repeatedly multiplying a vector by \(B\) makes the vector “spiral out”.

We thus observe that the sequence \((A^kv)_{k \geq 0}\) behaves differently depending on the map \(A\) itself.

We now discuss the property of \(A\) that determines this behavior.
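As a preview of what follows, we can compute the largest eigenvalue modulus of each of the three maps above (a quick sketch using NumPy's eigvals, which is introduced formally below):

from numpy.linalg import eigvals

M = np.array([[sqrt(3) + 1, -2],
              [1, sqrt(3) - 1]])

for scale in (1/(2*sqrt(2)), 1/2, 1/sqrt(2)):
    print(max(abs(λ) for λ in eigvals(scale * M)))  # ≈ 0.71, 1.00, 1.41

The vector spirals in when this number is below one, stays on the ellipse when it equals one, and spirals out when it exceeds one.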

16.6. Eigenvalues#

In this section we introduce the notions of eigenvalues and eigenvectors.

16.6.1. Definitions#

Let \(A\) be an \(n \times n\) square matrix.

If \(\lambda\) is a scalar and \(v\) is a non-zero \(n\)-vector such that

\[ A v = \lambda v, \]

then we say that \(\lambda\) is an eigenvalue of \(A\), and \(v\) is the corresponding eigenvector.

Thus, an eigenvector of \(A\) is a nonzero vector \(v\) such that when the map \(A\) is applied, \(v\) is merely scaled.

The next figure shows two eigenvectors (blue arrows) and their images under \(A\) (red arrows).

As expected, the image \(Av\) of each \(v\) is just a scaled version of the original.

from numpy.linalg import eig

A = [[1, 2],
     [2, 1]]
A = np.array(A)
evals, evecs = eig(A)
evecs = evecs[:, 0], evecs[:, 1]

fig, ax = plt.subplots(figsize=(10, 8))
# Set the axes through the origin
for spine in ['left', 'bottom']:
    ax.spines[spine].set_position('zero')
for spine in ['right', 'top']:
    ax.spines[spine].set_color('none')
# ax.grid(alpha=0.4)

xmin, xmax = -3, 3
ymin, ymax = -3, 3
ax.set(xlim=(xmin, xmax), ylim=(ymin, ymax))

# Plot each eigenvector
for v in evecs:
    ax.annotate('', xy=v, xytext=(0, 0),
                arrowprops=dict(facecolor='blue',
                shrink=0,
                alpha=0.6,
                width=0.5))

# Plot the image of each eigenvector
for v in evecs:
    v = A @ v
    ax.annotate('', xy=v, xytext=(0, 0),
                arrowprops=dict(facecolor='red',
                shrink=0,
                alpha=0.6,
                width=0.5))

# Plot the lines they run through
x = np.linspace(xmin, xmax, 3)
for v in evecs:
    a = v[1] / v[0]
    ax.plot(x, a * x, 'b-', lw=0.4)

plt.show()

16.6.2. Complex values#

So far our definition of eigenvalues and eigenvectors seems straightforward.

There is one complication we haven’t mentioned yet:

When solving \(Av = \lambda v\),

  • \(\lambda\) is allowed to be a complex number and

  • \(v\) is allowed to be an \(n\)-vector of complex numbers.

We will see some examples below.

16.6.3. Some mathematical details#

We note some mathematical details for more advanced readers.

(Other readers can skip to the next section.)

The eigenvalue equation is equivalent to \((A - \lambda I) v = 0\).

This equation has a nonzero solution \(v\) only when the columns of \(A - \lambda I\) are linearly dependent.

This in turn is equivalent to stating that the determinant is zero.

Hence, to find all eigenvalues, we can look for \(\lambda\) such that the determinant of \(A - \lambda I\) is zero.

This problem can be expressed as one of solving for the roots of a polynomial in \(\lambda\) of degree \(n\).

This in turn implies the existence of \(n\) solutions in the complex plane, although some might be repeated.
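To make this concrete, NumPy can construct the characteristic polynomial and solve for its roots (a small sketch; np.poly(A) returns the coefficients of \(\det(\lambda I - A)\)):

A = np.array([[1, 2],
              [2, 1]])
coeffs = np.poly(A)       # characteristic polynomial: λ² - 2λ - 3
print(np.roots(coeffs))   # [ 3. -1.], the eigenvalues of A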

16.6.4. Facts#

Some nice facts about the eigenvalues of a square matrix \(A\) are as follows:

  1. the determinant of \(A\) equals the product of the eigenvalues

  2. the trace of \(A\) (the sum of the elements on the principal diagonal) equals the sum of the eigenvalues

  3. if \(A\) is symmetric, then all of its eigenvalues are real

  4. if \(A\) is invertible and \(\lambda_1, \ldots, \lambda_n\) are its eigenvalues, then the eigenvalues of \(A^{-1}\) are \(1/\lambda_1, \ldots, 1/\lambda_n\).

A corollary of the first statement is that a matrix is invertible if and only if all its eigenvalues are nonzero.
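Each of these facts can be verified numerically; here is a quick sketch using the matrix \(A\) from the previous example:

from numpy.linalg import det, inv, eigvals

A = np.array([[1, 2],
              [2, 1]])
λs = eigvals(A)

print(np.isclose(det(A), np.prod(λs)))      # determinant equals product of eigenvalues
print(np.isclose(np.trace(A), np.sum(λs)))  # trace equals sum of eigenvalues
print(np.all(np.isreal(λs)))                # A is symmetric, so eigenvalues are real
print(np.sort(eigvals(inv(A))))             # [-1.  0.333...]: the reciprocals 1/λ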

16.6.5. Computation#

Using NumPy, we can solve for the eigenvalues and eigenvectors of a matrix as follows

from numpy.linalg import eig

A = ((1, 2),
     (2, 1))

A = np.array(A)
evals, evecs = eig(A)
evals  # eigenvalues
array([ 3., -1.])
evecs  # eigenvectors
array([[ 0.70710678, -0.70710678],
       [ 0.70710678,  0.70710678]])

Note that the columns of evecs are the eigenvectors.

Since any scalar multiple of an eigenvector is an eigenvector with the same eigenvalue (which can be verified), the eig routine normalizes the length of each eigenvector to one.
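To verify the parenthetical claim, scale one of the computed eigenvectors and check that the eigenvalue equation still holds:

λ, v = evals[0], evecs[:, 0]       # the first eigenpair from above
w = 10 * v                         # an arbitrary nonzero scalar multiple
print(np.allclose(A @ w, λ * w))   # True: w is also an eigenvector for λ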

The eigenvectors and eigenvalues of a map \(A\) determine how a vector \(v\) is transformed when we repeatedly multiply by \(A\).

This is discussed further later.

16.7. The Neumann Series Lemma#

In this section we present a famous result about series of matrices that has many applications in economics.

16.7.1. Scalar series#

Here’s a fundamental result about series:

If \(a\) is a number and \(|a| < 1\), then

(16.1)#\[ \sum_{k=0}^{\infty} a^k =\frac{1}{1-a} = (1 - a)^{-1}\]

For a one-dimensional linear equation \(x = ax + b\) where \(x\) is unknown, we can conclude that the solution \(x^{*}\) is given by:

\[ x^{*} = \frac{b}{1-a} = \sum_{k=0}^{\infty} a^k b \]
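Here is a quick numerical check of the scalar case (with illustrative values \(a = 0.9\) and \(b = 2.0\)):

a, b = 0.9, 2.0
x_star = b / (1 - a)                          # closed-form solution: 20.0
series = sum(a**k * b for k in range(1_000))  # truncated power series
print(x_star, series)                         # both ≈ 20.0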

16.7.2. Matrix series#

A generalization of this idea exists in the matrix setting.

Consider the system of equations \(x = Ax + b\) where \(A\) is an \(n \times n\) square matrix and \(x\) and \(b\) are both column vectors in \(\mathbb{R}^n\).

Using matrix algebra we can conclude that the solution to this system of equations will be given by:

(16.2)#\[ x^{*} = (I-A)^{-1}b\]

What guarantees the existence of a unique vector \(x^{*}\) that satisfies (16.2)?

The following is a fundamental result in functional analysis that generalizes (16.1) to a multivariate case.

Theorem 16.1 (Neumann Series Lemma)

Let \(A\) be a square matrix and let \(A^k\) be the \(k\)-th power of \(A\).

Let \(r(A)\) be the spectral radius of \(A\), defined as \(\max_i |\lambda_i|\), where

  • \(\{\lambda_i\}_i\) is the set of eigenvalues of \(A\) and

  • \(|\lambda_i|\) is the modulus of the complex number \(\lambda_i\)

Neumann’s Theorem states the following: If \(r(A) < 1\), then \(I - A\) is invertible, and

\[ (I - A)^{-1} = \sum_{k=0}^{\infty} A^k \]

We can see the Neumann Series Lemma in action in the following example.

A = np.array([[0.4, 0.1],
              [0.7, 0.2]])

evals, evecs = eig(A)   # finding eigenvalues and eigenvectors

r = max(abs(λ) for λ in evals)    # compute spectral radius
print(r)
0.5828427124746189

The spectral radius \(r(A)\) obtained is less than 1.

Thus, we can apply the Neumann Series Lemma to find \((I-A)^{-1}\).

I = np.identity(2)  # 2 x 2 identity matrix
B = I - A
B_inverse = np.linalg.inv(B)  # direct inverse method
A_sum = np.zeros((2, 2))  # power series sum of A
A_power = I
for i in range(50):
    A_sum += A_power
    A_power = A_power @ A

Let’s check equality between the sum and the inverse methods.

np.allclose(A_sum, B_inverse)
True

Although we truncate the infinite sum at \(k = 50\), both methods give the same result, illustrating the Neumann Series Lemma.

16.8. Exercises#

Exercise 16.1

Power iteration is a method for finding the eigenvalue of largest absolute value of a diagonalizable matrix.

The method starts with a random vector \(b_0\) and repeatedly applies the matrix \(A\) to it

\[ b_{k+1}=\frac{A b_k}{\left\|A b_k\right\|} \]

A thorough discussion of the method can be found here.

In this exercise, first implement the power iteration method and use it to find the greatest absolute eigenvalue and its corresponding eigenvector.

Then visualize the convergence.
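As a starting point, here is a minimal sketch of the update rule (one possible implementation; the function name and iteration count are our own choices):

def power_iteration(A, num_iters=1000):
    b = np.random.rand(A.shape[0])    # random starting vector b_0
    for _ in range(num_iters):
        Ab = A @ b
        b = Ab / np.linalg.norm(Ab)   # b_{k+1} = A b_k / ||A b_k||
    # since b has unit length, b @ A @ b is the Rayleigh quotient,
    # which estimates the dominant eigenvalue
    return b @ A @ b, b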

Exercise 16.2

We have discussed the trajectory of the vector \(v\) after being transformed by \(A\).

Consider the matrix \(A = \begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix}\) and the vector \(v = \begin{bmatrix} 2 \\ -2 \end{bmatrix}\).

Try to compute the trajectory of \(v\) after being transformed by \(A\) for \(n=4\) iterations and plot the result.

Exercise 16.3

Previously, we demonstrated the trajectory of the vector \(v\) after being transformed by \(A\) for three different matrices.

Use the visualization in the previous exercise to explain the trajectory of the vector \(v\) after being transformed by \(A\) for the three different matrices.