38. The Perron-Frobenius Theorem#

In addition to what’s in Anaconda, this lecture will need the following libraries:

!pip install quantecon

In this lecture we will begin with the foundational concepts in spectral theory.

Then we will explore the Perron-Frobenius theorem and connect it to applications in Markov chains and networks.

We will use the following imports:

import numpy as np
from numpy.linalg import eig
import scipy as sp
import quantecon as qe

38.1. Nonnegative matrices#

Often, in economics, the matrix that we are dealing with is nonnegative.

Nonnegative matrices have several special and useful properties.

In this section we will discuss some of them — in particular, the connection between nonnegativity and eigenvalues.

An \(n \times m\) matrix \(A\) is called nonnegative if every element of \(A\) is nonnegative, i.e., \(a_{ij} \geq 0\) for every \(i,j\).

We denote this as \(A \geq 0\).

38.1.1. Irreducible matrices#

We introduced irreducible matrices in the Markov chain lecture.

Here we generalize this concept:

Let \(a^{k}_{ij}\) be element \((i,j)\) of \(A^k\).

An \(n \times n\) nonnegative matrix \(A\) is called irreducible if \(A + A^2 + A^3 + \cdots \gg 0\), where \(\gg 0\) indicates that every element of the matrix is strictly positive.

In other words, for each \(i,j\) with \(1 \leq i, j \leq n\), there exists a \(k \geq 1\) such that \(a^{k}_{ij} > 0\).

Example 38.1

Here are some examples to illustrate this further:

\[\begin{split} A = \begin{bmatrix} 0.5 & 0.1 \\ 0.2 & 0.2 \end{bmatrix} \end{split}\]

\(A\) is irreducible since \(a_{ij}>0\) for all \((i,j)\).

\[\begin{split} B = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} , \quad B^2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \end{split}\]

\(B\) is irreducible since \(B + B^2\) is a matrix of ones.

\[\begin{split} C = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \end{split}\]

\(C\) is not irreducible since \(C^k = C\) for all \(k \geq 0\) and thus \(c^{k}_{12},c^{k}_{21} = 0\) for all \(k \geq 0\).
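For intuition, here is a minimal numerical check of irreducibility (a sketch, not part of the original lecture; the helper `is_irreducible` is ours): sum the first \(n\) powers of \(A\) and test whether every entry of the sum is strictly positive. Summing \(n\) powers suffices for an \(n \times n\) matrix, since any path between two nodes of the induced graph has length at most \(n\).

def is_irreducible(A):
    "Check irreducibility by testing whether A + A^2 + ... + A^n ≫ 0."
    n = A.shape[0]
    S = np.zeros((n, n))
    A_k = np.eye(n)
    for _ in range(n):
        A_k = A_k @ A      # A, A^2, ..., A^n
        S += A_k
    return np.all(S > 0)

A = np.array([[0.5, 0.1],
              [0.2, 0.2]])
B = np.array([[0, 1],
              [1, 0]])
C = np.eye(2)

print(is_irreducible(A), is_irreducible(B), is_irreducible(C))   # expected: True True False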

38.1.2. Left eigenvectors#

Recall that we previously discussed eigenvectors in Eigenvalues and Eigenvectors.

In particular, \(\lambda\) is an eigenvalue of \(A\) and \(v\) is an eigenvector of \(A\) if \(v\) is nonzero and satisfies

\[ Av = \lambda v. \]

In this section we introduce left eigenvectors.

To avoid confusion, what we previously referred to as “eigenvectors” will be called “right eigenvectors”.

Left eigenvectors will play important roles in what follows, including that of stochastic steady states for dynamic models under a Markov assumption.

A vector \(w\) is called a left eigenvector of \(A\) if \(w\) is a right eigenvector of \(A^\top\).

In other words, if \(w\) is a left eigenvector of matrix \(A\), then \(A^\top w = \lambda w\), where \(\lambda\) is the eigenvalue associated with the left eigenvector \(w\).

This hints at how to compute left eigenvectors:

A = np.array([[3, 2],
              [1, 4]])

# Compute eigenvalues and right eigenvectors
λ, v = eig(A)

# Compute eigenvalues and left eigenvectors
λ, w = eig(A.T)

# Keep 5 decimals
np.set_printoptions(precision=5)

print(f"The eigenvalues of A are:\n {λ}\n")
print(f"The corresponding right eigenvectors are: \n {v[:,0]} and {-v[:,1]}\n")
print(f"The corresponding left eigenvectors are: \n {w[:,0]} and {-w[:,1]}\n")
The eigenvalues of A are:
 [2. 5.]

The corresponding right eigenvectors are: 
 [-0.89443  0.44721] and [0.70711 0.70711]

The corresponding left eigenvectors are: 
 [-0.70711  0.70711] and [0.44721 0.89443]

We can also use scipy.linalg.eig with the argument left=True to find left eigenvectors directly:

eigenvals, ε, e = sp.linalg.eig(A, left=True)

print(f"The eigenvalues of A are:\n {eigenvals.real}\n")
print(f"The corresponding right eigenvectors are: \n {e[:,0]} and {-e[:,1]}\n")
print(f"The corresponding left eigenvectors are: \n {ε[:,0]} and {-ε[:,1]}\n")
The eigenvalues of A are:
 [2. 5.]

The corresponding right eigenvectors are: 
 [-0.89443  0.44721] and [0.70711 0.70711]

The corresponding left eigenvectors are: 
 [-0.70711  0.70711] and [0.44721 0.89443]

The eigenvalues are the same while the eigenvectors themselves are different.

(Also note that we flip the sign of the eigenvectors associated with the dominant eigenvalue to display their nonnegative versions; eig normalizes each eigenvector to unit length, but its sign remains arbitrary.)

Taking the transpose of \(A^\top w = \lambda w\) gives \(w^\top A = \lambda w^\top\).

This is the more common expression, and it is where the name left eigenvector originates.
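As a quick numerical sanity check (a sketch, reusing the \(\lambda\) and \(w\) computed above), we can verify this identity for the dominant eigenpair:

# Verify w^T A = λ w^T for the dominant left eigenpair of A (sketch)
i = np.argmax(λ.real)                            # index of the dominant eigenvalue
print(np.allclose(w[:, i] @ A, λ[i] * w[:, i]))  # expected: True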

38.1.3. The Perron-Frobenius theorem#

For a square nonnegative matrix \(A\), the behavior of \(A^k\) as \(k \to \infty\) is controlled by the eigenvalue with the largest absolute value, often called the dominant eigenvalue.

For any such matrix \(A\), the Perron-Frobenius theorem characterizes certain properties of the dominant eigenvalue and its corresponding eigenvector.

Theorem 38.1 (Perron-Frobenius Theorem)

If a matrix \(A \geq 0\) then,

  1. the dominant eigenvalue of \(A\), \(r(A)\), is real-valued and nonnegative.

  2. for any other eigenvalue (possibly complex) \(\lambda\) of \(A\), \(|\lambda| \leq r(A)\).

  3. we can find a nonnegative and nonzero eigenvector \(v\) such that \(Av = r(A)v\).

Moreover if \(A\) is also irreducible then,

  1. the eigenvector \(v\) associated with the eigenvalue \(r(A)\) is strictly positive.

  2. there exists no other positive eigenvector associated with \(r(A)\) (other than scalar multiples of \(v\)).

(A further part of the Perron-Frobenius theorem, covering primitive matrices, is introduced below.)

(This is a relatively simple version of the theorem — for more details see here).

We will see applications of the theorem below.

Let’s build our intuition for the theorem using simple examples we have seen before, considering each case in turn.

38.1.3.1. Example: irreducible matrix#

Consider the following irreducible matrix \(A\):

A = np.array([[0, 1, 0],
              [.5, 0, .5],
              [0, 1, 0]])

We can compute the dominant eigenvalue and the corresponding eigenvector

eig(A)
EigResult(eigenvalues=array([-1.00000e+00,  2.90566e-17,  1.00000e+00]), eigenvectors=array([[ 5.77350e-01,  7.07107e-01,  5.77350e-01],
       [-5.77350e-01,  1.36592e-16,  5.77350e-01],
       [ 5.77350e-01, -7.07107e-01,  5.77350e-01]]))

Now we can see that the claims of the Perron-Frobenius theorem hold for the irreducible matrix \(A\) (a quick numerical check follows the list):

  1. The dominant eigenvalue is real-valued and non-negative.

  2. All other eigenvalues have absolute values less than or equal to the dominant eigenvalue.

  3. A non-negative and nonzero eigenvector is associated with the dominant eigenvalue.

  4. As the matrix is irreducible, the eigenvector associated with the dominant eigenvalue is strictly positive.

  5. There exists no other positive eigenvector associated with the dominant eigenvalue.
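Here is a quick numerical check of these claims (a sketch, not part of the original lecture):

# Verify the claims numerically for the irreducible matrix A (sketch)
eigvals, eigvecs = eig(A)
i = np.argmax(eigvals.real)                  # index of the dominant eigenvalue
r = eigvals[i].real
v = eigvecs[:, i].real
v = v if v[0] > 0 else -v                    # flip sign to get the nonnegative version

print(f"r(A) = {r}")                         # real and nonnegative
print(np.all(np.abs(eigvals) <= r + 1e-12))  # |λ| ≤ r(A) for all eigenvalues
print(np.all(v > 0))                         # the Perron eigenvector is strictly positive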

38.1.4. Primitive matrices#

We know that in real-world situations it is rare for a matrix to be everywhere positive (although everywhere-positive matrices have nice properties).

Primitive matrices, however, still give us many of these helpful properties under a looser requirement.

Let \(A\) be a square nonnegative matrix and let \(A^k\) be the \(k^{th}\) power of \(A\).

A matrix is called primitive if there exists a \(k \in \mathbb{N}\) such that \(A^k\) is everywhere positive.

Example 38.2

Recall the examples given in irreducible matrices:

\[\begin{split} A = \begin{bmatrix} 0.5 & 0.1 \\ 0.2 & 0.2 \end{bmatrix} \end{split}\]

\(A\) here is also a primitive matrix since \(A^k\) is everywhere positive for some \(k \in \mathbb{N}\) (indeed, \(A\) itself is everywhere positive, so \(k = 1\) works).

\[\begin{split} B = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} , \quad B^2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \end{split}\]

\(B\) is irreducible but not primitive since every power \(B^k\) has zeros on either the principal diagonal or the secondary diagonal.

We can see that if a matrix is primitive, then it is irreducible, but not vice versa.
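To make this concrete, here is a rough numerical check of primitivity (a sketch; the helper `is_primitive` and the cutoff `k_max` are our own choices). For a primitive \(n \times n\) matrix, Wielandt's theorem guarantees \(A^k \gg 0\) for some \(k \leq n^2 - 2n + 2\), so a small cutoff is enough for these examples.

def is_primitive(A, k_max=10):
    "Check whether some power A^k with k ≤ k_max is everywhere positive."
    A_k = np.eye(A.shape[0])
    for _ in range(k_max):
        A_k = A_k @ A
        if np.all(A_k > 0):
            return True
    return False

A = np.array([[0.5, 0.1],
              [0.2, 0.2]])
B = np.array([[0, 1],
              [1, 0]])

print(is_primitive(A), is_primitive(B))   # expected: True False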

Now let’s return to the part of the Perron-Frobenius theorem that covers primitive matrices.

Theorem 38.2 (Continuation of the Perron-Frobenius Theorem)

If \(A\) is primitive then,

  1. the inequality \(|\lambda| \leq r(A)\) is strict for all eigenvalues \(\lambda\) of \(A\) distinct from \(r(A)\), and

  2. with \(v\) and \(w\) normalized so that \(w^\top v = 1\), \(r(A)^{-m} A^m\) converges to \(v w^{\top}\) as \(m \rightarrow \infty\). The matrix \(v w^{\top}\) is called the Perron projection of \(A\).

38.1.4.1. Example 1: primitive matrix#

Consider the following primitive matrix \(B\):

B = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

np.linalg.matrix_power(B, 2)
array([[2, 1, 1],
       [1, 2, 1],
       [1, 1, 2]])

We compute the dominant eigenvalue and the corresponding eigenvector

eig(B)
EigResult(eigenvalues=array([-1.,  2., -1.]), eigenvectors=array([[-0.8165 ,  0.57735,  0.22646],
       [ 0.40825,  0.57735, -0.79259],
       [ 0.40825,  0.57735,  0.56614]]))

Now let’s check that the claims of the Perron-Frobenius theorem hold for the primitive matrix \(B\) (a quick numerical check of claim 6 follows the list):

  1. The dominant eigenvalue is real-valued and non-negative.

  2. All other eigenvalues have absolute values strictly less than the dominant eigenvalue.

  3. A non-negative and nonzero eigenvector is associated with the dominant eigenvalue.

  4. The eigenvector associated with the dominant eigenvalue is strictly positive.

  5. There exists no other positive eigenvector associated with the dominant eigenvalue.

  6. The inequality \(|\lambda| < r(B)\) holds for all eigenvalues \(\lambda\) of \(B\) distinct from the dominant eigenvalue.
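Here is a quick numerical check of claim 6 (a sketch, not part of the original lecture):

# Check |λ| < r(B) for every eigenvalue other than the dominant one (sketch)
eigvals = eig(B)[0]
i = np.argmax(eigvals.real)          # index of the dominant eigenvalue
r = eigvals[i].real

others = np.delete(eigvals, i)
print(np.all(np.abs(others) < r))    # expected: True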

Furthermore, we can verify the convergence claim of Theorem 38.2 on the following examples:

def compute_perron_projection(M):

    eigval, v = eig(M)       # right eigenvectors
    eigval_l, w = eig(M.T)   # left eigenvectors

    # The dominant (Perron) eigenvalue is real for a nonnegative matrix
    r = np.max(eigval.real)

    # Find the index of the dominant (Perron) eigenvalue in each decomposition
    # (the two calls to eig may order the eigenvalues differently)
    i = np.argmax(eigval.real)
    i_l = np.argmax(eigval_l.real)

    # Get the Perron eigenvectors
    v_P = v[:, i].reshape(-1, 1)
    w_P = w[:, i_l].reshape(-1, 1)

    # Normalize the left and right eigenvectors so that w_P.T @ v_P = 1
    norm_factor = w_P.T @ v_P
    v_norm = v_P / norm_factor

    # Compute the Perron projection matrix v w^T
    P = v_norm @ w_P.T
    return P, r

def check_convergence(M):
    P, r = compute_perron_projection(M)
    print("Perron projection:")
    print(P)

    # Define a list of values for n
    n_list = [1, 10, 100, 1000, 10000]

    for n in n_list:

        # Compute (M/r)^n
        M_n = np.linalg.matrix_power(M/r, n)

        # Compute the difference between M^n / r^n and the Perron projection
        diff = np.abs(M_n - P)

        # Calculate the norm of the difference matrix
        diff_norm = np.linalg.norm(diff, 'fro')
        print(f"n = {n}, error = {diff_norm:.10f}")


A1 = np.array([[1, 2],
               [1, 4]])

A2 = np.array([[0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

A3 = np.array([[0.971, 0.029, 0.1, 1],
               [0.145, 0.778, 0.077, 0.59],
               [0.1, 0.508, 0.492, 1.12],
               [0.2, 0.8, 0.71, 0.95]])

for M in A1, A2, A3:
    print("Matrix:")
    print(M)
    check_convergence(M)
    print()
    print("-"*36)
    print()
Matrix:
[[1 2]
 [1 4]]
Perron projection:
[[0.1362  0.48507]
 [0.24254 0.8638 ]]
n = 1, error = 0.0989045731
n = 10, error = 0.0000000001
n = 100, error = 0.0000000000
n = 1000, error = 0.0000000000
n = 10000, error = 0.0000000000

------------------------------------

Matrix:
[[0 1 1]
 [1 0 1]
 [1 1 0]]
Perron projection:
[[0.33333 0.33333 0.33333]
 [0.33333 0.33333 0.33333]
 [0.33333 0.33333 0.33333]]
n = 1, error = 0.7071067812
n = 10, error = 0.0013810679
n = 100, error = 0.0000000000
n = 1000, error = 0.0000000000
n = 10000, error = 0.0000000000

------------------------------------

Matrix:
[[0.971 0.029 0.1   1.   ]
 [0.145 0.778 0.077 0.59 ]
 [0.1   0.508 0.492 1.12 ]
 [0.2   0.8   0.71  0.95 ]]
Perron projection:
[[0.12506 0.31949 0.20233 0.43341]
 [0.07714 0.19707 0.1248  0.26735]
 [0.12158 0.31058 0.19669 0.42133]
 [0.13885 0.3547  0.22463 0.48118]]
n = 1, error = 0.5361031549
n = 10, error = 0.0000434043
n = 100, error = 0.0000000000
n = 1000, error = 0.0000000000
n = 10000, error = 0.0000000000

------------------------------------

Such convergence is not observed for non-primitive matrices.

Let’s go through an example:

B = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])

# This shows that the matrix is not primitive
print("Matrix:")
print(B)
print("100th power of matrix B:")
print(np.linalg.matrix_power(B, 100))

check_convergence(B)
Matrix:
[[0 1 1]
 [1 0 0]
 [1 0 0]]
100th power of matrix B:
[[1125899906842624                0                0]
 [               0  562949953421312  562949953421312]
 [               0  562949953421312  562949953421312]]
Perron projection:
[[0.5     0.35355 0.35355]
 [0.35355 0.25    0.25   ]
 [0.35355 0.25    0.25   ]]
n = 1, error = 1.0000000000
n = 10, error = 1.0000000000
n = 100, error = 1.0000000000
n = 1000, error = 1.0000000000
n = 10000, error = 1.0000000000

The result shows that the matrix is not primitive: even \(B^{100}\) contains zero entries, so no power of \(B\) is everywhere positive, and the scaled powers fail to converge to the Perron projection.

These examples show how the Perron-Frobenius theorem relates to the eigenvalues and eigenvectors of nonnegative matrices and to the convergence of matrix powers.

In fact, we have already seen the theorem in action in the Markov chain lecture.

38.1.4.2. Example 2: connection to Markov chains#

We are now prepared to bridge the languages spoken in the two lectures.

A primitive matrix is both irreducible and aperiodic.

So the Perron-Frobenius theorem explains why both the Imam and Temple matrix and the Hamilton matrix converge to a stationary distribution, which is the Perron projection of each matrix:

P = np.array([[0.68, 0.12, 0.20],
              [0.50, 0.24, 0.26],
              [0.36, 0.18, 0.46]])

print(compute_perron_projection(P)[0])
[[0.56146 0.15565 0.28289]
 [0.56146 0.15565 0.28289]
 [0.56146 0.15565 0.28289]]
mc = qe.MarkovChain(P)
ψ_star = mc.stationary_distributions[0]
ψ_star
array([0.56146, 0.15565, 0.28289])
P_hamilton = np.array([[0.971, 0.029, 0.000],
                       [0.145, 0.778, 0.077],
                       [0.000, 0.508, 0.492]])

print(compute_perron_projection(P_hamilton)[0])
[[0.8128  0.16256 0.02464]
 [0.8128  0.16256 0.02464]
 [0.8128  0.16256 0.02464]]
mc = qe.MarkovChain(P_hamilton)
ψ_star = mc.stationary_distributions[0]
ψ_star
array([0.8128 , 0.16256, 0.02464])

We can also verify other properties hinted at by the Perron-Frobenius theorem in these stochastic matrices.
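For instance, the stationary distribution is simply the dominant left eigenvector of the transition matrix, normalized so that its entries sum to one (a sketch using the Hamilton matrix defined above):

# Stationary distribution as the normalized dominant left eigenvector (sketch)
eigvals, w = eig(P_hamilton.T)
i = np.argmax(eigvals.real)          # index of the dominant eigenvalue (equal to 1)
ψ = w[:, i].real
ψ = ψ / ψ.sum()                      # normalize entries to sum to one

print(ψ)   # expected to match mc.stationary_distributions[0] above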

Another example is the relationship between the spectral gap and the rate of convergence.

In the exercise, we stated that the convergence rate is determined by the spectral gap, the difference between the largest and the second largest eigenvalue.

This can be proven using what we have learned here.

Please note that we use \(\mathbb{1}\) for a vector of ones in this lecture.

For a Markov model \(M\) with state space \(S\) and transition matrix \(P\), we can write \(P^t\) as

\[ P^t=\sum_{i=1}^{n-1} \lambda_i^t v_i w_i^{\top}+\mathbb{1} \psi^*, \]

This is proven in [Sargent and Stachurski, 2023] and a nice discussion can be found here.

In this formula \(\lambda_i\) is an eigenvalue of \(P\) with corresponding right and left eigenvectors \(v_i\) and \(w_i\).

Premultiplying \(P^t\) by arbitrary \(\psi \in \mathscr{D}(S)\) and rearranging now gives

\[ \psi P^t-\psi^*=\sum_{i=1}^{n-1} \lambda_i^t \psi v_i w_i^{\top} \]

Recall that the eigenvalues are ordered from smallest to largest as \(i\) runs from \(1\) to \(n\).

As we have seen, the largest eigenvalue for a primitive stochastic matrix is one.

This can be proven using the Gershgorin Circle Theorem, but it is beyond the scope of this lecture.

So by the strict inequality in Theorem 38.2, \(|\lambda_i|<1\) for all \(i<n\), and \(\lambda_n=1\) when \(P\) is primitive.

Hence, after taking the Euclidean norm deviation, we obtain

\[ \left\|\psi P^t-\psi^*\right\|=O\left(\eta^t\right) \quad \text { where } \quad \eta:=\left|\lambda_{n-1}\right|<1 \]

Thus, the rate of convergence is governed by the modulus of the second largest eigenvalue.
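We can illustrate this numerically (a sketch using the Hamilton matrix from above; the initial distribution \(\psi\) is an arbitrary choice of ours):

# Compare the deviation ||ψ P^t - ψ*|| with η^t, where η is the modulus
# of the second largest eigenvalue (sketch)
η = np.sort(np.abs(eig(P_hamilton)[0]))[-2]

mc = qe.MarkovChain(P_hamilton)
ψ_star = mc.stationary_distributions[0]
ψ = np.array([1.0, 0.0, 0.0])          # arbitrary initial distribution

for t in (1, 5, 10, 20):
    dev = np.linalg.norm(ψ @ np.linalg.matrix_power(P_hamilton, t) - ψ_star)
    print(f"t = {t:2d}, deviation = {dev:.2e}, η^t = {η**t:.2e}")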

38.2. Exercises#

Exercise 38.1 (Leontief’s Input-Output Model)

Wassily Leontief developed a model of an economy with \(n\) sectors producing \(n\) different commodities, capturing the interdependencies between the sectors.

Under this model some of the output is consumed internally by the industries and the rest is consumed by external consumers.

We define a simple model with 3 sectors - agriculture, industry, and service.

The following table describes how output is distributed within the economy:

|             | Total output | Agriculture | Industry   | Service    | Consumer |
|:------------|:-------------|:------------|:-----------|:-----------|:---------|
| Agriculture | \(x_1\)      | 0.3\(x_1\)  | 0.2\(x_2\) | 0.3\(x_3\) | 4        |
| Industry    | \(x_2\)      | 0.2\(x_1\)  | 0.4\(x_2\) | 0.3\(x_3\) | 5        |
| Service     | \(x_3\)      | 0.2\(x_1\)  | 0.5\(x_2\) | 0.1\(x_3\) | 12       |

The first row depicts how agriculture’s total output \(x_1\) is distributed

  • \(0.3x_1\) is used as inputs within agriculture itself,

  • \(0.2x_2\) is used as inputs by the industry sector to produce \(x_2\) units,

  • \(0.3x_3\) is used as inputs by the service sector to produce \(x_3\) units and

  • 4 units is the external demand by consumers.

We can transform this into a system of linear equations for the 3 sectors as given below:

\[\begin{split} x_1 = 0.3x_1 + 0.2x_2 + 0.3x_3 + 4 \\ x_2 = 0.2x_1 + 0.4x_2 + 0.3x_3 + 5 \\ x_3 = 0.2x_1 + 0.5x_2 + 0.1x_3 + 12 \end{split}\]

This can be transformed into the matrix equation \(x = Ax + d\) where

\[\begin{split} x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} , \; A = \begin{bmatrix} 0.3 & 0.2 & 0.3 \\ 0.2 & 0.4 & 0.3 \\ 0.2 & 0.5 & 0.1 \end{bmatrix} \; \text{and} \; d = \begin{bmatrix} 4 \\ 5 \\ 12 \end{bmatrix} \end{split}\]

The solution \(x^{*}\) is given by the equation \(x^{*} = (I-A)^{-1} d\).

  1. Since \(A\) is a nonnegative irreducible matrix, find the Perron-Frobenius eigenvalue of \(A\).

  2. Use the Neumann Series Lemma to find the solution \(x^{*}\) if it exists.
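If you want to check your answers numerically, a sketch along the following lines may help (not a full solution; variable names are ours):

# Numerical sketch for the Leontief exercise
A = np.array([[0.3, 0.2, 0.3],
              [0.2, 0.4, 0.3],
              [0.2, 0.5, 0.1]])
d = np.array([4, 5, 12]).reshape(-1, 1)

# Perron-Frobenius eigenvalue of the nonnegative irreducible matrix A
r = np.max(eig(A)[0].real)
print(f"r(A) = {r:.5f}")

# If r(A) < 1, the Neumann series I + A + A^2 + ... converges to (I - A)^{-1},
# so the solution x* = (I - A)^{-1} d exists
x_star = np.linalg.solve(np.eye(3) - A, d)
print(x_star)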