16. Distributions and Probabilities#

16.1. Outline#

In this lecture we give a quick introduction to data and probability distributions using Python.

!pip install --upgrade yfinance  
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import yfinance as yf
import scipy.stats
import seaborn as sns

16.2. Common distributions#

In this section we recall the definitions of some well-known distributions and explore how to manipulate them with SciPy.

16.2.1. Discrete distributions#

Let’s start with discrete distributions.

A discrete distribution is defined by a set of numbers \(S = \{x_1, \ldots, x_n\}\) and a probability mass function (PMF) on \(S\), which is a function \(p\) from \(S\) to \([0,1]\) with the property

\[ \sum_{i=1}^n p(x_i) = 1 \]

We say that a random variable \(X\) has distribution \(p\) if \(X\) takes value \(x_i\) with probability \(p(x_i)\).

That is,

\[ \mathbb P\{X = x_i\} = p(x_i) \quad \text{for } i= 1, \ldots, n \]

The mean or expected value of a random variable \(X\) with distribution \(p\) is

\[ \mathbb{E}[X] = \sum_{i=1}^n x_i p(x_i) \]

Expectation is also called the first moment of the distribution.

We also refer to this number as the mean of the distribution (represented by) \(p\).

The variance of \(X\) is defined as

\[ \mathbb{V}[X] = \sum_{i=1}^n (x_i - \mathbb{E}[X])^2 p(x_i) \]

Variance is also called the second central moment of the distribution.

The cumulative distribution function (CDF) of \(X\) is defined by

\[ F(x) = \mathbb{P}\{X \leq x\} = \sum_{i=1}^n \mathbb 1\{x_i \leq x\} p(x_i) \]

Here \(\mathbb 1\{ \textrm{statement} \} = 1\) if “statement” is true and zero otherwise.

Hence the sum on the right collects all \(x_i \leq x\) and adds up their probabilities.
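To make these definitions concrete, here is a minimal sketch that computes the mean, variance, and CDF directly from the formulas above; the values in S and p below are hypothetical, chosen only for illustration.

S = np.array([1.0, 2.0, 4.0])       # values x_1, ..., x_n (hypothetical)
p = np.array([0.25, 0.5, 0.25])     # probabilities, which sum to one

mean = np.sum(S * p)                # E[X] = sum of x_i p(x_i)
var = np.sum((S - mean)**2 * p)     # V[X] = sum of (x_i - E[X])² p(x_i)

def F(x):
    "CDF: add up the probabilities of all values x_i <= x."
    return np.sum(p[S <= x])

mean, var, F(2.0)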

16.2.1.1. Uniform distribution#

One simple example is the uniform distribution, where \(p(x_i) = 1/n\) for all \(i\).

We can import the uniform distribution on \(S = \{1, \ldots, n\}\) from SciPy like so:

n = 10
u = scipy.stats.randint(1, n+1)

Here’s the mean and variance:

u.mean(), u.var()
(5.5, 8.25)

The formula for the mean is \((n+1)/2\), and the formula for the variance is \((n^2 - 1)/12\).
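As a quick check, we can evaluate these formulas at \(n=10\) and compare with the SciPy output above:

(n + 1) / 2, (n**2 - 1) / 12   # should match u.mean(), u.var()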

Now let’s evaluate the PMF:

u.pmf(1)
0.1
u.pmf(2)
0.1

Here’s a plot of the probability mass function:

fig, ax = plt.subplots()
S = np.arange(1, n+1)
ax.plot(S, u.pmf(S), linestyle='', marker='o', alpha=0.8, ms=4)
ax.vlines(S, 0, u.pmf(S), lw=0.2)
ax.set_xticks(S)
plt.show()

Here’s a plot of the CDF:

fig, ax = plt.subplots()
S = np.arange(1, n+1)
ax.step(S, u.cdf(S))
ax.vlines(S, 0, u.cdf(S), lw=0.2)
ax.set_xticks(S)
plt.show()

The CDF jumps up by \(p(x_i)\) at \(x_i\).

Exercise 16.1

Calculate the mean and variance for this parameterization (i.e., \(n=10\)) directly from the PMF, using the expressions given above.

Check that your answers agree with u.mean() and u.var().

16.2.1.2. Bernoulli distribution#

Another useful distribution is the Bernoulli distribution on \(S = \{0,1\}\), which has PMF:

\[\begin{split} p(x_i)= \begin{cases} p & \text{if $x_i = 1$}\\ 1-p & \text{if $x_i = 0$} \end{cases} \end{split}\]

Here \(x_i \in S\) is the outcome of the random variable.

We can import the Bernoulli distribution on \(S = \{0,1\}\) from SciPy like so:

p = 0.4 
u = scipy.stats.bernoulli(p)

Here’s the mean and variance:

u.mean(), u.var()
(0.4, 0.24)

The formula for the mean is \(p\), and the formula for the variance is \(p(1-p)\).

Now let’s evaluate the PMF:

u.pmf(0)
0.6
u.pmf(1)
0.4

16.2.1.3. Binomial distribution#

Another useful (and more interesting) distribution is the binomial distribution on \(S=\{0, \ldots, n\}\), which has PMF:

\[ p(i) = \binom{n}{i} \theta^i (1-\theta)^{n-i} \]

Here \(\theta \in [0,1]\) is a parameter.

The interpretation of \(p(i)\) is: the probability of \(i\) successes in \(n\) independent trials with success probability \(\theta\).

For example, if \(\theta=0.5\), then \(p(i)\) is the probability of \(i\) heads in \(n\) flips of a fair coin.

The mean and variance are:

n = 10
θ = 0.5
u = scipy.stats.binom(n, θ)
u.mean(), u.var()
(5.0, 2.5)

The formula for the mean is \(n \theta\) and the formula for the variance is \(n \theta (1-\theta)\).
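To connect the PMF with this interpretation, here is a small simulation sketch: we run many blocks of \(n\) independent trials, count the successes in each block, and compare the empirical frequencies with u.pmf (the seed and number of blocks are arbitrary choices).

rng = np.random.default_rng(seed=1234)        # seed chosen arbitrarily
num_blocks = 100_000
# Each row holds n Bernoulli(θ) trials; summing a row counts its successes
successes = rng.binomial(1, θ, size=(num_blocks, n)).sum(axis=1)
empirical = np.array([np.mean(successes == i) for i in range(n + 1)])
np.max(np.abs(empirical - u.pmf(np.arange(n + 1))))   # gap shrinks with more blocks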

Here’s the PMF:

u.pmf(1)
0.009765625000000002
fig, ax = plt.subplots()
S = np.arange(0, n+1)
ax.plot(S, u.pmf(S), linestyle='', marker='o', alpha=0.8, ms=4)
ax.vlines(S, 0, u.pmf(S), lw=0.2)
ax.set_xticks(S)
plt.show()

Here’s the CDF:

fig, ax = plt.subplots()
S = np.arange(0, n+1)
ax.step(S, u.cdf(S))
ax.vlines(S, 0, u.cdf(S), lw=0.2)
ax.set_xticks(S)
plt.show()

Exercise 16.2

Using u.pmf, check that our definition of the CDF given above calculates the same function as u.cdf.

16.2.1.4. Poisson distribution#

The Poisson distribution on \(S = \{0, 1, \ldots\}\) with parameter \(\lambda > 0\) has PMF

\[ p(i) = \frac{\lambda^i}{i!} e^{-\lambda} \]

The interpretation of \(p(i)\) is: the probability of \(i\) events in a fixed time interval, where the events occur at a constant rate \(\lambda\) and independently of each other.
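We can check the PMF formula against SciPy directly; the values \(\lambda = 2\) and \(i = 3\) below are arbitrary.

from math import exp, factorial

λ, i = 2, 3
λ**i / factorial(i) * exp(-λ), scipy.stats.poisson(λ).pmf(i)   # should agree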

The mean and variance are:

λ = 2
u = scipy.stats.poisson(λ)
u.mean(), u.var()
(2.0, 2.0)

The expectation of the Poisson distribution is \(\lambda\) and the variance is also \(\lambda\).

Here’s the PMF:

λ = 2
u = scipy.stats.poisson(λ)
u.pmf(1)
0.2706705664732254
fig, ax = plt.subplots()
S = np.arange(0, n+1)
ax.plot(S, u.pmf(S), linestyle='', marker='o', alpha=0.8, ms=4)
ax.vlines(S, 0, u.pmf(S), lw=0.2)
ax.set_xticks(S)
plt.show()

16.2.2. Continuous distributions#

Continuous distributions are represented by a probability density function, which is a function \(p\) over \(\mathbb R\) (the set of all real numbers) such that \(p(x) \geq 0\) for all \(x\) and

\[ \int_{-\infty}^\infty p(x) dx = 1 \]

We say that random variable \(X\) has distribution \(p\) if

\[ \mathbb P\{a < X < b\} = \int_a^b p(x) dx \]

for all \(a \leq b\).

The definition of the mean and variance of a random variable \(X\) with distribution \(p\) are the same as the discrete case, after replacing the sum with an integral.

For example, the mean of \(X\) is

\[ \mathbb{E}[X] = \int_{-\infty}^\infty x p(x) dx \]

The cumulative distribution function (CDF) of \(X\) is defined by

\[ F(x) = \mathbb P\{X \leq x\} = \int_{-\infty}^x p(t) dt \]
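Such integrals can be evaluated numerically. As an illustrative sketch, here we integrate the standard normal density (introduced in the next subsection) with scipy.integrate.quad, confirming that it integrates to one and has mean zero:

from scipy.integrate import quad

p = scipy.stats.norm().pdf                            # standard normal density
total, _ = quad(p, -np.inf, np.inf)                   # should be ≈ 1
mean, _ = quad(lambda x: x * p(x), -np.inf, np.inf)   # should be ≈ 0
total, mean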

16.2.2.1. Normal distribution#

Perhaps the most famous distribution is the normal distribution, which has density

\[ p(x) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) \]

This distribution has two parameters, \(\mu\) and \(\sigma\).

It can be shown that, for this distribution, the mean is \(\mu\) and the variance is \(\sigma^2\).

We can obtain the moments, PDF and CDF of the normal density as follows:

μ, σ = 0.0, 1.0
u = scipy.stats.norm(μ, σ)
u.mean(), u.var()
(0.0, 1.0)

Here’s a plot of the density — the famous “bell-shaped curve”:

μ_vals = [-1, 0, 1]
σ_vals = [0.4, 1, 1.6]
fig, ax = plt.subplots()
x_grid = np.linspace(-4, 4, 200)

for μ, σ in zip(μ_vals, σ_vals):
    u = scipy.stats.norm(μ, σ)
    ax.plot(x_grid, u.pdf(x_grid),
            alpha=0.5, lw=2,
            label=fr'$\mu={μ}, \sigma={σ}$')

plt.legend()
plt.show()

Here’s a plot of the CDF:

fig, ax = plt.subplots()
for μ, σ in zip(μ_vals, σ_vals):
    u = scipy.stats.norm(μ, σ)
    ax.plot(x_grid, u.cdf(x_grid),
            alpha=0.5, lw=2,
            label=fr'$\mu={μ}, \sigma={σ}$')
ax.set_ylim(0, 1)
plt.legend()
plt.show()

16.2.2.2. Lognormal distribution#

The lognormal distribution is a distribution on \(\left(0, \infty\right)\) with density

\[ p(x) = \frac{1}{\sigma x \sqrt{2\pi}} \exp \left(- \frac{\left(\log x - \mu\right)^2}{2 \sigma^2} \right) \]

This distribution has two parameters, \(\mu\) and \(\sigma\).

It can be shown that, for this distribution, the mean is \(\exp\left(\mu + \sigma^2/2\right)\) and the variance is \(\left[\exp\left(\sigma^2\right) - 1\right] \exp\left(2\mu + \sigma^2\right)\).

It has a nice interpretation: if \(X\) is lognormally distributed, then \(\log X\) is normally distributed.

It is often used to model variables that are “multiplicative” in nature, such as income or asset prices.
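A quick way to see the connection with the normal distribution is to draw lognormal samples and check that their logs have mean close to \(\mu\) and standard deviation close to \(\sigma\); the seed and sample size below are arbitrary.

rng = np.random.default_rng(seed=42)
μ, σ = 0.0, 1.0
draws = rng.lognormal(mean=μ, sigma=σ, size=100_000)
np.log(draws).mean(), np.log(draws).std()   # should be ≈ μ and σ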

We can obtain the moments, PDF, and CDF of the lognormal density as follows:

μ, σ = 0.0, 1.0
u = scipy.stats.lognorm(s=σ, scale=np.exp(μ))
u.mean(), u.var()
(1.6487212707001282, 4.670774270471604)
μ_vals = [-1, 0, 1]
σ_vals = [0.25, 0.5, 1]
x_grid = np.linspace(0, 3, 200)

fig, ax = plt.subplots()
for μ, σ in zip(μ_vals, σ_vals):
    u = scipy.stats.lognorm(σ, scale=np.exp(μ))
    ax.plot(x_grid, u.pdf(x_grid),
            alpha=0.5, lw=2,
            label=fr'$\mu={μ}, \sigma={σ}$')

plt.legend()
plt.show()
fig, ax = plt.subplots()
μ = 1
for σ in σ_vals:
    u = scipy.stats.lognorm(σ, scale=np.exp(μ))   # lognormal, not normal
    ax.plot(x_grid, u.cdf(x_grid),
            alpha=0.5, lw=2,
            label=fr'$\mu={μ}, \sigma={σ}$')
ax.set_ylim(0, 1)
ax.set_xlim(0, 3)
plt.legend()
plt.show()

16.2.2.3. Exponential distribution#

The exponential distribution is a distribution on \(\left(0, \infty\right)\) with density

\[ p(x) = \lambda \exp \left( - \lambda x \right) \]

This distribution has one parameter, \(\lambda\).

It is related to the Poisson distribution as it describes the distribution of the length of the time interval between two consecutive events in a Poisson process.

It can be shown that, for this distribution, the mean is \(1/\lambda\) and the variance is \(1/\lambda^2\).
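Here is a small simulation sketch of the Poisson connection mentioned above: we build arrival times by cumulating exponential gaps, count the events in each unit time interval, and check that the counts have mean and variance close to \(\lambda\) (the seed and sample size are arbitrary).

rng = np.random.default_rng(seed=0)
λ = 2.0
# Cumulative sums of exponential gaps give the arrival times of events
arrivals = np.cumsum(rng.exponential(scale=1/λ, size=200_000))
T = int(arrivals[-1])                       # whole time span covered
counts, _ = np.histogram(arrivals, bins=np.arange(T + 1))
counts.mean(), counts.var()                 # both should be ≈ λ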

We can obtain the moments, PDF, and CDF of the exponential density as follows:

λ = 1.0
u = scipy.stats.expon(scale=1/λ)
u.mean(), u.var()
(1.0, 1.0)
fig, ax = plt.subplots()
λ_vals = [0.5, 1, 2]
x_grid = np.linspace(0, 6, 200)

for λ in λ_vals:
    u = scipy.stats.expon(scale=1/λ)
    ax.plot(x_grid, u.pdf(x_grid),
            alpha=0.5, lw=2,
            label=fr'$\lambda={λ}$')
plt.legend()
plt.show()
fig, ax = plt.subplots()
for λ in λ_vals:
    u = scipy.stats.expon(scale=1/λ)
    ax.plot(x_grid, u.cdf(x_grid),
            alpha=0.5, lw=2,
            label=fr'$\lambda={λ}$')
ax.set_ylim(0, 1)
plt.legend()
plt.show()

16.2.2.4. Beta distribution#

The beta distribution is a distribution on \((0, 1)\) with density

\[ p(x) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha) \Gamma(\beta)} x^{\alpha - 1} (1 - x)^{\beta - 1} \]

where \(\Gamma\) is the gamma function.

(The role of the gamma function is just to normalize the density, so that it integrates to one.)

This distribution has two parameters, \(\alpha > 0\) and \(\beta > 0\).

It can be shown that, for this distribution, the mean is \(\alpha / (\alpha + \beta)\) and the variance is \(\alpha \beta / \left[ (\alpha + \beta)^2 (\alpha + \beta + 1) \right]\).

We can obtain the moments, PDF, and CDF of the Beta density as follows:

α, β = 3.0, 1.0
u = scipy.stats.beta(α, β)
u.mean(), u.var()
(0.75, 0.0375)
α_vals = [0.5, 1, 5, 25, 3]
β_vals = [3, 1, 10, 20, 0.5]
x_grid = np.linspace(0, 1, 200)

fig, ax = plt.subplots()
for α, β in zip(α_vals, β_vals):
    u = scipy.stats.beta(α, β)
    ax.plot(x_grid, u.pdf(x_grid),
            alpha=0.5, lw=2,
            label=fr'$\alpha={α}, \beta={β}$')
plt.legend()
plt.show()
fig, ax = plt.subplots()
for α, β in zip(α_vals, β_vals):
    u = scipy.stats.beta(α, β)
    ax.plot(x_grid, u.cdf(x_grid),
            alpha=0.5, lw=2,
            label=fr'$\alpha={α}, \beta={β}$')
ax.set_ylim(0, 1)
plt.legend()
plt.show()

16.2.2.5. Gamma distribution#

The gamma distribution is a distribution on \(\left(0, \infty\right)\) with density

\[ p(x) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} \exp(-\beta x) \]

This distribution has two parameters, \(\alpha > 0\) and \(\beta > 0\).

It can be shown that, for this distribution, the mean is \(\alpha / \beta\) and the variance is \(\alpha / \beta^2\).

One interpretation is that if \(X\) is gamma distributed and \(\alpha\) is an integer, then \(X\) has the same distribution as the sum of \(\alpha\) independent exponentially distributed random variables, each with mean \(1/\beta\).
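We can illustrate this with a short simulation, summing \(\alpha\) independent exponential draws many times over and comparing the sample moments with the gamma formulas above (the seed and sample size are arbitrary).

rng = np.random.default_rng(seed=0)
α, β = 3, 2.0
# Each row sums α independent exponential draws, each with mean 1/β
sums = rng.exponential(scale=1/β, size=(100_000, α)).sum(axis=1)
sums.mean(), sums.var()   # should be ≈ α/β = 1.5 and α/β² = 0.75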

We can obtain the moments, PDF, and CDF of the Gamma density as follows:

α, β = 3.0, 2.0
u = scipy.stats.gamma(α, scale=1/β)
u.mean(), u.var()
(1.5, 0.75)
α_vals = [1, 3, 5, 10]
β_vals = [3, 5, 3, 3]
x_grid = np.linspace(0, 7, 200)

fig, ax = plt.subplots()
for α, β in zip(α_vals, β_vals):
    u = scipy.stats.gamma(α, scale=1/β)
    ax.plot(x_grid, u.pdf(x_grid),
            alpha=0.5, lw=2,
            label=fr'$\alpha={α}, \beta={β}$')
plt.legend()
plt.show()
fig, ax = plt.subplots()
for α, β in zip(α_vals, β_vals):
    u = scipy.stats.gamma(α, scale=1/β)
    ax.plot(x_grid, u.cdf(x_grid),
            alpha=0.5, lw=2,
            label=fr'$\alpha={α}, \beta={β}$')
ax.set_ylim(0, 1)
plt.legend()
plt.show()

16.3. Observed distributions#

Sometimes we refer to observed data or measurements as “distributions”.

For example, let’s say we observe the income of 10 people over a year:

data = [['Hiroshi', 1200], 
        ['Ako', 1210], 
        ['Emi', 1400],
        ['Daiki', 990],
        ['Chiyo', 1530],
        ['Taka', 1210],
        ['Katsuhiko', 1240],
        ['Daisuke', 1124],
        ['Yoshi', 1330],
        ['Rie', 1340]]

df = pd.DataFrame(data, columns=['name', 'income'])
df
        name  income
0    Hiroshi    1200
1        Ako    1210
2        Emi    1400
3      Daiki     990
4      Chiyo    1530
5       Taka    1210
6  Katsuhiko    1240
7    Daisuke    1124
8      Yoshi    1330
9        Rie    1340

In this situation, we might refer to the set of their incomes as the “income distribution.”

The terminology is confusing because this set is not a probability distribution — it’s just a collection of numbers.

However, as we will see, there are connections between observed distributions (i.e., sets of numbers like the income distribution above) and probability distributions.

Below we explore some observed distributions.

16.3.1. Summary statistics#

Suppose we have an observed distribution with values \(\{x_1, \ldots, x_n\}\).

The sample mean of this distribution is defined as

\[ \bar x = \frac{1}{n} \sum_{i=1}^n x_i \]

The sample variance is defined as

\[ \frac{1}{n} \sum_{i=1}^n (x_i - \bar x)^2 \]

For the income distribution given above, we can calculate these numbers via

x = np.asarray(df['income'])
x.mean(), x.var()
(1257.4, 20412.839999999997)

Exercise 16.3

Check that the formulas given above produce the same numbers.

16.3.2. Visualization#

Let’s look at different ways that we can visualize one or more observed distributions.

We will cover

  • histograms

  • kernel density estimates and

  • violin plots

16.3.2.1. Histograms#

We can histogram the income distribution we just constructed as follows:

x = df['income']
fig, ax = plt.subplots()
ax.hist(x, bins=5, density=True, histtype='bar')
plt.show()

Let’s look at a distribution from real data.

In particular, we will look at the monthly return on Amazon shares between 2000/1/1 and 2023/1/1.

The monthly return is calculated as the percent change in the share price over each month.

So we will have one observation for each month.
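In symbols, the return in month \(t\) is \(100 (p_t - p_{t-1}) / p_{t-1}\), where \(p_t\) is the share price in month \(t\); this is what pandas' pct_change computes. Here is the same calculation on a tiny hypothetical price series:

toy_prices = pd.Series([100.0, 110.0, 99.0])    # hypothetical prices
returns = 100 * (toy_prices - toy_prices.shift(1)) / toy_prices.shift(1)
returns.dropna()    # 10.0 and -10.0, matching toy_prices.pct_change() * 100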

df = yf.download('AMZN', '2000-1-1', '2023-1-1', interval='1mo')
prices = df['Adj Close']
data = prices.pct_change()[1:] * 100
data.head()

Date
2000-02-01     6.679568
2000-03-01    -2.722323
2000-04-01   -17.630592
2000-05-01   -12.457531
2000-06-01   -24.838297
Name: Adj Close, dtype: float64

The first observation is the monthly return (percent change) over January 2000, which was

data.iloc[0]
6.6795679502808625

Let’s turn the return observations into an array and histogram it.

x_amazon = np.asarray(data)
fig, ax = plt.subplots()
ax.hist(x_amazon, bins=20)
plt.show()

16.3.2.2. Kernel density estimates#

A kernel density estimate (KDE) is a non-parametric way to estimate and visualize the PDF of a distribution.

The KDE generates a smooth curve that approximates the PDF.
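Before reaching for a library routine, it may help to see what a Gaussian KDE computes: the average of a bell-shaped bump centered at each observation, scaled by a bandwidth \(h\). Here is a minimal sketch; the bandwidth h=3.0 is an arbitrary choice for these percent returns.

def kde_by_hand(x_grid, data, h):
    "Average a standard normal bump centered at each observation."
    bumps = scipy.stats.norm.pdf((x_grid[:, None] - data[None, :]) / h)
    return bumps.mean(axis=1) / h

fig, ax = plt.subplots()
x_grid = np.linspace(x_amazon.min(), x_amazon.max(), 200)
ax.plot(x_grid, kde_by_hand(x_grid, x_amazon, h=3.0))
plt.show()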

fig, ax = plt.subplots()
sns.kdeplot(x_amazon, ax=ax)
plt.show()

The smoothness of the KDE depends on how we choose the bandwidth.

fig, ax = plt.subplots()
sns.kdeplot(x_amazon, ax=ax, bw_adjust=0.1, alpha=0.5, label="bw=0.1")
sns.kdeplot(x_amazon, ax=ax, bw_adjust=0.5, alpha=0.5, label="bw=0.5")
sns.kdeplot(x_amazon, ax=ax, bw_adjust=1, alpha=0.5, label="bw=1")
plt.legend()
plt.show()

When we use a larger bandwidth, the KDE is smoother.

A suitable bandwidth produces a curve that is neither too smooth (underfitting) nor too wiggly (overfitting).

16.3.2.3. Violin plots#

Yet another way to display an observed distribution is via a violin plot.

fig, ax = plt.subplots()
ax.violinplot(x_amazon)
plt.show()

Violin plots are particularly useful when we want to compare different distributions.

For example, let’s compare the monthly returns on Amazon shares with the monthly return on Apple shares.

df = yf.download('AAPL', '2000-1-1', '2023-1-1', interval='1mo')
prices = df['Adj Close']
data = prices.pct_change()[1:] * 100
x_apple = np.asarray(data)

fig, ax = plt.subplots()
ax.violinplot([x_amazon, x_apple])
plt.show()

16.3.3. Connection to probability distributions#

Let’s discuss the connection between observed distributions and probability distributions.

Sometimes it’s helpful to imagine that an observed distribution is generated by a particular probability distribution.

For example, we might look at the returns from Amazon above and imagine that they were generated by a normal distribution.

Even though this is not true, it might be a helpful way to think about the data.

Here we match a normal distribution to the Amazon monthly returns by setting the mean of the normal distribution equal to the sample mean and its variance equal to the sample variance.

Then we plot the density and the histogram.

μ = x_amazon.mean()
σ_squared = x_amazon.var()
σ = np.sqrt(σ_squared)
u = scipy.stats.norm(μ, σ)
x_grid = np.linspace(-50, 65, 200)
fig, ax = plt.subplots()
ax.plot(x_grid, u.pdf(x_grid))
ax.hist(x_amazon, density=True, bins=40)
plt.show()

The match between the histogram and the density is passable, but not especially good.

One reason is that the normal distribution is not really a good fit for this observed data. We will return to this point when we discuss heavy-tailed distributions.

Of course, if the data really is generated by the normal distribution, then the fit will be better.

Let’s see this in action

  • first we generate random draws from the normal distribution

  • then we histogram them and compare with the density.

μ, σ = 0, 1
u = scipy.stats.norm(μ, σ)
N = 2000  # Number of observations
x_draws = u.rvs(N)
x_grid = np.linspace(-4, 4, 200)
fig, ax = plt.subplots()
ax.plot(x_grid, u.pdf(x_grid))
ax.hist(x_draws, density=True, bins=40)
plt.show()

Note that if you keep increasing \(N\), which is the number of observations, the fit will get better and better.

This convergence is a version of the “law of large numbers”, which we will discuss later.
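One way to preview this convergence is to measure the largest gap between the empirical CDF of the draws and the true CDF as \(N\) grows; the sketch below uses scipy.stats.kstest, with sample sizes chosen arbitrarily.

for N in (100, 1_000, 10_000, 100_000):
    draws = u.rvs(N)
    stat = scipy.stats.kstest(draws, u.cdf).statistic   # max CDF gap
    print(f"N = {N:>7}: max gap = {stat:.4f}")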