
Beta-Delta (Quasi-Hyperbolic) Discounting

This notebook shows how to implement quasi-hyperbolic discounting in PyLCM using a custom aggregation function $H$. It covers:

  1. The beta-delta discount function and its relation to PyLCM’s $H$

  2. A simple 3-period consumption-savings model with analytical solutions

  3. Exponential, sophisticated, and naive agents

  4. Verifying numerical results against closed-form solutions

Background: Beta-Delta Discounting

Standard exponential discounting uses a single discount factor $\delta$ to weight future utility:

$$U_t = u_t + \delta\, u_{t+1} + \delta^2\, u_{t+2} + \cdots$$

Quasi-hyperbolic (beta-delta) discounting (Laibson, 1997) introduces a present-bias parameter $\beta \in (0, 1]$ that discounts all future periods relative to the present:

$$U_t = u_t + \beta\delta\, u_{t+1} + \beta\delta^2\, u_{t+2} + \cdots$$

The discount weights are $\{1,\; \beta\delta,\; \beta\delta^2,\; \ldots\}$. When $\beta = 1$ this reduces to exponential discounting. When $\beta < 1$ the agent is present-biased: they overweight current utility relative to any future period.
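
As a quick sanity check, the first few weights can be generated in plain Python, independent of PyLCM (`beta_delta_weights` is just an illustrative helper, not a library function):

```python
def beta_delta_weights(beta, delta, n):
    """First n discount weights: 1 for the current period, beta * delta**k after."""
    return [1.0 if k == 0 else beta * delta**k for k in range(n)]


print(beta_delta_weights(0.7, 0.95, 4))  # present-biased weights
print(beta_delta_weights(1.0, 0.95, 4))  # beta = 1 recovers exponential delta**k
```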

Mapping to PyLCM’s $H$ Function

PyLCM defines the value function recursively via an aggregation function $H$:

$$V_t(s) = \max_a \; H\bigl(u(s, a),\; \mathbb{E}_t[V_{t+1}(s')]\bigr)$$

The default $H$ is additive with a discount factor: $H(u, \mathbb{E}[V']) = u + \delta \cdot \mathbb{E}[V']$. For beta-delta discounting, we replace this with:

$$H(u, \mathbb{E}[V']) = u + \beta\delta \cdot \mathbb{E}[V']$$

This works because the beta-delta discount function has the recursive structure:

$$V_t = u_t + \beta\delta \cdot V_{t+1}$$

where each $V_{t+1}$ already includes the $\beta$ factor for all periods beyond $t+1$. The $\beta$ appears exactly once in each one-step discount, which is all $H$ needs to capture.

Sophisticated vs. Naive Agents

Beta-delta preferences are time-inconsistent: what the agent plans to do tomorrow differs from what they actually do when tomorrow arrives. This creates two behavioral types:

  • Sophisticated agents know their future selves will also be present-biased. They correctly predict future behavior and optimize accordingly. The value function $V$ is computed and used consistently with $\beta\delta$ discounting.

  • Naive agents believe their future selves will behave as exponential discounters (with discount factor $\delta$ only). They compute a perceived value function $V^E$ using exponential discounting, but when choosing actions they apply present bias $\beta\delta$ to the continuation value.

In PyLCM terms:

| Agent type    | solve params                           | simulate params                        |
|---------------|----------------------------------------|----------------------------------------|
| Exponential   | $\beta=1, \delta$                      | same                                   |
| Sophisticated | $\beta, \delta$                        | same                                   |
| Naive         | $\beta=1, \delta$ (exponential $V^E$)  | $\beta, \delta$ (present-biased choices) |

The naive case needs different $H$ behavior in each phase. There are two ways to handle this:

  1. Manual approach: call solve and simulate separately with different params

  2. PhaseVariant: wrap $H$ in a PhaseVariant container that provides different implementations for each phase, then use simulate as usual

Both approaches are shown below.

Setup: A 3-Period Model

We use a minimal consumption-savings model:

  • 3 periods: $t = 0, 1$ (decisions), $t = 2$ (terminal)

  • 1 state: wealth $w$

  • 1 action: consumption $c$

  • Utility: $u(c) = \log(c)$

  • Budget: $w' = w - c$ (interest rate $R = 1$)

  • Constraint: $c \le w$

  • Terminal: $V_2(w) = \log(w)$ (consume everything)

import jax.numpy as jnp

from lcm import AgeGrid, LinSpacedGrid, Model, Regime, categorical
from lcm.typing import BoolND, ContinuousAction, ContinuousState, FloatND, ScalarInt
@categorical(ordered=False)
class RegimeId:
    working: int
    dead: int


def utility(consumption: ContinuousAction) -> FloatND:
    return jnp.log(consumption)


def terminal_utility(wealth: ContinuousState) -> FloatND:
    return jnp.log(wealth)


def next_wealth(
    wealth: ContinuousState, consumption: ContinuousAction
) -> ContinuousState:
    return wealth - consumption


def next_regime(age: float) -> ScalarInt:
    return jnp.where(age >= 1, RegimeId.dead, RegimeId.working)


def borrowing_constraint(
    consumption: ContinuousAction, wealth: ContinuousState
) -> BoolND:
    return consumption <= wealth


def beta_delta_H(
    utility: float,
    E_next_V: float,
    beta: float,
    delta: float,
) -> float:
    return utility + beta * delta * E_next_V
working = Regime(
    actions={
        "consumption": LinSpacedGrid(start=0.001, stop=50, n_points=500),
    },
    states={
        "wealth": LinSpacedGrid(start=0.5, stop=50, n_points=200),
    },
    state_transitions={"wealth": next_wealth},
    constraints={"borrowing_constraint": borrowing_constraint},
    transition=next_regime,
    functions={"utility": utility, "H": beta_delta_H},
    active=lambda age: age <= 1,
)

dead = Regime(
    transition=None,
    states={
        "wealth": LinSpacedGrid(start=0.5, stop=50, n_points=200),
    },
    functions={"utility": terminal_utility},
    active=lambda age: age > 1,
)

model = Model(
    regimes={"working": working, "dead": dead},
    ages=AgeGrid(start=0, stop=2, step="Y"),
    regime_id_class=RegimeId,
)

The custom beta_delta_H takes beta and delta as parameters. These appear in the params template under the H key:

from pprint import pprint

pprint(dict(model.get_params_template()))

Analytical Solution

With $\log$ utility, the optimal consumption rule is $c_t = w_t / D_t$, where the denominator $D_t$ depends on the discounting type (its reciprocal, $1/D_t$, is the marginal propensity to consume out of wealth).

Derivation at $t = 1$ (one period before terminal):

$$V_1(w) = \max_c \;\bigl[\log(c) + \beta\delta \cdot \log(w - c)\bigr]$$

First-order condition: $1/c = \beta\delta / (w - c)$, giving $c_1 = w / (1 + \beta\delta)$.

Derivation at $t = 0$ (two periods before terminal):

$$V_0(w) = \max_c \;\bigl[\log(c) + \beta\delta \cdot V_1(w - c)\bigr]$$

Since $V_1(w) = (1 + \beta\delta)\log(w) + \text{const}$, the FOC gives:

$$c_0 = \frac{w}{1 + \beta\delta\,(1 + \beta\delta)}$$

The consumption denominators for each agent type are:

| Agent type                   | $D_1$             | $D_0$                              |
|------------------------------|-------------------|------------------------------------|
| Exponential ($\beta = 1$)    | $1 + \delta$      | $1 + \delta(1 + \delta)$           |
| Sophisticated ($\beta < 1$)  | $1 + \beta\delta$ | $1 + \beta\delta(1 + \beta\delta)$ |
| Naive ($\beta < 1$)          | $1 + \beta\delta$ | $1 + \beta\delta(1 + \delta)$      |

At $t = 1$, naive and sophisticated are identical: with only one future period, there is no distinction. They differ at $t = 0$: the naive agent uses $V^E$ (solved with $\delta$), so the inner denominator is $1 + \delta$ instead of $1 + \beta\delta$.
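
The table can be sanity-checked by brute force, independent of PyLCM: maximize each period's objective on a fine consumption grid and compare the argmax with $w / D_t$. A minimal NumPy sketch (the grid size and tolerance are arbitrary choices; the grid resolution bounds the accuracy):

```python
import numpy as np

beta, delta, w = 0.7, 0.95, 20.0
bd = beta * delta
c = np.linspace(1e-3, w - 1e-3, 200_001)  # fine consumption grid

# t = 1: maximize log(c) + bd*log(w - c); argmax should be w / (1 + bd)
c1_num = c[np.argmax(np.log(c) + bd * np.log(w - c))]
assert abs(c1_num - w / (1 + bd)) < 1e-3

# t = 0, sophisticated: continuation V_1(w') = (1 + bd) * log(w') + const
c0_num = c[np.argmax(np.log(c) + bd * (1 + bd) * np.log(w - c))]
assert abs(c0_num - w / (1 + bd * (1 + bd))) < 1e-3

# t = 0, naive: perceived continuation V^E_1(w') = (1 + delta) * log(w') + const
c0n_num = c[np.argmax(np.log(c) + bd * (1 + delta) * np.log(w - c))]
assert abs(c0n_num - w / (1 + bd * (1 + delta))) < 1e-3
```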

BETA = 0.7
DELTA = 0.95
W0 = 20.0


def analytical_consumption(beta, delta, w0, naive):
    """Return (c0, c1) from the closed-form solution."""
    bd = beta * delta
    d1 = 1 + bd
    if naive:  # noqa: SIM108
        # Naive agent thinks future self uses delta, not beta*delta
        d0 = 1 + bd * (1 + delta)
    else:
        d0 = 1 + bd * d1
    c0 = w0 / d0
    c1 = (w0 - c0) / d1
    return c0, c1


c0_exp, c1_exp = analytical_consumption(1.0, DELTA, W0, naive=False)
c0_soph, c1_soph = analytical_consumption(BETA, DELTA, W0, naive=False)
c0_naive, c1_naive = analytical_consumption(BETA, DELTA, W0, naive=True)

print(f"{'Type':<15} {'c_0':>8} {'c_1':>8}")
print("-" * 33)
print(f"{'Exponential':<15} {c0_exp:8.4f} {c1_exp:8.4f}")
print(f"{'Sophisticated':<15} {c0_soph:8.4f} {c1_soph:8.4f}")
print(f"{'Naive':<15} {c0_naive:8.4f} {c1_naive:8.4f}")

The sophisticated agent consumes more at $t = 0$ than the exponential agent (present bias pulls consumption forward). The naive agent consumes slightly less than the sophisticated agent at $t = 0$ because they (incorrectly) believe their future self will save more.
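
This ordering follows directly from the closed forms. A self-contained check (the constants mirror BETA, DELTA, and W0 above):

```python
# Closed-form c_0 = w0 / D_0 for each agent type at beta=0.7, delta=0.95, w0=20.
beta, delta, w0 = 0.7, 0.95, 20.0
bd = beta * delta
c0_exponential = w0 / (1 + delta * (1 + delta))
c0_sophisticated = w0 / (1 + bd * (1 + bd))
c0_naive = w0 / (1 + bd * (1 + delta))

# Present bias pulls consumption forward; naivete dampens the effect slightly.
assert c0_sophisticated > c0_naive > c0_exponential
```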

Solving and Simulating

Exponential Agent ($\beta = 1$)

With $\beta = 1$, the custom $H$ reduces to the default. We use simulate with a single params dict:

initial_wealth = jnp.array([W0])
initial_age = jnp.array([0.0])

result_exp = model.simulate(
    params={"working": {"H": {"beta": 1.0, "delta": DELTA}}},
    initial_conditions={
        "age": initial_age,
        "wealth": initial_wealth,
        "regime": jnp.array([RegimeId.working]),
    },
    period_to_regime_to_V_arr=None,
)

df_exp = result_exp.to_dataframe().query('regime == "working"')
print("Exponential agent:")
print(
    f"  c_0 = {df_exp.loc[df_exp['age'] == 0, 'consumption'].iloc[0]:.4f}  "
    f"(analytical: {c0_exp:.4f})"
)
print(
    f"  c_1 = {df_exp.loc[df_exp['age'] == 1, 'consumption'].iloc[0]:.4f}  "
    f"(analytical: {c1_exp:.4f})"
)

Sophisticated Agent ($\beta < 1$)

The sophisticated agent solves and simulates with the same $\beta\delta$ parameters; their value function already accounts for future present bias:

result_soph = model.simulate(
    params={"working": {"H": {"beta": BETA, "delta": DELTA}}},
    initial_conditions={
        "age": initial_age,
        "wealth": initial_wealth,
        "regime": jnp.array([RegimeId.working]),
    },
    period_to_regime_to_V_arr=None,
)

df_soph = result_soph.to_dataframe().query('regime == "working"')
print("Sophisticated agent:")
print(
    f"  c_0 = {df_soph.loc[df_soph['age'] == 0, 'consumption'].iloc[0]:.4f}  "
    f"(analytical: {c0_soph:.4f})"
)
print(
    f"  c_1 = {df_soph.loc[df_soph['age'] == 1, 'consumption'].iloc[0]:.4f}  "
    f"(analytical: {c1_soph:.4f})"
)

Naive Agent — Manual Approach (Separate Solve/Simulate)

The naive agent requires two separate steps:

  1. Solve with $\beta = 1$ (exponential) to get the perceived continuation values $V^E$

  2. Simulate with $\beta < 1$: actions are chosen using the present-biased $H(u, V^E) = u + \beta\delta \cdot V^E$, but the continuation values $V^E$ come from step 1

This works because simulate uses the pre-computed period_to_regime_to_V_arr for continuation values but evaluates $Q$ with its own params for action selection.

# Step 1: Solve with exponential discounting (beta=1)
V_exponential = model.solve(
    params={"working": {"H": {"beta": 1.0, "delta": DELTA}}},
)

# Step 2: Simulate with present-biased action selection
result_naive = model.simulate(
    params={"working": {"H": {"beta": BETA, "delta": DELTA}}},
    initial_conditions={
        "age": initial_age,
        "wealth": initial_wealth,
        "regime": jnp.array([RegimeId.working]),
    },
    period_to_regime_to_V_arr=V_exponential,
)

df_naive = result_naive.to_dataframe().query('regime == "working"')
print("Naive agent:")
print(
    f"  c_0 = {df_naive.loc[df_naive['age'] == 0, 'consumption'].iloc[0]:.4f}  "
    f"(analytical: {c0_naive:.4f})"
)
print(
    f"  c_1 = {df_naive.loc[df_naive['age'] == 1, 'consumption'].iloc[0]:.4f}  "
    f"(analytical: {c1_naive:.4f})"
)

Naive Agent — PhaseVariant Approach

PhaseVariant lets you provide different function implementations for the solve and simulate phases in a single model. The solve phase uses exponential discounting for backward induction while the simulate phase applies present bias for action selection.

The key advantage: you call simulate once with one params dict instead of manually managing separate solve/simulate steps.

Each variant can have a different signature. The params template is the union of both variants’ parameters; each variant receives only the kwargs it expects.

from lcm import PhaseVariant


def exponential_H(
    utility: float,
    E_next_V: float,
    discount_factor: float,
) -> float:
    return utility + discount_factor * E_next_V


# Build a model where H uses exponential discounting for solve,
# but beta-delta discounting for simulate.
working_pv = Regime(
    actions={
        "consumption": LinSpacedGrid(start=0.001, stop=50, n_points=500),
    },
    states={
        "wealth": LinSpacedGrid(start=0.5, stop=50, n_points=200),
    },
    state_transitions={"wealth": next_wealth},
    constraints={"borrowing_constraint": borrowing_constraint},
    transition=next_regime,
    functions={
        "utility": utility,
        "H": PhaseVariant(
            solve=exponential_H,  # V^E for backward induction
            simulate=beta_delta_H,  # present-biased action selection
        ),
    },
    active=lambda age: age <= 1,
)

model_pv = Model(
    regimes={"working": working_pv, "dead": dead},
    ages=AgeGrid(start=0, stop=2, step="Y"),
    regime_id_class=RegimeId,
)

# Params are the UNION of both variants' parameters.
# exponential_H needs discount_factor; beta_delta_H needs beta and delta.
result_naive_pv = model_pv.simulate(
    params={
        "working": {
            "H": {"discount_factor": DELTA, "beta": BETA, "delta": DELTA},
        },
    },
    initial_conditions={
        "age": initial_age,
        "wealth": initial_wealth,
        "regime": jnp.array([RegimeId.working]),
    },
    period_to_regime_to_V_arr=None,
)

df_naive_pv = result_naive_pv.to_dataframe().query('regime == "working"')
print("Naive agent (PhaseVariant):")
print(
    f"  c_0 = {df_naive_pv.loc[df_naive_pv['age'] == 0, 'consumption'].iloc[0]:.4f}  "
    f"(analytical: {c0_naive:.4f})"
)
print(
    f"  c_1 = {df_naive_pv.loc[df_naive_pv['age'] == 1, 'consumption'].iloc[0]:.4f}  "
    f"(analytical: {c1_naive:.4f})"
)

Comparison

Let’s compare all three agent types. The small differences between numerical and analytical solutions are due to grid discretization.

import pandas as pd

comparison = pd.DataFrame(
    {
        "Agent": ["Exponential", "Sophisticated", "Naive"],
        "c_0 (numerical)": [
            df_exp.loc[df_exp["age"] == 0, "consumption"].iloc[0],
            df_soph.loc[df_soph["age"] == 0, "consumption"].iloc[0],
            df_naive.loc[df_naive["age"] == 0, "consumption"].iloc[0],
        ],
        "c_0 (analytical)": [c0_exp, c0_soph, c0_naive],
        "c_1 (numerical)": [
            df_exp.loc[df_exp["age"] == 1, "consumption"].iloc[0],
            df_soph.loc[df_soph["age"] == 1, "consumption"].iloc[0],
            df_naive.loc[df_naive["age"] == 1, "consumption"].iloc[0],
        ],
        "c_1 (analytical)": [c1_exp, c1_soph, c1_naive],
    }
)
comparison = comparison.set_index("Agent")
comparison.style.format("{:.4f}")

Summary

Beta-delta discounting in PyLCM requires no core modifications. The key ingredients are:

  1. Custom $H$ function with beta and delta parameters:

    def beta_delta_H(utility, E_next_V, beta, delta):
        return utility + beta * delta * E_next_V
  2. Pass it to the regime via functions={"utility": ..., "H": beta_delta_H}

  3. Set parameters via the params dict under the "H" key:

    params = {"working": {"H": {"beta": 0.7, "delta": 0.95}}}
  4. For naive agents, there are two approaches:

    Manual: call solve and simulate separately with different params.

    PhaseVariant: wrap $H$ so the solve phase uses exponential discounting while the simulate phase applies present bias, then call simulate with period_to_regime_to_V_arr=None:

    from lcm import PhaseVariant
    
    functions={
        "utility": utility,
        "H": PhaseVariant(
            solve=exponential_H,    # V^E for backward induction
            simulate=beta_delta_H,  # present-biased action selection
        ),
    }