This notebook shows how to implement quasi-hyperbolic discounting in PyLCM using a custom aggregation function `H`. It covers:

- The beta-delta discount function and its relation to PyLCM's `H` function
- A simple 3-period consumption-savings model with analytical solutions
- Exponential, sophisticated, and naive agents
- Verifying numerical results against closed-form solutions
## Background: Beta-Delta Discounting
Standard exponential discounting uses a single discount factor $\delta$ to weight future utility:

$$U_t = \sum_{k=0}^{T-t} \delta^k \, u_{t+k}$$

Quasi-hyperbolic (beta-delta) discounting (Laibson, 1997) introduces a present-bias parameter $\beta \le 1$ that discounts all future periods relative to the present:

$$U_t = u_t + \beta \sum_{k=1}^{T-t} \delta^k \, u_{t+k}$$

The discount weights are $1, \beta\delta, \beta\delta^2, \beta\delta^3, \dots$. When $\beta = 1$ this reduces to exponential discounting. When $\beta < 1$ the agent is present-biased: they overweight current utility relative to any future period.
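As a quick illustration (plain Python, independent of PyLCM), the two weight sequences can be generated directly; with $\beta = 1$ they coincide:

```python
def exponential_weights(delta, T):
    """Weight delta**t on a payoff t periods ahead."""
    return [delta**t for t in range(T)]


def beta_delta_weights(beta, delta, T):
    """Weight 1 on the present, beta * delta**t on a payoff t >= 1 periods ahead."""
    return [1.0 if t == 0 else beta * delta**t for t in range(T)]


print(beta_delta_weights(0.7, 0.95, 4))
print(beta_delta_weights(1.0, 0.95, 4) == exponential_weights(0.95, 4))  # True
```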
## Mapping to PyLCM's `H` Function

PyLCM defines the value function recursively via an aggregation function $H$:

$$V_t = H\big(u_t, \, \mathbb{E}[V_{t+1}]\big)$$

The default $H$ is additive with a discount factor: $H(u, v) = u + \delta v$. For beta-delta discounting, we replace this with:

$$H(u, v) = u + \beta\delta \, v$$

This works because the beta-delta discount function has the recursive structure:

$$V_t = u_t + \beta\delta \, \mathbb{E}[V_{t+1}]$$

where each $V_{t+1}$ already includes the $\delta$ factors for all periods beyond $t+1$. The $\beta$ appears exactly once in each one-step discount, which is all $H$ needs to capture.
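A standalone way to see this structure (plain Python, not PyLCM code): the ratio of consecutive beta-delta weights is $\beta\delta$ for the first step ahead and plain $\delta$ thereafter, so supplying one extra $\beta$ per one-step discount, as the modified $H$ does, is enough:

```python
def D(k, beta, delta):
    """Beta-delta weight on a payoff k periods ahead."""
    return 1.0 if k == 0 else beta * delta**k


beta, delta = 0.7, 0.95
ratios = [D(k + 1, beta, delta) / D(k, beta, delta) for k in range(4)]
print(ratios)  # first ratio is beta*delta, all later ratios are delta
```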
## Sophisticated vs. Naive Agents
Beta-delta preferences are time-inconsistent: what the agent plans to do tomorrow differs from what they actually do when tomorrow arrives. This creates two behavioral types:
**Sophisticated** agents know their future selves will also be present-biased. They correctly predict future behavior and optimize accordingly. The value function is computed and used consistently with $\beta\delta$ discounting.

**Naive** agents believe their future selves will behave as exponential discounters (with discount factor $\delta$ only). They compute a perceived value function $V^E$ using exponential discounting, but when choosing actions they apply present bias $\beta$ to the continuation value.
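The inconsistency is easy to exhibit numerically (a standalone NumPy sketch, separate from the PyLCM model below): from the period-0 perspective the relative weight on periods 1 and 2 is $\delta$, but the period-1 self discounts period 2 by $\beta\delta$, so the period-1 self consumes more than the period-0 self planned:

```python
import numpy as np

BETA, DELTA, W1 = 0.7, 0.95, 10.0


def best_c1(w, one_step, n=200_000):
    """Grid-search argmax of log(c) + one_step * log(w - c); closed form: w / (1 + one_step)."""
    c = np.linspace(1e-6, w - 1e-6, n)
    return c[np.argmax(np.log(c) + one_step * np.log(w - c))]


planned_c1 = best_c1(W1, DELTA)        # what the t=0 self prefers for period 1
actual_c1 = best_c1(W1, BETA * DELTA)  # what the t=1 self actually chooses
print(planned_c1, actual_c1)           # actual exceeds planned
```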
In PyLCM terms:
| Agent type | `solve` params | `simulate` params |
|---|---|---|
| Exponential | $\beta = 1$, $\delta$ | same |
| Sophisticated | $\beta < 1$, $\delta$ | same |
| Naive | $\beta = 1$, $\delta$ (exponential $V^E$) | $\beta < 1$, $\delta$ (present-biased choices) |
The naive case needs different behavior in each phase. There are two ways to handle this:
1. **Manual approach**: call `solve` and `simulate` separately with different params
2. **`PhaseVariant`**: wrap `H` in a `PhaseVariant` container that provides different implementations for each phase, then use `simulate` as usual
Both approaches are shown below.
## Setup: A 3-Period Model

We use a minimal consumption-savings model:

- 3 periods: $t = 0, 1$ (decisions), $t = 2$ (terminal)
- 1 state: wealth $w_t$
- 1 action: consumption $c_t$
- Utility: $u(c) = \log(c)$
- Budget: $w_{t+1} = w_t - c_t$ (gross interest rate $R = 1$)
- Constraint: $c_t \le w_t$
- Terminal: $u_T(w) = \log(w)$ (consume everything)
import jax.numpy as jnp
from lcm import AgeGrid, LinSpacedGrid, Model, Regime, categorical
from lcm.typing import BoolND, ContinuousAction, ContinuousState, FloatND, ScalarInt


@categorical(ordered=False)
class RegimeId:
working: int
dead: int
def utility(consumption: ContinuousAction) -> FloatND:
return jnp.log(consumption)
def terminal_utility(wealth: ContinuousState) -> FloatND:
return jnp.log(wealth)
def next_wealth(
wealth: ContinuousState, consumption: ContinuousAction
) -> ContinuousState:
return wealth - consumption
def next_regime(age: float) -> ScalarInt:
return jnp.where(age >= 1, RegimeId.dead, RegimeId.working)
def borrowing_constraint(
consumption: ContinuousAction, wealth: ContinuousState
) -> BoolND:
return consumption <= wealth
def beta_delta_H(
utility: float,
E_next_V: float,
beta: float,
delta: float,
) -> float:
    return utility + beta * delta * E_next_V


working = Regime(
actions={
"consumption": LinSpacedGrid(start=0.001, stop=50, n_points=500),
},
states={
"wealth": LinSpacedGrid(start=0.5, stop=50, n_points=200),
},
state_transitions={"wealth": next_wealth},
constraints={"borrowing_constraint": borrowing_constraint},
transition=next_regime,
functions={"utility": utility, "H": beta_delta_H},
active=lambda age: age <= 1,
)
dead = Regime(
transition=None,
states={
"wealth": LinSpacedGrid(start=0.5, stop=50, n_points=200),
},
functions={"utility": terminal_utility},
active=lambda age: age > 1,
)
model = Model(
regimes={"working": working, "dead": dead},
ages=AgeGrid(start=0, stop=2, step="Y"),
regime_id_class=RegimeId,
)

The custom `beta_delta_H` takes `beta` and `delta` as parameters. These appear in the params template under the `H` key:
from pprint import pprint
pprint(dict(model.get_params_template()))

## Analytical Solution
With log utility, the optimal consumption rule is $c_t = w_t / d_t$, where $d_t$ is a “marginal propensity to save” denominator that depends on the discounting type.

**Derivation at $t = 1$** (one period before terminal):

The agent maximizes $\log(c_1) + \beta\delta \log(w_1 - c_1)$. First-order condition: $1/c_1 = \beta\delta / (w_1 - c_1)$, giving $c_1 = w_1 / (1 + \beta\delta)$.

**Derivation at $t = 0$** (two periods before terminal):

Since the period-1 value function solved with inner one-step discount $x$ is $V_1(w) = (1 + x)\log(w) + \text{const}$, the FOC gives:

$$c_0 = \frac{w_0}{1 + \beta\delta\,(1 + x)}$$

Writing the denominators $d_1$ and $d_0$ for brevity:

| Type | $d_1$ | $d_0$ |
|---|---|---|
| Exponential ($\beta = 1$) | $1 + \delta$ | $1 + \delta(1 + \delta)$ |
| Sophisticated ($\beta < 1$) | $1 + \beta\delta$ | $1 + \beta\delta(1 + \beta\delta)$ |
| Naive ($\beta < 1$) | $1 + \beta\delta$ | $1 + \beta\delta(1 + \delta)$ |

At $t = 1$, naive and sophisticated are identical: with only one future period, there is no distinction. They differ at $t = 0$: the naive agent uses $V^E_1$ (solved with $\beta = 1$), so the inner denominator is $1 + \delta$ instead of $1 + \beta\delta$.
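These closed forms can be sanity-checked by brute force before involving PyLCM (a standalone NumPy sketch; `c0_brute` is our own helper). The inner one-step discount is $\beta\delta$ for the sophisticated agent and $\delta$ for the naive agent's perceived $V^E_1$:

```python
import numpy as np

BETA, DELTA, W0 = 0.7, 0.95, 20.0


def c0_brute(w0, beta, delta, inner_one_step, n=2000):
    """Brute-force c0 = argmax log(c0) + beta*delta*V1(w0 - c0), where
    V1(w) = max_c1 log(c1) + inner_one_step * log(w - c1) on a grid."""
    c0 = np.linspace(1e-3, w0 - 2e-3, n)
    w1 = w0 - c0
    shares = np.linspace(1e-4, 1 - 1e-4, n)  # c1 as a share of w1
    c1 = w1[:, None] * shares[None, :]
    V1 = np.max(np.log(c1) + inner_one_step * np.log(w1[:, None] - c1), axis=1)
    return c0[np.argmax(np.log(c0) + beta * delta * V1)]


bd = BETA * DELTA
print(c0_brute(W0, BETA, DELTA, inner_one_step=bd))     # sophisticated: ~ W0 / (1 + bd * (1 + bd))
print(c0_brute(W0, BETA, DELTA, inner_one_step=DELTA))  # naive: ~ W0 / (1 + bd * (1 + DELTA))
```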
BETA = 0.7
DELTA = 0.95
W0 = 20.0
def analytical_consumption(beta, delta, w0, naive):
"""Return (c0, c1) from the closed-form solution."""
bd = beta * delta
d1 = 1 + bd
if naive: # noqa: SIM108
# Naive agent thinks future self uses delta, not beta*delta
d0 = 1 + bd * (1 + delta)
else:
d0 = 1 + bd * d1
c0 = w0 / d0
c1 = (w0 - c0) / d1
return c0, c1
c0_exp, c1_exp = analytical_consumption(1.0, DELTA, W0, naive=False)
c0_soph, c1_soph = analytical_consumption(BETA, DELTA, W0, naive=False)
c0_naive, c1_naive = analytical_consumption(BETA, DELTA, W0, naive=True)
print(f"{'Type':<15} {'c_0':>8} {'c_1':>8}")
print("-" * 33)
print(f"{'Exponential':<15} {c0_exp:8.4f} {c1_exp:8.4f}")
print(f"{'Sophisticated':<15} {c0_soph:8.4f} {c1_soph:8.4f}")
print(f"{'Naive':<15} {c0_naive:8.4f} {c1_naive:8.4f}")

The sophisticated agent consumes more at $t = 0$ than the exponential agent (present bias pulls consumption forward). The naive agent consumes slightly less than the sophisticated agent at $t = 0$ because they (incorrectly) believe their future self will save more.
## Solving and Simulating

### Exponential Agent ($\beta = 1$)

With $\beta = 1$, the custom $H$ reduces to the default. We use `simulate` with a single params dict:
initial_wealth = jnp.array([W0])
initial_age = jnp.array([0.0])
result_exp = model.simulate(
params={"working": {"H": {"beta": 1.0, "delta": DELTA}}},
initial_conditions={
"age": initial_age,
"wealth": initial_wealth,
"regime": jnp.array([RegimeId.working]),
},
period_to_regime_to_V_arr=None,
)
df_exp = result_exp.to_dataframe().query('regime == "working"')
print("Exponential agent:")
print(
f" c_0 = {df_exp.loc[df_exp['age'] == 0, 'consumption'].iloc[0]:.4f} "
f"(analytical: {c0_exp:.4f})"
)
print(
f" c_1 = {df_exp.loc[df_exp['age'] == 1, 'consumption'].iloc[0]:.4f} "
f"(analytical: {c1_exp:.4f})"
)

### Sophisticated Agent ($\beta < 1$)
The sophisticated agent solves and simulates with the same parameters — their value function already accounts for future present-bias:
result_soph = model.simulate(
params={"working": {"H": {"beta": BETA, "delta": DELTA}}},
initial_conditions={
"age": initial_age,
"wealth": initial_wealth,
"regime": jnp.array([RegimeId.working]),
},
period_to_regime_to_V_arr=None,
)
df_soph = result_soph.to_dataframe().query('regime == "working"')
print("Sophisticated agent:")
print(
f" c_0 = {df_soph.loc[df_soph['age'] == 0, 'consumption'].iloc[0]:.4f} "
f"(analytical: {c0_soph:.4f})"
)
print(
f" c_1 = {df_soph.loc[df_soph['age'] == 1, 'consumption'].iloc[0]:.4f} "
f"(analytical: {c1_soph:.4f})"
)

### Naive Agent — Manual Approach (Separate Solve/Simulate)
The naive agent requires two separate steps:

1. **Solve** with $\beta = 1$ (exponential) to get the perceived continuation values $V^E$
2. **Simulate** with $\beta < 1$: actions are chosen using the present-biased $H$, but the continuation values come from step 1

This works because `simulate` uses the pre-computed `period_to_regime_to_V_arr` for continuation values but evaluates $H$ with its own params for action selection.
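This two-phase mechanism can be mimicked outside PyLCM (a hypothetical sketch: `pick_c0` and the closed-form $V^E_1(w) = (1+\delta)\log w + \text{const}$ are ours, not library code). The continuation array stays fixed while the one-step discount used for action selection changes:

```python
import numpy as np

BETA, DELTA, W0 = 0.7, 0.95, 20.0

# Fixed "exponential" continuation value on a wealth grid. For this model,
# V1 solved with beta=1 is (1 + DELTA) * log(w) up to a constant, and an
# additive constant does not change the argmax below.
wealth_grid = np.linspace(0.5, 50.0, 2000)
V1_exp = (1 + DELTA) * np.log(wealth_grid)


def pick_c0(one_step, n=5000):
    """Choose c0 against the fixed V1 array, as simulate does with its own H."""
    c0 = np.linspace(1e-3, W0 - 0.6, n)  # keep w0 - c0 on the wealth grid
    cont = np.interp(W0 - c0, wealth_grid, V1_exp)
    return c0[np.argmax(np.log(c0) + one_step * cont)]


print(pick_c0(DELTA))         # solving and simulating with the same exponential params
print(pick_c0(BETA * DELTA))  # naive: present-biased choice against exponential V
```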
# Step 1: Solve with exponential discounting (beta=1)
V_exponential = model.solve(
params={"working": {"H": {"beta": 1.0, "delta": DELTA}}},
)
# Step 2: Simulate with present-biased action selection
result_naive = model.simulate(
params={"working": {"H": {"beta": BETA, "delta": DELTA}}},
initial_conditions={
"age": initial_age,
"wealth": initial_wealth,
"regime": jnp.array([RegimeId.working]),
},
period_to_regime_to_V_arr=V_exponential,
)
df_naive = result_naive.to_dataframe().query('regime == "working"')
print("Naive agent:")
print(
f" c_0 = {df_naive.loc[df_naive['age'] == 0, 'consumption'].iloc[0]:.4f} "
f"(analytical: {c0_naive:.4f})"
)
print(
f" c_1 = {df_naive.loc[df_naive['age'] == 1, 'consumption'].iloc[0]:.4f} "
f"(analytical: {c1_naive:.4f})"
)

### Naive Agent — PhaseVariant Approach
PhaseVariant lets you provide different function implementations for the solve and
simulate phases in a single model. The solve phase uses exponential discounting for
backward induction while the simulate phase applies present bias for action selection.
The key advantage: you call simulate once with one params dict instead of
manually managing separate solve/simulate steps.
Each variant can have a different signature. The params template is the union of both variants’ parameters; each variant receives only the kwargs it expects.
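The routing could be pictured with `inspect` (illustrative only; `call_with_expected_kwargs` is a hypothetical helper, not PyLCM's actual mechanism):

```python
import inspect


def call_with_expected_kwargs(fn, all_params):
    """Pass fn only the keyword arguments its signature declares."""
    names = set(inspect.signature(fn).parameters)
    return fn(**{k: v for k, v in all_params.items() if k in names})


def exponential_H(utility, E_next_V, discount_factor):
    return utility + discount_factor * E_next_V


def beta_delta_H(utility, E_next_V, beta, delta):
    return utility + beta * delta * E_next_V


# Union of both variants' parameters; each variant sees only its own kwargs.
params = {"utility": 1.0, "E_next_V": 2.0,
          "discount_factor": 0.95, "beta": 0.7, "delta": 0.95}
print(call_with_expected_kwargs(exponential_H, params))  # utility + 0.95 * E_next_V
print(call_with_expected_kwargs(beta_delta_H, params))   # utility + 0.7 * 0.95 * E_next_V
```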
from lcm import PhaseVariant
def exponential_H(
utility: float,
E_next_V: float,
discount_factor: float,
) -> float:
return utility + discount_factor * E_next_V
# Build a model where H uses exponential discounting for solve,
# but beta-delta discounting for simulate.
working_pv = Regime(
actions={
"consumption": LinSpacedGrid(start=0.001, stop=50, n_points=500),
},
states={
"wealth": LinSpacedGrid(start=0.5, stop=50, n_points=200),
},
state_transitions={"wealth": next_wealth},
constraints={"borrowing_constraint": borrowing_constraint},
transition=next_regime,
functions={
"utility": utility,
"H": PhaseVariant(
solve=exponential_H, # V^E for backward induction
simulate=beta_delta_H, # present-biased action selection
),
},
active=lambda age: age <= 1,
)
model_pv = Model(
regimes={"working": working_pv, "dead": dead},
ages=AgeGrid(start=0, stop=2, step="Y"),
regime_id_class=RegimeId,
)
# Params are the UNION of both variants' parameters.
# exponential_H needs discount_factor; beta_delta_H needs beta and delta.
result_naive_pv = model_pv.simulate(
params={
"working": {
"H": {"discount_factor": DELTA, "beta": BETA, "delta": DELTA},
},
},
initial_conditions={
"age": initial_age,
"wealth": initial_wealth,
"regime": jnp.array([RegimeId.working]),
},
period_to_regime_to_V_arr=None,
)
df_naive_pv = result_naive_pv.to_dataframe().query('regime == "working"')
print("Naive agent (PhaseVariant):")
print(
f" c_0 = {df_naive_pv.loc[df_naive_pv['age'] == 0, 'consumption'].iloc[0]:.4f} "
f"(analytical: {c0_naive:.4f})"
)
print(
f" c_1 = {df_naive_pv.loc[df_naive_pv['age'] == 1, 'consumption'].iloc[0]:.4f} "
f"(analytical: {c1_naive:.4f})"
)

## Comparison
Let’s compare all three agent types. The small differences between numerical and analytical solutions are due to grid discretization.
import pandas as pd
comparison = pd.DataFrame(
{
"Agent": ["Exponential", "Sophisticated", "Naive"],
"c_0 (numerical)": [
df_exp.loc[df_exp["age"] == 0, "consumption"].iloc[0],
df_soph.loc[df_soph["age"] == 0, "consumption"].iloc[0],
df_naive.loc[df_naive["age"] == 0, "consumption"].iloc[0],
],
"c_0 (analytical)": [c0_exp, c0_soph, c0_naive],
"c_1 (numerical)": [
df_exp.loc[df_exp["age"] == 1, "consumption"].iloc[0],
df_soph.loc[df_soph["age"] == 1, "consumption"].iloc[0],
df_naive.loc[df_naive["age"] == 1, "consumption"].iloc[0],
],
"c_1 (analytical)": [c1_exp, c1_soph, c1_naive],
}
)
comparison = comparison.set_index("Agent")
comparison.style.format("{:.4f}")

## Summary
Beta-delta discounting in PyLCM requires no core modifications. The key ingredients are:
1. A custom `H` function with `beta` and `delta` parameters:

       def beta_delta_H(utility, E_next_V, beta, delta):
           return utility + beta * delta * E_next_V

2. Pass it to the regime via `functions={"utility": ..., "H": beta_delta_H}`
3. Set parameters via the params dict under the `"H"` key: `params = {"working": {"H": {"beta": 0.7, "delta": 0.95}}}`
4. For naive agents, there are two approaches:
   - **Manual**: call `solve` and `simulate` separately with different params.
   - **`PhaseVariant`**: wrap `H` so the solve phase uses exponential discounting while the simulate phase applies present bias, then call `simulate` with `period_to_regime_to_V_arr=None`:

         from lcm import PhaseVariant

         functions = {
             "utility": utility,
             "H": PhaseVariant(
                 solve=exponential_H,   # V^E for backward induction
                 simulate=beta_delta_H, # present-biased action selection
             ),
         }