Value at Risk (VaR)
Quantifying your worst-case scenario — and why it still isn't enough
Learning Objectives
- Understand what VaR actually measures (and what it doesn't)
- Compare parametric, historical, and Monte Carlo VaR methods
- Know when each method is appropriate and where each one breaks
- Read VaR output and apply it to real risk decisions
Explain Like I'm 5
VaR answers one question: "What's the most I could lose on a normal bad day?" If your 95% VaR is $10K, that means 95% of the time, you won't lose more than $10K. But the other 5%? It says nothing about that. It tells you where the bad neighborhood starts — not how dangerous it gets inside.
Think of It This Way
VaR is like a weather forecast that says "5% chance of rain." Useful, but it doesn't tell you whether it'll drizzle or flood. You know rain is unlikely, but when it comes, you have no idea how bad. That's exactly why VaR alone isn't enough — you need Expected Shortfall to understand what happens in the tail.
1. Three Ways to Calculate VaR
2. Same Data, Different Answers
[Figure: VaR Estimates by Method at Different Confidence Levels]
3. VaR for Prop Firm Risk
4. Where VaR Falls Short
5. The Tail Risk Problem — Visualized
[Figure: Return Distribution with VaR Threshold]
Key Formulas
Parametric VaR (Normal)
VaR_α = -(μ + z_α · σ)
VaR assuming normal returns. μ is the mean return, σ is the standard deviation, and z_α is the z-score for confidence level α (e.g., -1.645 for 95%). Fast to compute, but it underestimates tail risk because markets aren't normal.
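A quick numeric sketch of this formula; the mean and volatility below are illustrative assumptions, not figures from the lesson:

from scipy import stats

mu, sigma = 0.0005, 0.012            # assumed: 0.05% mean daily return, 1.2% daily volatility
confidence = 0.95
z = stats.norm.ppf(1 - confidence)   # z-score for the 5% left tail, about -1.645
parametric_var = -(mu + z * sigma)   # about 0.0192, i.e. roughly a 1.9% daily loss
print(f"95% parametric VaR: {parametric_var:.2%}")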
Historical VaR
VaR_α = -(the (1-α) percentile of historical returns)
No distribution assumptions: just take the (1-α) percentile of the empirical returns and flip the sign. The 5th percentile of 1,000 days of returns gives you the 95% VaR.
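The same idea as one line of NumPy; the simulated return series here is only a stand-in so the snippet runs on its own:

import numpy as np

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=1000) * 0.012   # simulated fat-tailed daily returns
confidence = 0.95
historical_var = -np.percentile(returns, (1 - confidence) * 100)   # 5th percentile, sign flipped
print(f"95% historical VaR: {historical_var:.2%}")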
Hands-On Code
Three VaR Methods Compared
import numpy as np
from scipy import stats

def compute_var(returns, confidence=0.95):
    """Compare parametric, historical, and Monte Carlo VaR."""
    alpha = 1 - confidence

    # Parametric VaR (normal assumption)
    mu, sigma = returns.mean(), returns.std()
    z = stats.norm.ppf(alpha)
    parametric_var = -(mu + z * sigma)

    # Historical VaR (empirical percentile, no distribution assumption)
    historical_var = -np.percentile(returns, alpha * 100)

    # Monte Carlo VaR (Student-t draws for fatter tails than the normal)
    df = 5
    sims = stats.t.rvs(df, loc=mu, scale=sigma, size=10000)
    mc_var = -np.percentile(sims, alpha * 100)

    print(f"Parametric VaR ({confidence:.0%}): {parametric_var:.2%}")
    print(f"Historical VaR ({confidence:.0%}): {historical_var:.2%}")
    print(f"Monte Carlo VaR ({confidence:.0%}): {mc_var:.2%}")
    return parametric_var, historical_var, mc_var

Parametric VaR typically underestimates risk because it assumes normality. Monte Carlo with fat tails gives the most realistic estimates. Always compare methods — if they disagree significantly, dig into why.
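A minimal way to exercise compute_var; the Student-t return series below is a simulated stand-in for real daily returns, not data from the lesson:

import numpy as np

rng = np.random.default_rng(42)
daily_returns = rng.standard_t(df=4, size=2000) * 0.01   # fat-tailed daily returns, ~1% scale

compute_var(daily_returns, confidence=0.95)
compute_var(daily_returns, confidence=0.99)

With fat-tailed data like this, the gap between the parametric estimate and the other two usually widens as you push confidence from 95% to 99%, because the normal assumption misses more of the tail the farther out you look.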
Knowledge Check
Q1. 95% VaR of $10K means:
Q2. Why would a production system prefer Monte Carlo VaR over parametric?
Assignment
Compute all three VaR estimates (parametric, historical, Monte Carlo) for a set of daily returns. Plot the return distribution and mark each VaR level. Notice how parametric VaR sits closer to zero — that's the gap fat tails create.
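One possible starting point, reusing compute_var from the Hands-On Code section; the simulated returns and plot styling are assumptions, not a prescribed solution:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
returns = rng.standard_t(df=5, size=2000) * 0.012   # simulated fat-tailed daily returns

p_var, h_var, mc_var = compute_var(returns, confidence=0.95)

plt.hist(returns, bins=80, density=True, alpha=0.6)
for (label, v), color in zip(
        [("Parametric", p_var), ("Historical", h_var), ("Monte Carlo", mc_var)],
        ["C1", "C2", "C3"]):
    # VaR is a loss magnitude, so the threshold sits at -v on the return axis
    plt.axvline(-v, color=color, linestyle="--", label=f"{label} VaR ({v:.2%})")
plt.legend()
plt.xlabel("Daily return")
plt.title("Return distribution with 95% VaR thresholds")
plt.show()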