IFRS 13.9 defines fair value as: “The price that would be received to sell an asset or paid to transfer a liability in an orderly transaction between market participants at the measurement date”. The connection of fair value with the concept of “exit price” is clarified in IFRS 13.24: “Fair value is the price that would be received to sell an asset or paid to transfer a liability in an orderly transaction in the principal (or most advantageous) market at the measurement date under current market conditions (ie an exit price) regardless of whether that price is directly observable or estimated using another valuation technique.”
According to IFRS 13.11, a fair value measurement should take into consideration “the characteristics of the asset or liability if market participants would take those characteristics into account when pricing the asset or liability at the measurement date”. IFRS 13.56 makes clear that this includes the net exposure to credit risk arising from the asset or liability: “The entity shall include the effect of the entity’s net exposure to the credit risk of that counterparty or the counterparty’s net exposure to the credit risk of the entity in the fair value measurement when market participants would take into account any existing arrangements that mitigate credit risk exposure in the event of default”.
IFRS 13.42 explicitly refers to the necessary inclusion of a Debit Value Adjustment, stating that “The fair value of a liability reflects the effect of non‐performance risk. Non‐performance risk includes, but may not be limited to, an entity’s own credit risk”.
The principles-based approach of IFRS 13 does not prescribe a methodology for making credit adjustments to transactions or quoted prices. Rather, IFRS 13.61 states: “An entity shall use valuation techniques that are appropriate in the circumstances and for which sufficient data are available to measure fair value, maximising the use of relevant observable inputs and minimising the use of unobservable inputs.” IFRS 13.62 underscores that the valuation technique should “estimate the price at which an orderly transaction to sell the asset or to transfer the liability would take place between market participants at the measurement date under current market conditions”.
Model-based fair value measurements should take into account all risk factors that market participants would consider, including credit risk. To reflect the credit risk of the counterparty in an Over-the-Counter (OTC) derivative transaction, an adjustment to its valuation should be applied. The correction of fair value requires the recognition of the market value of both the counterparty’s risk, in the form of a Credit Value Adjustment (CVA), and the company’s own risk, in the form of a Debit Value Adjustment (DVA). The valuation should be informed by current observable market data, and the resulting price should approximate as far as possible the realizable value in an efficient and orderly market.
In the case of over-the-counter (OTC) derivatives, valuations should incorporate counterparty risk, defined as the risk that the counterparty to a financial derivative contract will default prior to the expiration of the contract and will therefore fail to honour its contractual payment obligations. Only these privately negotiated contracts are subject to counterparty risk. Exchange-traded derivatives and derivatives cleared by central clearing counterparties are not affected by it, because the exchange or clearing house guarantees the cash flows promised by the derivative to the counterparties. In line with market practice, the valuation methodology should take into account two features that set counterparty risk apart from more traditional forms of credit risk: 1) the future exposure is uncertain, and 2) the loss at default is contingent upon the contract having a positive value at that time. The modelling process should also take into consideration the net effect of exposures arising from positions with the same counterparty.
This note demonstrates a technique to calculate a Credit Valuation Adjustment (CVA) for a portfolio of Interest Rate Swaps (IRS). The CVA is computed to reflect the present value of the credit risk related to the instrument until expiry. The expected (positive) exposure (EE) is obtained by simulating the price process of the underlying reference interest rate and the mark-to-market (MTM) valuation of the swap. A Hull-White One Factor model calibrated to the swaptions market is employed in the Monte Carlo simulation for this purpose. The probability of default (PD) of most major counterparties can be derived from the Credit Default Swap (CDS) market. In summary, the CVA is given by the risk-neutral expectation of the discounted loss, for which the general formula is:

$$CVA = (1 - R)\,\mathbb{E}^{Q}\!\left[\int_{0}^{T} D(t)\,E(t)\,dPD(t)\right] \approx (1 - R)\sum_{i=1}^{n} EE^{*}(t_i)\,\big[PD(t_i) - PD(t_{i-1})\big]$$

Where:
$R$ is the expected recovery rate of the counterparty
$D(t)$ is the discount factor for time $t$
$E(t)$ is the (positive) exposure at time $t$, i.e. $\max(V(t), 0)$ where $V(t)$ is the netted portfolio value
$EE^{*}(t_i)$ is the discounted expected positive exposure at grid date $t_i$
$PD(t)$ is the cumulative risk-neutral probability that the counterparty defaults by time $t$
$T$ is the maturity of the longest transaction in the netting set
As noted in Section 2, a Debit Value Adjustment must also be calculated. The process is similar, though the exposure used for that calculation is the expected negative exposure (the exposure from the counterparty's perspective) and the probability of default is based on the company's own credit risk.
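To make the discretized formula concrete before turning to the full simulation, the following minimal sketch evaluates it on hypothetical inputs (a flat discounted expected exposure profile, a flat 2% hazard rate and a 40% recovery assumption, none of which come from this note's later calculations):

import numpy as np

# Hypothetical inputs, purely to illustrate the discretized CVA formula above
toy_recovery = 0.40                              # assumed recovery rate R
toy_t = np.linspace(0.0, 5.0, 21)                # quarterly grid out to 5 years
toy_dEE = np.full(len(toy_t) - 1, 10_000.0)      # discounted expected exposure per interval (toy value)
toy_hazard = 0.02                                # assumed flat hazard rate of 2% per year

toy_PD = 1.0 - np.exp(-toy_hazard * toy_t)       # cumulative default probability PD(t)
toy_dPD = toy_PD[1:] - toy_PD[:-1]               # marginal default probability per interval

toy_CVA = (1.0 - toy_recovery) * np.sum(toy_dEE * toy_dPD)
print("Toy CVA: %.2f" % toy_CVA)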
The price process of a fixed-for-floating interest rate swap is modelled with $r(t)$, the instantaneous short rate at time $t$. More precisely, in this instance, the price path of $r(t)$ is simulated with the Hull-White One Factor model [1], which can be characterised as an extension of the Vasicek model [2] that provides an exact fit to the initial term structure and has a time-dependent reversion level [3]. The Vasicek model assumes that the short rate is normally distributed with a mean-reverting drift and constant volatility. The risk-neutral process for $r$ is:

$$dr = \kappa(\theta - r)\,dt + \sigma\,dW$$

Where $\kappa$, $\theta$ and $\sigma$ are constants and $dW$ is the usual Wiener process. $\theta$ is the long-run value of the short-term interest rate and $\kappa$ governs the speed of mean reversion: the short rate $r$ is pulled towards the level $\theta$ at rate $\kappa$.
In the Vasicek model the volatility of rates, $\sigma$, does not depend on the level of $r$. The Cox, Ingersoll and Ross (CIR) model instead proposes that the standard deviation of the change in rates is proportional to $\sqrt{r}$: the short rate still has a mean-reverting drift, but the volatility is a function of the square root of the rate level:

$$dr = \kappa(\theta - r)\,dt + \sigma\sqrt{r}\,dW$$
The so-called "Equilibrium" models presented above generate a predicted term structure based on assumptions about economic variables. However, the model output is not consistent with today's term structure of interest rates. "No-Arbitrage" models seek to remedy this by incorporating today's term structure of interest rates as an input. Hull and White proposed:

$$dr = \left[\theta(t) - a\,r\right]dt + \sigma\,dW$$

Or, equivalently:

$$dr = a\left[\frac{\theta(t)}{a} - r\right]dt + \sigma\,dW$$
Where the time-dependent function $\theta(t)$ is calculated from the initial term structure, i.e.:

$$\theta(t) = F_t(0,t) + a\,F(0,t) + \frac{\sigma^2}{2a}\left(1 - e^{-2at}\right)$$

Where $F(0,t)$ denotes the instantaneous forward rate observed in the market at time 0 for a forward loan from time $t$ to the next instant, and $F_t(0,t)$ is its partial derivative with respect to $t$.
The first term is the slope of the initial forward curve.
The second term is proportional to the initial instantaneous forward rate.
The third term is the convexity adjustment.
The drift of $r$ therefore is:

$$F_t(0,t) + a\left[F(0,t) - r\right] + \frac{\sigma^2}{2a}\left(1 - e^{-2at}\right)$$

The first and third terms ensure that changes in $r$ follow the shape of the forward curve. The second term ensures that if $r$ deviates from $F(0,t)$ it is pulled back towards it by the parameter $a$, the mean reversion rate.
A crude approximate discretization of the continuous-time interest rate process is:

$$r_{t+\Delta t} = r_t + \left[\theta(t) - a\,r_t\right]\Delta t + \sigma\sqrt{\Delta t}\,\epsilon$$

Where:
$\Delta t$ is a discrete time step
$\epsilon$ is a standard normal random variable
However, due to the mean reversion, the variance of the change in the short rate over the interval is not $\sigma^2 \Delta t$. Rather, it is:

$$\frac{\sigma^2}{2a}\left(1 - e^{-2a\Delta t}\right)$$

Which leads to the following discretization of the interest rate process:

$$r_{t+\Delta t} = r_t + \left[\theta(t) - a\,r_t\right]\Delta t + \sigma\sqrt{\frac{1 - e^{-2a\Delta t}}{2a}}\,\epsilon$$
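As a quick illustration of the two discretizations above, the sketch below simulates a mean-reverting short rate with both the crude $\sqrt{\Delta t}$ scaling and the exact-variance scaling. The parameters are arbitrary and $\theta(t)$ is held constant, so this is not the calibrated model used later in the note; with monthly steps and mild mean reversion the two schemes give very similar results, as expected.

import numpy as np

# Arbitrary, uncalibrated parameters for illustration only
mean_rev, vol = 0.1, 0.01
theta_const = mean_rev * 0.03            # constant theta(t), so the reversion level theta/a is 3%
r0, dt_step, n_steps, n_paths = 0.03, 1.0 / 12.0, 60, 1000

rng = np.random.default_rng(42)
shocks = rng.standard_normal((n_paths, n_steps))

def simulate_paths(vol_scale):
    # Drift step from dr = [theta(t) - a*r] dt plus a Gaussian shock of the chosen size
    r = np.full(n_paths, r0)
    for i in range(n_steps):
        r = r + (theta_const - mean_rev * r) * dt_step + vol_scale * shocks[:, i]
    return r

crude = simulate_paths(vol * np.sqrt(dt_step))                                                 # sqrt(dt) scaling
exact = simulate_paths(vol * np.sqrt((1 - np.exp(-2 * mean_rev * dt_step)) / (2 * mean_rev)))  # exact variance
print("terminal std dev, crude step:          %.5f" % crude.std())
print("terminal std dev, exact-variance step: %.5f" % exact.std())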
The Hull-White one-factor model is specified using the zero curve and the mean reversion ($a$) and volatility ($\sigma$) parameters. It is a calibrated model, that is, its parameters have values that are consistent with market observations. The zero curve alone is only sufficient to determine the $\theta(t)$ function. Calibrating the Hull-White short rate model further requires determining the short rate volatility and mean reversion parameters from market data on actively traded swaptions. The calibration routine finds the parameters that minimise the difference between the model price predictions and the prices observed in the swaption market. In the market, quotations of European swaption prices are available as implied Black volatilities, so the calibration function converts Black volatility to price before calculating the error metric.
The calibration routine applies a weighted L1-norm loss function: the sum of weighted absolute differences between each target value and the corresponding estimated value. The (positive) weights are related to the estimated uncertainties in the quotes; the uncertainty is inversely proportional to the weight that the instrument has in the calibration. The larger the weight (or the smaller the uncertainty), the closer the prediction must be to the market data. The goal is to correct for pricing inefficiency attributable to the illiquidity of the swaptions market. The weighted L1-norm error metric is defined as follows (a small illustrative implementation is given after the definitions below):

$$E(p) = \sum_{i=1}^{N} w_i \left| y_i - \hat{y}_i(p) \right|$$
Where:
$N$ is the total number of calibration instruments
$y_i$ is the market observation of the $i$-th calibration instrument, with weight $w_i$, and
$\hat{y}_i(p)$ is the corresponding model prediction with the parameter values given by $p$.
The weights are related to the data uncertainties $u_i$ via $w_i = 1/u_i$.
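A minimal sketch of this error metric, assuming that the market quotes, model predictions and quote uncertainties are already available as arrays (all numbers below are purely illustrative):

import numpy as np

def weighted_l1_error(market, model, uncertainties):
    # Weighted L1 norm: sum of weighted absolute differences, with each weight
    # taken as the reciprocal of the corresponding quote uncertainty
    weights = 1.0 / np.asarray(uncertainties)
    return np.sum(weights * np.abs(np.asarray(market) - np.asarray(model)))

# Purely illustrative quotes and model values (e.g. swaption vols)
market_quotes = [0.1148, 0.1108, 0.1070, 0.1021, 0.1000]
model_values  = [0.1150, 0.1100, 0.1075, 0.1015, 0.1005]
uncertainty   = [0.0010, 0.0010, 0.0015, 0.0015, 0.0020]

print("Weighted L1 error: %.4f" % weighted_l1_error(market_quotes, model_values, uncertainty))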
This project implements the Levenberg-Marquardt minimization algorithm, an iterative method that navigates the parameter space to find the minimum of the error metric. Each iteration involves calculating the partial derivatives of the error metric with respect to the model parameters; a schematic version of one such update is sketched below.
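The following is a minimal, self-contained sketch of the Levenberg-Marquardt mechanics (finite-difference partial derivatives, damped normal equations, adaptive damping) applied to a toy curve-fitting problem. It is illustrative only; QuantLib's ql.LevenbergMarquardt, used later for the swaption calibration, performs the equivalent iterations internally.

import numpy as np

def levenberg_marquardt(residuals, p, n_iter=50, lam=1e-3, eps=1e-7):
    p = np.asarray(p, dtype=float)
    for _ in range(n_iter):
        r = residuals(p)
        # Finite-difference Jacobian: partial derivative of each residual
        # with respect to each parameter
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            bumped = p.copy()
            bumped[j] += eps
            J[:, j] = (residuals(bumped) - r) / eps
        # Damped normal equations: (J'J + lam*I) step = -J'r
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        if np.sum(residuals(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5   # accept the step and reduce damping
        else:
            lam *= 2.0                     # reject the step and increase damping
    return p

# Toy usage: recover a=2.0 and b=0.7 from y = a*exp(-b*x) (hypothetical data)
xs = np.linspace(0.0, 5.0, 20)
ys = 2.0 * np.exp(-0.7 * xs)
fitted = levenberg_marquardt(lambda p: p[0] * np.exp(-p[1] * xs) - ys, [1.0, 1.0])
print(fitted)   # approximately [2.0, 0.7]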
The average hazard rate (default intensity) per year, $\bar{\lambda}$, can be derived from the CDS market, where $s$ is the CDS price expressed as a percentage spread and $R$ is the expected recovery rate:

$$\bar{\lambda} = \frac{s}{1 - R}$$
The hazard rate connotes an instantaneous rate of failure. As such, it can be used in the exponential distribution to compute the cumulative probability of default and the cumulative probability of survival over a given time period:

$$PS(t) = e^{-\bar{\lambda} t} \qquad\qquad PD(t) = 1 - PS(t) = 1 - e^{-\bar{\lambda} t}$$
From this cumulative distribution function we obtain a probability density function, which is termed the default density. By the Fundamental Theorem of Calculus, the CDF of a continuous random variable may be expressed in terms of its PDF:

$$PD(t) = \int_{0}^{t} f(u)\,du$$

Wherever the derivative exists, the probability density function (PDF) is given by:

$$f(t) = \frac{d\,PD(t)}{dt} = \bar{\lambda}\,e^{-\bar{\lambda} t}$$
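A short sketch of these relationships, assuming an illustrative CDS spread of 120 basis points and a 40% expected recovery (figures chosen only for the example):

import numpy as np

# Illustrative inputs: 120 bp CDS spread, 40% expected recovery
cds_spread = 0.0120
expected_recovery = 0.40

hazard_rate = cds_spread / (1.0 - expected_recovery)   # average default intensity per year
years = np.linspace(0.0, 10.0, 11)                     # annual grid out to 10 years

survival = np.exp(-hazard_rate * years)                # PS(t) = exp(-lambda*t)
default_prob = 1.0 - survival                          # PD(t) = 1 - exp(-lambda*t)
default_density = hazard_rate * survival               # f(t) = lambda*exp(-lambda*t)

for yr, pd_, dens in zip(years, default_prob, default_density):
    print("t = %4.1f   PD = %.4f   default density = %.4f" % (yr, pd_, dens))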
[1] J. Hull and A. White, “Pricing Interest Rate Derivative Securities,” Review of Financial Studies, 3, 4 (1990): 573–92.
[2] O. A. Vasicek, “An Equilibrium Characterization of the Term Structure,” Journal of Financial Economics, 5 (1977): 177–88.
[3] Alternatively, the HW model can be characterized as an extension of the Ho–Lee model with a mean-reverting short rate. See T. S. Y. Ho and S.-B. Lee, “Term Structure Movements and Pricing Interest Rate Contingent Claims,” Journal of Finance, 41 (December 1986): 1011–29.
# Import the used libraries
import numpy as np
import matplotlib.pyplot as plt
import QuantLib as ql
%matplotlib inline
# Specify valuation date
today = ql.Date(7,4,2015)
ql.Settings.instance().setEvaluationDate(today)
# Observable rate data
# For simplicity we will use a flat forward curve.
# In a single-curve world, the same yield curve is used both for discounting and for projecting the forward rates
# Constant Rate - assumed flat curve
rate = ql.SimpleQuote(0.03)
rate_handle = ql.QuoteHandle(rate)
# Day Count
dc = ql.Actual365Fixed()
# Build discount curve for the risk-free rate with extrapolation enabled
yts = ql.FlatForward(today, rate_handle, dc)
yts.enableExtrapolation()
# For the MC process, relink hyts (handle yield term structure) to the simulated yield curve
hyts = ql.RelinkableYieldTermStructureHandle(yts)
t0_curve = ql.YieldTermStructureHandle(yts)
# Create a Euribor6M instance as the reference rate and pass it the forecast curve
euribor6m = ql.Euribor6M(hyts)
# The function makeSwap creates a new QuantLib plain vanilla swap.
# We use it to set up a netting set with two swaps
def makeSwap(start, maturity, nominal, fixedRate, index, typ=ql.VanillaSwap.Payer):
"""
creates a plain vanilla swap with fixedLegTenor 1Y
parameter:
start (ql.Date) : Start Date
maturity (ql.Period) : SwapTenor
nominal (float) : Nominal
fixedRate (float) : rate paid on fixed leg
index (ql.IborIndex) : Index
return: tuple(ql.Swap, list<Dates>) Swap and all fixing dates
"""
end = ql.TARGET().advance(start, maturity)
fixedLegTenor = ql.Period("1y")
fixedLegBDC = ql.ModifiedFollowing
    fixedLegDC = ql.Thirty360(ql.Thirty360.BondBasis) # day count convention for the fixed leg
spread = 0.0 # spread over the index rate for floating rate
fixedSchedule = ql.Schedule(start,
end,
fixedLegTenor,
                                index.fixingCalendar(), # calendar for date adjustments (here the Euribor 6M fixing calendar)
                                fixedLegBDC,            # business day convention
                                fixedLegBDC,            # termination date business day convention
ql.DateGeneration.Backward,
False)
floatSchedule = ql.Schedule(start,
end,
index.tenor(),
index.fixingCalendar(),
index.businessDayConvention(),
index.businessDayConvention(),
ql.DateGeneration.Backward,
False)
    # Instantiate swap object
swap = ql.VanillaSwap(typ,
nominal,
fixedSchedule,
fixedRate,
fixedLegDC,
floatSchedule,
index,
spread,
index.dayCounter())
return swap, [index.fixingDate(x) for x in floatSchedule][:-1]
# Define characteristics of deals in swap portfolio:
# makeSwap(Initiation, Expiry, Nominal Value, Fixed Rate, Reference Rate, Vanilla Swap Side - Payer or Receiver)
# 1. Payer swap starting in 2 days and ending in 5 years, with a nominal value of EUR 1,000,000,
#    a fixed rate of 0.03 and euribor6m as the floating reference rate
# 2. Receiver swap starting in 2 days and ending in 4 years, with a nominal value of EUR 500,000,
#    a fixed rate of 0.03 and the floating rate indexed to euribor6m
portfolio = [makeSwap(today + ql.Period("2d"),
ql.Period("5Y"),
1e6,
0.03,
euribor6m),
makeSwap(today + ql.Period("2d"),
ql.Period("4Y"),
5e5,
0.03,
euribor6m,
ql.VanillaSwap.Receiver),
]
# Before we can calculate the NPV we need a pricing engine.
# We use the DiscountingSwapEngine:
# 1) it discounts all future payments to the evaluation date, and
# 2) it calculates the difference between the present values of the two legs.
engine = ql.DiscountingSwapEngine(hyts)
# Calculate the NPV of both deals in the portfolio
for deal, fixingDates in portfolio:
deal.setPricingEngine(engine)
deal.NPV()
print (" NPV Values")
print ("-"*12)
for deal, fixingDates in portfolio:
print (" %8.2f"% deal.NPV())
# Store NPVs in a list and sum to compute the portfolio NPV.
list_val = []
for deal, fixingDates in portfolio:
list_val.append(deal.NPV())
list_val
port_npv= sum(list_val)
print (" Portfolio NPV Value")
print ("-"*20)
print (" %12.2f"% port_npv)
# Set up Hull-White Calibration Process
from collections import namedtuple
import math
# Specify date and interest rate parameters for calibration process
settlement= ql.Date(7,10,2019);
ql.Settings.instance().evaluationDate = today;
term_structure = ql.YieldTermStructureHandle(yts)
index = ql.Euribor6M(term_structure)
# Collect calibration data in a named tuple
CalibrationData = namedtuple("CalibrationData",
"start, length, volatility")
data = [CalibrationData(1, 5, 0.1148),
CalibrationData(2, 4, 0.1108),
CalibrationData(3, 3, 0.1070),
CalibrationData(4, 2, 0.1021),
CalibrationData(5, 1, 0.1000 )]
# Define function which uses elements of named tuple as inputs
# together with ref rate index, term structure and pricing engine
def create_swaption_helpers(data, index, term_structure, engine):
swaptions = []
fixed_leg_tenor = ql.Period(1, ql.Years)
fixed_leg_daycounter = ql.Actual360()
floating_leg_daycounter = ql.Actual360()
for d in data:
vol_handle = ql.QuoteHandle(ql.SimpleQuote(d.volatility))
helper = ql.SwaptionHelper(ql.Period(d.start, ql.Years), # maturity
ql.Period(d.length, ql.Years), # length
vol_handle, # vol
index,
fixed_leg_tenor,
fixed_leg_daycounter,
floating_leg_daycounter,
term_structure
)
helper.setPricingEngine(engine)
swaptions.append(helper)
return swaptions
# Run Hull-White Calibration Process
model = ql.HullWhite(term_structure);
engine = ql.JamshidianSwaptionEngine(model)
swaptions = create_swaption_helpers(data, index, term_structure, engine)
optimization_method = ql.LevenbergMarquardt(1.0e-8,1.0e-8,1.0e-8)
end_criteria = ql.EndCriteria(10000, 100, 1e-6, 1e-8, 1e-8)
model.calibrate(swaptions, optimization_method, end_criteria)
a, sigma = model.params()
print ("Alpha : %15.4f" % (a))
print ("Sigma : %15.4f" % (sigma))
# Define function to create calibration report to review error terms
def calibration_report(swaptions, data):
print ("-"*82)
print("%15s %15s %15s %15s %15s" % ("Model Price", "Market Price", "Implied Vol", "Market Vol", "Rel Error"))
print ("-"*82)
cum_err = 0.0
for i, s in enumerate(swaptions):
model_price = s.modelValue()
market_vol = data[i].volatility
black_price = s.blackPrice(market_vol)
rel_error = model_price/black_price - 1.0
implied_vol = s.impliedVolatility(model_price,
1e-5, 50, 0.0, 0.50)
rel_error2 = implied_vol/market_vol-1.0
cum_err += rel_error2*rel_error2
print("%15.5f %15.5f %15.5f %15.5f %15.5f" % (model_price, black_price, implied_vol, market_vol, rel_error))
print ("-"*82)
print ("Cumulative Error : %15.5f" % math.sqrt(cum_err))
# Run calibration report
calibration_report(swaptions, data)
# With these calibrated parameters set up the stochastic modeling process
volas = [ql.QuoteHandle(ql.SimpleQuote(sigma)),
ql.QuoteHandle(ql.SimpleQuote(sigma))]
meanRev = [ql.QuoteHandle(ql.SimpleQuote(a))]
model = ql.Gsr(t0_curve, [today+100], volas, meanRev)
process = model.stateProcess()
# Next we define the evaluation grid: the dates on which we simulate and revalue the swap NPVs
date_grid = [today + ql.Period(i,ql.Months) for i in range(0,12*6)] # monthly grid covering six years
for deal in portfolio:
date_grid += deal[1]
date_grid = np.unique(np.sort(date_grid)) # find and sort the unique dates in the grid
# NumPy’s vectorize converts a function into one that applies element-wise to an array.
# Here we convert each grid date into a year fraction using the QuantLib day counter
time_grid = np.vectorize(lambda x: ql.ActualActual().yearFraction(today, x))(date_grid)
dt = time_grid[1:] - time_grid[:-1] # defines the time interval
# Characteristics of random process
seed = 1
urng = ql.MersenneTwisterUniformRng(seed) # random number generator
usrg = ql.MersenneTwisterUniformRsg(len(time_grid)-1,urng) # sequence generator: number of random numbers per draw and the underlying rng
generator = ql.InvCumulativeMersenneTwisterGaussianRsg(usrg) # specify gaussian sequence generator
# Generate paths
N = 1500
x = np.zeros((N, len(time_grid)))
y = np.zeros((N, len(time_grid)))
pillars = np.array([0.0, 0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
zero_bonds = np.zeros((N, len(time_grid), 12))
for j in range(12):
zero_bonds[:, 0, j] = model.zerobond(pillars[j],
0,
0)
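# Path generation: for each path we draw a Gaussian sequence, evolve the state
# variable x step by step using the conditional mean and standard deviation of the
# process, normalise it to y, and store zero-bond prices at the pillar tenors so
# that a discount curve can be rebuilt at every grid date.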
for n in range(0,N):
dWs = generator.nextSequence().value()
for i in range(1, len(time_grid)):
t0 = time_grid[i-1]
t1 = time_grid[i]
x[n,i] = process.expectation(t0,
x[n,i-1],
dt[i-1]) + dWs[i-1] * process.stdDeviation(t0,
x[n,i-1],
dt[i-1])
y[n,i] = (x[n,i] - process.expectation(0,0,t1)) / process.stdDeviation(0,0,t1)
for j in range(12):
zero_bonds[n, i, j] = model.zerobond(t1+pillars[j],
t1,
y[n, i])
# plot the interest rate paths
for i in range(0,N):
plt.title("Simulated interest rate paths")
plt.xlabel("Time in years")
plt.ylabel("Floating Rate")
plt.plot(time_grid, x[i,:])
# Generate the discount factors
discount_factors = np.vectorize(t0_curve.discount)(time_grid)
#Swap pricing under each path
npv_cube = np.zeros((N,len(date_grid), len(portfolio)))
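# For each path and each grid date we: (1) move the evaluation date forward,
# (2) build a discount curve from the simulated zero-bond prices and relink the
# forecasting/discounting handle to it, (3) add the simulated Euribor fixing for
# that date, and (4) reprice every deal in the portfolio and store its NPV.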
for p in range(0,N):
for t in range(0, len(date_grid)):
date = date_grid[t]
ql.Settings.instance().setEvaluationDate(date)
ycDates = [date,
date + ql.Period(6, ql.Months)]
ycDates += [date + ql.Period(i,ql.Years) for i in range(1,11)]
yc = ql.DiscountCurve(ycDates,
zero_bonds[p, t, :],
ql.Actual365Fixed())
yc.enableExtrapolation()
hyts.linkTo(yc)
if euribor6m.isValidFixingDate(date):
fixing = euribor6m.fixing(date)
euribor6m.addFixing(date, fixing)
for i in range(len(portfolio)):
npv_cube[p, t, i] = portfolio[i][0].NPV()
ql.IndexManager.instance().clearHistories()
ql.Settings.instance().setEvaluationDate(today)
hyts.linkTo(yts)
# Discounted NPV's
discounted_cube = np.zeros(npv_cube.shape)
for i in range(npv_cube.shape[2]):
discounted_cube[:,:,i] = npv_cube[:,:,i] * discount_factors
# Portfolio NPV and Discounted Portfolio NPV after netting
portfolio_npv = np.sum(npv_cube,axis=2)
discounted_npv = np.sum(discounted_cube, axis=2)
# Visualize the first 1000 NPV paths
n_0 = 0
n = 1000
f, (ax1, ax2) = plt.subplots(2, 1, figsize=(12,10), sharey=True)
for i in range(n_0,n):
ax1.plot(time_grid, portfolio_npv[i,:])
for i in range(n_0,n):
ax2.plot(time_grid, discounted_npv[i,:])
ax1.set_xlabel("Time in years")
ax1.set_ylabel("NPV in time t Euros")
ax1.set_title("Simulated npv paths")
ax2.set_xlabel("Time in years")
ax2.set_ylabel("NPV in time 0 Euros")
ax2.set_title("Simulated discounted npv paths")
# Calculate and visualize the positive exposure and discounted exposure paths
E = portfolio_npv.copy()
dE = discounted_npv.copy()
E[E<0] = 0
dE[dE<0] = 0
# Plot the first 1000 exposure paths
n = 1000
f, (ax1, ax2) = plt.subplots(2, 1, figsize=(12,10))
for i in range(0,n):
ax1.plot(time_grid, E[i,:])
for i in range(0,n):
ax2.plot(time_grid, dE[i,:])
ax1.set_xlabel("Time in years")
ax1.set_ylabel("Exposure")
ax1.set_ylim([-10000,40000])
ax1.set_title("Simulated exposure paths")
ax2.set_xlabel("Time in years")
ax2.set_ylabel("Discounted Exposure")
ax2.set_ylim([-10000,40000])
ax2.set_title("Simulated discounted exposure paths")
# Calculate the expected (average) exposure path
E = portfolio_npv.copy()
E[E<0]=0
EE = np.sum(E, axis=0)/N
# Calculate the discounted expected (average) exposure path
dE = discounted_npv.copy()
dE[dE<0] = 0
dEE = np.sum(dE, axis=0)/N
# plot the expected (average) exposure path
n = 30
f, (ax1, ax2) = plt.subplots(2, 1, figsize=(8,10))
ax1.plot(time_grid, EE)
ax2.plot(time_grid, dEE)
ax1.set_xlabel("Time in years")
ax1.set_ylabel("Exposure")
ax1.set_title("Expected exposure")
ax2.set_xlabel("Time in years")
ax2.set_ylabel("Discounted Exposure")
ax2.set_title("Discounted expected exposure")
# For risk management purposes, calculate the PFE curve (@ 95% quantile)
PFE_curve = np.apply_along_axis(lambda x: np.sort(x)[int(N*0.95)],0, E)
plt.figure(figsize=(8,5))
plt.plot(time_grid,PFE_curve)
plt.xlabel("Time in years")
plt.ylabel("Potential Future Exposure @ 95% quantile")
plt.ylim([-2000,15000])
plt.title("PFE")
# Set up the probability of default curve: hazard rates increasing by 2% per year out to 10 years
pd_dates = [today + ql.Period(i, ql.Years) for i in range(11)]
hzrates = [0.02 * i for i in range(11)]
pd_curve = ql.HazardRateCurve(pd_dates,hzrates,ql.Actual365Fixed())
pd_curve.enableExtrapolation()
# Plot curve
# Calculate default probs on grid *times*
times = np.linspace(0,30,5000)
dp = np.vectorize(pd_curve.defaultProbability)(times)
sp = np.vectorize(pd_curve.survivalProbability)(times)
dd = np.vectorize(pd_curve.defaultDensity)(times)
hr = np.vectorize(pd_curve.hazardRate)(times)
f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(10,10))
ax1.plot(times, dp)
ax2.plot(times, sp)
ax3.plot(times, dd)
ax4.plot(times, hr)
ax1.set_xlabel("Time in years")
ax2.set_xlabel("Time in years")
ax3.set_xlabel("Time in years")
ax4.set_xlabel("Time in years")
ax1.set_ylabel("Probability")
ax2.set_ylabel("Probability")
ax3.set_ylabel("Density")
ax4.set_ylabel("HazardRate")
ax1.set_title("Default Probability")
ax2.set_title("Survival Probability")
ax3.set_title("Default density")
ax4.set_title("Hazard rate")
# Calculate the marginal default probabilities between consecutive grid dates
defaultProb_vec = np.vectorize(pd_curve.defaultProbability)
dPD = defaultProb_vec(time_grid[:-1], time_grid[1:])
# Calculation of the CVA
recovery = 0.4
CVA = (1-recovery) * np.sum(dEE[1:] * dPD)
print ("Credit Value Adjustment : %15.2f" % CVA)
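As noted earlier, a Debit Value Adjustment can be computed in exactly the same way from the discounted expected negative exposure and the company's own credit curve. A minimal sketch, reusing the simulated discounted_npv from above but assuming a hypothetical flat own-credit hazard rate of 2.5% and the same 40% recovery (both figures are assumptions for illustration only):

# Discounted expected negative exposure (exposure from the counterparty's perspective)
nE = discounted_npv.copy()
nE[nE > 0] = 0
dNEE = np.sum(-nE, axis=0)/N
# Assumed flat hazard rate curve for the company's own credit (illustrative 2.5%)
own_pd_curve = ql.FlatHazardRate(0, ql.TARGET(), ql.QuoteHandle(ql.SimpleQuote(0.025)), ql.Actual365Fixed())
own_dPD = np.vectorize(own_pd_curve.defaultProbability)(time_grid[:-1], time_grid[1:])
own_recovery = 0.4
DVA = (1-own_recovery) * np.sum(dNEE[1:] * own_dPD)
print ("Debit Value Adjustment  : %15.2f" % DVA)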