IFRS Technical Note

Design and Implementation of Credit Valuation Adjustment (CVA) Model for a Portfolio of OTC Derivatives under IFRS 9: Financial Instruments

Paul McAteer, MSc, MBA

pcm353@stern.nyu.edu



1. Summary of the Guidance on Valuation Principles in the Standard


IFRS 13.9 defines fair value as: “The price that would be received to sell an asset or paid to transfer a liability in an orderly transaction between market participants at the measurement date”. The connection of fair value with the concept of “exit price” is clarified in IFRS 13.24: “Fair value is the price that would be received to sell an asset or paid to transfer a liability in an orderly transaction in the principal (or most advantageous) market at the measurement date under current market conditions (ie an exit price) regardless of whether that price is directly observable or estimated using another valuation technique.”

According to IFRS 13.11, a fair value measurement should take into consideration “the characteristics of the asset or liability if market participants would take those characteristics into account when pricing the asset or liability at the measurement date”. IFRS 13.56 makes clear that this includes the net exposure to credit risk arising from the asset or liability: “The entity shall include the effect of the entity’s net exposure to the credit risk of that counterparty or the counterparty’s net exposure to the credit risk of the entity in the fair value measurement when market participants would take into account any existing arrangements that mitigate credit risk exposure in the event of default”.

IFRS 13.42 explicitly refers to the necessary inclusion of a Debit Value Adjustment, stating that “The fair value of a liability reflects the effect of non‐performance risk. Non‐performance risk includes, but may not be limited to, an entity’s own credit risk”.

The principles-based approach of IFRS 13 does not prescribe a methodology for making credit adjustments to transactions or quoted prices. Rather, IFRS 13.61 states: “An entity shall use valuation techniques that are appropriate in the circumstances and for which sufficient data is available to measure fair value, maximising the use of relevant observable inputs and minimising the use of unobservable inputs.” IFRS 13.62 underscores that the valuation technique should “estimate the price at which an orderly transaction to sell the asset or to transfer the liability would take place between market participants at the measurement date under current market conditions”.

2. Interpretation of the Guidance


Model-based fair value measurements should take into account all risk factors that market participants would consider, including credit risk. In order to reflect the credit risk of the counterparty in an Over-the-Counter (OTC) derivative transaction, an adjustment to its valuation should be applied. The correction of fair value requires recognition of the market value of both the counterparty’s credit risk, in the form of a Credit Value Adjustment (CVA), and the company’s own credit risk, in the form of a Debit Value Adjustment (DVA). The valuation should be informed by current observable market data, and the resultant price should approximate as far as possible the realizable value in an efficient and orderly market.

In the case of so-called over-the-counter (OTC) derivatives, valuations should incorporate counterparty risk, defined as the risk that the counterparty to a financial derivative contract defaults prior to the expiration of the contract and therefore fails to honour its contractual payment obligations. Only these privately negotiated OTC contracts are subject to counterparty risk. Exchange-traded derivatives and derivatives cleared through central counterparties are not affected, because the exchange or clearing house guarantees the cash flows promised by the derivative to the counterparties. In line with market practice, the valuation methodology should take into account two features that set counterparty risk apart from more traditional forms of credit risk: 1) the future exposure is uncertain, and 2) the loss on default is contingent upon the contract having a positive value at that time. The modelling process should take into consideration the net effect of exposures arising from positions with the same counterparty, as the short example below illustrates.
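As a minimal numerical sketch of the netting effect (the mark-to-market values below are hypothetical, not taken from the portfolio in Section 4), positive and negative positions with the same counterparty offset before the exposure floor is applied:

# Illustration with assumed MTM values: netting reduces exposure because
# positive and negative mark-to-market values with the same counterparty offset.
mtm_trades = [1200.0, -700.0]                            # MTM of two trades with one counterparty
gross_exposure = sum(max(v, 0.0) for v in mtm_trades)    # without netting: 1200.0
net_exposure = max(sum(mtm_trades), 0.0)                 # with netting: 500.0
print(gross_exposure, net_exposure)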

3. Model Design


This note demonstrates a technique to calculate a Credit Valuation Adjustment (CVA) for a portfolio of Interest Rate Swaps (IRS). The CVA is computed to reflect the present value of the credit risk related to the instrument until expiry. The expected (positive) exposure (EE) is obtained by simulating the price process of the underlying reference interest rate and the mark-to-market (MTM) valuation of the swap. A Hull-White one-factor model calibrated to the swaptions market is employed in the Monte Carlo simulation for this purpose. The probability of default (PD) of most major counterparties can be derived from the Credit Default Swap (CDS) market. In summary, the CVA is given by the risk-neutral expectation of the discounted loss, for which the general formula is the following:


$$CVA = (1 - R)\int_0^T EE_t \, DF_t \; dPD_t$$

Where:

$R$ = recovery rate in the event of default = 40%

$EE_t$ = expected (positive) exposure = $E[\max(MTM_t, 0)]$

$DF_t$ = discount factor = $\dfrac{1}{(1 + \text{riskless rate})^t}$

$dPD_t = \Delta PD_t = PD_{t_{i-1},\,t_i} = PD_{0,\,t_i} - PD_{0,\,t_{i-1}}$ = marginal probability of default over each interval

As noted in Section 2, a Debit Value Adjustment must also be calculated. The process is similar, though the expected exposure for that calculation is $E[\min(MTM_t, 0)]$ and the probability of default is based on the company's own credit risk.
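As a minimal sketch of how the discretized CVA formula above can be evaluated, the snippet below sums the product of expected exposure, discount factor and marginal default probability across time buckets; the arrays ee, df and dpd are hypothetical inputs, and the full QuantLib implementation appears in Section 4.

import numpy as np

def cva_from_profiles(ee, df, dpd, recovery=0.4):
    # Discretized CVA: (1 - R) * sum_i EE(t_i) * DF(t_i) * dPD(t_i)
    ee, df, dpd = map(np.asarray, (ee, df, dpd))
    return (1.0 - recovery) * np.sum(ee * df * dpd)

# Hypothetical three-bucket exposure, discount factor and marginal PD profiles
print(cva_from_profiles(ee=[1200.0, 900.0, 400.0],
                        df=[0.99, 0.97, 0.94],
                        dpd=[0.005, 0.005, 0.004]))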

3.1. Expected Exposure


The price process of a fixed-for-floating interest rate swap is modelled with $r$, the instantaneous short rate at time $t$. More precisely, in this instance, the price path of $r$ is simulated with the Hull-White one-factor model¹, which can be characterized as an extension of the Vasicek model² that provides an exact fit to the initial term structure and has a time-dependent reversion level³. The Vasicek model assumes that the short rate is normally distributed with a mean-reverting drift and constant volatility. The risk-neutral process for $r$ is:

$$dr = \kappa(\theta - r)\,dt + \sigma\,dW$$

Where $\kappa$, $\theta$, and $\sigma$ are constants and $dW$ is the usual Wiener process. $\theta$ is the long-run value of the short-term interest rate, and $\kappa$ governs the speed of mean reversion: the short rate is pulled toward the level $\theta$ at rate $\kappa$.

Whereas in the Vasicek model the volatility of rates, $\sigma$, does not depend on the rate level, the Cox, Ingersoll and Ross (CIR) model proposes that the standard deviation of the change in rates is proportional to the square root of $r$. The CIR model retains the mean-reverting drift but makes the volatility a function of the square root of the rate level:

$$dr = \kappa(\theta - r)\,dt + \sigma\sqrt{r}\,dW = \kappa(\theta - r)\,dt + \sigma r^{\gamma}\,dW, \qquad \gamma = 0.5$$

The so-called “equilibrium” models presented above generate a predicted term structure based on assumptions about economic variables. However, their output is not, in general, consistent with today's term structure of interest rates. “No-arbitrage” models seek to remedy this by incorporating today's term structure of interest rates as an input. Hull and White proposed:

$$dr = \left[\theta(t) - \alpha r_t\right]dt + \sigma\,dW$$

Or, equivalently:

$$dr = \alpha\left[\frac{\theta(t)}{\alpha} - r_t\right]dt + \sigma\,dW$$

Where the time-dependent function $\theta(t)$ is calculated from the initial term structure, i.e.:

$$\theta(t) = \frac{\partial f^M(0,t)}{\partial t} + \alpha f^M(0,t) + \frac{\sigma^2}{2\alpha}\left(1 - e^{-2\alpha t}\right)$$

Where $f^M(0,t)$ denotes the instantaneous forward rate observed in the market at time 0 for a forward loan from time $t$ to the next instant. The first term $\partial f^M(0,t)/\partial t$ is the slope of the initial forward curve. The second term $\alpha f^M(0,t)$ is the forward rate scaled by the mean-reversion rate. The third term $\frac{\sigma^2}{2\alpha}\left(1 - e^{-2\alpha t}\right)$ is a convexity adjustment.
The drift of r therefore is:

$$\theta(t) - \alpha r_t = \frac{\partial f^M(0,t)}{\partial t} + \alpha\left[f^M(0,t) - r_t\right] + \frac{\sigma^2}{2\alpha}\left(1 - e^{-2\alpha t}\right)$$

The first and third terms ensure that changes in $r$ follow the shape of the forward curve. The second term $\alpha\left[f^M(0,t) - r_t\right]$ ensures that if $r$ deviates from $f^M(0,t)$ it is pulled back toward it at rate $\alpha$; $\alpha$ is the mean-reversion rate.

A crude approximate discretization of the continuous-time interest rate process is:


$$r(t+\delta t) - r(t) = \left[\theta(t) - \alpha r_t\right]\delta t + \sigma\,\delta t^{0.5}\,\varepsilon_{t+\delta t}$$

Where:

$\delta t$ is a discrete time step

$\varepsilon_{t+\delta t} \sim N(0,1)$ is a standard normal random variable

However, due to the mean reversion, the variance of the short-rate process is not $\mathrm{var}[r(t+\delta t) - r(t)] = \sigma^2\,\delta t$. Rather it is:

$$\mathrm{var}[r(t+\delta t) - r(t)] = \frac{\sigma^2}{2\alpha}\left(1 - e^{-2\alpha\,\delta t}\right)$$

Which leads to the following discretization of the interest rate process:

$$r(t+\delta t) - r(t) = \left[\theta(t) - \alpha r_t\right]\delta t + \sigma\sqrt{\frac{1 - e^{-2\alpha\,\delta t}}{2\alpha}}\,\varepsilon_{t+\delta t}$$
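To make the discretization concrete, the sketch below simulates short-rate paths with this scheme in plain NumPy. It assumes, purely for illustration, a flat initial forward curve $f^M(0,t) = f_0$ (so that $\partial f^M/\partial t = 0$) and arbitrary parameter values; the model actually used in Section 4 is QuantLib's calibrated Hull-White/GSR process.

import numpy as np

# Illustrative parameters (assumed, not calibrated): flat forward curve f0, mean reversion alpha, vol sigma
alpha, sigma, f0 = 0.03, 0.01, 0.03
dt, n_steps, n_paths = 1.0 / 12.0, 60, 1000

rng = np.random.default_rng(42)
paths = np.empty((n_paths, n_steps + 1))
paths[:, 0] = f0
# Exact conditional standard deviation over one time step
std = sigma * np.sqrt((1.0 - np.exp(-2.0 * alpha * dt)) / (2.0 * alpha))
for i in range(1, n_steps + 1):
    t = (i - 1) * dt
    # theta(t) for a flat forward curve: alpha*f0 + sigma^2/(2*alpha)*(1 - exp(-2*alpha*t))
    theta = alpha * f0 + sigma**2 / (2.0 * alpha) * (1.0 - np.exp(-2.0 * alpha * t))
    drift = (theta - alpha * paths[:, i - 1]) * dt
    paths[:, i] = paths[:, i - 1] + drift + std * rng.standard_normal(n_paths)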

The Hull-White one-factor model is specified using the zero curve and the $\alpha$ and $\sigma$ parameters. It is a calibrated model, that is, its parameters have values that are consistent with market observations. The zero curve is only sufficient for the calibration of the $\theta(t)$ parameter. Calibrating the Hull-White short-rate model further requires determining the short-rate volatility and mean-reversion parameters from market data on actively traded swaptions. The calibration routine finds the parameters that minimize the difference between the model's price predictions and the prices observed in the swaptions market. In the market, quotations of European swaption prices are available as implied Black volatilities, so the calibration function converts Black volatility to price before calculating the error metric.

The calibration routine applies a weighted L-1 norm loss function: $\tilde{L}_1$ is the sum of weighted absolute differences between the target value $y_i$ and the estimated value $\hat{y}_i(p)$. The (positive) weights $w_i$ are related to the estimated uncertainties in the quotes: the uncertainty is inversely proportional to the weight the instrument has in the calibration, so the larger the weight (or the smaller the uncertainty), the closer the prediction must be to the market data. The goal is to correct for pricing inefficiency attributable to the illiquidity of the swaptions market. The weighted L-1 norm error metric is defined as follows (a minimal numerical sketch appears after the definitions below):


$$\tilde{L}_1 = \sum_{i=1}^{N} w_i \left| y_i - \hat{y}_i(p) \right|$$


Where:

$N$ is the total number of calibration instruments,

$y_i$ is the market observation of the $i$th calibration instrument with weight $w_i$, and

$\hat{y}_i(p)$ is the corresponding model prediction with the parameter values given by $p$.

The weights $w_i$ are related to the data uncertainties $\delta y_i$ via $w_i = 1/\delta y_i$.
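As a minimal numerical sketch of this error metric (not the optimizer's internal objective in QuantLib, which is supplied by the calibration helpers in Section 4), the weighted L-1 norm can be computed directly; the quote values and uncertainties below are hypothetical.

import numpy as np

def weighted_l1(y_market, y_model, uncertainties):
    # Weighted L-1 norm: sum_i w_i * |y_i - yhat_i(p)|, with w_i = 1 / dy_i
    y_market, y_model, dy = map(np.asarray, (y_market, y_model, uncertainties))
    weights = 1.0 / dy
    return np.sum(weights * np.abs(y_market - y_model))

# Hypothetical swaption quotes: market prices, model prices, and quote uncertainties
print(weighted_l1(y_market=[0.0062, 0.0067, 0.0058],
                  y_model=[0.0057, 0.0064, 0.0058],
                  uncertainties=[0.0002, 0.0002, 0.0004]))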

This project implements the Levenberg-Marquardt minimization algorithm, an iterative algorithm that intelligently navigates parameter space to find the minimum of the error metric. This navigation involves the calculation of partial derivatives of the error metric with respect to the model parameters.

3.2. Probability of Default


The average hazard rate (default intensity) per year, $\lambda$, is obtained from the CDS price expressed as a percentage spread $S$, where $R$ is the expected recovery rate:

$$PD(t, t+dt) \approx \lambda(t)\,dt, \qquad \lambda(t) = \frac{S(t)}{1 - R}$$

The hazard rate connotes an instantaneous rate of failure. As such, it can be used with elegance in the exponential distribution to compute the cumulative probability of default and the cumulative probability of survival over a determined time period:

$$PD(0,t) = F(t) = 1 - e^{-\lambda t}$$


From this cumulative distribution function, we obtain a probability density function, which is termed the default density. By the Fundamental Theorem of Calculus, we know that the CDF of a continuous random variable may be expressed in terms of its PDF:

$$F(t) = \int_0^t f(s)\,ds$$

Wherever the derivative exists, the probability density function (PDF) is given by:

$$f(t) = \frac{dF(t)}{dt}$$
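As a minimal sketch under these formulas, the snippet below backs out a constant hazard rate from a hypothetical flat CDS spread and evaluates cumulative, survival and marginal (bucket) default probabilities at a few horizons; the 150 bp spread and 40% recovery are illustrative assumptions.

import numpy as np

spread, recovery = 0.0150, 0.40          # hypothetical flat CDS spread (150 bp) and recovery rate
lam = spread / (1.0 - recovery)          # average hazard rate per year: lambda = S / (1 - R)

t = np.array([1.0, 3.0, 5.0])            # horizons in years
cum_pd = 1.0 - np.exp(-lam * t)          # cumulative default probability F(t)
survival = np.exp(-lam * t)              # survival probability
marginal_pd = np.diff(np.concatenate(([0.0], cum_pd)))  # dPD over each bucket
print(cum_pd, survival, marginal_pd)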



1 J. Hull and A. White, ‘‘Pricing Interest Rate Derivative Securities,’’ Review of Financial Studies, 3, 4 (1990): 573–92
2 O. A. Vasicek, ‘‘An Equilibrium Characterization of the Term Structure,’’ Journal of Financial Economics, 5 (1977): 177–88
3 Alternatively, the HW model can be characterized as an extension of the Ho–Lee model with a mean reverting short rate. See T. S. Y. Ho and S.-B. Lee, ‘‘Term Structure Movements and Pricing Interest Rate Contingent Claims,’’ Journal of Finance, 41 (December 1986): 1011–29.

4. Model Implementation

In [1]:
# Import the used libraries
import numpy as np
import matplotlib.pyplot as plt
import QuantLib as ql
%matplotlib inline
In [2]:
# Specify valuation date
today = ql.Date(7,4,2015)
ql.Settings.instance().setEvaluationDate(today)
In [3]:
# Observable rate data
# For simplicity we will use a flat forward curve.
# In a single-curve world, the same yield curve is used both for discounting and for projecting forward rates

# Constant Rate - assumed flat curve
rate = ql.SimpleQuote(0.03)
rate_handle = ql.QuoteHandle(rate)
# Day Count
dc = ql.Actual365Fixed()
# Build discount curve for the risk-free rate with extrapolation enabled
yts = ql.FlatForward(today, rate_handle, dc)
yts.enableExtrapolation()
# Create a relinkable handle (hyts); during the MC simulation it will be relinked to each simulated yield curve
hyts = ql.RelinkableYieldTermStructureHandle(yts)
t0_curve = ql.YieldTermStructureHandle(yts)
# Create a Euribor6M instance as the reference rate and pass it the forecast curve
euribor6m = ql.Euribor6M(hyts)
In [4]:
# The method makeSwap creates a new QuantLib plain vanilla swap. 
# We use this method to set up a netting set with two swaps
def makeSwap(start, maturity, nominal, fixedRate, index, typ=ql.VanillaSwap.Payer):
    """
    creates a plain vanilla swap with fixedLegTenor 1Y
    
    parameter:
        
        start (ql.Date) : Start Date
        
        maturity (ql.Period) : SwapTenor
        
        nominal (float) : Nominal
        
        fixedRate (float) : rate paid on fixed leg
        
        index (ql.IborIndex) : Index
        
    return: tuple(ql.Swap, list<Dates>) Swap and all fixing dates
    
        
    """
    end = ql.TARGET().advance(start, maturity)
    fixedLegTenor = ql.Period("1y")
    fixedLegBDC = ql.ModifiedFollowing
    fixedLegDC = ql.Thirty360(ql.Thirty360.BondBasis) # day count convention for the fixed leg
    spread = 0.0 # spread over the index rate for floating rate
    fixedSchedule = ql.Schedule(start,
                                end, 
                                fixedLegTenor, 
                                index.fixingCalendar(), # calendar to use; here the Euribor 6M fixing calendar
                                fixedLegBDC, # business day convention for the fixed leg
                                fixedLegBDC, # termination date convention (same as above)
                                ql.DateGeneration.Backward,
                                False)
    floatSchedule = ql.Schedule(start,
                                end,
                                index.tenor(),
                                index.fixingCalendar(),
                                index.businessDayConvention(),
                                index.businessDayConvention(),
                                ql.DateGeneration.Backward,
                                False)
    # Instantiate swap object
    swap = ql.VanillaSwap(typ, 
                          nominal,
                          fixedSchedule,
                          fixedRate,
                          fixedLegDC,
                          floatSchedule,
                          index,
                          spread,
                          index.dayCounter())
    return swap, [index.fixingDate(x) for x in floatSchedule][:-1]
In [5]:
# Define characteristics of deals in swap portfolio:
# makeSwap(Initiation, Expiry, Nominal Value, Fixed Rate, Reference Rate, Vanilla Swap Side - Payer or Receiver)

# 1. Payer swap starting in 2 days and ending in 5 years, has a nominal value of $1,000,000, 
# a fixed rate of 0.03 and uses the euribor6m as the floating reference rate

# 2. Receiver swap starting in 2 days and ending in 4 years, nominal value of $500,000, 
# a fixed rate of 0.03 and the euribor6m floating reference rate

portfolio = [makeSwap(today + ql.Period("2d"),
                      ql.Period("5Y"),
                      1e6,
                      0.03,
                      euribor6m),
             makeSwap(today + ql.Period("2d"),
                      ql.Period("4Y"),
                      5e5,
                      0.03,
                      euribor6m,
                      ql.VanillaSwap.Receiver),
            ]
In [6]:
# Before we can calculate the NPV we need a pricing engine. 
# We use the DiscountingSwapEngine. 
# 1) It discounts all future payments to the evaluation date 
# 2) It calculates the difference between the present values of the two legs.
engine = ql.DiscountingSwapEngine(hyts)
# Calculate the NPV of both deals in the portfolio
for deal, fixingDates in portfolio:
    deal.setPricingEngine(engine)
    deal.NPV()
print (" NPV Values")
print ("-"*12)
for deal, fixingDates in portfolio:
    print (" %8.2f"% deal.NPV())
 NPV Values
------------
  2233.47
  -884.75
In [7]:
# Store NPV's in list and sum to compute portfolio NPV.
list_val = []    
for deal, fixingDates in portfolio:
    list_val.append(deal.NPV()) 
list_val
port_npv= sum(list_val)
print (" Portfolio NPV Value")
print ("-"*20)
print (" %12.2f"% port_npv)
 Portfolio NPV Value
--------------------
      1348.72
In [8]:
# Set up Hull-White Calibration Process

from collections import namedtuple
import math

# Specify date and interest rate parameters for calibration process
settlement = ql.Date(7,10,2019)
ql.Settings.instance().evaluationDate = today
term_structure = ql.YieldTermStructureHandle(yts)
index = ql.Euribor6M(term_structure)

# Collect calibration data in a named tuple 
CalibrationData = namedtuple("CalibrationData", 
                             "start, length, volatility")
data = [CalibrationData(1, 5, 0.1148),
        CalibrationData(2, 4, 0.1108),
        CalibrationData(3, 3, 0.1070),
        CalibrationData(4, 2, 0.1021),
        CalibrationData(5, 1, 0.1000 )]


# Define function which uses elements of named tuple as inputs 
# together with ref rate index, term structure and pricing engine  
def create_swaption_helpers(data, index, term_structure, engine):
    swaptions = []
    fixed_leg_tenor = ql.Period(1, ql.Years)
    fixed_leg_daycounter = ql.Actual360()
    floating_leg_daycounter = ql.Actual360()
    for d in data:
        vol_handle = ql.QuoteHandle(ql.SimpleQuote(d.volatility))
        helper = ql.SwaptionHelper(ql.Period(d.start, ql.Years), # maturity
                                   ql.Period(d.length, ql.Years), # length
                                   vol_handle,                     # vol
                                   index,
                                   fixed_leg_tenor,
                                   fixed_leg_daycounter,
                                   floating_leg_daycounter,
                                   term_structure
                                   )
        helper.setPricingEngine(engine)
        swaptions.append(helper)
    return swaptions
In [9]:
# Run Hull-White Calibration Process

model = ql.HullWhite(term_structure)
engine = ql.JamshidianSwaptionEngine(model)
swaptions = create_swaption_helpers(data, index, term_structure, engine)

optimization_method = ql.LevenbergMarquardt(1.0e-8,1.0e-8,1.0e-8)
end_criteria = ql.EndCriteria(10000, 100, 1e-6, 1e-8, 1e-8)
model.calibrate(swaptions, optimization_method, end_criteria)

a, sigma = model.params()
print ("Alpha : %15.4f" % (a))
print ("Sigma : %15.4f" % (sigma))
Alpha :          0.0263
Sigma :          0.0034
In [10]:
# Define function to create calibration report to review error terms
def calibration_report(swaptions, data):
    print ("-"*82)
    print("%15s %15s %15s %15s %15s" % ("Model Price", "Market Price", "Implied Vol", "Market Vol", "Rel Error"))
    print ("-"*82)
    cum_err = 0.0
    for i, s in enumerate(swaptions):
        model_price = s.modelValue()
        market_vol = data[i].volatility
        black_price = s.blackPrice(market_vol)
        rel_error = model_price/black_price - 1.0
        implied_vol = s.impliedVolatility(model_price,
                                          1e-5, 50, 0.0, 0.50)
        rel_error2 = implied_vol/market_vol-1.0
        cum_err += rel_error2*rel_error2
        print("%15.5f %15.5f %15.5f %15.5f %15.5f"  % (model_price, black_price, implied_vol, market_vol, rel_error))
    print ("-"*82)
    print ("Cumulative Error : %15.5f" % math.sqrt(cum_err))
In [11]:
# Run calibration report
calibration_report(swaptions, data)
----------------------------------------------------------------------------------
    Model Price    Market Price     Implied Vol      Market Vol       Rel Error
----------------------------------------------------------------------------------
        0.00574         0.00620         0.10636         0.11480        -0.07343
        0.00639         0.00666         0.10634         0.11080        -0.04018
        0.00579         0.00582         0.10636         0.10700        -0.00597
        0.00440         0.00422         0.10641         0.10210         0.04203
        0.00241         0.00226         0.10649         0.10000         0.06457
----------------------------------------------------------------------------------
Cumulative Error :         0.11422
In [12]:
# With these calibrated parameters set up the stochastic modeling process
volas = [ql.QuoteHandle(ql.SimpleQuote(sigma)), 
         ql.QuoteHandle(ql.SimpleQuote(sigma))]
meanRev = [ql.QuoteHandle(ql.SimpleQuote(a))]

model = ql.Gsr(t0_curve, [today+100], volas, meanRev)
process = model.stateProcess()
In [13]:
# Next we need to define the evaluation grid, the time period where we are generating and valuing the swap NPV’s

date_grid = [today + ql.Period(i,ql.Months) for i in range(0,12*6)] # monthly evaluation grid covering 6 years

for deal in portfolio:
    date_grid += deal[1]

date_grid = np.unique(np.sort(date_grid)) # finds and sorts the unique dates in the array
# NumPy's vectorize converts a function into one that applies element-wise to an array.
# Here it is used to convert the future dates into year fractions with the QuantLib day counter.
time_grid = np.vectorize(lambda x: ql.ActualActual().yearFraction(today, x))(date_grid)
dt = time_grid[1:] - time_grid[:-1] # defines the time interval
In [14]:
# Characteristics of random process
seed = 1
urng = ql.MersenneTwisterUniformRng(seed) # random number generator
usrg = ql.MersenneTwisterUniformRsg(len(time_grid)-1,urng) # specify the number of random numbers per sequence and the rng
generator = ql.InvCumulativeMersenneTwisterGaussianRsg(usrg) # specify gaussian sequence generator
In [15]:
# Generate paths
N = 1500
x = np.zeros((N, len(time_grid)))
y = np.zeros((N, len(time_grid)))
pillars = np.array([0.0, 0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
zero_bonds = np.zeros((N, len(time_grid), 12))

for j in range(12):
    zero_bonds[:, 0, j] = model.zerobond(pillars[j],
                                         0,
                                         0)
for n in range(0,N):
    dWs = generator.nextSequence().value()
    for i in range(1, len(time_grid)):
        t0 = time_grid[i-1]
        t1 = time_grid[i]
        x[n,i] = process.expectation(t0, 
                                     x[n,i-1], 
                                     dt[i-1]) + dWs[i-1] * process.stdDeviation(t0,
                                              x[n,i-1],
                                              dt[i-1])
        y[n,i] = (x[n,i] - process.expectation(0,0,t1)) / process.stdDeviation(0,0,t1)
        for j in range(12):
            zero_bonds[n, i, j] = model.zerobond(t1+pillars[j],
                                                 t1,
                                                 y[n, i])
In [55]:
# Plot the simulated interest rate paths
plt.title("Simulated interest rate paths")
plt.xlabel("Time in years")
plt.ylabel("Floating Rate")
for i in range(0,N):
    plt.plot(time_grid, x[i,:])
In [17]:
# Generate the discount factors
discount_factors = np.vectorize(t0_curve.discount)(time_grid)
In [18]:
#Swap pricing under each path
npv_cube = np.zeros((N,len(date_grid), len(portfolio)))
for p in range(0,N):
    for t in range(0, len(date_grid)):
        date = date_grid[t]
        ql.Settings.instance().setEvaluationDate(date)
        ycDates = [date, 
                   date + ql.Period(6, ql.Months)] 
        ycDates += [date + ql.Period(i,ql.Years) for i in range(1,11)]
        yc = ql.DiscountCurve(ycDates, 
                              zero_bonds[p, t, :], 
                              ql.Actual365Fixed())
        yc.enableExtrapolation()
        hyts.linkTo(yc)
        if euribor6m.isValidFixingDate(date):
            fixing = euribor6m.fixing(date)
            euribor6m.addFixing(date, fixing)
        for i in range(len(portfolio)):
            npv_cube[p, t, i] = portfolio[i][0].NPV()
    ql.IndexManager.instance().clearHistories()
ql.Settings.instance().setEvaluationDate(today)
hyts.linkTo(yts)
In [30]:
# Discounted NPV's
discounted_cube = np.zeros(npv_cube.shape)
for i in range(npv_cube.shape[2]):
    discounted_cube[:,:,i] = npv_cube[:,:,i] * discount_factors
    

# Portfolio NPV and Discounted Portfolio NPV after netting
portfolio_npv = np.sum(npv_cube,axis=2)
discounted_npv = np.sum(discounted_cube, axis=2)
In [31]:
# Visualize the first 1000 NPV paths
n_0 = 0
n = 1000
f, (ax1, ax2) = plt.subplots(2, 1, figsize=(12,10), sharey=True)
for i in range(n_0,n):
    ax1.plot(time_grid, portfolio_npv[i,:])
for i in range(n_0,n):
    ax2.plot(time_grid, discounted_npv[i,:])
ax1.set_xlabel("Time in years")
ax1.set_ylabel("NPV in time t Euros")
ax1.set_title("Simulated npv paths")
ax2.set_xlabel("Time in years")
ax2.set_ylabel("NPV in time 0 Euros")
ax2.set_title("Simulated discounted npv paths")
Out[31]:
Text(0.5, 1.0, 'Simulated discounted npv paths')
In [32]:
# Calculate and visualize the positive exposure and discounted exposure paths
E = portfolio_npv.copy()
dE = discounted_npv.copy()
E[E<0] = 0
dE[dE<0] = 0
# Plot the first 1000 exposure paths
n = 1000
f, (ax1, ax2) = plt.subplots(2, 1, figsize=(12,10))
for i in range(0,n):
    ax1.plot(time_grid, E[i,:])
for i in range(0,n):
    ax2.plot(time_grid, dE[i,:])
ax1.set_xlabel("Time in years")
ax1.set_ylabel("Exposure")
ax1.set_ylim([-10000,40000])
ax1.set_title("Simulated exposure paths")
ax2.set_xlabel("Time in years")
ax2.set_ylabel("Discounted Exposure")
ax2.set_ylim([-10000,40000])
ax2.set_title("Simulated discounted exposure paths")
Out[32]:
Text(0.5, 1.0, 'Simulated discounted exposure paths')
In [58]:
# Calculate the expected (average) exposure path
E = portfolio_npv.copy()
E[E<0]=0
EE = np.sum(E, axis=0)/N
In [34]:
# Calculate the discounted expected (average) exposure path
dE = discounted_npv.copy()
dE[dE<0] = 0
dEE = np.sum(dE, axis=0)/N
In [35]:
# plot the expected (average) exposure path
n = 30
f, (ax1, ax2) = plt.subplots(2, 1, figsize=(8,10))
ax1.plot(time_grid, EE)
ax2.plot(time_grid, dEE)
ax1.set_xlabel("Time in years")
ax1.set_ylabel("Exposure")
ax1.set_title("Expected exposure")
ax2.set_xlabel("Time in years")
ax2.set_ylabel("Discounted Exposure")
ax2.set_title("Discounted expected exposure")
Out[35]:
Text(0.5, 1.0, 'Discounted expected exposure')
In [56]:
# For risk management purposes, calculate the PFE curve (at the 95% quantile)
PFE_curve = np.apply_along_axis(lambda x: np.sort(x)[int(N*0.95)],0, E)

plt.figure(figsize=(8,5))
plt.plot(time_grid,PFE_curve)
plt.xlabel("Time in years")
plt.ylabel("Potential Future Exposure @ 95% quantile")
plt.ylim([-2000,15000])
plt.title("PFE")
Out[56]:
Text(0.5, 1.0, 'PFE')
In [42]:
# Set up the probability of default curve with hazard rates stepping up by 2% per year (0% to 20%)
pd_dates =  [today + ql.Period(i, ql.Years) for i in range(11)]
hzrates = [0.02 * i for i in range(11)]
pd_curve = ql.HazardRateCurve(pd_dates,hzrates,ql.Actual365Fixed())
pd_curve.enableExtrapolation()
In [43]:
# Plot curve
# Calculate default probs on grid *times*
times = np.linspace(0,30,5000)
dp = np.vectorize(pd_curve.defaultProbability)(times)
sp = np.vectorize(pd_curve.survivalProbability)(times)
dd = np.vectorize(pd_curve.defaultDensity)(times)
hr = np.vectorize(pd_curve.hazardRate)(times)
f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(10,10))
ax1.plot(times, dp)
ax2.plot(times, sp)
ax3.plot(times, dd)
ax4.plot(times, hr)
ax1.set_xlabel("Time in years")
ax2.set_xlabel("Time in years")
ax3.set_xlabel("Time in years")
ax4.set_xlabel("Time in years")
ax1.set_ylabel("Probability")
ax2.set_ylabel("Probability")
ax3.set_ylabel("Density")
ax4.set_ylabel("HazardRate")
ax1.set_title("Default Probability")
ax2.set_title("Survival Probability")
ax3.set_title("Default density")
ax4.set_title("Hazard rate")
Out[43]:
Text(0.5, 1.0, 'Hazard rate')
In [44]:
# Calculation of the default probs
defaultProb_vec = np.vectorize(pd_curve.defaultProbability)
dPD = defaultProb_vec(time_grid[:-1], time_grid[1:])
In [45]:
# Calculation of the CVA
recovery = 0.4
CVA = (1-recovery) * np.sum(dEE[1:] * dPD)
print ("Credit Value Adjustment : %15.2f" % CVA)
Credit Value Adjustment :          228.08