Quantifying Tail Risk

Implementation of Three Techniques to Model the Value-at-Risk of a Multi-Currency Portfolio over a 10-Day Time Horizon.

- The Delta-Normal (Variance-Covariance) Method
- The Monte Carlo Simulation Method
- The Bootstrap Historical Simulation Method

Paul McAteer, MSc, MBA

pcm353@stern.nyu.edu


Model Design


1) Value-at-Risk with the Delta-Normal (Variance-Covariance) Method.


The Delta-Normal method requires the construction of the covariance matrix of daily security returns, $\Sigma$, which is a function of the correlation matrix and the sample volatilities of the constituent securities:

$\Sigma = DCD$

Where $C$ is the correlation matrix and $D$ is a diagonal matrix with the standard deviations of the securities along the diagonal and zeros elsewhere.
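As a minimal NumPy sketch, with illustrative volatilities and correlations roughly consistent with the sample statistics computed in Section 6 of the implementation:

import numpy as np

# Illustrative daily volatilities (standard deviations) and correlation matrix
vols = np.array([0.0051, 0.0036, 0.0044])        # EURJPY, EURSEK, EURUSD (approx.)
C = np.array([[ 1.00, -0.06,  0.48],
              [-0.06,  1.00,  0.06],
              [ 0.48,  0.06,  1.00]])

D = np.diag(vols)        # diagonal matrix of standard deviations
Sigma = D @ C @ D        # covariance matrix, Sigma = DCD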


The 1-day variance of the portfolio, $V_{\Pi,1\text{day}}$, is determined by the weights, variances and covariances of the constituent assets:

$V_{\Pi,1\text{day}} = w^{T}\Sigma w$

Where we post-multiply $w^{T}\Sigma$, the product of the transposed weights vector and the covariance matrix, by the column vector of weights $w$.

The volatility of the portfolio is simply the square root of the variance, so:

$\sigma_{\Pi,1\text{day}} = \sqrt{w^{T}\Sigma w}$


Variance scales linearly with time $t$, whereas volatility scales with the square root of time, so 10-day volatility is obtained in the following manner:

$\sigma_{\Pi,10\text{day}} = \sqrt{w^{T}\Sigma w} \times \sqrt{10}$

To find the Delta-Normal Value-at-Risk at a particular confidence level, multiply the portfolio volatility by the appropriate standard normal deviate, denoted by $\alpha$:

$\text{VaR}_{\Pi,10\text{day}} = \alpha\sqrt{w^{T}\Sigma w} \times \sqrt{10}$

So, for example, to calculate the VaR at a 95% confidence level, alpha would be set to 1.645.
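A minimal sketch of the calculation, assuming the illustrative Sigma constructed above and an equally weighted three-asset portfolio:

import numpy as np
from scipy.stats import norm

w = np.full(3, 1/3)                         # equal weights
port_vol_1d = np.sqrt(w @ Sigma @ w)        # 1-day portfolio volatility
alpha = norm.ppf(0.95)                      # standard normal deviate, approx. 1.645
VaR_10d = alpha * port_vol_1d * np.sqrt(10)

(The implementation below reports VaR as a negative return, i.e. it uses norm.ppf(1 - 0.95) and additionally accounts for the mean daily return.)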

2) Value-at-Risk with the Monte Carlo Simulation Method.


The Monte Carlo Method involves simulating the security returns $r$ at each time step $t$ in a portfolio composed of correlated assets. The multivariate normal distribution of returns is specified by a mean vector of security returns and the covariance matrix of returns:

$r_t \sim MVN(\mu, \Sigma)$

Now, to simulate a vector of security returns, it is necessary to capture the structure of the covariance matrix. This can be achieved by decomposing the matrix $\Sigma$ into the product of a lower triangular matrix $L$ and its (upper triangular) transpose $L^{T}$.

We use Cholesky Factorization¹ to find $L$ such that:

$\Sigma = LL^{T}$


The simulated correlated returns of the portfolio constituents at each time step $t$ can be generated by:

$r_t = \mu + Lz_t$

Where $\mu = (\mu_1, \ldots, \mu_n)$ is the column vector of expected security returns and $L$ is the lower triangular matrix. $z_t$ is a column vector of independent standard normal random variables $(z_1, \ldots, z_n)$, i.e.:

$z_i \sim N(0,1)$

$Lz$ produces a random sample of deviations corresponding to our correlated, non-standard returns. Importantly, we can demonstrate that the covariance matrix of the elements of $Lz$, denoted $V(Lz)$, is precisely $\Sigma$, that is:

$V(Lz) = \Sigma$

Given that the values $(z_1, \ldots, z_n)$ are independently distributed (i.e. have zero covariance) and have a variance of 1, the covariance matrix of the elements of $z$ is the identity matrix $I_n$:

$V(z) = V\begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_n \end{bmatrix} = I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$


So the covariance matrix of the elements of $Lz$ will be:

$V(Lz) = L\,V(z)\,L^{T} = LI_nL^{T}$

Which simplifies to:

$V(Lz) = LL^{T}$

We know that:

$LL^{T} = \Sigma$

So equivalently:

$V(Lz) = LL^{T} = \Sigma$
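This identity can be checked numerically; a small sketch with an assumed toy covariance matrix:

import numpy as np

Sigma_toy = np.array([[4.0, 1.2],
                      [1.2, 2.0]])             # assumed 2x2 covariance matrix
L = np.linalg.cholesky(Sigma_toy)              # lower triangular factor
z = np.random.default_rng(42).standard_normal((2, 1_000_000))
print(np.cov(L @ z))                           # sample covariance of Lz, approx. Sigma_toy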


The returns will be simulated over a 10-day horizon (10 time steps). The portfolio return for each Monte Carlo trial at each time step is the inner product of the simulated correlated daily security returns and our vector of portfolio weights $w$. We will run 100,000 trials (the mc_rep parameter in the implementation below). For each trial we compute the cumulative portfolio return over the 10 time steps, order the resulting terminal portfolio values by size, and read off the value at the percentile of this distribution corresponding to the desired confidence level for Value at Risk. A compact vectorized sketch follows.
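A vectorized sketch of this procedure, assuming a mean vector mu, covariance matrix Sigma and weights w as defined earlier (the seed is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n_trials, t_steps = 100_000, 10
L = np.linalg.cholesky(Sigma)                         # Sigma = L @ L.T
Z = rng.standard_normal((n_trials, t_steps, len(w)))  # independent N(0,1) draws
daily = mu + Z @ L.T                                  # correlated daily returns, r_t = mu + L z_t
port_daily = daily @ w                                # portfolio return at each time step
terminal = np.prod(1.0 + port_daily, axis=1)          # 10-day growth factor per trial
VaR_95 = np.percentile(terminal - 1.0, 5)             # 95% VaR as the 5th percentile return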

3) Value-at-Risk with the Bootstrap Historical Simulation Method.


The Bootstrap Historical Simulation Method requires first computing the asset-weighted realized returns, which give the historical returns for the portfolio. The bootstrap procedure involves resampling from our data set of daily historical log returns with replacement. The algorithm for generating a sampling distribution using the bootstrap is as follows (a vectorized sketch appears after the list):

  1. Create a random sample of m = 10 integers, $(i_1, i_2, \ldots, i_m)$, by sampling with replacement from the set of n integers $(1, 2, \ldots, n)$, where n is the number of trading days. The n integers index the returns $(x_1, x_2, \ldots, x_n)$
  2. Construct the bootstrap sample as $(x_{i_1}, x_{i_2}, \ldots, x_{i_m})$
  3. Calculate the cumulative portfolio return for the bootstrap sample over the period, which is the sum of the log returns
  4. Perform the desired number of trials of this procedure to obtain the sampling distribution.
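A vectorized sketch of this algorithm, equivalent to the looped implementation in Section 12 below (port_log_returns is assumed to be a 1D array of daily portfolio log returns):

import numpy as np

rng = np.random.default_rng(1)
n_trials, horizon = 100_000, 10
# Sample n_trials x horizon day-indices with replacement
idx = rng.integers(0, len(port_log_returns), size=(n_trials, horizon))
# Cumulative 10-day log return for each bootstrap trial
ten_day_returns = port_log_returns[idx].sum(axis=1)
VaR_95 = np.percentile(ten_day_returns, 5)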

To compute the VaR, order by size the terminal portfolio values produced by each trial. Find the value at the appropriate percentile of this distribution which corresponds to the desired confidence level for Value at Risk.


1 The positive definiteness of $\Sigma$ ensures that this factorization can be achieved. A matrix $M$ is said to be positive definite if, for all non-zero vectors $w$, $w^{T}Mw > 0$.
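As a quick numerical check (a sketch, with Sigma assumed defined as a NumPy array): a symmetric matrix is positive definite exactly when all of its eigenvalues are strictly positive.

import numpy as np

# All eigenvalues of a symmetric positive definite matrix are > 0
is_positive_definite = np.all(np.linalg.eigvalsh(Sigma) > 0)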

Model Implementation

1. Import libraries

In [1]:
import matplotlib.pyplot as plt
import warnings

import numpy as np
import pandas as pd
import yfinance as yf
import seaborn as sns
from tabulate import tabulate

plt.style.use('seaborn')
plt.rcParams['figure.figsize'] = [8, 4.5]
plt.rcParams['figure.dpi'] = 300
warnings.simplefilter(action='ignore', category=FutureWarning)

%matplotlib inline
%config InlineBackend.figure_format = 'retina'

# Import cufflinks
import cufflinks as cf
cf.set_config_file(offline=True)

2. Specify constituent securities and historical time horizon

In [2]:
ASSETS = ['EURUSD=X', 'EURJPY=X', 'EURSEK=X']
START_DATE = '2016-11-14'
END_DATE = '2019-11-14'

3. Download and inspect daily price data

In [3]:
df = yf.download(ASSETS, start=START_DATE, 
                 end=END_DATE, adjusted=True)
print(f'Downloaded {df.shape[0]} rows of data.')
[*********************100%***********************]  3 of 3 completed
Downloaded 781 rows of data.
In [4]:
adj_close = df['Adj Close']
adj_close = adj_close.dropna()
adj_close.head()
Out[4]:
EURJPY=X EURSEK=X EURUSD=X
Date
2016-11-14 115.666000 9.85597 1.082485
2016-11-15 116.280998 9.82950 1.075038
2016-11-16 116.932999 9.84872 1.072731
2016-11-17 116.469002 9.82588 1.070664
2016-11-18 117.094002 9.81199 1.062304
In [5]:
adj_close.tail()
Out[5]:
EURJPY=X EURSEK=X EURUSD=X
Date
2019-11-07 120.565002 10.6464 1.107040
2019-11-08 120.773003 10.6399 1.101970
2019-11-11 120.389999 10.6936 1.102244
2019-11-12 120.313004 10.6971 1.103546
2019-11-13 120.024002 10.6926 1.101237
In [6]:
plot_title = f'Daily Prices:    {START_DATE} - {END_DATE}'
adj_close.iplot(title=plot_title, subplots=True)
[Plotly figure: Daily Prices 2016-11-14 - 2019-11-14; subplots for EURJPY=X, EURSEK=X and EURUSD=X]

4. Compute Returns

In [7]:
returns = adj_close.pct_change()
returns = returns.dropna()
plot_title = f'Daily Returns:    {START_DATE} - {END_DATE}'
returns.iplot(title=plot_title, subplots=True)
[Plotly figure: Daily Returns 2016-11-14 - 2019-11-14; subplots for EURJPY=X, EURSEK=X and EURUSD=X]
In [8]:
cum_ret = (adj_close/adj_close.iloc[0]) - 1
plot_title = f'Cumulative Returns:    {START_DATE} - {END_DATE}'
cum_ret.iplot(title=plot_title)
[Plotly figure: Cumulative Returns 2016-11-14 - 2019-11-14 for EURJPY=X, EURSEK=X and EURUSD=X]
In [9]:
# Select first prices
first_prices = adj_close.iloc[0]

# Create normalized_prices
normalized = adj_close.div(first_prices)


# Create tickers
tickers = ['EURJPY=X', 'EURSEK=X']

# Subtract the normalized benchmark from the normalized stock prices, and plot the result
normalized[tickers].sub(normalized['EURUSD=X'], axis=0).iplot(title='Relative performance Vs EURUSD')
[Plotly figure: Relative performance Vs EURUSD for EURJPY=X and EURSEK=X]
In [10]:
# Mean Annual Returns
(returns.mean()*252).iplot(kind='bar', title='Annualized Mean Daily Return')
[Plotly bar chart: Annualized Mean Daily Return by security]
In [11]:
(returns.std()*(252**0.5)).iplot(kind='bar', title='Annualized Volatility')
[Plotly bar chart: Annualized Volatility by security]
In [12]:
# Sharpe ratio (assumes a zero risk-free rate)
sh_ratio = (returns.mean()*252)/(returns.std()*(252**0.5))
sh_ratio.iplot(kind='bar', title='Sharpe Ratio')

5. Analyse Distribution of Returns

In [13]:
# Calculate four distribution statistics (mean, std dev, skew, kurtosis) to evaluate normality of returns
kurt = returns.kurt()
skew = returns.skew()
sigma = returns.std()
mu =  returns.mean()
Norm_df = pd.DataFrame(data=kurt, columns= ["KURT"])
Norm_df["SKEW"] = skew
Norm_df["SIGMA"] = sigma
Norm_df["MU"] = mu
Norm_df
Out[13]:
KURT SKEW SIGMA MU
EURJPY=X 3.113469 -0.248153 0.005054 0.000061
EURSEK=X 1.851760 -0.151998 0.003586 0.000114
EURUSD=X 1.151556 0.107048 0.004362 0.000032
In [14]:
# Calculate the scaled return (return expressed as a Z-score) for a chosen security
# to view the normality of its returns distribution
Sec = 'EURJPY=X'

EmpNorm = pd.DataFrame(data=adj_close[Sec])
EmpNorm['Return'] = EmpNorm[Sec].pct_change()
EmpNorm  = EmpNorm.dropna()

sigma_sec = EmpNorm['Return'].std()
mu_sec =  EmpNorm['Return'].mean()

EmpNorm['Scaled_Return'] = EmpNorm['Return'].apply(lambda x: (x-mu_sec)/sigma_sec)
In [15]:
# Calculate minimum and maximum bin range
sr_min = np.min(EmpNorm['Scaled_Return'])
sr_max = np.max(EmpNorm['Scaled_Return'])

print(f'Minimum {sr_min:.6f}, Maximum {sr_max:.6f}')
Minimum -6.447768, Maximum 4.014616
In [16]:
# Define bins
from scipy.stats import norm

x = np.linspace(sr_min, sr_max, 100)

# Calculate normal probability density function
y = norm.pdf(x,0,1)

# Plot histogram of scaled returns
plt.hist(EmpNorm['Scaled_Return'], bins=200, density=True, color = 'blue', label = 'Empirical', alpha=1)

# Plot norm pdf
plt.plot(x, y, color = 'orange', label = 'Normal', alpha=1)

# Set x and y axis limits
plt.xlim(-4,4)
plt.ylim(0,0.8)

# Set title
plt.title('Empirical vs Normal Distribution')

# Set legends
plt.legend();

6. Compute linear dependence of security returns

In [17]:
# To annotate the scatterplot with the Pearson correlation coefficient between two variables,
# write a function that takes two arrays, calculates the statistic, and draws it on the graph.

from scipy.stats import pearsonr

def corrfunc(x,y, ax=None, **kws):
    """Plot the correlation coefficient in the top left hand corner of a plot."""
    r, _ = pearsonr(x, y)
    ax = ax or plt.gca()
    # Unicode for lowercase rho (ρ)
    rho = '\u03C1'
    # Text, position and coordinate system for annotation
    ax.annotate(f'{rho} = {r:.3f}', xy=(.05, .95), xycoords=ax.transAxes)
In [18]:
corr = returns.corr()
ax = sns.heatmap(
    corr,
    annot = True,
    fmt='.2g',
    linewidths=.5,
    vmin=-1, vmax=1, center=0,
    cmap='RdYlGn_r',
    square=True
    )
ax.set_title("Correlation Matrix", fontsize=20)
ax.set_xticklabels(
    ax.get_xticklabels(),
    rotation=45,
    horizontalalignment='right'
);
In [20]:
g = sns.pairplot(returns, kind='scatter', diag_kind="hist")
g.map_lower(corrfunc)
g.map_upper(corrfunc)
plt.show()
In [21]:
from scipy import stats

def regfunc(x,y, ax=None, **kws):
    """Annotate the plot with the R-squared of a linear fit in the top left corner."""
    slope, intercept, rvalue, pvalue, stderr = stats.linregress(x,y)
    ax = ax or plt.gca()
    # Label for the coefficient of determination (R-squared)
    r_sq = 'R\u00b2'
    # Text, position and coordinate system for annotation
    ax.annotate(f'{r_sq} = {rvalue**2:.2f}', xy=(.05, .95), xycoords=ax.transAxes)
In [22]:
h = sns.pairplot(returns, kind='reg', diag_kind="hist")
h.map_upper(regfunc)
h.map_lower(regfunc)
plt.show()
In [23]:
# Covariance matrix (Sigma)
cov_mat = returns.cov()
cov_mat
Out[23]:
EURJPY=X EURSEK=X EURUSD=X
EURJPY=X 0.000026 -1.028806e-06 1.071051e-05
EURSEK=X -0.000001 1.286069e-05 9.741761e-07
EURUSD=X 0.000011 9.741761e-07 1.902567e-05

7. Specify portfolio composition

In [24]:
# Create the weights vector:
# convert the 1D list to a 2D (numofasset x 1) column vector using np.newaxis
numofasset = len(ASSETS)
wts = numofasset * [1./numofasset]
wts = np.array(wts)[:,np.newaxis]
wts
Out[24]:
array([[0.33333333],
       [0.33333333],
       [0.33333333]])
In [25]:
# Override equal weighting?
#wts = np.array([0.33333333,0.33333333,0.33333333])[:,np.newaxis]
#wts

8. Calculate portfolio returns

In [26]:
# Calculate Simple Daily Portfolio Returns
port_ret = np.dot(returns,wts)
port_ret_df = pd.DataFrame(data=port_ret, columns=["Simple Port Ret"])
port_ret_df= port_ret_df.dropna()
In [27]:
# Calculate Daily Log Returns for portfolio
port_ret_df["Log Port Ret"] = np.log(1+port_ret_df["Simple Port Ret"])
port_ret_df.head()
Out[27]:
Simple Port Ret Log Port Ret
0 -0.001416 -0.001417
1 0.001806 0.001804
2 -0.002738 -0.002742
3 -0.001285 -0.001286
4 -0.000614 -0.000614
In [28]:
# Convert log returns dataframe to numpy array for later use.
port_logret_np = port_ret_df["Log Port Ret"].to_numpy()

9. Calculate expected (mean) portfolio return and portfolio volatility

In [29]:
# Portfolio Volatility
from numpy.linalg import multi_dot
port_stdev = np.sqrt(multi_dot([wts.T, cov_mat, wts]))
port_stdev = port_stdev.flatten()[0]

print('1D Portfolio Volatility: {0:.4f}'.format(port_stdev))
1D Portfolio Volatility: 0.0030
In [30]:
# Portfolio daily returns for the given weights
port_ret = np.dot(returns,wts)
# Expected (Mean) Portfolio Return
port_mean = port_ret.mean()
print("1D Portfolio Expected Return: {0:.4f}".format(port_mean))
1D Portfolio Expected Return: 0.0001

10. Compute Delta-Normal VaR

In [31]:
# Calculate 1D Portfolio VaR at different confidence levels
from scipy.stats import norm

pVaR_90 = norm.ppf(1-0.90,port_mean,port_stdev).flatten()[0]
pVaR_95 = norm.ppf(1-0.95,port_mean,port_stdev).flatten()[0]
pVaR_99 = norm.ppf(1-0.99,port_mean,port_stdev).flatten()[0]
In [32]:
# Calculate 10D Portfolio VaR at different confidence levels
pVaR_90_10D = pVaR_90*(10**(1/2))
pVaR_95_10D = pVaR_95*(10**(1/2))
pVaR_99_10D = pVaR_99*(10**(1/2))

pVaR_90_10D = pVaR_90_10D.round(5)
pVaR_95_10D = pVaR_95_10D.round(5)
pVaR_99_10D = pVaR_99_10D.round(5)
In [33]:
# Output results in tabular format
ptable = [['90%', pVaR_90_10D],['95%', pVaR_95_10D],['99%', pVaR_99_10D]]
header = ['\033[1mConfidence Level\033[0m', '\033[1mDelta-Normal 10D Value-at-Risk\033[0m']
print(tabulate(ptable,headers=header))
Confidence Level      Delta-Normal 10D Value-at-Risk
------------------  --------------------------------
90%                                         -0.01177
95%                                         -0.01517
99%                                         -0.02154
In [34]:
# Visualize Returns Distribution and VaR threshold

mu= port_mean*10
sigma= port_stdev*(10**(1/2))
s = np.random.normal(mu, sigma, 10000000)

plt.figure(figsize=(12,6))
count, bins, ignored = plt.hist(s, 1000, density=True)
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
               np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
         linewidth=3, color='orange')
plt.xlabel('Return rate')
plt.title('Parametric Distribution of Returns with Delta-Normal VaR at 95% Confidence Level', fontsize=20)
plt.axvline(x = pVaR_95_10D, color = 'r', linestyle = '--', label = "95% VaR: {0:.4f}".format(pVaR_95_10D))
plt.legend();
plt.show()

print ("     At a 95% confidence level, losses will not exceed {0:.3f} over a 10 day period.".format(-pVaR_95_10D))
     At a 95% confidence level, losses will not exceed 0.015 over a 10 day period.

11. Compute Monte Carlo Simulation VaR

11 a) Download same price data and assign to new dataframe

In [35]:
stock_data = yf.download(ASSETS, start=START_DATE, 
                 end=END_DATE, adjusted=True)
print(f'Downloaded {stock_data.shape[0]} rows of data.')
[*********************100%***********************]  3 of 3 completed
Downloaded 781 rows of data.
In [36]:
stock_data = stock_data.dropna()
stock_data.head()
Out[36]:
[First five rows, 2016-11-14 to 2016-11-18. The columns form a MultiIndex of price field x ticker: Adj Close, Close, High, Low, Open and Volume for each of EURJPY=X, EURSEK=X and EURUSD=X. For these FX series Adj Close equals Close and Volume is zero; the Adj Close values are those shown in Out[4].]

11 b) Collect price data in a numpy array

In [37]:
# Collect the three Adj Close price columns in a numpy array
stock_price = stock_data.iloc[:,0:3]
stock_price = stock_price.values
In [38]:
array_sum = np.sum(stock_price)
array_has_nan = np.isnan(array_sum)
print(array_has_nan)
False

11 c) Collect daily return data in a numpy array

In [39]:
# Calculate Daily Returns
nrows = len(stock_price)
# For each column, divide each price from t+1 onwards by the previous day's price and subtract 1
stock_returns = stock_price[1:nrows,:] / stock_price[0:nrows-1,:] - 1

11 d) Define parameters for Monte Carlo simulation.

In [40]:
# Specify number of simulations
mc_rep = 100000
# Specify number of time steps
time_steps = 10

11 e) Obtain necessary vectors and matrices.

In [41]:
# Specify portfolio weights as a flat 1D array (equal weights, taken from wts above)
portf_WT = wts.flatten()


# Compute Covariance Matrix
cov = np.cov(np.transpose(stock_returns))

# Compute Mean 1D Return for each security
mu = np.mean(stock_returns, axis=0)

# Form a 10 x 3 Matrix, based on 10 time steps for rows and 3 Securities for columns
# Populate with 1D mean return (drift) of each security
Mu = np.full((time_steps,3),mu)

# Transpose to obtain a 3 row x 10 column array representing securities and time steps respectively
Mu = np.transpose(Mu)

# Visualize as Dataframe
# Note: yfinance returns the columns in alphabetical order, so label the rows accordingly
Mu_df = pd.DataFrame(data=Mu, index=sorted(ASSETS))
Mu_df
Out[41]:
0 1 2 3 4 5 6 7 8 9
EURJPY=X 0.000061 0.000061 0.000061 0.000061 0.000061 0.000061 0.000061 0.000061 0.000061 0.000061
EURSEK=X 0.000114 0.000114 0.000114 0.000114 0.000114 0.000114 0.000114 0.000114 0.000114 0.000114
EURUSD=X 0.000032 0.000032 0.000032 0.000032 0.000032 0.000032 0.000032 0.000032 0.000032 0.000032

11 f) Build and run MC Simulation to generate portfolio price paths

In [42]:
# Create a matrix to hold the simulated portfolio return paths:
# time_steps rows and mc_rep columns (one column per simulated path)
portf_returns_10d = np.full((time_steps,mc_rep),0.)

# Seed for reproducibility
np.random.seed(100)

# Perform the Cholesky decomposition of the covariance matrix once, outside the loop
L = np.linalg.cholesky(cov)

for i in range(0,mc_rep):
    # Draw 3 x time_steps random numbers from the Standard Normal distribution
    Z = np.random.normal(size=3*time_steps)
    # Reshape into a 3 x time_steps matrix (assets x time steps)
    Z = Z.reshape((3,time_steps))
    # Shock the drift matrix Mu with correlated random deviations: r_t = mu + L z_t
    daily_returns = Mu + np.inner(L,np.transpose(Z))
    # Compute the portfolio path as the cumulative product of
    # (1 + weighted daily return) across the 10 time steps
    portf_returns_10d[:,i] = np.cumprod(np.inner(portf_WT,np.transpose(daily_returns)) + 1)
In [43]:
# Label the x-axis with day numbers 1 to time_steps
labels = [str(i) for i in range(1,time_steps+1)]
positions = np.arange(0,time_steps)

plt.figure(figsize=(14,8))
plt.xticks(positions, labels)
plt.plot(portf_returns_10d)
plt.ylabel('Portfolio Returns')
plt.xlabel('Days')
plt.title('100,000 Simulated Portfolio Returns over 10 day horizon', fontsize=20)
plt.show()

11 g) Collect and order simulated terminal portfolio values

In [44]:
# Terminal values of the portfolio (final row of the simulated path matrix)
P_T = portf_returns_10d[time_steps-1]

# Percentile levels corresponding to the 90%, 95% and 99% confidence levels
percentiles = [10., 5., 1.]
var = np.percentile(P_T, percentiles)
In [45]:
# Convert terminal portfolio values to returns (terminal value minus 1)
var_list = [v - 1 for v in var]

11 h) Compute MC VaR at desired confidence levels

In [46]:
# Calculate 10D Portfolio VaR at different confidence levels
mVaR_90_10D = var_list[0]
mVaR_95_10D = var_list[1]
mVaR_99_10D = var_list[2]

mVaR_90_10D = mVaR_90_10D.round(5)
mVaR_95_10D = mVaR_95_10D.round(5)
mVaR_99_10D = mVaR_99_10D.round(5)

11 i) Display MC VaR Results

In [47]:
# Output results in tabular format
mtable = [['90%', mVaR_90_10D],['95%', mVaR_95_10D],['99%', mVaR_99_10D]]
header = ['\033[1mConfidence Level\033[0m', '\033[1mMonte Carlo 10D Value-at-Risk\033[0m']
print(tabulate(mtable,headers=header))
Confidence Level      Monte Carlo 10D Value-at-Risk
------------------  -------------------------------
90%                                        -0.01126
95%                                        -0.01459
99%                                        -0.02086
In [53]:
# Visualize the distribution of simulated 10-day returns
P_T_df = pd.DataFrame(P_T - 1)
P_T_df.hist(figsize = (12, 6), bins = 1000, alpha = 1, grid = True)
plt.xlabel('Return rate')
plt.ylabel('Frequency')
plt.title('Distribution of Simulated Returns with MC VaR at 95% Confidence Level', fontsize=20)
plt.axvline(x = mVaR_95_10D, color = 'r', linestyle = '--', label = "95% VaR: {0:.4f}".format(mVaR_95_10D))
plt.legend(framealpha=1, frameon=True);
plt.show()
print ("     At a 95% confidence level, losses will not exceed {0:.3f} over a 10 day period.".format(-mVaR_95_10D))
     At a 95% confidence level, losses will not exceed 0.015 over a 10 day period.

12. Compute Bootstrap Historical Simulation VaR

In [49]:
# Function to randomly sample n indices, with replacement, from the set of
# integers indexing the daily portfolio log returns
def generate_random_index(n=10):
    return np.random.choice(np.arange(len(port_logret_np)), size=n, replace=True)  

# Create a list to store the simulated 10-day returns
VaR_simulated_10 = []
# Specify number of bootstrap trials
n_bootstrap = 100000
# For each trial, compute the cumulative 10-day log return and store it
for _ in range(n_bootstrap):
    VaR = port_logret_np[generate_random_index(n=10)].sum()
    VaR_simulated_10.append(VaR)

# Compute historic VaR at desired quantiles
VaR_90 = np.percentile(VaR_simulated_10, q=10).round(5)
VaR_95 = np.percentile(VaR_simulated_10, q=5).round(5)
VaR_99 = np.percentile(VaR_simulated_10, q=1).round(5)

# Output results in tabular format
htable = [['90%', VaR_90 ],['95%', VaR_95],['99%', VaR_99]]
header = ['\033[1mConfidence Level\033[0m', '\033[1mHistoric 10D Value-at-Risk\033[0m']
print(tabulate(htable,headers=header))
Confidence Level      Historic 10D Value-at-Risk
------------------  ----------------------------
90%                                     -0.01115
95%                                     -0.01452
99%                                     -0.02122
In [52]:
# Visualize Returns Distribution
VaR_10_df = pd.DataFrame(VaR_simulated_10)
VaR_10_df.hist(figsize = (12, 6), bins = 10000, alpha = 1, grid = True)
plt.xlabel('Return rate ')
plt.ylabel('Frequency')
plt.title('Distribution of Bootstrapped Returns with Hist. VaR at 95% Confidence Level', fontsize=20)
plt.axvline(x = VaR_95, color = 'r', linestyle = '--', label = "95% VaR: {0:.4f}".format(VaR_95))
plt.legend(framealpha=1, frameon=True);
plt.show()
print ("     At a 95% confidence level, losses will not exceed {0:.3f} over a 10 day period.".format(-VaR_95))
     At a 95% confidence level, losses will not exceed 0.015 over a 10 day period.

13. Compare model results

In [51]:
# Compare Results
my_dict = {'90% Conf. Level' : [VaR_90 ,pVaR_90_10D,  mVaR_90_10D], 
           '95% Conf. Level' : [VaR_95 ,pVaR_95_10D,  mVaR_95_10D], 
           '99% Conf. Level' : [VaR_99 ,pVaR_99_10D,  mVaR_99_10D]}
df = pd.DataFrame.from_dict(my_dict, orient='index')
df.columns = ['Historic', 'Delta-Normal', 'Monte Carlo']

df.style.set_caption("Comparison of VaR Results").set_table_styles([{
    'selector': 'caption',
    'props': [('color', 'red'),
              ("text-align", "center"),
              ('font-size', '16px')]}])
Out[51]:
Comparison of VaR Results

                  Historic    Delta-Normal    Monte Carlo
90% Conf. Level  -0.011150       -0.011770      -0.011260
95% Conf. Level  -0.014520       -0.015170      -0.014590
99% Conf. Level  -0.021220       -0.021540      -0.020860