CONTENTS
1. Introduction
To estimate the potential future losses on an option, it is necessary to determine the change in the instrument's price arising from changes in the current values of the input variables, that is, the value of the underlying, the implied volatility, the discount rate, the expected dividend and the time to expiration. For risk management purposes, we refer to these inputs as the risk factors. All the current risk factors are observable except for the implied volatility, which must be derived from the current option price and the observable risk factors.
A naive approach to risk management would then involve applying arbitrary stresses to the relevant risk factors for each instrument in the portfolio, usually the spot of the underlying and the implied vol, in order to obtain a new price for each instrument and thus the P&L of each position, which is simply the current position value minus the initial premium incurred to enter the position. The naive approach is unsatisfactory for a number of reasons. Firstly, even on a stand-alone basis, the assumption that the "spot" and "implied vol" risk factors for a single instrument are independent is clearly incorrect: changes in the implied vols of a particular security tend to exhibit a strong relationship with changes in its spot price. Secondly, the approach should, but often fails to, incorporate reasonable "shocks" to the risk factors, based on empirical or simulated price processes.
Nevertheless, this methodology is applied by brokers to calculate the margining requirements for their counterparties, based on both a so-called "Rules-Based" approach (informed by the "Regulation T" framework) and a "Risk-Based" approach (informed by the "Portfolio Margining" regulations). The materialization of collateralisation obligations due to the mechanical application of these simple models has liquidity implications for the holder of the options positions. The likelihood and magnitude of future margining obligations and liquidity demands will, however, need to take into consideration the expected P&L of the options positions. In other words, an estimation of liquidity risk over a specified time horizon requires a reasonable estimation of the market risk of the portfolio over the same time horizon. A reasonable estimation of the market risk of a portfolio must incorporate the expected volatilities (and correlations) of the portfolio's assets. For an options portfolio, this implies estimating the volatilities (and correlations) of all the risk factors for all the constituent positions.
The virtue and necessity of volatility estimation is not limited to the valuation and risk management of options portfolios. The effective computation of the ex-ante Value-at-Risk of a portfolio of cash equities requires the construction of covariance matrices which incorporate the updated information content available from the statistical distribution of asset price returns or the implied distribution which can be extracted from traded option prices. The simple standard deviation of historic returns as employed in Markowitz's seminal paper on efficient portfolio formation is inadequate. One characteristic of VaR measures relying on this simple definition of volatility is the so-called "Ghost VaR", where sustained periods of low volatility receive excessive weight when the variance applies equal weighting to historic observations, leading to underestimation of risk during short episodes of elevated volatility. The "Ghost effect" refers to the VaR computation being haunted by a former low volatility regime and the failure to effectively update the volatility forecast with current price dynamics.
2. Volatility Estimation with the Generalised Autoregressive Conditional Heteroskedasticity (GARCH) model
Recall that simple volatility estimation by statistical means assumes equal weights for all returns measured over the period. We know that over a 1-day horizon, the mean return is small compared to the standard deviation. If we consider a simple m-period moving average, where $\sigma_n$ is the volatility of the return on day $n$, then with $\bar{\mu } \approx 0$ we have:
$$\sigma ^{2}_n=\frac{1}{m} \sum_{i=1}^{m}{\mu ^{2}_{n-i}}$$
where $m$ is the number of returns, $\mu$ is the return and $\sigma ^{2}$ is the variance. Volatility is the square root of the variance.
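As a minimal sketch of this equal-weighted estimator (assuming a pandas Series of daily log returns named returns, as constructed in the code later in this piece; the window length is illustrative):
import numpy as np
# Equal-weighted m-period variance of (assumed zero-mean) daily log returns
m = 21  # illustrative window length, roughly one trading month
ma_variance = returns.pow(2).rolling(window=m).mean()   # (1/m) * sum of the last m squared returns
ma_volatility = np.sqrt(ma_variance)                    # daily volatility
annualised_vol = ma_volatility * np.sqrt(252)           # annualised, assuming 252 trading days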
Some phenomena are systematically observed in asset return time series. A good volatility estimation model should be able to capture most of these empirical "stylised" facts.
Volatility Clustering: The volatility is more likely to be high at time n if it was also high at time n−1. That is, a shock at time n−1 increases not only the variance at time n−1 but also the variance at time n. In other words, the markets are more volatile in some periods, and they are more tranquil in others. The volatilities are clustered in time.
Fat Tails: Return time series generally present fat tails, also known as excess kurtosis, or leptokurtosis. That is, their kurtosis (the fourth central moment normalized by the square of the variance) is usually greater than three, the kurtosis of a Gaussian random variable. In fact, a popular statistical test for the hypothesis of Gaussianity of a distribution, the Jarque-Bera test, jointly tests whether the distribution is symmetric and whether its kurtosis equals three. If returns are fat-tailed, the probability of having extreme events (very high or very low returns) is higher than it would be in the normal (Gaussian) case.
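As an illustrative check of these stylized facts (a minimal sketch, assuming a pandas Series or numpy array of daily log returns named returns), the sample excess kurtosis and the Jarque-Bera statistic can be obtained with scipy:
from scipy import stats
excess_kurtosis = stats.kurtosis(returns)          # Fisher definition: 0 for a Gaussian
jb_stat, jb_pvalue = stats.jarque_bera(returns)    # joint test of zero skewness and kurtosis of three
print(excess_kurtosis, jb_stat, jb_pvalue)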
Any large return within the m-period window will elevate the variance or standard deviation of returns until it drops out of the sample. This is often referred to as the "Ghost Effect". To address this effect, we adopt weighting schemes.
One weighting scheme popularized by RiskMetrics is the Exponentially Weighted Moving Average (EWMA), where we weight the squared returns in an exponentially declining fashion. The rate of the exponential decline of the weights is controlled by the $\lambda$ parameter, often called the persistence parameter, which, following the RiskMetrics document, is often set at 0.94. The recursive formula for the EWMA volatility is the following:
$$\sigma _{n}^{2}=\lambda \sigma _{n-1}^{2}+(1-\lambda )\mu _{n-1}^{2}$$
where $\lambda$ is applied to the lagged variance of the previous day and $1 - \lambda$ is applied to the squared return of the previous day. The lower the persistence parameter $\lambda$, the greater the weight of recent volatility in this weighting scheme.
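A minimal sketch of this recursion (assuming a 1-D numpy array of daily log returns named rets, e.g. returns.values; the recursion is seeded with the sample variance):
import numpy as np
lam = 0.94                      # RiskMetrics persistence parameter
var = np.empty_like(rets)
var[0] = rets.var()             # seed the recursion with the sample variance
for n in range(1, len(rets)):
    var[n] = lam * var[n - 1] + (1 - lam) * rets[n - 1] ** 2
ewma_vol = np.sqrt(var)         # daily EWMA volatility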
Under the assumption that volatility is mean-reverting, the GARCH model adds a third variable, the long-run variance, denoted by $V_L$, and a third coefficient, the rate of mean reversion, denoted by $\gamma$. $\beta$ and $\alpha$ substitute for $\lambda$ and $1 - \lambda$ respectively. The three coefficients sum to one. Thus:
$$\sigma _{n}^{2}=\gamma V_{L}+\beta \sigma _{n-1}^{2}+\alpha\mu _{n-1}^{2}$$
For notational economy, the long run variance and the rate of mean reversion are often represented by the single input, $\omega$:
$$\sigma _{n}^{2}=\omega+\beta \sigma _{n-1}^{2}+\alpha\mu _{n-1}^{2}$$
Maximum Likelihood Estimation (MLE) is a statistical method for fitting a model to data. When using MLE, we first assume a distribution and then seek to determine the model parameters. To estimate the GARCH(1,1) parameters, we assume that the distribution of returns conditional on the variance is normal.
To derive $\omega$, $\alpha$ and $\beta$, we maximize: $$\sum_{i=1}^{n}{\log \left[ \frac{1}{\sqrt{2\pi \sigma _{i}^{2}}} e^{\frac{-(\mu _{i}-\bar{\mu })^{2}}{2\sigma _{i}^{2}}}\right]}$$
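A minimal sketch of this estimation with scipy (assuming a 1-D numpy array of approximately zero-mean daily log returns named rets; the arch library used in the code later in this piece performs the same optimization with more care):
import numpy as np
from scipy.optimize import minimize

def garch_neg_loglik(params, rets):
    omega, alpha, beta = params
    var = np.empty_like(rets)
    var[0] = rets.var()                                  # seed the recursion with the sample variance
    for n in range(1, len(rets)):
        var[n] = omega + alpha * rets[n - 1] ** 2 + beta * var[n - 1]
    # Gaussian log-likelihood of returns conditional on the fitted variance path
    loglik = -0.5 * np.sum(np.log(2 * np.pi * var) + rets ** 2 / var)
    return -loglik

# Simple box bounds only; no stationarity constraint (alpha + beta < 1) is imposed in this sketch
res = minimize(garch_neg_loglik, x0=[1e-6, 0.10, 0.85], args=(rets,),
               bounds=[(1e-12, None), (0.0, 1.0), (0.0, 1.0)], method='L-BFGS-B')
omega, alpha, beta = res.x
long_run_variance = omega / (1 - alpha - beta)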
3. Volatility Estimation with the GJR-GARCH model
The GJR model, proposed by Glosten, Jagannathan and Runkle, is a GARCH variant that includes leverage terms for modeling asymmetric volatility clustering. In the GJR formulation, large negative changes are more likely to be clustered than positive changes, in line with the stylized fact that negative shocks at time $t-1$ have a stronger impact on the variance at time $t$ than positive shocks. This asymmetry tends to be called the leverage effect because the increase in volatility was traditionally ascribed to the increased leverage induced by a negative shock to market value, that is: as market value falls, leverage, $debt/mktvalue$, increases and, therefore, risk also increases.
The GJR-GARCH model relies on two equations for the variance at time $t$, depending on whether the preceding return surprise, $\epsilon_{t} = R_{t} - \mu_{t}$, was negative or positive, i.e. whether $\epsilon_{t-1} < 0$ or $\epsilon_{t-1} > 0$.
In the case of a positive surprise, we take the usual GARCH equation:
$$\sigma _{n}^{2}=\omega+\beta \sigma _{n-1}^{2}+\alpha\epsilon_{n-1}^{2}$$
In the case of a negative surprise, the predicted variance should be higher than after a positive surprise, which requires a larger coefficient multiplying the squared prediction error, namely $(\alpha+\gamma)\epsilon_{n-1}^{2}$ rather than $\alpha\epsilon_{n-1}^{2}$:
$$\sigma _{n}^{2}=\omega+\beta \sigma _{n-1}^{2}+(\alpha+\gamma)\epsilon_{n-1}^{2}$$
We can unify these two formulae in a single equation thus:
$$\sigma _{n}^{2}=\omega+\beta \sigma _{n-1}^{2}+(\alpha+\gamma I_{n-1})\epsilon_{n-1}^{2}$$
Where $I_{n-1}$ is zero if $\epsilon_{n-1} > 0$ and one if $\epsilon_{n-1} < 0$.
We estimate all the parameters $(\mu, \omega, \alpha, \gamma, \beta)$ simultaneously, by maximizing the log-likelihood.
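A minimal sketch of the GJR variance recursion above (assuming fitted parameters omega, alpha, gamma, beta, for example from an MLE routine, and a 1-D numpy array of return surprises named eps):
import numpy as np

def gjr_variance_path(eps, omega, alpha, gamma, beta):
    var = np.empty_like(eps)
    var[0] = eps.var()                               # seed with the sample variance
    for n in range(1, len(eps)):
        indicator = 1.0 if eps[n - 1] < 0 else 0.0   # leverage term applies only after negative surprises
        var[n] = omega + beta * var[n - 1] + (alpha + gamma * indicator) * eps[n - 1] ** 2
    return var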
4. VaR Estimation with Implied Volatility
Black and Scholes proposed the following stochastic differential equation to model asset prices. Here the Wiener process $W_t$ is a standard Brownian motion. The values of $S_t$ are log-normally distributed and the returns $dS_t / S_t$ are normally distributed. The instantaneous change in the price of an asset is driven by a deterministic drift component, $(r-q)$, that is, the risk-free rate minus the dividend yield in a risk-neutral world, and a random component given by the normal random variable and a constant volatility $\sigma$.
$$dS_{t} = (r-q) S_{t}dt + \sigma S_{t} d W_{t}$$
According to the Black-Scholes derivation, we can further state:
$$\frac{\partial V}{\partial t}+ \frac 1{2}{\sigma^2 S^2} \frac{\partial^2 V}{\partial S^2}+ r S \frac{\partial V}{\partial S}= rV$$
Or equivalently:
$$\frac{\partial V}{\partial t}+ \frac 1{2}{\sigma^2 S^2} \frac{\partial^2 V}{\partial S^2}+ r S \frac{\partial V}{\partial S}- rV = 0$$
Where $V$ is the option value
We know that the value of a call option is:
$$C = SN(d_1) - Ke^{-rt}N(d_2)$$
and, the corresponding put option price is: $$P = Ke^{-rt}N(-d_2) - SN(-d_1)$$
Where:
$d_1= \frac{1}{\sigma \sqrt{t}}\left[\ln{\left(\frac{S}{K}\right)} +{\left(r + \frac{\sigma^2}{2}\right)}t\right]$
$d_2= d_1 - \sigma \sqrt{t}$
$S$ is the spot price of the underlying asset
$K$ is the strike price
$r$ is the annualized continuous compounded risk free rate
$σ$ is the volatility of returns of the underlying asset
$t$ is time to maturity (expressed in years)
$N(.)$ is the standard normal cumulative distribution
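A minimal sketch of these pricing formulas as Python helpers (hypothetical function names bs_call and bs_put, using scipy's normal CDF and the same notation as above):
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, t):
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return S * norm.cdf(d1) - K * np.exp(-r * t) * norm.cdf(d2)

def bs_put(S, K, r, sigma, t):
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return K * np.exp(-r * t) * norm.cdf(-d2) - S * norm.cdf(-d1)

# Example with illustrative numbers: a one-year at-the-money call with 20% volatility
print(bs_call(S=100, K=100, r=0.01, sigma=0.20, t=1.0))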
We can consider the call and put equations as functions of the volatility parameter $\sigma$. Finding the implied volatility thus requires solving the nonlinear problem $f(x)=0$ where $x=\sigma$, given a starting estimate, and where $C$ and $P$ are the call and put market prices.
For call options:
$$f(x) = SN(d_1(x)) - Ke^{-rt}N(d_2(x)) - C$$
For put options:
$$f(x) = Ke^{-rt}N(-d_2(x)) - SN(-d_1(x)) -P$$
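A minimal sketch of this root-finding step for a call, re-using the hypothetical bs_call helper sketched above and a bracketing solver from scipy (it assumes the market price is attainable for a volatility inside the bracket):
from scipy.optimize import brentq

def implied_vol_call(C_mkt, S, K, r, t, lo=1e-4, hi=5.0):
    f = lambda sigma: bs_call(S, K, r, sigma, t) - C_mkt
    return brentq(f, lo, hi)   # solves f(sigma) = 0 within the [lo, hi] volatility bracket

# Illustrative quote: should recover roughly 20% implied vol for the example call above
print(implied_vol_call(C_mkt=8.43, S=100, K=100, r=0.01, t=1.0))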
The underlying assumptions are that returns are normally distributed and that the risk-neutral probability density function of $S_T$ is lognormal, with the time-scaled implied volatility informing the shape parameter and the scale parameter informed by the expected log price at expiry $T$, that is, $\ln(S_0) + [r - \frac{\sigma^2}{2}]T$.
The risk-neutral pdf of $S_T$ at time to expiry $T$ is:
$$g(S_T) = \frac{1}{S_T \sigma \sqrt{2\pi T}}\, e^{ \frac{-\left(\ln(S_T) - \ln(S_0) - \left[r - \frac{\sigma^2}{2}\right] T \right)^2}{2 \sigma^2 T}} $$
Evidently, we are assuming a single unique and constant implied volatility to estimate the distribution of stock prices over the period. In reality, implied volatilities vary with both time and strike.
In the Black-Scholes framework, where the underlying follows a log-normal distribution, the risk neutral probability $P_K$
of being above a strike is $N(d_2)$, where $N(.)$ is the standard normal cumulative distribution function and $$\begin{equation*} d_{2} = \frac{\ln\left(\frac{F}{K}\right) - \frac{\sigma_{K}^{2}}{2}t}{\sigma_{K} \sqrt{t}} \end{equation*}$$
with $F$, being the forward price:
$$F = S_0 e^{rt}$$
Thus, the risk neutral probability of being below a strike is:
$$1 - N(d_2)$$
Which means we can obtain an implied Value-at-Risk from the risk-neutral density function at a desired confidence level.
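As a minimal sketch (illustrative inputs only), the implied VaR threshold can be read off the lognormal quantile directly, which is equivalent to inverting $1 - N(d_2) = \alpha$ for the strike:
import numpy as np
from scipy.stats import norm
# Illustrative (assumed) inputs: spot, risk-free rate, implied vol, horizon in years, tail probability
S0, r, sigma, t, alpha = 146.0, 0.001, 0.25, 0.5, 0.05
z = norm.ppf(alpha)                                              # 5% quantile of the standard normal
K_alpha = S0 * np.exp((r - 0.5 * sigma ** 2) * t + sigma * np.sqrt(t) * z)
implied_var = K_alpha / S0 - 1                                   # loss at the 5% tail
print(K_alpha, implied_var)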
5. Implied Volatility: Understanding the limitations
Whilst implied volatilities are certainly useful in gauging market expectations of risk, they should be used with caution as a predictive input to model stock returns or to generate forward-looking valuations of derivatives for risk management purposes. The assumptions of constant volatility and normally distributed stock returns (lognormally distributed prices) are violated by the empirical observations of price behaviour in both the cash and options markets, notably the fat tails and volatility clustering of realised returns, the leverage effect, and the dependence of implied volatility on both strike (the smile or smirk) and time to expiry (the term structure).
6. Risk metrics derived from Implied Volatility Surface
The asymmetry of the option-implied distribution of asset returns allows us to extract a number of useful metrics which provide an indication of the bullish or bearish sentiment of market participants.
7. Determining Implied Risk-Neutral Distributions from the volatility smile with the Breeden-Litzenberger approach
The price of a European call option with strike $K$, as with any asset price, is simply the discounted value of its expected payoff:
$$ c\ =\ e^{-rt}\int_0^{\infty{}}max\left(S_T\ -K\ ,\ 0\right)g\left(S_T\right)dS_T $$
where $r$ is the risk-free rate, $S_T$ is the asset price at time $T$, and $g$ is the risk-neutral probability density function of $S_T$.
Equivalently, removing the zero term: $$ c\ =\ e^{-rt}\int_K^{\infty{}}\left(S_T\ -\ K\right)g\left(S_T\right)dS_T $$
In other words, the fair value of the call is the discounted sum of all possible payoffs weighted by their probability of occurring.
If we differentiate the call price function with respect to the strike price, we obtain:
$$ \frac{\partial{}c}{\partial{}K}=\ -e^{-rt}\int_K^{\infty{}}g\left(S_T\right)dS_T $$
Differentiating the call price function a second time with respect to the strike price, gives:
$$ \frac{{\partial{}}^2c}{\partial{}K^2}=\ e^{-rt}\ g\left(K\right) $$
Which implies:
$$ g\left(K\right)=\ e^{rt}\ \frac{{\partial{}}^2c}{\partial{}K^2}\ \ $$
Thus the risk-neutral density can be obtained by compounding the second derivative of the call price function forward at the risk-free rate.
This result allows risk-neutral probability distributions to be estimated from volatility smiles. Suppose that $c_1$, $c_2$, $c_3$ are the prices of $T$-year European call options with strike prices of $K-\delta$, $K$ and $K+\delta$ respectively, where $\delta$ is a small value. An estimate of $g(K)$ is obtained by approximating the second derivative:
$$ g\left(K\right)\approx{}\ e^{rt}\ \frac{\left(c_1-c_2\right)+\left(c_3-c_2\right)}{{\delta{}}^2}\ \ =\ e^{rt}\ \frac{\left(c_1+c_3\right)-2c_2}{{\delta{}}^2}\ \ $$
A perhaps more intuitive way to understand the above is to view the combination of the four positions -- long $c_1$, $c_3$ with strikes of $K-\delta{}$, $K+\delta{}$ and short two $c_2$ with strikes of $K$ -- as a butterfly spread.
Considering the payoff triangle of the butterfly spread, we note that it has the dimensions of height, $\delta{}$, reflecting the maximum payoff and width $2\delta{}$. The area under the spike therefore is: $$ (0.5\ *\ 2\delta{})\ *\ \delta{}=\ {\delta{}}^2 $$
Dividing the cost of these trades, $\left(c_1+c_3\right)-2c_2$, by the area under the payoff triangle, ${\delta{}}^2$, and compounding forward at the risk-free rate to adjust for the time value of money, yields the risk-neutral probability density of the stock price as priced by the options market:
$$ g\left(K\right)\ =\ e^{rt}\ \frac{\left(c_1+c_3\right)-2c_2}{{\delta{}}^2}\ \ $$
Rearranging, we note that the price of the butterfly spread strategy is of course the discounted value ($e^{-rt}$) of its expected payoff: $$ e^{-rt}\ g\left(K\right){\delta{}}^2\ =\ \ \left(c_1+c_3\right)-2c_2\ \ $$
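A minimal numerical sketch of this approximation (hypothetical call quotes with strikes $\delta$ apart; the section on the smile-implied distribution later in this piece applies the same idea to a full option chain):
import numpy as np
# Hypothetical inputs: three call prices with strikes K - delta, K, K + delta
r, t, delta = 0.01, 1.0, 5.0
c1, c2, c3 = 12.40, 9.10, 6.45                      # illustrative quotes
butterfly_cost = (c1 + c3) - 2 * c2
g_K = np.exp(r * t) * butterfly_cost / delta ** 2   # risk-neutral density estimate at K
print(g_K)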
8. Generating simulated values for the volatility level (and stock price) with the Heston Process
Given the restrictive nature of the Black-Scholes framework, a number of extensions to the model have been proposed to reflect the empirical evidence that asset returns are non-normal and that volatility is time-varying and dependent on stock price dynamics (notably in the form of the leverage effect), as well as to reconcile the fact that the volatilities implied by traded option prices exhibit a term structure (dependence on time to expiry) and a volatility smirk (dependence on moneyness).
Practitioners came to focus on stochastic volatility models, in which the asset price and its volatility are both random processes that change over time. The most popular stochastic volatility model was introduced by Heston. It factors in a possible correlation between a stock's price and its volatility, it conveys volatility as reverting to the mean, it gives a closed-form solution for option pricing and it does not require that the stock price follow a lognormal probability distribution.
In his influential paper, Heston proposed that the risk-neutral dynamics of the asset price are governed by the following system of stochastic differential equations (SDEs):
$$dS_{t} = (r-q) S_{t}dt + \sqrt{V_{t}}\, S_{t}\, dW_{t}$$
$$dV_{t} = \kappa (\theta -V_{t}) dt + \xi \sqrt{V_{t}} dZ_{t}$$
$$dW_{t}\, dZ_{t} = \rho\, dt$$
Where $S_{t}$ is the price of a dividend-paying stock at time $t$ and $V_{t}$ is its instantaneous variance. The parameter $r$ is the risk-free interest rate, and $q$ is the dividend yield; both are assumed to be constant over time. $\kappa$ is the rate of mean reversion to the mean reversion level $\theta$, representing the long-run variance, and $\xi$ is the so-called volatility of volatility. $W_{t}$ and $Z_{t}$ are two correlated Brownian motions with $dW_{t}\, dZ_{t} = \rho\, dt$, where $\rho$ is the correlation coefficient.
This mean-reverting variance process is of course the square-root diffusion process proposed by Cox, Ingersoll and Ross in their 1985 paper to model the short-term interest rate.
In the Heston model, shocks to the asset price and the volatility process may be negatively correlated, as the empirical evidence suggests, meaning that a down move in the asset is accompanied by an up move in volatility and vice versa. This allows us to account for the aforementioned stylized fact called the leverage effect, which in essence states that volatility goes up in times of stress (declining markets) and goes down in a bull market (rising markets). Within the context of model parametrization, the more negatively correlated the stock and vol processes, the more negatively skewed the implied distribution of stock prices. If we were to reverse-engineer the Black-Scholes implied vols from the option prices generated by the Heston process, we would expect a more pronounced risk reversal spread, that is, a greater difference between the implied vols at lower stock prices and those at higher prices. This makes intuitive sense given that we are assuming a stronger leverage effect by inputting a more negative correlation coefficient. A greater vol-of-vol parameter results in higher kurtosis in the implied distribution of stock prices. Again, reverse-engineering the Black-Scholes implied vols from the option prices generated by the Heston process, one would expect a more pronounced volatility smile. A more pronounced smile increases the implied volatility curvature, resulting, ceteris paribus, in higher butterfly spreads. A higher mean reversion factor implies a faster rate of reversion to the long-term vol, which in turn implies a flatter volatility term structure.
Calibration of the Heston Model: Heston allows that "One could estimate $\theta$ and other parameters by using values implied by option prices. Alternatively, one could estimate $\theta$ and $\kappa$ from the true spot-price process." It is computationally convenient to estimate the parameters with the QuantLib library. Here, calibration of the model is the process of seeking the model parameters for which the model results best match the market option data. This data usually consists of Black-Scholes implied volatilities derived from market quoted prices. Calibrating the Heston model is equivalent to solving a non-linear constrained optimization problem: minimize the error between the market quotes of options and the option prices estimated by the Heston model. The objective function could be the sum of squared differences.
Formally, using the risk-neutral measure, the Heston model has five unknown parameters:
$$\Omega = \lbrace V_{0} ,\theta , \kappa , \xi , \rho \rbrace $$
By calibrating these parameter values, we seek to obtain an evolution for the underlying asset that is consistent with the current prices of plain vanilla options. In order to find the optimal parameter set, $\Omega$, we need to (i) define a measure to quantify the distance between model and market prices; and (ii) run an optimization scheme to determine the parameter values that minimize that distance.
$$\min \sum_{i=1}^{N}{\frac{1}{N}}\lbrack C_{i}^{\Omega }(K_{i},T_{i})-C_{i}^{Mkt}(K_{i},T_{i})\rbrack ^{2}$$
Where $C_{i}^{\Omega }(K_{i},T_{i})$ are the option values using the parameter set $\Omega$, for all strikes and expiries, and $C_{i}^{Mkt}(K_{i},T_{i})$ are the market observed option prices, for all strikes and expiries.
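A minimal sketch of such a calibration using QuantLib's Heston calibration helpers (all market data below, the dates, rates, spot and implied-vol quotes, are illustrative placeholders, not the inputs used in the code that follows):
import QuantLib as ql

# Illustrative valuation setup (assumed values)
calc_date = ql.Date(23, 7, 2021)
ql.Settings.instance().evaluationDate = calc_date
calendar, day_count = ql.UnitedStates(), ql.Actual365Fixed()
spot, r, q = 146.0, 0.001, 0.005
rf_ts = ql.YieldTermStructureHandle(ql.FlatForward(calc_date, r, day_count))
div_ts = ql.YieldTermStructureHandle(ql.FlatForward(calc_date, q, day_count))

# Initial guesses for (v0, kappa, theta, sigma, rho)
process = ql.HestonProcess(rf_ts, div_ts, ql.QuoteHandle(ql.SimpleQuote(spot)),
                           0.04, 2.0, 0.04, 0.5, -0.5)
model = ql.HestonModel(process)
engine = ql.AnalyticHestonEngine(model)

# Illustrative quotes: (maturity in days, strike, Black-Scholes implied vol)
quotes = [(182, 130.0, 0.32), (182, 150.0, 0.27), (365, 130.0, 0.30), (365, 160.0, 0.26)]
helpers = []
for days, strike, vol in quotes:
    h = ql.HestonModelHelper(ql.Period(days, ql.Days), calendar, spot, strike,
                             ql.QuoteHandle(ql.SimpleQuote(vol)), rf_ts, div_ts)
    h.setPricingEngine(engine)
    helpers.append(h)

# Least-squares fit of the five parameters to the quoted vols
model.calibrate(helpers, ql.LevenbergMarquardt(),
                ql.EndCriteria(500, 50, 1e-8, 1e-8, 1e-8))
theta, kappa, sigma, rho, v0 = list(model.params())
print(theta, kappa, sigma, rho, v0)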
1) Import Libraries
# Standard Python libraries
import pandas as pd
import numpy as np
# For plotting
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
plt.style.use('seaborn')
from matplotlib.pyplot import rcParams
rcParams['figure.figsize'] = 16, 8
# Import arch library
from arch import arch_model
import warnings
warnings.filterwarnings('ignore')
2) Retrieve Price Data
# Load locally stored data
df = pd.read_csv('vol.csv', parse_dates=True, index_col=0, dayfirst=True)
# Check first 5 values
df.head()
3) Visualize Index Price and VIX Price
figure, axis_1 = plt.subplots()
axis_1.plot(df['S&P 500'], color='green', label='S&P 500')
axis_1.legend(['S&P 500'],loc='best')
plt.title('Prices')
plt.grid(True)
axis_2 = axis_1.twinx()
axis_2.plot(df['VIX'], color='red', label='VIX')
axis_2.legend(['VIX'],loc='best');
4) Calculate log returns of index
returns = np.log(df['S&P 500']).diff().fillna(0)
5) Visualize daily log returns
# Visualize S&P Index daily returns
plt.plot(returns, color='orange')
plt.title('S&P 500 Index Returns')
plt.grid(True)
4) Define and fit models
#Define the model (Normal Distribution)
am_n = arch_model(returns, mean = 'zero', vol='Garch', p=1, o=0, q=1, dist='Normal')
#Fit the model
garch1_1 = am_n.fit()
#Define the model (t-Distribution)
am_t = arch_model(returns, mean = 'zero', vol='Garch', p=1, o=0, q=1, dist='StudentsT')
tgarch1_1 = am_t.fit()
5) Obtain Model Output
print(garch1_1)
print(tgarch1_1)
6) Visualize Historic Annualized Conditional Vol
# Plot annualised vol
fig1 = garch1_1.plot(annualize='D')
fig2 = tgarch1_1.plot(annualize='D')
garch1_1.conditional_volatility*np.sqrt(252)
tgarch1_1.conditional_volatility*np.sqrt(252)
7) Visualize Historic Annualized Conditional Vol vs long run vol
# Model params
garch1_1.params
# long run variance from model forecast
lrv = garch1_1.params[0]/(1-garch1_1.params[1]-garch1_1.params[2])
# long run variance
np.sqrt(lrv*252)*100
plt.plot(garch1_1.conditional_volatility*np.sqrt(252), color='blue')
plt.axhline(y=np.sqrt(lrv*252), color='red');
7) Visualize Historic Annualized Conditional Vol vs VIX
# Visualise GARCH volatility and VIX
plt.title('Annualized Volatility')
plt.plot(returns.index, garch1_1.conditional_volatility*np.sqrt(252)*100, color='blue', label='GARCH')
plt.plot(returns.index, df['VIX'], color='orange', label = 'VIX')
plt.legend(loc=2)
plt.grid(True)
8) Forecast Volatility
# Forecast for next 60 days
model_forecast = garch1_1.forecast(horizon=60)
# Subsume forecast values into a dataframe
forecast_df = pd.DataFrame(np.sqrt(model_forecast.variance.dropna().T *252)*100)
forecast_df.columns = ['Cond_Vol']
forecast_df.head()
# Plot volatility forecast over a 60-day horizon
plt.plot(forecast_df, color='blue')
plt.axhline(y=np.sqrt(lrv*252)*100, color='red',)
plt.xlim(0,60)
plt.xticks(rotation=90)
plt.xlabel('Horizon (in days)')
plt.ylabel('Volatility (%)')
plt.title('Volatility Forecast : 60-days Ahead');
plt.grid(True)
B) Comparing S&P's GJR GARCH volatility and VIX
#Define the model (Normal Distribution)
am_lev = arch_model(returns, mean = 'zero', vol='Garch', p=1, o=1, q=1, dist='Normal')
#Fit the model
GJRgarch1_1 = am_lev.fit()
print(GJRgarch1_1)
# Plot annualised vol
fig2 = GJRgarch1_1.plot(annualize='D')
GJRgarch1_1.conditional_volatility*np.sqrt(252)
# Model params
GJRgarch1_1.params
# long run variance from model forecast
lrv_GJR = GJRgarch1_1.params[0]/(1-GJRgarch1_1.params[1]- (GJRgarch1_1.params[2]/2) - GJRgarch1_1.params[3])
# long run variance
np.sqrt(lrv_GJR*252)*100
plt.plot(GJRgarch1_1.conditional_volatility*np.sqrt(252), color='blue')
plt.axhline(y=np.sqrt(lrv_GJR*252), color='red')
# Visualise GARCH volatility and VIX
plt.title('Annualized Volatility')
plt.plot(returns.index, GJRgarch1_1.conditional_volatility*np.sqrt(252)*100, color='blue', label='GJR - GARCH')
plt.plot(returns.index, df['VIX'], color='orange', label = 'VIX')
plt.legend(loc=2)
plt.grid(True)
# Forecast for next 60 days
model_forecastGJR = GJRgarch1_1.forecast(horizon=60)
# Subsume forecast values into a dataframe
GJRforecast_df = pd.DataFrame(np.sqrt(model_forecastGJR.variance.dropna().T *252)*100)
GJRforecast_df.columns = ['Cond_Vol']
GJRforecast_df.head()
C) Comparing S&P's GJR-GARCH and GARCH volatilities and VIX
# Plot volatility forecast over a 60-day horizon
plt.plot(GJRforecast_df, color='blue',label='GJR - GARCH',)
plt.plot(forecast_df, color='green',label='GARCH')
plt.axhline(y=np.sqrt(lrv_GJR*252)*100, color='blue',label='Long Run Vol:GJR - GARCH',
linestyle='dashed',linewidth=4)
plt.axhline(y=np.sqrt(lrv*252)*100, color='green',label='Long Run Vol:GARCH',
linestyle='dashed',linewidth=4)
plt.xlim(0,60)
plt.xticks(rotation=90)
plt.xlabel('Horizon (in days)')
plt.ylabel('Volatility (%)')
plt.title('Volatility Forecast : 60-days Ahead');
plt.legend(loc=4)
plt.grid(True)
4. GARCH model performance during Great Financial Crisis
import yfinance as yf
SECURITIES = ['^GSPC', '^VIX']
START_DATE = '2005-07-18'
END_DATE = '2010-08-13'
raw_df = yf.download(SECURITIES, start=START_DATE,
                     end=END_DATE, auto_adjust=False)
print(f'Downloaded {raw_df.shape[0]} rows of data.')
df = raw_df['Adj Close']
df = df.dropna()
df.columns = ['S&P 500','VIX']
df.head()
figure, axis_1 = plt.subplots()
axis_1.plot(df['S&P 500'], color='green', label='S&P 500')
axis_1.legend(['S&P 500'],loc='best')
plt.title('Prices')
plt.grid(True)
axis_2 = axis_1.twinx()
axis_2.plot(df['VIX'], color='red', label='VIX')
plt.legend(loc=2)
4) Calculate log returns of index
returns = np.log(df['S&P 500']).diff().fillna(0)
5) Visualize daily returns
# Visualize S&P Index daily returns
plt.plot(returns, color='orange')
plt.title('S&P 500 Index Returns')
plt.grid(True)
4) Define and fit models
#Define the model (Normal Distribution)
am_n = arch_model(returns, mean = 'zero', vol='Garch', p=1, o=0, q=1, dist='Normal')
#Fit the model
garch1_1 = am_n.fit()
#Define the model (t-Distribution)
am_t = arch_model(returns, mean = 'zero', vol='Garch', p=1, o=0, q=1, dist='StudentsT')
tgarch1_1 = am_t.fit()
5) Obtain Model Output
print(garch1_1)
print(tgarch1_1)
6) Visualize Historic Annualized Conditional Vol
# Plot annualised vol
fig1 = garch1_1.plot(annualize='D')
fig2 = tgarch1_1.plot(annualize='D')
garch1_1.conditional_volatility*np.sqrt(252)
tgarch1_1.conditional_volatility*np.sqrt(252)
7) Visualize Historic Annualized Conditional Vol vs long run vol
# Model params
garch1_1.params
# long run variance from model forecast
lrv = garch1_1.params[0]/(1-garch1_1.params[1]-garch1_1.params[2])
daily_lrvol = np.sqrt(lrv)*100
# long run variance
ann_lrvol = np.sqrt(lrv*252)*100
lrv, daily_lrvol, ann_lrvol
plt.plot(garch1_1.conditional_volatility*np.sqrt(252), color='blue')
plt.axhline(y=np.sqrt(lrv*252), color='red')
7) Visualize Historic Annualized Conditional Vol vs VIX
# Visualise GARCH volatility and VIX
plt.title('Annualized Volatility')
plt.plot(returns.index, garch1_1.conditional_volatility*np.sqrt(252)*100, color='blue', label='GARCH')
plt.plot(returns.index, df['VIX'], color='orange', label = 'VIX')
plt.legend(loc=2)
plt.grid(True)
8) Forecast Volatility
# Forecast for next 60 days
model_forecast = garch1_1.forecast(horizon=60)
# Subsume forecast values into a dataframe
forecast_df = pd.DataFrame(np.sqrt(model_forecast.variance.dropna().T *252)*100)
forecast_df.columns = ['Cond_Vol']
forecast_df.head()
# Plot volatility forecast over a 60-day horizon
plt.plot(forecast_df, color='blue')
plt.axhline(y=np.sqrt(lrv*252)*100, color='red',)
plt.xlim(0,60)
plt.xticks(rotation=90)
plt.xlabel('Horizon (in days)')
plt.ylabel('Volatility (%)')
plt.title('Volatility Forecast : 60-days Ahead');
plt.grid(True)
D) GJR-GARCH model performance during Great Financial Crisis
#Define the model (Normal Distribution)
am_lev = arch_model(returns, mean = 'zero', vol='Garch', p=1, o=1, q=1, dist='Normal')
#Fit the model
GJRgarch1_1 = am_lev.fit()
print(GJRgarch1_1)
# Plot annualised vol
fig2 = GJRgarch1_1.plot(annualize='D')
GJRgarch1_1.conditional_volatility*np.sqrt(252)
# Model params
GJRgarch1_1.params
# long run variance from model forecast
lrv_GJR = GJRgarch1_1.params[0]/(1-GJRgarch1_1.params[1]- (GJRgarch1_1.params[2]/2) - GJRgarch1_1.params[3])
# long run variance
np.sqrt(lrv_GJR*252)*100
plt.plot(GJRgarch1_1.conditional_volatility*np.sqrt(252), color='blue')
plt.axhline(y=np.sqrt(lrv_GJR*252), color='red')
# Visualise GARCH volatility and VIX
plt.title('Annualized Volatility')
plt.plot(returns.index, GJRgarch1_1.conditional_volatility*np.sqrt(252)*100, color='blue', label='GJR - GARCH')
plt.plot(returns.index, df['VIX'], color='orange', label = 'VIX')
plt.legend(loc=2)
plt.grid(True)
# Forecast for next 60 days
model_forecastGJR = GJRgarch1_1.forecast(horizon=60)
# Subsume forecast values into a dataframe
GJRforecast_df = pd.DataFrame(np.sqrt(model_forecastGJR.variance.dropna().T *252)*100)
GJRforecast_df.columns = ['Cond_Vol']
GJRforecast_df.head()
# Plot volatility forecast over a 60-day horizon
plt.plot(GJRforecast_df, color='blue',label='GJR - GARCH',)
plt.plot(forecast_df, color='green',label='GARCH')
plt.axhline(y=np.sqrt(lrv_GJR*252)*100, color='blue',label='Long Run Vol:GJR - GARCH',
linestyle='dashed',linewidth=4)
plt.axhline(y=np.sqrt(lrv*252)*100, color='green',label='Long Run Vol:GARCH',
linestyle='dashed',linewidth=4)
plt.xlim(0,60)
plt.xticks(rotation=90)
plt.xlabel('Horizon (in days)')
plt.ylabel('Volatility (%)')
plt.title('Volatility Forecast : 60-days Ahead');
plt.legend(loc=4)
plt.grid(True)
E) VaR Estimation with Implied Volatility
1) Import libraries and specify stock
import yfinance as yf
import QuantLib as ql
import pandas as pd
import math
import numpy as np
aapl = yf.Ticker("aapl")
2) Retrieve price and dividend yield
# q=0.02
#spot= 146.39
q = aapl.info['dividendYield']
spot = aapl.info['ask']
print("Spot:",spot)
print("Div. Yield::",q)
3) Retrieve interest rate term structure and select appropriate risk free rate
import pandas_datareader as web
import datetime
start = datetime.datetime(2021, 6, 30)
end = datetime.datetime(2021, 7, 23)
int_rates = web.DataReader(["DGS1MO", "DGS3MO","DGS6MO", "DGS1",
"DGS2", "DGS3","DGS5","DGS7", "DGS10","DGS20","DGS30"], "fred", start, end)
int_rates = int_rates.dropna()
int_rates
# 1 year risk free rate
r = int_rates.iloc[-1,3]/100
r
4) Within the QuantLib library, define some of the basic data conventions such as the day_count, calendar, valuation date. Construct some of the basic dependencies such as the yield and dividend term structures.
day_count = ql.Actual365Fixed()
calendar = ql.UnitedStates()
calculation_date = ql.Date(end.day, end.month, end.year)
ql.Settings.instance().evaluationDate = calculation_date
spot
dividend_yield = ql.QuoteHandle(ql.SimpleQuote(0.0))
risk_free_rate = r
dividend_rate = q
flat_ts = ql.YieldTermStructureHandle(
ql.FlatForward(calculation_date, risk_free_rate, day_count))
dividend_ts = ql.YieldTermStructureHandle(
ql.FlatForward(calculation_date, dividend_rate, day_count))
5) Retrieve expiration dates for option chain (as string)
expiration_dates = aapl.options
expiration_dates =list(expiration_dates)
# Except for nearest expiries
expiration_dates = expiration_dates[9:]
6) Parse dates with various methods for future use
import datetime
from dateutil.parser import parse
date_time_str = expiration_dates
# 1 string parsed as datetime.datetime (necessary)
date_time_obj = [datetime.datetime.strptime(x, "%Y-%m-%d") for x in date_time_str]
# 2 string parsed as datetime.date (necessary)
date_obj = [datetime.datetime.strptime(x, "%Y-%m-%d").date() for x in date_time_str]
# 3. string parsed with ql.Date (necessary)
quantDate_obj = [ql.Date(x, '%Y-%m-%d') for x in date_time_str]
date_time_obj, date_obj, quantDate_obj
5) Retrieve call and put option chains
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
aapl_call = aapl.option_chain().calls
aapl_put = aapl.option_chain().puts
aapl_call
6) Filter option chain
# Set expiry list - filter by dates if desired
expiry_list = expiration_dates
# Initialise empty list to collect option vols and values
data_callvols = []
data_callvals = []
# Set expiry list - filter by dates if desired
for expiry in expiry_list:
sub_set = aapl.option_chain(expiry).calls
sub_set = sub_set[(sub_set.strike>60) & (sub_set.strike<300)]
sub_set.set_index('strike', inplace=True)
data_callvols.append(sub_set['impliedVolatility'])
data_callvals.append(sub_set['ask'])
7) Obtain quoted volatility surface
# Collect subset of implied vols in dataframe (filtered by expiration and strike)
call_vols = pd.DataFrame(data_callvols)
# Create datetime index for expiration dates to allow time interpolation
call_vols.index = date_time_obj
call_vols
7) Obtain quoted volatility surface for most traded options
max_K = call_vols.columns.max()
min_K = call_vols.columns.min()
max_K , min_K
sel_K = np.arange(65,235,5)
sel_K
call_vols = call_vols[sel_K]
call_vols
# Interpolate filtered vols
# First: Linear interpolation by strike (horizontally)
call_vols = call_vols.interpolate(method='linear',axis=1)
call_vols
#Second: Time interpolation by expiration (vertically)
call_vols = call_vols.interpolate(method='time',axis=0)
call_vols
#call_vols = call_vols.dropna("columns")
#call_vols
8) Compute continuous volatility surface with QuantLib
# Quantlib datetime format
call_vols.index = [quantDate_obj]
call_vols
# Collect implied vols in list of lists
row_data = call_vols.values.tolist()
# Collect strikes in list and convert to float
strikes = call_vols.columns.values.tolist()
strikes = [float(i) for i in strikes]
len(strikes)
# Error Checking:
expiries_after_dropna = call_vols.index.values.tolist()
len(expiries_after_dropna) == len(quantDate_obj)
# Rearrange quoted vol matrix for input to black_var_surface
implied_vols = ql.Matrix(len(strikes), len(quantDate_obj))
for i in range(implied_vols.rows()):
for j in range(implied_vols.columns()):
implied_vols[i][j] = row_data[j][i]
# Create continously interpolated black variance surface
black_var_surface = ql.BlackVarianceSurface(calculation_date, calendar, quantDate_obj, strikes, implied_vols, day_count)
9) Compute ATM Volatility
# Obtain ATM implied volatility from surface with black_var_surface taking 2 arguments (time to expiry and strike)
ATMstrike = spot
ATMexpiry = 0.5# years
ATMvol = black_var_surface.blackVol(ATMexpiry, ATMstrike)
ATMvol
10) Obtain log normal probability density function and cumulative density function of stock price
def lognormal(x, sigma, r, t, S0):
exp_num = -(np.log(x) - np.log(S0) - (r - sigma**2 / 2) * t)**2
exp_dem = 2 * sigma**2 * t
exp_term = np.exp(exp_num / exp_dem)
prefactor = 1 / (x * sigma * np.sqrt(2 * np.pi * t))
return np.multiply(prefactor, exp_term)
from scipy.stats import lognorm, norm
x = np.linspace(0.5*spot, 1.5*spot, 1000)
sigma = ATMvol
r = r
t = ATMexpiry
S0 = spot
std = sigma * np.sqrt(t)
mean = np.log(S0) + (r - sigma**2 / 2) * t
# pdf
f = lognorm.pdf(x, std, loc = 0, scale = np.exp(mean))
# cdf
F = lognorm.cdf(x, std, loc = 0, scale = np.exp(mean))
df_density = pd.DataFrame(f, columns=['PDF'],index=x)
df_density['CDF'] = F
df_density.head()
11) Compute VaR at desired confidence level(s)
from scipy.stats import lognorm,norm
probs = [0.01, 0.05,0.10]
F_inv = lognorm(std, scale=np.exp(mean)).ppf(probs)
F_inv_5pc_val= F_inv[1]
F_inv_pc_losses = F_inv/spot-1
F_inv_5pc_loss = F_inv_pc_losses[1]
F_inv_5pc_loss
12) Visualize density functions and VaR
figure, axis_1 = plt.subplots()
axis_1.plot(x, f ,color='green', label='pdf')
plt.title('Lognormal Distribution of Stock Prices')
plt.grid(False);
axis_2 = axis_1.twinx()
axis_2.plot(x, F ,color='blue', label='cdf')
axis_1.set_xlabel("Stock Price", color='black')
axis_1.set_ylabel("Probability Density Function", color='green')
axis_2.set_ylabel("Cumulative Density Function", color='blue');
plt.axvline(x=F_inv[1], color='red',label='5% VaR',linestyle='dashed',linewidth=4);
print(f"Based on the 6 month ATM implied volatility of {ATMvol:.4f} :")
print(f"- There is a probability of 5 percent that the stock price will fall to {F_inv_5pc_val:.2f} or below in the next 6 months")
print(f"- This equates to a loss equal to or greater than {F_inv_5pc_loss:.2f}")
13) Compute VaR with closed form formula
import math as math
spot
pc_impact = np.linspace(0.6,1.4,1000)
# Probability of any security level may be input
K_5pc = F_inv_5pc_val
Fwd = spot*math.exp(r*t)
d_2 = (np.log(Fwd/K_5pc) - (((ATMvol**2)/2)*t)) / (ATMvol*math.sqrt(t))
N_d_2 = norm.cdf(d_2)
probK = 1- N_d_2
print(f"There is a risk-neutral probability of {probK:.6f} that the stock price will fall from the current value of {spot:.2f}")
print (f"to a value equal to or less than {K_5pc:.2f} over the next 6 months")
K_range = np.linspace(strikes[0], strikes[-1], 1000)
df_density = pd.DataFrame(index=K_range, columns=['Prob ITM', 'Prob OTM'])
# Vectorised d2 across the strike grid (no loop needed)
d_2_range = (np.log(Fwd/K_range) - (((ATMvol**2)/2)*t)) / (ATMvol*math.sqrt(t))
N_d_2_range = norm.cdf(d_2_range)
cum_prob_K_range = 1 - N_d_2_range
df_density['Prob ITM'] = N_d_2_range
df_density['Prob OTM'] = cum_prob_K_range
df_density.head()
fig, ax_1 = plt.subplots()
ax_1.bar(K_range, df_density['Prob OTM'] ,color='red', label='Probability Loss')
ax_1.set_ylim(0, 0.52)
plt.title('Probability of Max Loss')
plt.grid(True);
ax_1.set_xlabel("Stock Price", color='black')
ax_1.set_ylabel("Cumulative Probability of Price Decline", color='red')
plt.axvline(spot, color='blue',label='Current Spot',linestyle='dashed',linewidth=4);
ax_1.set_ylim(0, 0.52)
ax_1.set_xlim(K_range[0], spot)
F) VOLATILITY TERM STRUCTURE: Compare interpolated values for multiple strikes - ITM, ATM, OTM - as a function of time
def vol_ts(vol_surface, plot_years, plot_strikes):
plot_strikes.append(plot_strikes[0])
np.array(plot_strikes)
X, Y = np.meshgrid(plot_strikes, plot_years)
Z = np.array([vol_surface.blackVol(float(y), float(x))
for xr, yr in zip(X, Y)
for x, y in zip(xr,yr) ]
).reshape(len(X), len(X[0]))
df = pd.DataFrame(Z,index=plot_years,columns=plot_strikes)
df = df.iloc[:,0:-1]
return df
# Strikes
list_K = [max(strikes),spot, min(strikes)]
# Compare interpolated values for several selected strikes
max_maturity = (quantDate_obj[-1] - calculation_date)/365
min_maturity = 0.25
expiration_grid = np.linspace(min_maturity, max_maturity,1000)
df_vol_ts = vol_ts(vol_surface = black_var_surface,
plot_years=expiration_grid,
plot_strikes=list_K)
df_vol_ts.head()
def vol_ts_plt(vol_surface, plot_years, plot_strikes):
plot_strikes.append(plot_strikes[0])
np.array(plot_strikes)
X, Y = np.meshgrid(plot_strikes, plot_years)
Z = np.array([vol_surface.blackVol(float(y), float(x))
for xr, yr in zip(X, Y)
for x, y in zip(xr,yr) ]
).reshape(len(X), len(X[0]))
df = pd.DataFrame(Z,index=plot_years,columns=plot_strikes)
df = df.iloc[:,0:-1]
ax = plt.axes()
ax.plot(df)
plt.legend([round(num, 2) for num in plot_strikes])
ax.set(xlabel='Term to Expiry', ylabel='Implied Volatility', title='Volatility Term Structure for specified strikes');
# Strikes
list_K = [spot*1.15,spot, spot*0.85]
# Compare interpolated values for several selected strikes
max_maturity = (quantDate_obj[-1] - calculation_date)/365
min_maturity = 0.25
expiration_grid = np.linspace(min_maturity, max_maturity,1000)
df_vol_ts_plt = vol_ts_plt(vol_surface = black_var_surface,
plot_years=expiration_grid,
plot_strikes=list_K)
df_vol_ts_plt
Note that implied volatility is generally upward sloping beyond the half-year point. Differences between the implied volatilities of options with varying strike levels are most pronounced for shorter-dated options, due to the potential for extreme moves (jumps) and the relatively shorter time available for recovery. ATM and OTM options have a relatively greater possibility of suffering total loss due to the risk of jumps, whereas ITM options are less likely to suffer total loss due to short-term extreme moves. The difference narrows as time to expiry increases, illustrating the lower exposure to jump risk. The volatility skew is also observed: implied volatility is generally higher across all maturities as we move deeper into the money.
G) VOLATILITY SMIRK : Compare implied volatilities for multiple time to expiries as function of strike
def vol_smile(vol_surface, plot_years, plot_strikes):
plot_years.append(plot_years[0])
np.array(plot_years)
X, Y = np.meshgrid(plot_strikes, plot_years)
Z = np.array([vol_surface.blackVol(float(y), float(x))
for xr, yr in zip(X, Y)
for x, y in zip(xr,yr) ]
).reshape(len(X), len(X[0]))
df = pd.DataFrame(Z,index=plot_years,columns=plot_strikes)
df = df.iloc[0:-1,:]
return df
list_tte = [0.5,1, 1.5]
strikes_grid = np.linspace(strikes[0], strikes[-1],100)
df_vol_smile = vol_smile(vol_surface = black_var_surface,
plot_years=list_tte,
plot_strikes=strikes_grid)
df_vol_smile
def vol_smile_plot(vol_surface, plot_years, plot_strikes):
plot_years.append(plot_years[0])
np.array(plot_years)
X, Y = np.meshgrid(plot_strikes, plot_years)
Z = np.array([vol_surface.blackVol(float(y), float(x))
for xr, yr in zip(X, Y)
for x, y in zip(xr,yr) ]
).reshape(len(X), len(X[0]))
df = pd.DataFrame(Z,index=plot_years,columns=plot_strikes)
df = df.iloc[0:-1,:]
ax = plt.axes()
ax.plot(df.T)
plt.legend(df.index);
ax.set(xlabel='Strike', ylabel='Implied Volatility', title='Volatility Smile for specified terms to expiry');
list_tte = [0.5,1, 1.5]
vol_smile_plot(vol_surface = black_var_surface,
plot_years=list_tte,
plot_strikes=strikes_grid)
Note that the volatility skew is relatively more pronounced for shorter expiries than longer expiries
H) The volatility surface: Implied Volatilities as a function of strike and time to expiry
def matrix_vol_surface_strike(vol_surface, plot_years, plot_strikes):
X, Y = np.meshgrid(plot_strikes, plot_years)
Z = np.array([vol_surface.blackVol(float(y), float(x))
for xr, yr in zip(X, Y)
for x, y in zip(xr,yr) ]
).reshape(len(X), len(X[0]))
df = pd.DataFrame(Z,index=plot_years,columns=plot_strikes)
return df
# Surface as dataframe
df_vol_surf_strike = matrix_vol_surface_strike(vol_surface = black_var_surface,
plot_years=np.arange(0.25, 2, 0.1),
plot_strikes=strikes_grid)
df_vol_surf_strike
from matplotlib import rcParams
rcParams['axes.labelpad'] = 20
# Utility function to plot vol surfaces (passing in ql.BlackVarianceSurface objects).
# K, t serve to calculate and annotate specific imp vol on surface
def plot_vol_surface_strike(K,t,vol_surface, plot_years, plot_strikes):
fig = plt.figure(figsize=(14,10))
fig.suptitle('Implied Volatility Surface - As Function of Time to Expiry and Strike', fontsize=20)
ax = fig.gca(projection='3d')
X, Y = np.meshgrid(plot_strikes, plot_years)
Z = np.array([vol_surface.blackVol(float(y), float(x))
for xr, yr in zip(X, Y)
for x, y in zip(xr,yr) ]
).reshape(len(X), len(X[0]))
surf = ax.plot_surface(X,Y,Z, rstride=2, cstride=2,
cmap=plt.cm.coolwarm, linewidth=0.5,
antialiased=True,alpha = 0.9)
sig = vol_surface.blackVol(t, K)
ax.text(K, t, sig, f'← K={K}, T={t}, σ={sig:.3f}', color='black', size=12)
ax.set_xlabel('Strike')
ax.set_ylabel('Expiration (Years)')
ax.set_zlabel('Implied Volatility')
ax.set(facecolor='white')
ax.dist = 10
fig.colorbar(surf, shrink=1, aspect=10)
plot_vol_surface_strike(K=spot, t=1.0,
vol_surface=black_var_surface,
plot_years=np.arange(0.5, 2, 0.1),
plot_strikes=strikes_grid)
I) The volatility surface: Implied Volatilities as a function of moneyness and time to expiry
def matrix_vol_surface_moneyness(S,vol_surface, plot_years, plot_strikes):
moneyness = plot_strikes/S
X, Y = np.meshgrid(plot_strikes, plot_years)
Z = np.array([vol_surface.blackVol(float(y), float(x))
for xr, yr in zip(X, Y)
for x, y in zip(xr,yr) ]
).reshape(len(X), len(X[0]))
df = pd.DataFrame(Z,index=plot_years,columns=moneyness)
return df
df_vol_surf_moneyness = matrix_vol_surface_moneyness(S=spot,
vol_surface = black_var_surface,
plot_years=np.arange(0.25, 2, 0.1),
plot_strikes=strikes_grid)
df_vol_surf_moneyness
from matplotlib import rcParams
rcParams['axes.labelpad'] = 20
# Utility function to plot vol surfaces (passing in ql.BlackVarianceSurface objects).
# K, t serve to calculate and annotate specific imp vol on surface
def plot_vol_surface_moneyness(K,t, S, vol_surface, plot_years, plot_strikes):
fig = plt.figure(figsize=(14,10))
fig.suptitle('Implied Volatility Surface - As Function of Time to Expiry and Moneyness', fontsize=20)
ax = fig.gca(projection='3d')
moneyness = plot_strikes/S
X, Y = np.meshgrid(plot_strikes, plot_years)
Z = np.array([vol_surface.blackVol(float(y), float(x))
for xr, yr in zip(X, Y)
for x, y in zip(xr,yr) ]
).reshape(len(X), len(X[0]))
surf = ax.plot_surface(moneyness,Y,Z, rstride=2, cstride=2,
cmap=plt.cm.coolwarm, linewidth=0.5,
antialiased=True,alpha = 0.9)
sig = vol_surface.blackVol(t, K)
moneyness_level = K/S
ax.text(moneyness_level, t, sig, f'← K/S={K/S:.2f}, T={t}, σ={sig:.3f}', color='black', size=12)
ax.set_xlabel('Moneyness (K/S)')
ax.set_ylabel('Expiration (Years)')
ax.set_zlabel('Implied Volatility')
ax.set(facecolor='white')
ax.dist = 10
fig.colorbar(surf, shrink=1, aspect=10)
plot_vol_surface_moneyness(S=spot, K=spot, t=1.0,
vol_surface=black_var_surface,
plot_years=np.arange(0.5, 2, 0.1),
plot_strikes=strikes_grid)
J) Call-put implied volatility spread
1) Retrieve put option chain
aapl_put
# Set expiry list - filter by dates if desired
expiry_list = expiration_dates
# Initialise empty list to collect option vols and values
data_putvols = []
data_putvals = []
# Set expiry list - filter by dates if desired
for expiry in expiry_list:
sub_set_p = aapl.option_chain(expiry).puts
sub_set_p = sub_set_p[(sub_set_p.strike>60) & (sub_set_p.strike<300)]
sub_set_p.set_index('strike', inplace=True)
data_putvols.append(sub_set_p['impliedVolatility'])
data_putvals.append(sub_set_p['ask'])
# Collect subset of implied vols in dataframe (filtered by expiration and strike)
put_vols = pd.DataFrame(data_putvols)
# Create datetime index for expiration dates to allow time interpolation
put_vols.index = date_time_obj
put_vols
2) Obtain quoted volatility surface for most traded options
put_vols.columns.max(), put_vols.columns.min()
sel_K
put_vols = put_vols[sel_K]
put_vols
# Interpolate filtered vols
# First: Linear interpolation by strike (horizontally)
put_vols = put_vols.interpolate(method='linear',axis=1)
# Second: Time interpolation by expiration (vertically)
put_vols = put_vols.interpolate(method='time',axis=0)
put_vols
#put_vols = put_vols.dropna("columns")
#put_vols
3) Compute continuous volatility surface with QuantLib
# Quantlib datetime format
put_vols.index = [quantDate_obj]
put_vols
# Collect implied vols in list of lists
row_data_p = put_vols.values.tolist()
# Collect strikes in list and convert to float
strikes_p = put_vols.columns.values.tolist()
strikes_p = [float(i) for i in strikes_p]
len(strikes_p)
# Error Checking:
put_expiries_after_dropna = put_vols.index.values.tolist()
len(put_expiries_after_dropna) == len(quantDate_obj)
# Rearrange quoted vol matrix for input to black_var_surface
implied_vols_p = ql.Matrix(len(strikes_p), len(quantDate_obj))
for i in range(implied_vols_p.rows()):
for j in range(implied_vols_p.columns()):
implied_vols_p[i][j] = row_data_p[j][i]
# Create continously interpolated black variance surface
black_var_surface_p = ql.BlackVarianceSurface(calculation_date,
calendar, quantDate_obj, strikes_p, implied_vols_p, day_count)
9) Compute ATM Volatility
# Obtain ATM implied volatility from surface with black_var_surface taking 2 arguments (time to expiry and strike)
ATMstrike = spot
ATMexpiry = 0.5# years
ATMvol_p = black_var_surface_p.blackVol(ATMexpiry, ATMstrike)
ATMvol_p
ATMvol - ATMvol_p
# As Dataframe
df_vol_surf_strike_p = matrix_vol_surface_strike(vol_surface = black_var_surface_p,
plot_years=np.arange(0.25, 2, 0.1),
plot_strikes=strikes_grid)
df_vol_surf_strike_p
# Call and put vol surfaces must have same index and column values.
IV_spread = df_vol_surf_strike - df_vol_surf_strike_p
IV_spread = pd.DataFrame(IV_spread)
IV_spread.index = IV_spread.index.values.round(2)
IV_spread
IV_spread.mean(axis=1).plot( kind = 'bar')
plt.ylabel('IV Spread')
plt.xlabel('Time to Expiry')
plt.title("Call-put implied volatility spread");
K) Equity Skew
1) View max and min moneyness to inform dimensions of vol surface
max_strike = strikes[-1]
max_moneyness = strikes[-1]/spot
min_strike = strikes[0]
min_moneyness = strikes[0]/spot
(max_strike, max_moneyness), (min_strike, min_moneyness),
2) Create Dataframe for OTM CALL Moneyness
list_tte = [0.5,1, 1.5]
OTM_call_moneyness = np.arange(1,1.4, 0.02)
OTM_call_strikes = OTM_call_moneyness * spot
OTM_call_vol = vol_smile(vol_surface = black_var_surface,
plot_years=list_tte,
plot_strikes=OTM_call_strikes)
OTM_call_vol.columns = OTM_call_strikes / spot
OTM_call_vol
3) Create Dataframe for comparable OTM PUT moneyness
list_tte = [0.5,1, 1.5]
# Ensure comparable moneyness of call and puts
Init_OTM_put_moneyness = 1- (OTM_call_moneyness[-1]-1)
OTM_put_moneyness = np.arange(Init_OTM_put_moneyness,1, 0.02)
OTM_put_strikes = OTM_put_moneyness * spot
OTM_put_vol = vol_smile(vol_surface = black_var_surface_p,
plot_years=list_tte,
plot_strikes=OTM_put_strikes)
OTM_put_vol.columns = OTM_put_strikes / spot
OTM_put_vol
len(OTM_call_vol) == len(OTM_put_vol)
4) Visualize skew as line graph
OTM_line = pd.concat([OTM_put_vol.iloc[:,0:-1], OTM_call_vol], axis=1)
OTM_line
OTM_line.T.plot(kind = 'line')
plt.axvline(x=1, color='black',label='ATM',linestyle='dashed',linewidth=4);
plt.ylabel('Equity Skew')
plt.xlabel('Moneyness');
5) Reverse values in OTM Call vols for calculation
rev_OTM_call_vol = OTM_call_vol.iloc[:, ::-1]
np.array(rev_OTM_call_vol)
6) Calculate skew
equity_skew_val = pd.DataFrame(np.array(OTM_put_vol) - np.array(rev_OTM_call_vol))
equity_skew_val.columns = (rev_OTM_call_vol.columns.values -1)
equity_skew_val = equity_skew_val.iloc[:, ::-1]
equity_skew_val.index = OTM_put_vol.index.values
equity_skew_val
# set width of bar
barWidth = 0.25
fig = plt.subplots()
# Set position of bar on X axis
br1 = np.arange(len(equity_skew_val.iloc[0]))
br2 = [x + barWidth for x in br1]
br3 = [x + barWidth for x in br2]
# Make the plot
plt.bar(br1, equity_skew_val.iloc[0], color ='r', width = barWidth,
edgecolor ='grey', label ='0.5 (years)')
plt.bar(br2, equity_skew_val.iloc[1], color ='g', width = barWidth,
edgecolor ='grey', label ='1 (year)')
plt.bar(br3, equity_skew_val.iloc[2], color ='b', width = barWidth,
edgecolor ='grey', label ='1.5 (years)')
# Adding Xticks
plt.xlabel('Out-of-Moneyness', fontweight ='bold', fontsize = 15)
plt.ylabel('Equity Skew', fontweight ='bold', fontsize = 15)
N = len(equity_skew_val.columns)
ind = np.arange(N)
xtick_moneyness = equity_skew_val.columns.values.tolist()
xtick_moneyness = [float(i) for i in xtick_moneyness]
xtick_moneyness = [round(num, 2) for num in xtick_moneyness ]
plt.xticks(ind, xtick_moneyness)
plt.legend()
plt.title("Equity skew for 3 tenors and multiple out-of-moneyness levels",fontweight ='bold', fontsize = 15)
plt.show()
L) Butterfly Spread
1) Compute spread for different tenors
bf0 = ((np.array(OTM_put_vol.iloc[0]) + np.array(rev_OTM_call_vol.iloc[0]))/2) - rev_OTM_call_vol.loc[:,1.0].iloc[0]
bf1 =((np.array(OTM_put_vol.iloc[1]) + np.array(rev_OTM_call_vol.iloc[1]))/2) - rev_OTM_call_vol.loc[:,1.0].iloc[1]
bf2 =((np.array(OTM_put_vol.iloc[2]) + np.array(rev_OTM_call_vol.iloc[2]))/2) - rev_OTM_call_vol.loc[:,1.0].iloc[2]
2) Collect in dataframe
bf_spread_val = pd.DataFrame(np.stack((bf0, bf1,bf2), axis=0))
bf_spread_val.columns = (rev_OTM_call_vol.columns.values -1)
bf_spread_val = bf_spread_val.iloc[:, ::-1]
bf_spread_val.index = OTM_call_vol.index.values
bf_spread_val
3) Visualize Butterfly Spread
# set width of bar
barWidth = 0.25
fig = plt.subplots()
# set height of bar
# Set position of bar on X axis
br1 = np.arange(len(bf_spread_val.iloc[0]))
br2 = [x + barWidth for x in br1]
br3 = [x + barWidth for x in br2]
# Make the plot
plt.bar(br1, bf_spread_val.iloc[0], color ='r', width = barWidth,
edgecolor ='grey', label ='0.5 (years)')
plt.bar(br2, bf_spread_val.iloc[1], color ='g', width = barWidth,
edgecolor ='grey', label ='1 (year)')
plt.bar(br3, bf_spread_val.iloc[2], color ='b', width = barWidth,
edgecolor ='grey', label ='1.5 (years)')
# Adding Xticks
plt.xlabel('Out-of-Moneyness', fontweight ='bold', fontsize = 15)
plt.ylabel('Butterfly Spread', fontweight ='bold', fontsize = 15)
N = len(bf_spread_val.columns)
ind = np.arange(N)
xtick_moneyness = bf_spread_val.columns.values.tolist()
xtick_moneyness = [float(i) for i in xtick_moneyness]
xtick_moneyness = [round(num, 2) for num in xtick_moneyness ]
plt.xticks(ind, xtick_moneyness)
plt.legend()
plt.title("Butterfly Spread for 3 tenors and multiple out-of-moneyness levels",fontweight ='bold', fontsize = 15)
plt.show()
M) Distribution of stock prices implied by volatility smile
1) Retrieve Strikes, Implied Volatilities
list_tte = [0.5,1, 1.5]
spot = spot
r = r
call_strikes = list(call_vols.columns.values)
call_vol = vol_smile(vol_surface = black_var_surface,
plot_years=list_tte,
plot_strikes=call_strikes)
call_vol
# Select desired tenor of smile
T = OTM_line.index.values[1]
X_prices = call_vols.columns.values
O_vols = call_vol.iloc[1,:].values
X_prices , O_vols,T
3) Collect Strikes and Vols in dataframe
columns = ["Strike", "ImpVol"]
df_c_opt = pd.DataFrame(np.stack((X_prices, O_vols), axis=1),columns = columns )
df_c_opt
3) Impute European Call prices from strikes and vols
def call_value(S, K, sigma, t=0, r=0):
# use np.multiply and divide to handle divide-by-zero
with np.errstate(divide='ignore'):
d1 = np.divide(1, sigma * np.sqrt(t)) * (np.log(S/K) + (r+sigma**2 / 2) * t)
d2 = d1 - sigma * np.sqrt(t)
return np.multiply(norm.cdf(d1),S) - np.multiply(norm.cdf(d2), K * np.exp(-r * t))
df_c_opt['OptPrice'] = df_c_opt.apply(lambda row: call_value(spot, row.Strike, row.ImpVol,
t=T,r=r), axis=1)
df_c_opt
4) Calculate Implied Probability distribution
data = []
for (_, left) ,(_,centre), (_, right) in zip (df_c_opt.iterrows(), df_c_opt.iloc[1:].iterrows(), df_c_opt.iloc[2:].iterrows()):
butterfly_price = left.OptPrice - 2* centre.OptPrice + right.OptPrice
max_profit = centre.Strike - left.Strike
data.append([left.Strike,centre.Strike,right.Strike, butterfly_price, max_profit])
bflys = pd.DataFrame(data, columns=["low_strike","strike","up_strike", "price", "max_profit"])
bflys["prob"] = math.exp(r*T)*(bflys.price / bflys.max_profit**2)
bflys
6) View Raw Probabilities
fig, axis_3 = plt.subplots()
axis_3.plot(bflys.strike, bflys.prob,"o",color = 'green', label='pdf', )
plt.title('Distribution of Stock Prices Implied by Volatility Smile')
plt.grid(True)
axis_3.set_xlabel("Stock Price", color='black')
axis_3.set_ylabel("Raw Probability", color='green');
7) Obtain smoothed probability density function
from scipy.ndimage import gaussian_filter1d
smoothed_prob = gaussian_filter1d(bflys.prob, 2)
plt.plot(bflys.strike, bflys.prob, "go", bflys.strike, smoothed_prob, "r-")
plt.legend(["Raw prob", "Smoothed prob"], loc="best")
plt.xlabel("Strike")
plt.ylabel("Probability")
plt.show()
8) Obtain interpolated pdf
import scipy.integrate
import scipy.interpolate
pdf = scipy.interpolate.interp1d(bflys.strike, smoothed_prob, kind="cubic",
fill_value="extrapolate")
x_new = np.linspace(bflys.strike.min(), bflys.strike.max(), 10000)
plt.plot(bflys.strike, smoothed_prob, "ko", x_new, pdf(x_new), "r-");
plt.legend(["smoothed prob", "fitted PDF"], loc="best")
plt.xlabel("K")
plt.ylabel("f(K)")
plt.tight_layout()
plt.show()
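One caveat worth flagging: cubic interpolation with extrapolation can dip below zero in the tails, which would distort the CDF computed next. A minimal defensive sketch (an optional guard, not part of the original workflow) is to floor the fitted density at zero before integrating:
pdf_vals = np.clip(pdf(x_new), 0, None)   # optional guard: floor the fitted density at zero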
9) Compute cumulative density function
prob_density = pd.DataFrame(x_new[0:-1], columns = ["Spot"])
prob_density['pdf'] = pdf(x_new)[0:-1]
cum_prob = scipy.integrate.cumtrapz(y=pdf(x_new), x=x_new)   # cumulative_trapezoid on newer SciPy versions
prob_density['cdf'] = cum_prob
prob_density.head()
10) Compute VaR at desired confidence level
# Important that the CDF integrates to (approximately) unity before reading off quantiles
var = 0.05   # 5% tail probability
s = prob_density.set_index('Spot').sub(var).abs().idxmin()
s['cdf']   # spot level whose cumulative probability is closest to 5%
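The idxmin trick above returns the spot level whose cumulative probability is closest to the chosen 5% level. An equivalent and arguably more explicit lookup on the same prob_density frame (a sketch assuming the cdf column is monotonically increasing) is:
var_level = 0.05
var_spot = prob_density.loc[prob_density['cdf'] >= var_level, 'Spot'].iloc[0]   # first spot whose CDF reaches 5%
print(f"5% VaR spot level: {var_spot:.2f}")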
11) Visualize Distribution of Stock Prices Implied by Volatility Smile
figure, axis_3 = plt.subplots()
axis_3.plot(x_new[0:-1], pdf(x_new)[0:-1] ,color='green', label='pdf')
plt.title('Distribution of Stock Prices Implied by Volatility Smile')
plt.grid(False);
axis_4 = axis_3.twinx()
axis_4.plot(x_new[0:-1], cum_prob ,color='blue', label='cdf')
axis_3.set_xlabel("Stock Price", color='black')
axis_3.set_ylabel("Probability Density Function", color='green')
axis_4.set_ylabel("Cumulative Density Function", color='blue');
var = 0.05
s = prob_density.set_index('Spot').sub(var).abs().idxmin()
s['cdf']
imp_loss = s['cdf']/spot - 1   # implied return at the 5% quantile (negative = loss)
plt.axvline(x=s['cdf'], color='red', label='5% VaR', linestyle='dashed', linewidth=4);
print(f"- There is a 5 percent probability that the stock price falls to {s['cdf']:.2f} or below in the next 12 months")
print(f"- This equates to a return of {imp_loss:.2%} or worse over the same horizon")
# Find area under curve
smoothed_total_prob = scipy.integrate.trapz(y=pdf(x_new), x=x_new)
print(f"Smoothed total probability: {smoothed_total_prob}")
# Find area under curve
raw_total_prob = scipy.integrate.trapz(y=bflys.prob, x=bflys.strike)
print(f"Raw total probability: {raw_total_prob}")
N) Generating simulated values for the volatility level (and stock price) with the Heston Process
1) Simulate asset path and volatility path.
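The simulator below discretises the Heston dynamics dS = (r - q) S dt + sqrt(V) S dW_s and dV = kappa (theta - V) dt + sigma sqrt(V) dW_v, where the two Brownian drivers have correlation rho. The variance step can be taken with a plain Euler scheme or with a Milstein correction (the extra (sigma^2/4) dt (Zv^2 - 1) term), and any non-positive variance along the way is handled either by reflection ('Reflect') or by truncation at zero ('Trunca').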
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import matplotlib.pyplot as plt
from math import exp, log, sqrt, pi
import time
start_time = time.time()
# simulate the asset paths under the Heston model
def EulerMilsteinSim(scheme, negvar, numPaths, numSteps, rho, S_0, V_0, T, kappa, theta, sigma, r, q):
    N = numSteps
    dt = T/N
    num_time = int(T/dt)
    S = np.zeros((num_time+1, numPaths))
    S[0, :] = S_0
    V = np.zeros((num_time+1, numPaths))
    V[0, :] = V_0
    Vcount0 = 0
    for i in range(numPaths):
        for t_step in range(1, num_time+1):
            # the two stochastic drivers for the variance V and the asset price S are correlated
            Zv = np.random.randn(1)
            Zs = rho*Zv + sqrt(1-rho**2)*np.random.randn(1)
            # users can choose either the Euler or the Milstein scheme
            if scheme == 'Euler':
                V[t_step, i] = V[t_step-1, i] + kappa*(theta-V[t_step-1, i])*dt + sigma*sqrt(V[t_step-1, i])*sqrt(dt)*Zv
            elif scheme == 'Milstein':
                V[t_step, i] = V[t_step-1, i] + kappa*(theta-V[t_step-1, i])*dt + sigma*sqrt(V[t_step-1, i])*sqrt(dt)*Zv \
                               + 1/4*sigma**2*dt*(Zv**2 - 1)
            if V[t_step, i] <= 0:
                Vcount0 = Vcount0 + 1
                if negvar == 'Reflect':
                    V[t_step, i] = abs(V[t_step, i])
                elif negvar == 'Trunca':
                    V[t_step, i] = max(V[t_step, i], 0)
            # simulation for the asset price S
            S[t_step, i] = S[t_step-1, i] * np.exp((r-q-V[t_step-1, i]/2)*dt + sqrt(V[t_step-1, i])*sqrt(dt)*Zs)
    return S, V, Vcount0
2) Specify input parameters for the function
S0 = 2.7            # initial spot
K = 2               # strike
r = 0.03            # risk-free rate
q = 0.0             # dividend yield
V0 = 0.20**2        # initial (instantaneous) variance
theta = 0.15**2     # long-run variance
kappa = 0.0621      # speed of mean reversion
rho = -0.8          # correlation between the price and variance drivers
sigma = 0.50        # vol of vol
time_maturity = 3   # tenor (years)
num_simulations = 10000
n = 250             # number of time steps per path
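Before running the simulation, note that these parameters violate the Feller condition 2*kappa*theta >= sigma^2, so the variance paths will frequently hit zero and the negvar handling matters. A quick check (a one-line sketch, not part of the original listing):
print(2*kappa*theta, sigma**2)   # roughly 0.0028 vs 0.25 -> condition violated, expect many truncated paths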
sim_output = EulerMilsteinSim(scheme='Euler',
negvar='Trunca',
numPaths=num_simulations,
numSteps=n,
rho=rho,
S_0=S0,
V_0=V0,
T=time_maturity,
kappa=kappa, theta=theta, sigma=sigma, r=r, q=q)
3) Retrieve stock evolution and vol evolution outputs
stock_evolution = sim_output[0]
var_evolution = sim_output[1]
vol_evolution = np.sqrt(var_evolution)
fig = plt.figure(1)
plt.plot(stock_evolution)
plt.ylabel('Simulated asset paths from Heston model')
plt.xlabel('time step')
fig = plt.figure(2)
plt.plot(vol_evolution)
plt.ylabel('Stochastic volatility from Heston model')
plt.xlabel('time step')
4) Retrieve Terminal values for stock and vol processes
stock_term_val = stock_evolution[-1]
vol_term_val = vol_evolution[-1]
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.hist(stock_term_val, bins=50)
ax1.set_xlabel('index level')
ax1.set_ylabel('frequency')
ax2.hist(vol_term_val , bins=50)
ax2.set_xlabel('volatility');
5) Compute vol forecast from Heston process
Hest_vol_forecast = np.mean(vol_term_val)
Hest_vol_forecast
6) Compute Call Price from Heston Process
S_T = stock_term_val   # terminal stock prices
T = time_maturity
# Payoff of the European call at expiry
payoff = np.maximum(S_T - K, 0)
ave_payoff = np.mean(payoff)
# Option price: discounted average payoff
Hest_call_px = np.exp(-r*T)*ave_payoff
Hest_call_px
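Since this is a Monte Carlo estimate, it is natural to attach a standard error to the price. A minimal sketch using the payoff vector above (one common convention discounts the payoff standard deviation):
std_err = np.exp(-r*T) * np.std(payoff, ddof=1) / np.sqrt(num_simulations)
print(f"Heston call price: {Hest_call_px:.4f} +/- {std_err:.4f} (1 s.e.)")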
7) Compute multiple call and put prices from multiple strikes and maturities
# simulate the asset paths under the Heston model
# NOTE: this overrides the earlier definition; the number of time steps is hardcoded to 250
def EulerMilsteinSim(scheme, negvar, numPaths, rho, S_0, V_0, T, kappa, theta, sigma, r, q):
    N = 250  # hardcoded number of steps
    dt = T/N
    num_time = int(T/dt)
    S = np.zeros((num_time+1, numPaths))
    S[0, :] = S_0
    V = np.zeros((num_time+1, numPaths))
    V[0, :] = V_0
    Vcount0 = 0
    for i in range(numPaths):
        for t_step in range(1, num_time+1):
            # the two stochastic drivers for the variance V and the asset price S are correlated
            Zv = np.random.randn(1)
            Zs = rho*Zv + sqrt(1-rho**2)*np.random.randn(1)
            # users can choose either the Euler or the Milstein scheme
            if scheme == 'Euler':
                V[t_step, i] = V[t_step-1, i] + kappa*(theta-V[t_step-1, i])*dt + sigma*sqrt(V[t_step-1, i])*sqrt(dt)*Zv
            elif scheme == 'Milstein':
                V[t_step, i] = V[t_step-1, i] + kappa*(theta-V[t_step-1, i])*dt + sigma*sqrt(V[t_step-1, i])*sqrt(dt)*Zv \
                               + 1/4*sigma**2*dt*(Zv**2 - 1)
            if V[t_step, i] <= 0:
                Vcount0 = Vcount0 + 1
                if negvar == 'Reflect':
                    V[t_step, i] = abs(V[t_step, i])
                elif negvar == 'Trunca':
                    V[t_step, i] = max(V[t_step, i], 0)
            # simulation for the asset price S
            S[t_step, i] = S[t_step-1, i] * np.exp((r-q-V[t_step-1, i]/2)*dt + sqrt(V[t_step-1, i])*sqrt(dt)*Zs)
    return S, V, Vcount0
# calculate the call option price
def EulerMilsteinCallPrice(scheme, negvar, numPaths, rho, S_0, V_0, Tmax, kappa, theta, sigma, r, q, MaturityList, ExercList):
    OptionPriceMatrix = np.zeros((len(ExercList), len(MaturityList)))
    stdErrTable = np.zeros((len(ExercList), len(MaturityList)))
    # Obtain the simulated stock price S and simulated variance V
    S, V, Vcount0 = EulerMilsteinSim(scheme, negvar, numPaths, rho, S_0, V_0, Tmax, kappa, theta, sigma, r, q)
    N = 250  # hardcoded number of steps
    dt = Tmax/N
    for i in range(len(MaturityList)):
        T_temp = MaturityList[i]
        T_row = int(T_temp / dt)
        S_T = S[T_row, :]  # terminal stock prices at maturity T_temp
        for j in range(len(ExercList)):
            KK = ExercList[j]
            # Payoff vector
            Payoff = [max(x - KK, 0) for x in S_T]
            SimPrice = np.exp(-r*T_temp)*np.mean(Payoff)
            OptionPriceMatrix[j][i] = SimPrice
            stdDev = np.std(Payoff, dtype=np.float64)
            stdErr = stdDev/sqrt(numPaths)
            stdErrTable[j][i] = stdErr
    return S, V, Vcount0, OptionPriceMatrix, stdErrTable, Payoff
# calculate the put option price
def EulerMilsteinPutPrice(scheme, negvar, numPaths, rho, S_0, V_0, Tmax, kappa, theta, sigma, r, q, MaturityList, ExercList):
    OptionPriceMatrix = np.zeros((len(ExercList), len(MaturityList)))
    stdErrTable = np.zeros((len(ExercList), len(MaturityList)))
    # Obtain the simulated stock price S and simulated variance V
    S, V, Vcount0 = EulerMilsteinSim(scheme, negvar, numPaths, rho, S_0, V_0, Tmax, kappa, theta, sigma, r, q)
    N = 250  # hardcoded number of steps
    dt = Tmax/N
    for i in range(len(MaturityList)):
        T_temp = MaturityList[i]
        T_row = int(T_temp / dt)
        S_T = S[T_row, :]  # terminal stock prices at maturity T_temp
        for j in range(len(ExercList)):
            KK = ExercList[j]
            # Payoff vector
            Payoff = [max(KK - x, 0) for x in S_T]
            SimPrice = np.exp(-r*T_temp)*np.mean(Payoff)
            OptionPriceMatrix[j][i] = SimPrice
            stdDev = np.std(Payoff, dtype=np.float64)
            stdErr = stdDev/sqrt(numPaths)
            stdErrTable[j][i] = stdErr
    return S, V, Vcount0, OptionPriceMatrix, stdErrTable, Payoff
ExercList = [2, 2.5]
MaturityList = [2,3]
Hest_Calls = EulerMilsteinCallPrice('Euler',
'Trunca',
num_simulations,
rho,
S0 ,
V0,
time_maturity,
kappa,
theta,
sigma,
r,
q,
MaturityList,
ExercList)
pd.DataFrame(Hest_Calls[3], index=ExercList, columns=MaturityList)
Hest_Puts = EulerMilsteinPutPrice('Euler',
'Trunca',
num_simulations,
rho,
S0 ,
V0,
time_maturity,
kappa,
theta,
sigma,
r,
q,
MaturityList,
ExercList)
pd.DataFrame(Hest_Puts[3], index=ExercList, columns=MaturityList)
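Because the call and put tables come from two independent simulation runs, European put-call parity, C - P = S0*exp(-q*T) - K*exp(-r*T), will only hold up to Monte Carlo noise, but it still offers a useful sanity check. A sketch for the first strike/maturity pair (K = 2, T = 2 years):
call_px = Hest_Calls[3][0][0]   # OptionPriceMatrix entry for K = 2, T = 2
put_px = Hest_Puts[3][0][0]
parity_rhs = S0*np.exp(-q*MaturityList[0]) - ExercList[0]*np.exp(-r*MaturityList[0])
print(f"C - P = {call_px - put_px:.4f}  vs  parity RHS = {parity_rhs:.4f}")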