Today I am going to present a way to calculate the credit value adjustment (CVA) for a netting set of plain vanilla interest rate swaps. This Monte-Carlo method is based on the code example of my previous post about the expected exposure and PFE calculation and the first steps will be exactly the same.
What is the credit value adjustment (CVA)?
The credit value adjustment is the difference between the risk-free price of a netting set and the price which takes the possibility of the default of the counterparty into account. A netting set is a portfolio of deals with one counterparty for which you have a netting agreement. That means you are allowed to set positive against negative market values. A netting agreement will reduce your exposure and therefore the counterparty credit risk. If the time of default and the value of the portfolio are independent then the CVA is given by

$latex \text{CVA} = (1-R) \int_0^T df(t) \, EE(t) \, dPD(t)$

with recovery rate R, portfolio maturity T (the latest maturity of all deals in the netting set), discount factor df(t) at time t, expected exposure EE(t) at time t and default probability PD(t).
The Monte Carlo approach
In my previous post we have seen a Monte Carlo method to estimate the expected exposure EE(t) on a given time grid $latex 0 = t_0 < t_1 < \dots < t_n = T$. We can reuse this estimator and approximate the integral by the discrete sum

$latex \text{CVA} \approx (1-R) \sum_{i=1}^{n} df(t_i) \, EE(t_i) \, PD(t_{i-1}, t_i)$
As we see in the formula, we need two more ingredients to calculate the CVA: the discounted expected exposure and the default probabilities of the counterparty.
Calculate the discounted expected exposure
To convert the expected exposure at time t in its present value expressed in time-zero dollars we only need to add a few more lines of code.
vec_discount_function = np.vectorize(t0_curve.discount)
discount_factors = vec_discount_function(time_grid)
We use today's yield curve to calculate the discount factor for each point on our grid. With the numpy function vectorize
we generate a vectorized wrapper of the discount method of the QuantLib YieldTermStructure. This vectorized version can be applied to an array of ql.Dates or times instead of a scalar input. Basically it's equivalent to the following code snippet:
discount_factors = np.zeros(time_grid.shape)
for i in range(len(time_grid)):
    time = time_grid[i]
    discount_factors[i] = t0_curve.discount(time)
As the numpy documentation states the np.vectorize function is provided primarily for convenience, not for performance.
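Since np.vectorize only wraps a loop, both versions give identical results. A minimal, self-contained sketch of this equivalence, using a hypothetical flat-rate discount function as a stand-in for t0_curve.discount (the 3% rate is an assumption for illustration, not from the post):

```python
import numpy as np

# Hypothetical stand-in for t0_curve.discount:
# flat continuously compounded rate of 3% (illustrative assumption)
def discount(t):
    return np.exp(-0.03 * t)

time_grid = np.linspace(0.0, 5.0, 6)

# Vectorized wrapper, as in the post
vec_discount = np.vectorize(discount)
df_vec = vec_discount(time_grid)

# Equivalent explicit loop
df_loop = np.array([discount(t) for t in time_grid])

print(np.allclose(df_vec, df_loop))  # True
```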
After generating the market scenarios and pricing the netting set under each scenario, we calculate the discounted NPVs for each deal in the portfolio:
# Calculate the discounted npvs
discounted_cube = np.zeros(npv_cube.shape)
for i in range(npv_cube.shape[2]):
    discounted_cube[:, :, i] = npv_cube[:, :, i] * discount_factors
# Netting
discounted_npv = np.sum(discounted_cube, axis=2)
Using the discounted NPV cube we can calculate the discounted expected exposure:
# Calculate the exposure and discounted exposure
E = portfolio_npv.copy()
dE = discounted_npv.copy()
E[E < 0] = 0
EE = np.sum(E, axis=0) / N
dE[dE < 0] = 0
dEE = np.sum(dE, axis=0) / N
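Note that the order of operations matters: the deals are netted first, and only the netted value is floored at zero. A toy example with made-up numbers (2 paths, 3 grid dates, 2 deals; np.maximum is equivalent to the in-place flooring above) illustrates this:

```python
import numpy as np

# Toy discounted NPV cube: 2 paths x 3 grid dates x 2 deals
# (all numbers made up for illustration; axes follow the
# [path, time, deal] convention of the post)
discounted_cube = np.array([
    [[10.0, -4.0], [5.0, -8.0], [2.0, -1.0]],
    [[-3.0, -2.0], [6.0,  1.0], [-4.0, 3.0]],
])

# Netting: sum over the deal axis first ...
discounted_npv = np.sum(discounted_cube, axis=2)

# ... then floor the netted value at zero and average over paths
dE = np.maximum(discounted_npv, 0.0)
dEE = np.sum(dE, axis=0) / discounted_cube.shape[0]

print(dEE)  # [3.  3.5 0.5]
```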
The default probability of the counterparty
To derive the default probability one could either use market-implied quotes (e.g. CDS spreads) or rating information (e.g. based on historical observations). We assume that the survival probability is given by

$latex S(t) = \exp\left(-\int_0^t \lambda(s) \, ds\right)$

with a deterministic, piecewise-constant hazard rate function $latex \lambda(t)$. Given a grid of dates $latex t_0,\dots,t_n$ and the corresponding backward-flat hazard rates $latex \lambda_0,\dots,\lambda_n$ we can use the QuantLib HazardRateCurve to build a default curve.
# Setup Default Curve
pd_dates = [today + ql.Period(i, ql.Years) for i in range(11)]
hzrates = [0.02 * i for i in range(11)]
pd_curve = ql.HazardRateCurve(pd_dates, hzrates, ql.Actual365Fixed())
pd_curve.enableExtrapolation()
QuantLib provides a whole range of different DefaultTermStructures. You can either bootstrap a default curve from CDS quotes, or build an interpolated curve like we do here, combining one of the many interpolators (Linear, BackwardFlat, etc.) with one of the possible traits (hazard rate, default probability, default density).
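To see what the backward-flat hazard-rate trait does internally, here is a plain-numpy sketch of the survival probability for a piecewise-constant hazard rate. This is my own illustrative reimplementation under assumed node values, not QuantLib code:

```python
import numpy as np

# Node times in years; the hazard rate on (t_{i-1}, t_i] is hzrates[i]
# (backward flat). All numbers are illustrative assumptions.
pd_times = np.array([0.0, 1.0, 2.0, 3.0])
hzrates = np.array([0.02, 0.02, 0.04, 0.06])

def survival(t):
    """S(t) = exp(-integral of the step function lambda from 0 to t)."""
    integral = 0.0
    for i in range(1, len(pd_times)):
        lo, hi = pd_times[i - 1], pd_times[i]
        if t <= lo:
            break
        integral += hzrates[i] * (min(t, hi) - lo)
    return np.exp(-integral)

# e.g. S(1.5) = exp(-(0.02 * 1.0 + 0.04 * 0.5)) = exp(-0.04)
print(survival(1.5))
```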
With the default term structure we can calculate the probability of a default between the times $latex t_{i-1}$ and $latex t_i$ for all i in our time grid.
# Calculation of the default probs
defaultProb_vec = np.vectorize(pd_curve.defaultProbability)
dPD = defaultProb_vec(time_grid[:-1], time_grid[1:])
Again we use the numpy function vectorize to apply a scalar function to an array. The method defaultProbability takes two times t and T as input and returns the probability of a default between t and T.
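The bucketed default probabilities are just differences of survival probabilities, PD(t, T) = S(t) - S(T), so they telescope to the total default probability up to T. A self-contained sketch with a flat hazard rate as a hypothetical stand-in for pd_curve.defaultProbability:

```python
import numpy as np

# Flat hazard rate of 2% (illustrative assumption)
lam = 0.02

def survival(t):
    return np.exp(-lam * t)

def default_probability(t, T):
    # PD(t, T) = S(t) - S(T), stand-in for pd_curve.defaultProbability
    return survival(t) - survival(T)

time_grid = np.linspace(0.0, 5.0, 6)
defaultProb_vec = np.vectorize(default_probability)
dPD = defaultProb_vec(time_grid[:-1], time_grid[1:])

# The buckets telescope to the total default probability up to T = 5
print(abs(dPD.sum() - (1.0 - survival(5.0))) < 1e-12)  # True
```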
Now we have all pieces together and the following code snippet gives us the CVA of our netting set:
# Calculation of the CVA
recovery = 0.4
CVA = (1 - recovery) * np.sum(dEE[1:] * dPD)
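To see the final formula in action, here is a toy check with made-up numbers. The arrays below play the role of dEE[1:] and dPD, i.e. they already correspond to the grid points t_1 to t_4:

```python
import numpy as np

recovery = 0.4

# Made-up discounted expected exposure at t_1..t_4 (illustrative only)
dEE = np.array([105.0, 100.0, 80.0, 40.0])
# Made-up default probabilities for the buckets (t_{i-1}, t_i]
dPD = np.array([0.01, 0.012, 0.011, 0.009])

# CVA = (1 - R) * sum_i dEE(t_i) * PD(t_{i-1}, t_i)
CVA = (1 - recovery) * np.sum(dEE * dPD)
print(CVA)  # 0.6 * 3.49 = 2.094
```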
You can download the code as an IPython (Juypter) notebook from here or just clone my repository (IPythonscripts) at GitHub.
If you want to read more about QuantLib I recommend having a look at the blog and book "Implementing QuantLib" by Luigi. Another fantastic blog, "Fooling around with QuantLib" by Peter, has a very good and detailed post about the GSR model. Peter has actually implemented this model in C++ and contributed it to QuantLib.
I hope you have enjoyed reading my post and will have fun playing around with the notebook. In one of my following posts I will extend this simulation by adding a new product class to the netting set: European and Bermudan swaptions.