Categories
Fixed Income

Trading-Risk

1. LMM Implementation

To drive the model with fewer factors, rank-reduced pseudo square roots of the states’ integrated covariance are required. The rank-reduced integrated covariance is also needed for calculations during calibration. However, calibration varies the states’ covariance, and it is not practical to repeatedly perform rank reduction. Instead, the states’ instantaneous correlation (which is not varied) is rank reduced and used to generate an approximation to the rank-reduced integrated covariance.
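
As a rough illustration of the reduction step, the sketch below (assuming NumPy; the function name and decay parameter are invented for the example) rank-reduces a correlation matrix by keeping its largest eigenvalues and rescaling rows so the reduced matrix retains a unit diagonal:

```python
import numpy as np

def rank_reduced_pseudo_sqrt(corr: np.ndarray, n_factors: int) -> np.ndarray:
    """Rank-reduce a correlation matrix to `n_factors` driving factors.

    Keeps the eigenvectors with the largest eigenvalues, then rescales
    each row so the reduced matrix B @ B.T has a unit diagonal again.
    """
    eigvals, eigvecs = np.linalg.eigh(corr)          # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:n_factors]      # pick the top n_factors
    B = eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0.0))
    # Rescale rows to restore unit variances on the diagonal
    B /= np.linalg.norm(B, axis=1, keepdims=True)
    return B

# Example: a 4x4 exponentially decaying Libor-style correlation matrix
n = 4
corr = np.exp(-0.1 * np.abs(np.subtract.outer(np.arange(n), np.arange(n))))
B = rank_reduced_pseudo_sqrt(corr, n_factors=2)
approx = B @ B.T                                     # rank-2 approximation
```

The resulting `B` plays the role of the pseudo square root driving a reduced-factor simulation; the same reduction of the (fixed) instantaneous correlation underlies the approximation to the rank-reduced integrated covariance.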

The volatility of the model’s states (the spanning Libors) can be specified in three ways: through calibration, by inputting an instantaneous volatility surface together with an instantaneous correlation matrix, or by inputting parameters for a functional-form volatility and correlation.

Gitbook LMM

Github libor

2. Local Market Valuation

The current process of marking FX Forwards on a corresponding curve is largely a holdover from the days before efficient and liquid derivative markets existed in Mexico. During that time, FX Forwards were the linchpins of liquidity in the FX and interest rate markets. In recent years, however, the market standard for calculating and trading FX Forwards has been to interpolate interest rates and build a synthetic forward curve from Mexico zero rates and USD zero rates, combined with a cross-currency basis curve (using the applicable FX spot rate).
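
As a stylised sketch of the synthetic-curve construction (illustrative rates only, continuous compounding assumed, and the function name is ours, not the system’s):

```python
import math

def synthetic_fx_forward(spot: float, r_dom: float, r_for: float,
                         basis: float, t: float) -> float:
    """Synthetic FX forward (domestic units per foreign unit) from
    covered interest parity with a cross-currency basis adjustment:
    F = S * exp((r_dom - (r_for + basis)) * t), continuous compounding.
    """
    return spot * math.exp((r_dom - (r_for + basis)) * t)

# Illustrative numbers (not market data): MXN per USD
spot = 17.00          # MXN/USD spot
r_mxn = 0.105         # Mexico zero rate
r_usd = 0.050         # USD zero rate
basis = 0.002         # cross-currency basis applied to the USD leg
fwd_1y = synthetic_fx_forward(spot, r_mxn, r_usd, basis, t=1.0)
```

In practice the basis would be read from the cross-currency basis curve at the relevant tenor, and the day-count and compounding conventions would follow the curve definitions.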

The proposed methodology suggests that Mexico products be marked on a single curve to enhance transparency between products and avoid potential arbitrage between internal systems. The proposed method would also offer more accurate and stable P&L calculations, since the most applicable curve would be used across products.

Gitbook LMV

Github local gaussian

3. Cancellable Instrument

This is a rather broad definition, covering both trigger-type products and callable products. In practice, even for callable products the decision to exercise depends on the current state of the market, so these are often modeled by introducing some kind of exercise boundary, i.e. a function of market observables describing a multidimensional boundary beyond which it is optimal to exercise. This has the advantage of separating two problems: making the decision to exercise and calculating the value of the cancellation leg.

In general, there may be any number of cancellation legs in a product, and a cancellation leg will cancel a fixed number of other legs. Legs that can be cancelled as an effect of the valuation of a cancellation leg will be referred to as cancellable legs. It is possible for a cancellation leg to be cancellable by another cancellation leg. We will, however, assume that where more than one cancellation leg cancels the same legs, the term sheet clearly defines an order of precedence between them (i.e. which decision is made first), and that if two cancellation legs cancel a single leg in common, their actions are mutually exclusive (i.e. cancellation can occur on only one of them).

Gitbook cancellable

Github callable

4. Bermudan Note

A Bermudan callable structure is a structure consisting of two sets of cashflows, one paid and one received, which can be valued in the usual Monte Carlo setting, and a set of dates (notice dates) when the structure can optionally be cancelled (or called). When this happens, all payments for future periods are stopped, and possibly a penalty payment is made.

The essential difficulty in estimating the best values for pi lies in the backward-induction nature of the optimal decision at every notice date: it depends on the hold value, which in turn depends on the optimal exercise decision on the next notice date. In other words, if hold values were known, we would know the optimal decision on each path, so we could in principle find values for pi that bring our decision function as close to it as possible.
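
A standard way around this circularity is regression-based backward induction in the style of Longstaff–Schwartz. The sketch below (NumPy, with an invented toy payoff and a simple quadratic regression basis) regresses hold values on a market observable at each notice date and cancels wherever the estimated hold value is negative:

```python
import numpy as np

rng = np.random.default_rng(0)

def bermudan_cancel_value(cashflows, state, discount=0.99):
    """Backward induction with regressed hold values (Longstaff-Schwartz style).

    cashflows[t, p] is the net coupon on path p in period t if not yet
    cancelled; state[t, p] is the market observable used as regressor.
    Returns the per-path value of the cancellable structure at time 0.
    """
    n_steps, n_paths = cashflows.shape
    value = np.zeros(n_paths)
    for t in range(n_steps - 1, -1, -1):
        hold = discount * value + cashflows[t]    # value of not cancelling now
        # Regress the hold value on a quadratic in the state variable
        coeffs = np.polyfit(state[t], hold, deg=2)
        est_hold = np.polyval(coeffs, state[t])
        # Cancel (future payments stop, value 0) where holding looks negative
        value = np.where(est_hold < 0.0, 0.0, hold)
    return value

n_steps, n_paths = 10, 5000
state = rng.standard_normal((n_steps, n_paths)).cumsum(axis=0)
cashflows = 0.01 - 0.02 * state     # net coupon turns against us as state rises
v0 = bermudan_cancel_value(cashflows, state).mean()
```

The regression plays the role of the decision function parameterised by pi: once fitted, the exercise decision is separated from the valuation of the cancellation leg, as described above.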

Gitbook Bermudan

Github convertible

5. Market Risk Measurement

The Market Risk Measurement and Management Process establishes the linkages required in an effective risk management system. All data composing these linkages can theoretically trace dependencies back to market inputs and position inputs. Market inputs are defined as the inputs to the valuation model that are dynamic in nature and are sourced from markets. As an input to the risk process, positions require thorough review and validation processes to ensure completeness and consistency. The set of inputs required for valuation should also be employed for other calculations such as P&L attribution (PAA) and should be simulated when computing Value at Risk (VaR).

The Market Risk Measurement and Management Review needs to consider all potential risk measurement and data process gaps during the life cycle of a trade in the risk system. From inception, when a trade is first executed, the trade needs to be modelled and captured accurately in the risk system. This includes capturing the appropriate market data and market factor sensitivities for the specific trade. Additionally, all market factors driving valuation need to be modelled in the VaR system. Thus the process will comprise five review streams to connect the necessary valuation, risk measurement and risk capture processes.

Gitbook market risk measurement

Github risk

6. Market Risk Factors

Risk factors in the VaR model define the parameters simulated in the Monte Carlo engine. The starting point for defining risk factors is a review of the pricing models’ market parameters; these market factors and inputs are candidates for inclusion as VaR risk factors. Risk factors will be chosen to minimize the Unexplained P&L (see the Unexplained P&L section below). The inclusion of risk factors also requires the generation of sensitivities (to which simulated returns are applied) and market data for calibration.

The selection of factors may require an evaluation of the simulation modelling under various definitions (such as simulating relative or absolute returns). To the extent that not all market inputs for valuation are represented in the VaR risk factor simulation set, this should be investigated and the justification documented. Often market inputs are rendered more coarse as they are translated to risk factors (e.g. a 27-point yield curve used for valuation may be translated to a 5-point yield curve for VaR simulation). Such translations should also be reviewed and documented to evidence that the underlying characterization of the market risk is preserved.
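
The cost of this coarsening can be quantified directly. The sketch below (NumPy, with a stylised curve shape and invented tenor grids) maps a 27-point valuation curve onto a 5-point risk-factor grid and measures the reconstruction error in basis points:

```python
import numpy as np

# Hypothetical 27-point valuation curve (tenors in years, zero rates)
val_tenors = np.linspace(0.25, 30.0, 27)
val_rates = 0.03 + 0.01 * (1 - np.exp(-val_tenors / 5.0))   # stylised shape

# 5-point risk-factor grid used for VaR simulation
var_tenors = np.array([1.0, 2.0, 5.0, 10.0, 30.0])
var_rates = np.interp(var_tenors, val_tenors, val_rates)    # valuation -> risk factors

# Map back: rebuild the 27-point curve from the 5 risk factors
rebuilt = np.interp(val_tenors, var_tenors, var_rates)

# Reconstruction error quantifies what the coarser grid loses
max_error_bp = np.abs(rebuilt - val_rates).max() * 1e4
```

Documenting a metric like `max_error_bp` for each translation is one concrete way to evidence that the characterization of the market risk is preserved.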

Gitbook market risk factors

Github market curve

7. Unexplained Profit and Loss

The impact of the sensitivity approach indicates the gap between full-revaluation P&L and sensitivity-based P&L. Reducing this difference requires adding new sensitivities to the model; a move to full revaluation would remove this error in the model P&L entirely. This difference will be tracked over time at the limit letter level and can be used to evaluate the potential improvement from changing the valuation approach in the model.
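
A minimal illustration of the gap, using a toy zero-coupon bond and numerically bumped Greeks (all numbers invented): the delta-only P&L misses convexity, the delta-gamma P&L recovers most of it, and full revaluation closes the gap entirely:

```python
def bond_price(y: float, maturity: float = 10.0, face: float = 100.0) -> float:
    """Zero-coupon bond price under annual compounding (toy example)."""
    return face / (1.0 + y) ** maturity

y0, dy = 0.05, 0.01             # base yield and a 100bp move
h = 1e-5                        # bump size for numerical Greeks

# Numerical first- and second-order sensitivities
delta = (bond_price(y0 + h) - bond_price(y0 - h)) / (2 * h)
gamma = (bond_price(y0 + h) - 2 * bond_price(y0) + bond_price(y0 - h)) / h**2

full_reval_pl = bond_price(y0 + dy) - bond_price(y0)
delta_pl = delta * dy
delta_gamma_pl = delta * dy + 0.5 * gamma * dy**2

unexplained_delta = full_reval_pl - delta_pl      # gap from missing gamma
unexplained_dg = full_reval_pl - delta_gamma_pl   # smaller residual
```

Tracking a quantity like `unexplained_delta` over time is exactly what motivates either adding sensitivities (here, gamma) or moving to full revaluation.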

The impact of risk factor selection indicates the gap between a direct market instrument representation in valuation compared to mapping market data to risk factors and back to market instruments. As risk factors represent a selective choice of market instruments, differences attributed to risk factor selection will require a review of the risk factors and risk factor modelling assumptions. This difference will be tracked at the limit letter level.

Gitbook unexplained P&L

Github var

8. Market Risk Backtest

The backtest P&L calculations are based on the actual day-over-day changes in market inputs observed. The market inputs must be the same as those used for official valuation thereby establishing a direct linkage to P&L.

Exceptions may be classified as legitimate or false. Where exceptions are deemed false, for example due to spurious market data input or IT system issues, appropriate operational procedures need to be followed for issue resolution, including reruns, market data reloading/recalibration, etc.
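
The exception identification itself is mechanical. A stylised sketch (NumPy, with simulated P&L and VaR reported as a positive number; the 250-day window and 99% level are illustrative) of the day-over-day comparison:

```python
import numpy as np

rng = np.random.default_rng(42)

# One year of daily P&L and the corresponding 1-day 99% VaR forecasts
pnl = rng.normal(0.0, 1.0, 250)        # realised daily P&L (stylised)
var_99 = 2.33 * np.ones(250)           # VaR reported as a positive number

# An exception occurs when the realised loss exceeds the VaR forecast
exceptions = pnl < -var_99
n_exceptions = int(exceptions.sum())

# At the 99% level we expect about 250 * 0.01 = 2.5 exceptions per year
expected = 250 * 0.01
```

Each flagged day would then be classified as legitimate or false before any statistical conclusion (e.g. a traffic-light count) is drawn.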

Gitbook backtest

Github exposure

9. Market Risk Validation

Value at Risk (VaR) is computed using risk sensitivities from the official risk systems. Given that IPV and VA are applied outside the system, consideration must be given to what impact these adjustments may have, if any, on those risk sensitivities. For example, a large IPV may signal a material difference between the market data in the source system and the independent market data. The source system data is used to compute the risk sensitivities for VaR, so differences in these market data may result in changes in risk sensitivities, particularly for portfolios that exhibit non-linearity, where the risk sensitivity itself changes with changes in market data.

In certain cases, fair valuation of financial instruments may require capabilities that are not present in the source system valuation models or valuation environment. In these cases, VPC will use existing vetted models outside the source systems to compute fair value. For material IPV adjustments, the factor(s) will be assessed with respect to the implications on risk sensitivities.

Gitbook risk validation

Github close out

10. Market Risk Modeling

A review of modelling assumptions has been incorporated in the Nextgen work. For example, in the work leading up to the Nextgen project, the simulation of base metal commodity futures was changed from an all-in representation to a spread against commodity forwards. The revised modelling assumption improved the accuracy of the basis between futures and forwards, which previously exhibited implausible scenarios far outside the realm of a 99% confidence level. As part of the commodities, equities and fixed income portions of Nextgen, joint efforts between VPC, RO and Risk Models have reviewed the basic premises of the risk factor definitions, including the use of constant-maturity risk factors in commodities and zero interest rate yields for simulation.

The Risk Models quarterly benchmarking exercise between full revaluation and Greek-based VaR should be reviewed to understand the degree of approximation. A divergence greater than 5% should be investigated to understand the implications of the ‘missing’ VaR Greeks.

Gitbook market risk model

Github interpolation

11. Counterparty Exposure

Counterparty credit risk (CCR) relies on exposure profiles. They are the product of pricing all deals into the future under Monte Carlo simulation and aggregating using all relevant netting and collateral agreements. Another important feature shared with the VaR calculation is the simulation of the underlying market factors required to value those deals; for CCR, however, the simulation horizon is measured in years rather than the days or weeks used for VaR.
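
The profile construction can be sketched as follows (NumPy, with an invented driftless MTM process standing in for the repriced netting set; the grid, volatility and confidence level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated MTM of a netting set: 10,000 paths over quarterly buckets to 5y
n_paths, dt = 10_000, 0.25
times = np.arange(dt, 5.0 + dt, dt)
# Driftless Brownian MTM with annualised vol of 2 (notional units), stylised
shocks = rng.normal(0.0, 2.0 * np.sqrt(dt), (n_paths, len(times)))
mtm = shocks.cumsum(axis=1)

exposure = np.maximum(mtm, 0.0)                 # only positive MTM is at risk
ee = exposure.mean(axis=0)                      # Expected Exposure profile
pfe_95 = np.quantile(exposure, 0.95, axis=0)    # 95th-percentile PFE profile
```

In a real engine the `mtm` array would come from repricing every deal at every time bucket on every scenario, with netting and collateral applied before the positive part is taken.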

In the CCR context, simulation models aim to forecast, within a reasonable range and horizon, market factors such as equity prices, interest and FX rates, CDS curves and so on. To capture a realistic view of our exposure going forward, and because CCR is not directly hedgeable, these models are typically calibrated to historical data (~3 years) rather than systematically implied from today’s market prices.

Gitbook ccr exposure

Github rate lock

12. Counterparty Risk Stress Test

CCR stress test results can be more difficult to interpret than market risk VaR: there is no single 95th-percentile loss to focus on, but instead we must consider the impact on the individual exposures to thousands of different counterparties. We can make this more manageable by, for example, focusing on the top 50 counterparties, or aggregating by country or industry sector.

Stress tests are based on exceptional but plausible scenarios, and the origin of a stress scenario is the economics department. It is then transmitted to the Stress Test group for translation into market factor shocks that the CCR models can interpret. At this point, calibration takes place: stressed market data and historical prices are taken as inputs to that process, yielding stressed parameters (recall kappa, sigma and theta from before), which are then passed on to the CCR engine where EE and PFE are calculated for each portfolio. Depending on the application, whether ICAAP or regulatory stress tests, the results are compiled and sent to the relevant team.
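
The stressed parameters feed a mean-reverting simulation. A Vasicek-type sketch (Euler discretisation, illustrative parameter values only) showing how stressed (kappa, theta, sigma) shift the simulated distribution relative to the base calibration:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_vasicek(r0, kappa, theta, sigma, horizon, dt, n_paths):
    """Euler simulation of dr = kappa*(theta - r)*dt + sigma*dW.

    In a stress run, (kappa, theta, sigma) would be the stressed
    parameters handed to the CCR engine.
    """
    n_steps = int(horizon / dt)
    r = np.full(n_paths, r0, dtype=float)
    paths = np.empty((n_steps, n_paths))
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        r = r + kappa * (theta - r) * dt + sigma * dw
        paths[i] = r
    return paths

# Base vs stressed parameters (illustrative values only)
base = simulate_vasicek(0.03, kappa=0.5, theta=0.03, sigma=0.01,
                        horizon=3.0, dt=1 / 52, n_paths=2000)
stressed = simulate_vasicek(0.03, kappa=0.5, theta=0.06, sigma=0.02,
                            horizon=3.0, dt=1 / 52, n_paths=2000)
```

The stressed paths drift toward the higher long-run level `theta` and fan out more widely, which is what then drives the stressed EE and PFE numbers downstream.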

Gitbook ccr stress test

Github portfolio

13. Counterparty Risk Measure

Credit exposure is the amount a bank can potentially lose in the event that one of its counterparties defaults. Note that only OTC deals (and security financing transactions) are subject to counterparty risk. We define replacement risk, in the context of this report, as the maximum of the PFE over a set of pre-specified valuation time buckets.

Note that the valuation methodologies used to calculate exposure can be very different from front-office pricing, since for credit exposure calculations what matters in this project is the distribution of deal values under the real-world measure at different times in the future. The valuation methodologies need to be optimized to perform the sufficiently large number of calculations required to obtain such a distribution. Because of the computational intensity of calculating counterparty exposures, compromises are usually made with regard to the number of simulation time buckets and the number of scenarios.

Gitbook ccr measure

Github cash flow

14. Add-on Exposure

Add-on factor tables (on a profile basis) are uploaded to the production system to monitor replacement risk, and the system can readily pick up the tables for exposure calculation. A complete term profile of add-on factors for FX Forward and FX Option trades (including buy/sell domestic currency and sell/buy foreign currency, with gross exposure and collateralized exposure) is stored in the production system. The system also stores add-on factors for Repo, Reverse Repo, Security Bought & Sold, and Security Borrowing and Lending across all currencies, issuer types, credit ratings, underlying types and underlying terms.

A counterparty’s exposure limit may be time-dependent and set in currencies other than USD; the counterparty’s exposure profile is likewise time-dependent. The exposure calculated at each time bucket should therefore be compared with the limit set for the corresponding time interval, and if the exposure exceeds the limit, the system should trigger a limit breach warning/violation.
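
A minimal sketch of the bucket-aware limit check (NumPy; all amounts and grids are invented), treating the limit schedule as a step function in time:

```python
import numpy as np

# Exposure profile (USD) at the system's time buckets (years)
exp_times = np.array([0.5, 1.0, 2.0, 5.0])
exposure = np.array([4.0e6, 6.5e6, 8.0e6, 5.0e6])

# Time-dependent limit schedule, possibly on a different grid:
# each limit applies from its start time until the next one begins
limit_times = np.array([0.0, 1.0, 3.0])
limits = np.array([10.0e6, 7.0e6, 6.0e6])     # step-down limit structure

# Find the limit in force at each exposure bucket (step-function lookup)
idx = np.searchsorted(limit_times, exp_times, side="right") - 1
limit_at_bucket = limits[idx]

breaches = exposure > limit_at_bucket
```

Here the 2-year bucket breaches (8.0m against the 7.0m limit in force from year 1), while the other buckets pass; limits quoted in other currencies would be converted to the exposure currency before the comparison.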

Gitbook addon

Github fair value

15. Intraday Replacement Risk

For new and amended deals completed intraday, the MTM values, or premiums reflecting these values, will either be retrieved directly from the product systems (assuming that appropriate pricing parameters and market data were specified at the time of input) or entered manually by the trader. The referenced FPE, calculated with a risk factor based on a transaction’s product type and underlying attributes, is then added to this MTM value to arrive at the replacement risk for the deal. The overall replacement risk calculation is restricted to MAX(0, MTM) + FPE, so that in no instance will a negative MTM be considered in the calculation.

The percentage replacement risk factor is determined as the ratio of the upward diffused price to the strike price. For long puts, short equity forwards and short mutual fund forwards, the current price of the equity is diffused downwards with drift equal to 0 (i.e., no directional bias) and volatility set to the greatest of the 1-, 3- and 5-year standard deviations. The percentage replacement risk factor is then the ratio of the downward diffused price to the strike price.
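
Putting the two pieces together, a sketch (function names and all numbers are ours, not the system’s) of the floored replacement-risk formula and the diffused-price risk factor:

```python
import math

def replacement_risk(mtm: float, fpe: float) -> float:
    """Intraday replacement risk: negative MTM is floored at zero."""
    return max(0.0, mtm) + fpe

def pct_risk_factor(spot, strike, horizon, vols, upward=True):
    """Percentage replacement-risk factor from a zero-drift diffusion.

    Volatility is the greatest of the supplied (e.g. 1-, 3- and 5-year)
    standard deviations; the spot is diffused up or down by that move.
    """
    vol = max(vols)
    shock = vol * math.sqrt(horizon)
    diffused = spot * math.exp(shock if upward else -shock)
    return diffused / strike

# Illustrative: an upward diffusion against a 105 strike
factor = pct_risk_factor(spot=100.0, strike=105.0, horizon=1.0,
                         vols=(0.18, 0.22, 0.20), upward=True)
rr = replacement_risk(mtm=-2.5e5, fpe=1.0e6)   # negative MTM floored to 0
```

The direction of the diffusion (upward or downward) follows the side of the trade, as described in the paragraph above.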

Gitbook intraday

Github asset

16. Collateralized Exposure

This collateral method is built on a mixture of backward- and forward-looking styles. The counterparty exposure is measured on a date when the counterparty is deemed to be in default, which is consistent with the terminology and concept of “Exposure at Default” in CCR. Standing at a reporting time bucket t, the collateral assets have been posted in the past, and the collateralized exposure depends on the “liquidation” value of the derivative portfolio and collateral assets at some future time.

To measure the counterparty exposure at a future time t, we first need to calculate the portfolio value. The portfolio valuation is consistent whether or not there is a collateral agreement. Time t is at the end of the settlement period and the beginning of the liquidation period. The Bank faces higher market risk when it needs more time to liquidate (or replace) the portfolio. The length of the liquidation period depends on trade types and traits (notional, term, etc.), as well as on market conditions, since some products may become very illiquid during financial stress. The liquidation period should therefore be defined at the trade level according to prescribed rules, and should be allowed to change (e.g. for stress testing purposes).

Gitbook collateral exposure

Github collateral swap

17. Collateral Methodology

When the Bank determines that the counterparty is in default, it will start to negotiate new trades to replace the existing derivative portfolio. At the same time, it will take hold of the collateral assets and try to sell them in the market. Value fluctuations of the portfolio and the collateral assets during their liquidation periods create risk for the Bank. In a CCR model that inherently incorporates wrong-way risk, the liquidation values of both trades and collateral assets need to be calculated conditional on the counterparty being in default.

Although our method is logically more consistent with the counterparty exposure definition, it can be varied by “shifting” the exposure calculation time t along the timeline. If t is set at the end of the liquidation (or close-out) period, we have a backward-looking model; if t is set at the beginning of the settlement period, we have a forward-looking model.
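
A stylised calculation (NumPy; the haircut, amounts and distribution are invented) of the exposure at time t net of previously posted collateral, floored at zero:

```python
import numpy as np

rng = np.random.default_rng(3)

def collateralized_exposure(v_t, c_posted, haircut=0.05):
    """Exposure at liquidation: portfolio value less the liquidation
    value of collateral posted earlier, floored at zero.
    """
    return np.maximum(v_t - (1.0 - haircut) * c_posted, 0.0)

n_paths = 50_000
# Simulated portfolio value at the exposure measurement time t
v_t = rng.normal(2.0e6, 1.5e6, n_paths)
# Collateral posted before default, liquidated at a haircut
c_posted = 1.8e6

exp_paths = collateralized_exposure(v_t, c_posted)
ee = exp_paths.mean()
pfe_95 = np.quantile(exp_paths, 0.95)
```

Shifting where `v_t` and the collateral value are measured along the settlement/liquidation timeline is exactly the backward- versus forward-looking choice described above; in a wrong-way-risk model both would additionally be conditioned on the default event.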

Gitbook collateral methodology

Github principal

18. Counterparty Credit Risk BackTest

Backtesting is a statistical test whose significance depends on the amount of data used. A backtesting data set is a set of forecasts and the corresponding realisations of those forecasts, i.e. what actually occurred. This backtesting data set can be put together in a number of ways.

The backtesting data set can be aggregated over time, over trades/risk factors, or over both time and trades/risk factors. The time period over which data is aggregated is referred to as the observation window. There are a number of methodologies for generating a backtesting data set over a given observation window; a selection of frequently used methodologies is set out below.

Gitbook ccr backtest

Github index

19. Counterparty Credit Risk Jobs

A job is a specific instance that will be sent to the compute framework. It associates a job spec with a specific anchor timestamp and trade timestamp. These determine the precise bi-temporal version of market/reference data and trades respectively.

A market data path represents a possible evolution of market data through time. Generally, all paths start at the same place with the real world market data, but evolve differently to each other over time. Future market data points on a path may be generated either through a simulation model (Monte Carlo paths), through application of pre-specified ‘shocks’ to each market data point, or may be real world values if the path is being generated retrospectively (e.g. for back testing).

Gitbook ccr job

Github convertible factor

20. Counterparty Credit Risk Limit Monitoring

Limits are set to cap the allowable exposure for an ‘Exposure Definition’, while Trading Restrictions are set to ensure adherence to rules or policy that are not an exposure-versus-limit check, for example that maximum allowable tenors are not exceeded or business rules are not broken (e.g. any Repo trade must have an enforceable legal agreement governing transactions between the organization and the Counterparty of the Agreement).

It is assumed that every trade will be either directly or indirectly mappable to all Aggregation Set Dimensions. It should be noted, however, that the Aggregation Set trade membership rules are sometimes specific to the kind of Aggregation Set (the Aggregation Set Type) and to the Risk Metric linked to the Aggregation Set.

Gitbook ccr limit

Github fx chooser

21. Pre-Deal Check of FX Forward

A Foreign Exchange Forward Contract is an instrument that allows the buyer to lock in a foreign exchange rate for a specified date in the future. For instance, the 2-year forward rate for USD/CAD is 0.951067573351087 and the 3-year rate is 0.942640335579959. To get a forward rate for a deal that matures in 2.5Y (the system does not provide a 2.5Y forward rate), we can use the linear interpolation method described in the Appendix to derive the appropriate forward FX rate.
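
The interpolation itself is a one-liner; using the two quoted rates from the example above (the function name is ours):

```python
def interp_forward(t: float, t1: float, f1: float, t2: float, f2: float) -> float:
    """Linearly interpolate a forward FX rate between two quoted tenors."""
    return f1 + (f2 - f1) * (t - t1) / (t2 - t1)

# Quoted USD/CAD forward rates from the example
f_2y = 0.951067573351087
f_3y = 0.942640335579959
f_2p5y = interp_forward(2.5, 2.0, f_2y, 3.0, f_3y)
```

Since 2.5Y sits exactly halfway between the quoted tenors, the interpolated rate is the midpoint of the two quotes.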

After the system calculates the individual FX Forward’s exposure based on the add-on factor, it adds the exposure profile on top of the pre-deal counterparty-level exposure to get the post-deal counterparty-level exposure. However, the time buckets of the pre-deal counterparty-level exposure might be defined differently from the time buckets of the individual FX Forward deal’s exposure profile.
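
One simple way to reconcile the two grids (a sketch assuming linear interpolation with flat extrapolation is acceptable; all amounts and grids are invented) is to re-bucket the deal profile onto the counterparty grid before adding:

```python
import numpy as np

# Pre-deal counterparty-level exposure on its own time buckets (years)
cpty_times = np.array([0.25, 0.5, 1.0, 2.0, 5.0])
cpty_exposure = np.array([1.0e6, 1.4e6, 2.0e6, 2.6e6, 1.8e6])

# New FX Forward's add-on exposure profile on a different grid
deal_times = np.array([0.5, 1.5, 3.0])
deal_exposure = np.array([2.0e5, 3.0e5, 2.5e5])

# Align the deal profile onto the counterparty buckets before adding
# (np.interp extrapolates flat outside the deal's own grid)
deal_on_cpty_grid = np.interp(cpty_times, deal_times, deal_exposure)
post_deal = cpty_exposure + deal_on_cpty_grid
```

The choice of interpolation (and whether the deal profile should instead be bucketed conservatively, e.g. taking the maximum over each interval) is a methodology decision the pre-deal check would need to document.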

Gitbook pre-check

Github xccy