Value at risk (VAR) is a method for calculating and controlling exposure to financial risk factors. It measures the volatility of an institution’s assets – the greater the volatility, the greater the risk of loss.
We can define VAR as a single number (currency amount or percentage return) which estimates the expected maximum loss of a portfolio over a given time horizon (the holding period) and at a given confidence level as a result of an adverse change in risk factors (for example, interest rates, exchange rates and stock prices).
Less formally, we can say that VAR is a currency amount, say Y dollars, where the chance of losing more than Y dollars is, for example, 5% over some future time period. This is a statement of probability, so VAR calculations cannot be relied upon with certainty.
It tells us: “What loss level is such that we are α percent confident it will not be exceeded in N business days?”. Formally, VaR at confidence level α is the loss threshold VaRα such that

P(Loss > VaRα) = 1 − α

where 1 − α is the small tail probability shown as the shaded area under the loss density curve, and α is called the confidence level.
Historical perspective of VaR
In 1994, Dennis Weatherstone, then chairman of J.P. Morgan, felt that a single measure of the firm’s overall risk should be calculated. He demanded that a one-page report be delivered to him after the close of business each day, summarizing the company’s global exposure and providing an estimate of potential losses over the next 24 hours. The result was J.P. Morgan’s famous “4.15 Report” (so called because it was delivered to Mr. Weatherstone at 4:15pm each day) and the beginning of an amazingly successful risk-management tool known as VaR.
In October 1994, J.P. Morgan launched RiskMetrics, a tool that enables market participants to estimate their market risk exposures under the VaR measure.
In June 1996, J.P. Morgan and Reuters together built a new and more powerful version of the RiskMetrics model.
After many years of fine-tuning and expansion, the RiskMetrics model is mature and reliable.
RiskMetrics model is a parametric VaR approach, which uses historical time series analysis to derive estimates of volatilities and correlations on a large set of financial instruments. Under normality assumption, the distribution of past returns can be modeled to provide us with a reasonable forecast of future returns over different horizons.
RiskMetrics delta VaR and delta-gamma VaR methods approximate the portfolio pricing function, around the current values of the underlying risk factors, by a linear and a quadratic function, respectively. The drawback of this approach is that these approximations may not be accurate for extreme movements of the risk factors (which produce a VaR event, or a change in portfolio value larger than the normal distribution predicts) for a derivative portfolio (i.e., one whose payoff is non-linear).
Different VaR Approaches
There are two popular approaches to calculating VAR:
The first approach, which we refer to as the statistical approach, involves forecasting a portfolio’s return distribution using probability and statistical models, such as the parametric VaR approach.
The second approach is referred to as scenario analysis. This methodology simply revalues a portfolio under different scenarios of market rates and prices, such as historical scenarios and Monte Carlo scenarios.
In other words, the two approaches differ in two ways:
• How the changes in the values of financial instruments are estimated as a result of market movements.
• How the potential market movements are estimated.
In terms of estimating changes in value there are basically two approaches: analytical methods (based on a Taylor expansion) and simulation methods (also referred to as full valuation methods).
In terms of how market movements are estimated, there are RiskMetrics method, Historical Simulation method and Monte Carlo method.
– RiskMetrics method uses historical time series to estimate the variance-covariance (VCV) matrix of risk factors, from which, under the normality assumption, the risk-factor distribution can be derived analytically.
– Historical simulation makes no explicit assumptions about the distribution of asset returns. It is usually applied under a full valuation model.
– Under MC simulation, returns are generated using a pre-defined stochastic process.
As it stands, no single method is best for every situation. Each method has its strengths and weaknesses, and together they give a more comprehensive perspective of risk.
For example, a parametric approach may be used for instant risk measurement during a trading day, while a simulation approach may be used to provide a fuller picture of risk (in particular, nonlinear risks) by the end of the trading day.
Parametric VaR Approach
This is the method used in RiskMetrics, a service originally developed by bankers at JP Morgan and which is now conducted by an independent company called RiskMetrics.
This approach relies heavily on matrices to calculate the value at risk. It involves using in-house or published (from RiskMetrics) volatility and correlation data in the matrix calculations. It is also called the variance-covariance, analytic or correlation approach.
The approach makes a number of assumptions, including the crucial assumption that the returns on the assets in a portfolio are normally distributed.
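Under the normality assumption, VaR follows directly from a portfolio’s mean and volatility of returns. The sketch below is a minimal illustration of this idea, not the RiskMetrics implementation itself; the portfolio value and volatility figures are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def parametric_var(value, mu, sigma, confidence=0.99, horizon_days=1):
    """Parametric (variance-covariance) VaR under the normality assumption.

    value: current portfolio value; mu, sigma: mean and std dev of daily returns.
    Returns the loss threshold as a positive currency amount.
    """
    z = NormalDist().inv_cdf(1 - confidence)                 # e.g. -2.326 at 99%
    worst_return = mu * horizon_days + z * sigma * sqrt(horizon_days)
    return -value * worst_return

# Hypothetical $10m portfolio, zero mean return, 1% daily volatility:
var_99 = parametric_var(10_000_000, 0.0, 0.01, confidence=0.99)
print(f"1-day 99% VaR: ${var_99:,.0f}")   # roughly 2.33% of portfolio value
```

Note that the entire calculation reduces to a quantile of the normal distribution, which is what makes this approach fast enough for intraday risk measurement.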
Historic VaR Approach
Historical simulation is a relatively simple approach to calculating value at risk that avoids some of the drawbacks of the variance-covariance approach. In particular, it avoids the assumption that returns on the assets in a portfolio are normally distributed.
Instead, it uses actual historical returns on the portfolio assets to construct a distribution of potential future portfolio profits and losses, from which the VAR can be read. This approach requires minimal analytics. All it needs is a sample of the historic returns on the different instruments in the portfolio whose value at risk we wish to calculate.
This method uses actual daily historical returns on portfolio assets to calculate potential portfolio losses and, therefore, it captures the abnormality of asset returns if it is present in the historical data. RiskMetrics, on the other hand, uses estimates based on averages over a specified period rather than actual returns.
Because it uses actual historical returns, extreme events in the markets (such as the September 11, 2001 terrorist attack) can possibly be picked up in the results.
The assumptions that underpin the variance-covariance approach, including the key one relating to the normal distribution of returns, are not required for the historical simulation method. What’s more, it is conceptually simple and relatively easy to implement, making it a popular approach to value at risk calculation.
The historical simulation approach, which is usually applied under a full valuation model, makes no explicit assumptions about the distribution of asset returns.
Market movements are thus estimated from historical scenarios. Under historical simulation, portfolios are valued over a number of different user-defined historical time windows.
These lookback periods typically range from 6 months to 2 years.
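The mechanics are simple enough to sketch in a few lines: sort the observed losses and read VaR off the empirical quantile. The 20-day P&L history below is hypothetical, and the particular order-statistic convention used for the quantile is a common choice, not the only one.

```python
from math import ceil

def historical_var(pnl_history, confidence=0.95):
    """Historical-simulation VaR: the confidence-level quantile of the
    empirical loss distribution; no distributional assumption is made."""
    losses = sorted(-p for p in pnl_history)         # losses as positive numbers
    k = ceil(confidence * len(losses)) - 1           # order statistic for the quantile
    return losses[k]

# Hypothetical history of 20 daily P&L figures:
pnl = [120, -80, 45, -200, 15, 60, -30, 90, -150, 10,
       75, -40, 25, -310, 55, -20, 35, -95, 5, 70]
print(historical_var(pnl, 0.95))   # -> 200: exactly one day (the -310 loss) exceeds it
```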
Monte Carlo VaR
Historical simulation quantifies risk by replaying one specific historical path of market evolution; Monte Carlo simulation approaches, by contrast, attempt to generate many more paths of market returns.
Monte Carlo (or stochastic) simulation is a more flexible method than the other two methods. It involves simulating the random returns behavior of financial assets in a portfolio using the power of a computer. These returns are generated using a predefined stochastic process (for example, assume that interest rates follow a random walk) and statistical parameters that drive the process (for example, the mean and variance of the random variable).
Each simulation gives a possible value for the portfolio at the end of the relevant time period. After a sufficient number of simulations have been performed, the simulated distribution of portfolio values (from which the VAR estimate is inferred) should converge to the portfolio’s unknown true distribution.
Although this method is the most flexible, it also happens to be the most computer intensive, as the simulation may have to be replicated a substantial number of times to get the required level of accuracy. The more complex the portfolio, the more simulations required (as many as 10,000 may be needed, although linear positions require fewer iterations). As a result, the costs of the Monte Carlo method may outweigh the benefits in some cases.
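The procedure can be sketched as follows, assuming geometric Brownian motion as the predefined stochastic process; the process choice, parameters and path count are all illustrative assumptions, not a prescribed model.

```python
import random
from math import exp, sqrt

def monte_carlo_var(value, mu, sigma, confidence=0.99,
                    horizon_days=1, n_paths=10_000, seed=42):
    """Monte Carlo VaR: simulate terminal portfolio values under a predefined
    stochastic process (here geometric Brownian motion with annualized mu and
    sigma), then read VaR off the simulated loss distribution."""
    rng = random.Random(seed)
    t = horizon_days / 252                           # horizon in years
    losses = []
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        terminal = value * exp((mu - 0.5 * sigma**2) * t + sigma * sqrt(t) * z)
        losses.append(value - terminal)              # loss as a positive number
    losses.sort()
    return losses[int(confidence * n_paths) - 1]

# Hypothetical $10m portfolio, 5% annual drift, 16% annual volatility:
print(f"1-day 99% MC VaR: ${monte_carlo_var(10_000_000, 0.05, 0.16):,.0f}")
```

With a lognormal process and these parameters the result should converge toward the parametric answer; the value of the method lies in swapping in processes and payoffs where no closed form exists.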
Choose an approach
A decision must be made as to how to compute VaR.
If the user is willing to assume that the portfolio return is approximately conditionally normal, use the standard RiskMetrics approach.
If the user’s portfolio is subject to nonlinear risk to the extent that the assumption of conditional normality is no longer valid, then the user can choose between two methodologies—delta-gamma and structured Monte Carlo.
It’s important to realize that all three approaches to measuring VaR are limited by a fundamental assumption: that future risk can be predicted from the historical distribution of returns. The parametric approach assumes normally distributed returns, which implies that parametric VaR is only meant to describe “bad” losses on a “normally bad” day. While Monte Carlo simulation offers a way to address the fat-tail problem by allowing a variety of distributional assumptions, volatility and correlation forecasts are still based on statistical fitting of historical returns. Historical simulation performs no statistical fitting, but it implicitly assumes that the exact distribution of past returns forecasts future return distributions. All three approaches are therefore vulnerable to regime shifts, or sudden changes in market behavior. Stress testing should explore potential regime shifts to best complement VaR and create a robust picture of risk.
The VAR calculation is dependent on a specified holding period, confidence level, volatility and, usually, correlation among the variables.
Broadly speaking, the calculation of VAR involves the following steps:
Step 1 – Determine the Holding Period / Forecast Horizon
The holding period used depends on the underlying assets and the underlying activities. For example, foreign exchange dealers are often interested in calculating the amount they might lose in a 1-day period. Therefore, in measuring the VAR of an active trading portfolio of liquid instruments, 1-day VAR is probably appropriate.
On the other hand, participants in more illiquid markets, or regulators, may be interested in estimating market risk over longer time horizons. In such cases, a 1-week, 10-day or even monthly holding period may be appropriate. The longer the holding period, the higher the VAR estimate will be.
Generally, active financial institutions (e.g., banks, hedge funds) consistently use a 1-day forecast horizon for VaR analysis of all market risk positions. It doesn’t make sense to project market risks much further, because trading positions can change dynamically from one day to the next. On the other hand, investment managers often use a 1-month forecast window, while corporations may apply quarterly or even annual projections of risk. Credit risk analysis usually assumes a one-year horizon because credit events occur less frequently.
The Basel Committee on Banking Supervision recommends that institutions use a minimum holding period of 10 days for the purposes of calculating their regulatory capital requirement. Institutions using shorter holding periods (typically one day) can scale up to 10 days by the square root of the time period required. For example, a 10-day VAR estimate will be √10 (or 10^0.5) times larger than the corresponding 1-day VAR estimate.
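The square-root-of-time rule above is a one-liner; this quick sketch makes the regulatory 10-day scaling concrete (the $1m figure is hypothetical, and the rule itself assumes i.i.d. returns).

```python
from math import sqrt

def scale_var(var_1day, horizon_days):
    """Scale a 1-day VaR to a longer holding period by the square root of
    time (valid only for i.i.d. returns with no autocorrelation)."""
    return var_1day * sqrt(horizon_days)

# A 1-day VaR of $1m becomes a 10-day regulatory VaR of about $3.16m:
print(round(scale_var(1_000_000, 10)))   # -> 3162278
```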
The important point is that the holding period should relate to the time period over which changes can occur in the portfolio.
Step 2 – Select the Confidence Level
The confidence level is used to select the degree of certainty associated with the VAR estimate.
How to choose? In choosing confidence levels, companies should consider worst-case loss amounts that are large enough to be material and occur frequently enough to be observable.
For example, with a 95% confidence level for market risk analysis, losses should exceed VaR about once a month (or once in 20 trading days), giving this risk statistic a visceral meaning.
Another example: if a bank needs to know the maximum loss it should expect on 99 days out of every 100, it needs to use a confidence level of 99% – on the remaining 1 day in 100, the bank expects to lose more than the VAR estimate.
Some claim that using a higher confidence level would be more conservative. However, a higher confidence level can lead to a false sense of security: it will not be understood as well, or taken as seriously, by risk takers and managers because losses will rarely exceed that level.
Furthermore, a high confidence level VaR is difficult to model and verify statistically. VaR models tend to lose accuracy after the 95% mark and certainly beyond 99%.
Note, however, that when using VaR for measuring credit risk and capital, we should apply a 99% or higher confidence level VaR because we are concerned with low-probability, event-driven risks (i.e., tail risk).
The choice of 95% confidence level at J.P. Morgan goes back to former CEO Dennis Weatherstone, who reputedly said, “VaR gets me to 95% confidence. I pay my risk managers good salaries to look after the remaining 5%.”
The higher the confidence level, the higher the VAR amount will be. Many financial institutions use different numbers, but generally confidence levels between 95% and 99% are popular. A 95% confidence level implies that the VAR estimate will be exceeded about once a month, assuming that a year contains about 252 trading days. The Basel Committee on Banking Supervision proposes that institutions use a confidence level of 99%, which implies that only two to three breaches of the VAR estimate occur during the year.
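The trade-off above can be made concrete by tabulating, for each confidence level, the normal z-multiplier used in parametric VaR and the expected number of VaR breaches in a 252-trading-day year. A quick sketch:

```python
from statistics import NormalDist

def breach_table(confidences, trading_days=252):
    """For each confidence level, return the one-sided normal quantile used
    in parametric VaR and the expected number of exceedances per year."""
    rows = []
    for conf in confidences:
        z = NormalDist().inv_cdf(conf)
        rows.append((conf, z, trading_days * (1 - conf)))
    return rows

for conf, z, breaches in breach_table([0.90, 0.95, 0.99]):
    print(f"{conf:.0%}: z = {z:.3f}, expected breaches/year ≈ {breaches:.1f}")
```

At 95% the table gives about 12.6 breaches a year (roughly monthly), and at 99% about 2.5, matching the exceedance frequencies quoted above.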
Step 3 – Choose the Base Currency
The base currency for calculating VaR is typically the currency of equity capital and the reporting currency of a company. For example, Bank of America would use USD to calculate and report its worldwide risks, while UBS (Union Bank of Switzerland) would use Swiss francs.
Step 4 – Create a Probability Distribution of Likely Returns
Several methods can be used to create a probability distribution of returns for an asset or portfolio. The easiest to understand and the one most frequently used in VAR models is the normal distribution.
Such distributions have a high probability that an observation will be close to the mean and a low probability that an observation will be far away from the mean. In other words, the normal distribution peaks at the mean and tails off at the extremes. It therefore has certain properties that are useful when modeling the market risk of many financial instruments (where extreme price swings are infrequent and small price movements are common).
Real World Distributions
Most studies show that the distribution of returns of financial assets and liabilities is not normal. Due to extreme events in the financial markets (such as stock or bond market crashes), real world distributions tend to exhibit ‘fat tails’. In other words, the peak of a real world return distribution is narrower and the tails are fatter than that predicted by the normal distribution.
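One simple way to see this effect is a regime-mixture simulation: returns drawn mostly from a calm regime but occasionally from a turbulent one have fatter tails than a single normal distribution with the same overall volatility. The regime probabilities and volatilities below are purely illustrative assumptions.

```python
import random
from statistics import NormalDist, pstdev

random.seed(0)
# Illustrative fat-tailed returns: 95% calm days (1% vol), 5% turbulent days (4% vol)
returns = [random.gauss(0, 0.01) if random.random() < 0.95 else random.gauss(0, 0.04)
           for _ in range(100_000)]

sigma = pstdev(returns)
empirical = sum(abs(r) > 3 * sigma for r in returns) / len(returns)
normal_pred = 2 * (1 - NormalDist().cdf(3))          # about 0.27% under normality
print(f"3-sigma days: empirical {empirical:.2%} vs normal prediction {normal_pred:.2%}")
```

The mixture produces several times more 3-sigma days than the normal distribution predicts, which is precisely why a normal-based VaR understates tail losses.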
Why has measuring and monitoring market risk gained increasing interest?
The answer lies in the significant changes in the financial markets.
-Traded securities have replaced many illiquid instruments; e.g., loans and mortgages have been securitized, moving from the banking book to the trading book.
-Global securities markets have expanded, and both exchange-traded and over-the-counter derivatives have become major components of the markets.
-Increased liquidity and pricing availability, along with a new focus on trading, led to a change in market risk management practice: moving away from accrual-accounting-based management to mark-to-market (MtM) based management. An increasing number of firms manage their daily earnings from a mark-to-market perspective.
Given the move to frequently revalue market positions, managers have become more concerned with estimating the potential effect of changes in market conditions on the value of their positions.
Significant efforts have been made to develop methods to measure financial performance.
-However, the exclusive focus on returns has led to incomplete performance analysis. Return measurement gives no indication of the cost in terms of risk (volatility of returns). Higher returns can only be obtained at the expense of higher risks.
-Investors and trading managers are searching for common standards to measure market risks and to estimate the risk/return profile of individual assets or portfolios.
-The regulatory agencies have also been searching for ways to measure market risk, pushing banks, investment firms, and corporations to integrate measures of market risk into their management philosophy.
Advantages of the VaR Measure
One of the most useful features of VAR as a measure of risk is the simplicity of the end result. The fact that a measure of market risk can be conveyed to shareholders and senior management in non-technical terms (‘we are 99% confident of not losing more than Y dollars over one day’, and so on) makes VAR an extremely powerful risk management tool.
VaR, as a statistical model of risk measurement, allows an objective, independent assessment of how much risk is actually being taken.
It permits consistent and comparable measurement of risk across instruments and portfolios, irrespective of the level of aggregation.
VaR is a flexible risk measure:
-VaR can be specified for various horizons (generally between 1 day and 1 month) and confidence levels (generally between 90% and 99%).
-VaR can be expressed as a percentage of market value or in absolute currency terms (e.g., USD).
It is easy to examine each component’s contribution to total portfolio risk.
It is easy to calculate, aggregate and report.
Other benefits include:
Unlike some measures of risk (such as sensitivity measures), VAR is not limited to focusing on the risk associated with individual instruments. It can aggregate risks associated with different instruments within a portfolio.
VAR promotes more efficient allocation of resources by encouraging financial institutions to avoid being over-exposed to one source of risk.
VAR is important for performance evaluation in a trading environment. The natural instinct of most traders is to take on additional risk, but VAR can help quantify this risk and so contribute to the establishment of position limits for traders.
VAR is helpful for market regulators who wish to ensure that financial institutions do not go bust. By exposing the risk profile of such institutions, regulators can assess the risk and calculate the appropriate capital requirement to cushion this risk.
Drawbacks of VAR as a Measure
Perhaps the major drawback with VAR as a measure of Market Risk is the assumption in many models (variance-covariance and some Monte Carlo simulations) that portfolio returns are normally distributed. All market participants understand that from time to time there are unusual or extreme events in the market that are not captured by a normal distribution. When such events occur, VAR calculations may underestimate the true value at risk. Therefore, relying on the assumption of normally distributed returns is dangerous when there are extreme movements in the market.
Other drawbacks include:
Some VAR models use historical return data in their calculations. The assumption in such models is that the past is a reliable guide to the future, which is not always the case.
Some VAR models (variance-covariance approaches) are unsuitable for portfolios containing options due to the non-linear behavior of options. In other words, the ratio of change in the option value with respect to changes in the underlying asset value (delta) is not constant. Having said this, while the scope is limited, there are approximating methods in variance-covariance models to estimate the VAR of portfolios with optionality (although most sophisticated institutions will use simulation techniques when options are involved).
There may be difficulties associated with both the capturing and the reliability of data. If the captured data is unreliable, VAR models are worthless.
Some VAR systems, particularly Monte Carlo simulations, are costly and can prove difficult to set up.
VAR does not offer a single consistent method for calculating market risk; different methods can produce different results on a daily basis.
VaR is not a coherent risk measure: it violates the subadditivity axiom.
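A standard textbook-style illustration of this failure, with hypothetical numbers: two independent bonds, each defaulting with 4% probability, individually show zero 95% VaR, yet the diversified portfolio has a strictly positive 95% VaR.

```python
def var_discrete(outcomes, confidence):
    """VaR of a discrete loss distribution given as (loss, probability) pairs:
    the smallest loss level l such that P(loss > l) <= 1 - confidence."""
    for loss, _ in sorted(outcomes):
        tail = sum(p for l, p in outcomes if l > loss)
        if tail <= 1 - confidence:
            return loss

# Each bond alone: a 4% chance of losing 100, so 95% VaR sees no risk at all.
single = [(0, 0.96), (100, 0.04)]
# Both bonds together (independent defaults):
combined = [(0, 0.96**2), (100, 2 * 0.96 * 0.04), (200, 0.04**2)]

print(var_discrete(single, 0.95))     # -> 0
print(var_discrete(combined, 0.95))   # -> 100, i.e. VaR(A+B) > VaR(A) + VaR(B)
```

Diversification here increases measured VaR, which is exactly the subadditivity violation; coherent alternatives such as expected shortfall do not share this defect.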
Finally, it is important to note that VAR in itself is not risk management. It is a tool for measuring market risk and is therefore part of a complete range of activities and duties involved in managing and minimizing a financial institution’s risk exposure.
VAR & Extreme Events
In practice, a 1-day VAR with even a 99.999% confidence level will not be able to predict major financial disasters. A 99.999% confidence level is equivalent to approximately 4.265 standard deviations but, on Black Monday (stock market crash) of 1987, the S&P 500 Index moved by 22.3 standard deviations. VAR should therefore be used with a clear understanding of its limitations and should be supplemented by additional measures (such as stress testing) that complete the picture of the risks in the portfolio under different and more extreme conditions.