A VAR MODEL AS RISK MANAGEMENT TOOL AND RISK ADJUSTED PERFORMANCE MEASURES
Andrea CREMONINO°°, Marco GIORGINO
Department of Economics and Production, Politecnico di Milano
Piazza Leonardo da Vinci, 32, 20133 Milano (Italy)
Abstract
We provide evidence for risk management as a value-creation activity. To test this proposition, we introduce risk control into portfolio decision making, where, in order to assess risk, we developed a VaR model able to take in multidimensional risks. We then ran a simulation according to Italian banking operational procedures: our evidence shows a better performance both in total return and in Sharpe ratio. We argue that the Basle standard requirements are less heavy than those granted by internal models modified by the prudential coefficients.
Key words: Risk Measures, Risk Management, Performance Measures, and Supervision Authorities' Regulation.
JEL classification: G21, G11, and G31.
Introduction
In the nineties European banks faced a dramatic change in their competitive environment: the decrease of interest rates and the growing competition owing to the forthcoming EMU era affected banks' earnings. In order to increase profitability most banks expanded financial activities, so they had to strengthen risk management. Current risk management literature focuses on identifying equilibrium scenarios in which a firm minimizes the total variability of its cash flows (e.g. Smith and Stulz (1985) and Froot, Scharfstein and Stein (1993)). In these models, the role of risk management is to mitigate the costs induced by cash flow volatility, which result from capital market imperfections, thus creating value for shareholders. However, the existing models do not specify the source of variance in the distribution of firms' cash flows. They focus only on the benefits of risk management in reducing the total risk of a firm's portfolio. Consequently, these models ignore multidimensional risks and the possible covariation among risks within a firm. By contrast, this paper provides evidence that firms can create value by taking risk analysis into decision making: here we introduce an example of portfolio management.
On the other hand, the national supervising authorities forming the "Basle Committee" developed their regulatory schemes (the Accord in 1988) to take in market risks. Since they adopted a similar
° Even if this is the output of a research work jointly carried out by the authors, Marco Giorgino has written the Introduction and Conclusions, while Andrea Cremonino wrote Paragraphs 1, 2 and 3.
°° Corresponding Author: Tel. +39 2 23992778; Fax +39 2 23992720; Email: andrea.cremonino@polimi.it
approach, the output was a compulsory scheme that provides capital requirements depending on the amount and riskiness of the securities in banks' portfolios. Later the Basle Committee encouraged the development of internal models for risk assessment that could be used to determine capital requirements according to a prudential set of coefficients. This paper provides evidence that such conditions make internal models inefficient, because the capital requirements they produce are larger than those granted by the standard model (Sironi 1985).
This paper proceeds as follows. Section I develops the model; the test framework and sample data are described in Section II, while the results are discussed in Section III. Section IV concludes.
I. Model
Our model builds on the framework of VaR (the model is well known in the literature; see for instance Jorion (1995) and J.P. Morgan (1995)). The widest definition of VaR is "the maximum expected loss in the market value of a given position that can be incurred over a defined time horizon within a confidence range". In mathematical terms:

MV₀ × Prob(ΔV < C)   (I)

where C is a fixed loss level and MV₀ is the position market value. The chosen quantile expresses the degree of risk aversion, because higher values correspond to higher confidence levels, but these imply larger amounts of absorbed capital. Incidentally, risk aversion is an element of competitive policy in some activities or market segments, since the riskiest operations have different capital absorption, and hence different costs, for each institution. We assume a confidence level of 95% because this model is meant as a management tool, where we need to verify the correctness of the estimates.¹
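The link between the chosen confidence level and how often actual losses should exceed the VaR estimate (see the footnote on the 95% level) can be made explicit with a minimal sketch; the Python function below and its name are ours, for illustration only:

```python
def expected_exceedance_interval(confidence):
    """Average number of trading days between losses larger than the
    VaR estimate, assuming the model is correct: 1 / (1 - confidence)."""
    return 1.0 / (1.0 - confidence)

# At 95% a larger-than-estimated loss should appear about once every
# twenty trading days; at 99%, about once every hundred.
interval_95 = expected_exceedance_interval(0.95)  # ≈ 20
interval_99 = expected_exceedance_interval(0.99)  # ≈ 100
```

The 95% choice thus yields frequent enough exceptions to backtest the model within a few months of data, which is why the authors prefer it for a management tool.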
Intuitively, VaR is proportional to position sensitivity (endogenous factor) times the expected future volatility of the market factors (exogenous factor). The market value change is estimated by parametric methods: the market value of a position is described as a function of the position's sensitivity to the relevant variables (e.g. rates and prices). These methods implicitly assume that securities are linear, i.e. that the rate of change of their value is constant as the financial variables evolve (or that a linear functional form is an acceptable approximation).

In order to estimate future variability we model the relevant financial variables as random variables following a lognormal distribution. That is, logarithmic returns, and not the financial variables themselves, are normally distributed, because the former remain meaningful on the negative tail. We chose logarithmic returns instead of arithmetic ones since they enjoy the properties of symmetry and time-series additivity.

Under the hypotheses introduced above, VaR takes the form:

VaR = MV₀ × σ × φ⁻¹(α)   (II)

where MV₀ is the market value, σ the return volatility and α the chosen quantile. This formula relies on the implicit hypothesis that the mean return is zero: J.P. Morgan adopted it because it was supported by empirical evidence and to avoid the question of whether VaR should be centred on the average, i.e.

VaR = [µ − σ × φ⁻¹(α)] × MV₀   (III)

where µ is the expected return. If we considered the mean too, we could not express a value of the cumulative distribution as a single-variable function of the standard deviation. Moreover, this choice means focusing on loss probability (the so-called downside risk). The hypothesis is the more accurate the shorter the time horizon.

In order to assess variance we use a historical method rather than implied volatility, both because the former can be applied to every activity and not only to traded securities, and because empirical evidence supports historical volatility.²
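Equation (II) can be sketched in a few lines of Python; this is a minimal illustration under the paper's zero-mean hypothesis, not the authors' implementation, and the function name and figures are ours:

```python
from statistics import NormalDist

def parametric_var(market_value, sigma, confidence=0.95):
    """Parametric VaR of equation (II) under the zero-mean hypothesis:
    VaR = MV0 * sigma * phi^-1(alpha)."""
    z = NormalDist().inv_cdf(confidence)  # standard normal quantile
    return market_value * sigma * z

# e.g. a 1,000,000 position with 1% daily return volatility at 95%:
daily_var = parametric_var(1_000_000, 0.01)  # ≈ 16,449
```

With the 95% quantile z ≈ 1.645, the position above can be expected to lose more than about 16,449 on roughly one trading day in twenty.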
Within historical estimations, we chose an EWA (Exponentially Weighted Average) method, because it attributes exponentially decreasing weights to older data. Under the condition λ ∈ [0; 1], the analytical equation is:

σ²_FORECAST(t) = (1 − λ) Σ_{n=1..t} λ^(n−1) r²(t − n)   (IV)

or, recursively:

σ²_FORECAST(t) = λ σ²_FORECAST(t − 1) + (1 − λ) r²(t − 1)   (IV′)

We set λ = 0.94 for the daily variance because this value maximizes the likelihood function (RiskMetrics™ (1995)). A unique decay factor provides a simple and easy-to-understand method and avoids inconsistency between different estimates (which guarantees the feasibility of the Cholesky decomposition of the variance matrix), while performing better than a GARCH(1,1).³

¹ A residue of 5% means that we should observe a loss greater than the estimated one on average once every twenty days, while with a 99% level once every one hundred days.
² Jorion (1995).
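The recursive form (IV′) is the convenient one in practice, since it needs only the previous forecast and the latest return; a minimal Python sketch (the function name and the seeding convention are ours):

```python
def ewma_variance(returns, lam=0.94):
    """EWMA variance forecast via the recursion (IV'):
    sigma2(t) = lam * sigma2(t-1) + (1 - lam) * r(t-1)^2.
    Seeded, as an illustrative choice, with the first squared return."""
    sigma2 = returns[0] ** 2
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1 - lam) * r ** 2
    return sigma2
```

With λ = 0.94, the most recent return carries weight (1 − λ) = 0.06 and older observations fade geometrically, which is exactly why no large dataset needs to be kept up to date.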
In estimating volatility from time series we suppose that the data are not serially autocorrelated, i.e. there is no correlation between an observation and the previous one: this allows us to estimate volatility over a given time horizon starting from the daily standard deviation times the square root of the period length. The advantage is that volatility can be estimated with no need to update huge datasets. It is more accurate to count the time horizon in trading days, because volatility arises in trading, as shown by empirical research⁴; we observe that this procedure has been adopted by the Basle regulation.

The non-autocorrelation hypothesis also provides conceptual support for not taking expected yields into consideration: since expected returns grow with time while volatilities grow with its square root, over the short term the latter are higher.

As a proxy of liquidity risk we might use the bid-ask spread (Cherubini (1995)), but it is very hard to calculate because the data requirements grow exponentially; thus we use transaction volumes. Actually the standard deviation of trading volume is more accurate, since it is more coherent with the model we developed. We point out that this analysis may be improper, because it treats a micro-level effect, i.e. a single security's spread, with a macro parameter, i.e. one referring to the whole market.
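Under the non-autocorrelation hypothesis, scaling the daily volatility to a longer horizon reduces to a one-liner; a Python sketch with illustrative names and figures of our own:

```python
import math

def scale_volatility(daily_sigma, horizon_days):
    """Scale a daily volatility to a horizon of `horizon_days` trading
    days under non-autocorrelation: sigma_T = sigma_1 * sqrt(T)."""
    return daily_sigma * math.sqrt(horizon_days)

# A 1% daily volatility over a 10-trading-day horizon:
ten_day_sigma = scale_volatility(0.01, 10)  # ≈ 0.0316
```

Counting the horizon in trading days rather than calendar days simply changes the value of T fed to the function.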
II. Test
In order to provide evidence that risk management creates value we implement an empirical experiment of portfolio management. Thus we need to define the test framework, then introduce the operational hypotheses, describe the sample data and finally choose the performance indexes.
II.A How the model works
The usefulness of a risk measurement model can be shown only by applying it to a portfolio management example. We can schematize how our simulation works with a flow diagram: a set of operation proposals is the input of the model; they are analyzed sequentially, because a proposal is approved only if
³ J. Boudoukh, M. Richardson, R. Stanton, R. Whitelaw (1995).
⁴ Hull (1993).