  • 1. The Bayesian Approach to Default Risk: A Guide
    Michael Jacobs, Jr., Credit Risk Analysis Division, Office of the Comptroller of the Currency
    Nicholas M. Kiefer, Cornell University, Departments of Economics and Statistical Science
    March 2010
    Forthcoming: "Rethinking Risk Measurement & Reporting", Risk Books, Ed. Klaus Böcker
    The views expressed herein are those of the authors and do not necessarily represent the views of the Office of the Comptroller of the Currency or the Department of the Treasury.
  • 2. Outline
    - Introduction
    - Elicitation of Expert Information
    - Statistical Models for Defaults
    - Elicitation: Example
    - Inference
    - Conclusions & Directions for Future Research
  • 3. Introduction
    - All competent statistical analyses involve subjective inputs, the importance of which is often minimized in a quest for objectivity
    - Justification of these inputs is an important part of supervisory expectations for model validation (OCC 2000, BCBS 2009b)
    - But in order to appear objective, estimation involving such judgments typically proceeds while ignoring qualitative information on parameters
    - However, subject-matter experts typically have information about parameter values and model specification
    - The Bayesian approach allows formal incorporation of this information by combining "hard" & "soft" data using the rules of probability
    - Another advantage is the availability of powerful computational techniques such as Markov Chain Monte Carlo (MCMC)
    - A difficulty in Bayesian analysis is the elicitation & representation of expert information in the form of a probability distribution
  • 4. Introduction (continued)
    - While it may not be important in "large" samples, expert information is of value if data is scarce, costly, or unreliable
    - Herein we illustrate the practical steps in the Bayesian analysis of PD estimation for a group of homogeneous assets
    - This is required for determining minimum regulatory capital under the Basel II (B2) framework (BCBS, 2006)
    - This also has implications for BCBS (2009a), which stresses the continuing importance of quantitative risk management
    - Focus 1st on elicitation & representation of expert information, then on Bayesian inference in nested simple models of default
    - As we do not know in advance whether default occurs or not, we model this uncertain event with a probability distribution
    - We assert that uncertainty about the default probability should be modeled the same way as uncertainty about defaults - i.e., represented in a probability distribution
  • 5. Introduction (concluded)
    - There is information available to model the PD distribution - the fact that loans are made shows that risk assessment occurs!
    - The information from this should be organized & used in the analysis in a sensible, transparent and structured way
    - First discuss the process for elicitation of expert information and later show a particular example using the maximum entropy approach
    - Present a sequence of simple models of default generating likelihood functions (within generalized linear mixed models):
      - The binomial model, the 2-factor ASRM of B2, and an extension with an autocorrelated systematic risk factor
    - We sketch the Markov Chain Monte Carlo approach to calculating the posterior distribution from combining data and expert information coherently using the rules of probability
    - Illustrate all of these steps using annual Moody's corporate Ba default rates for the period 1999-2009
  • 6. Elicitation of Expert Information
    - General definition: a structured algorithm for transforming an expert's beliefs on an uncertain phenomenon into a probability distribution
    - Here, a method for specifying a prior distribution of unknown parameters governing a model of credit risk (e.g., PD, LGD)
    - While the focus here is inference regarding unknown parameters, elicitation arises in almost all scientific applications involving complex systems
    - The expert may be an experienced statistician or a somewhat quantitatively oriented risk specialist (e.g., a loan officer, PM)
    - The situation also arises where uncertainty faced in decision-making needs to be expressed as a distribution in order to maximize an objective
    - Useful framework: identify a model developer or econometrician as a facilitator to transform the soft data into probabilistic form
    - The facilitator should be multi-faceted, having also business knowledge and strong communication skills
  • 7. Elicitation of Expert Information (continued)
    - In setting criteria for the quality of an elicitation, we distinguish between the quality of an expert's knowledge & the quality of the elicitation itself
    - By no means a straightforward task, even if it involves beliefs regarding only a single event or hypothesis (e.g., PD)
    - We seek assessment of probabilities, but it is possible that the expert is not familiar with their meaning or cannot think in these terms
    - Even if the expert is comfortable with these, it is challenging to accurately assess numerical probabilities of a rare event
    - In eliciting a distribution for a continuous parameter it is not practical to try eliciting an infinite collection of probabilities
    - Practically, an expert can make only a finite (usually limited) number of statements of belief (quantiles or modes)
    - Given these formidable difficulties an observer may question whether it is worth the effort to even attempt this!
  • 8. Elicitation of Expert Information (continued)
    - Often a sensible objective is to measure salient features of the expert's opinion - exact details may not be of highest relevance
    - Similarity to specification of a likelihood function: an infinite number of probabilities expressed as a function of a small set of parameters
    - Even if the business decision is sensitive to the exact shape, it may be that another metric is of paramount importance
    - Elicitation promotes a careful consideration by both the expert and facilitator regarding the meaning of the parameters
    - Two benefits: it results in an analysis that is closer to the subject of the model & gives rise to a meaningful posterior distribution
    - A natural interpretation is as part of the statistical modeling process - this is a step in the hierarchy & the usual rules apply
    - A stylized representation has 4 stages: preparation, specific summaries, vetting of the distribution & overall assessment
  • 9. Elicitation of Expert Information (concluded)
    - Non-technical considerations include the quality of the expert and the quality of the arguments made
    - While the choice of an expert must be justified, it is usually not that hard to identify one: individuals involved in risk management decision-making
    - It is useful to have a summary of what the expert knowledge is based on, and to be wary of any conflicts of interest
    - It is important that, if needed, training be offered on whatever concepts will be required in the elicitation (a "dry-run")
    - The elicitation should be well documented: set out all questions & responses, and the process of fitting the distribution
    - Documentation requirements here fit well into supervisory expectations with respect to developmental evidence for models
    - Further discussion of this can be found in Kadane and Wolfson (1998), Garthwaite et al (2005) and O'Hagan et al (2006)
  • 10. Statistical Models for Defaults
    - The simplest probability model for defaults for a homogeneous portfolio segment is binomial, which assumes independence across assets & time, with a common default probability
    - As in Basel 2 IRB and the rating agencies, this is marginal with respect to conditions (e.g., through taking long-term averages)
    - Suppose the value of the i-th asset in time t is a standard normal latent variable, and default occurs if the asset value falls below a common predetermined threshold (see the sketch after this slide)
    - It follows that default on asset i is distributed Bernoulli with the common default probability
    - Denote the defaults in the data by d and the total count of defaults by r (notation as in the sketch below)
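A sketch of the Model 1 setup referenced above, using standard one-factor notation; the symbols V_{it}, \epsilon_{it}, c, \theta, d_{it} and r are labels assumed here and may differ from those on the original slide:

      V_{it} = \epsilon_{it}, \qquad \epsilon_{it} \sim N(0,1) \text{ i.i.d. across } i \text{ and } t
      \text{default of asset } i \text{ in year } t \iff V_{it} < c, \qquad \theta = \Pr(V_{it} < c) = \Phi(c)
      d_{it} \sim \text{Bernoulli}(\theta), \qquad d = \{d_{it}\}, \qquad r = \sum_{t}\sum_{i} d_{it}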
  • 11. Statistical Models for Defaults (continued)
    - The distribution of the data and of the default count follows directly (see the sketch after this slide)
    - This is Model 1, which underlies rating agency estimation of default rates, where the MLE is simply the observed default rate
    - Basel II guidance suggests there may be heterogeneity due to systematic temporal changes in asset characteristics or to changing macroeconomic conditions, giving rise to our Model 2
    - In Model 2, x_t is a common time-specific shock (a systematic factor) and ρ is the asset value correlation (notation as in the sketch below)
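A sketch of the Model 1 likelihood and the Model 2 asset-value equation in the same assumed notation, with n the total number of firm-years:

      p(r \mid \theta) = \binom{n}{r}\, \theta^{r}\, (1-\theta)^{n-r}, \qquad \hat{\theta}_{MLE} = r/n
      \text{Model 2:}\quad V_{it} = \sqrt{\rho}\, x_t + \sqrt{1-\rho}\, \epsilon_{it}, \qquad x_t \sim N(0,1), \;\; \epsilon_{it} \sim N(0,1) \text{ independent}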
  • 12. Statistical Models for Defaults (continued)
    - The asset value remains standard normal, and the conditional (or period-t) default probability is given by equation (4) in the sketch after this slide
    - We can invert this for the distribution function of the year-t default rate, giving (5)
    - Differentiating with respect to the default rate A yields the well-known Vasicek distribution (5.1)
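A sketch of (4), (5) and (5.1) in the assumed notation, with A denoting the year-t default rate; these are the standard one-factor (Vasicek) formulas and may differ in labeling from the original slide:

      (4)\quad \theta(x_t) = \Pr\bigl(V_{it} < \Phi^{-1}(\theta) \mid x_t\bigr) = \Phi\!\left(\frac{\Phi^{-1}(\theta) - \sqrt{\rho}\, x_t}{\sqrt{1-\rho}}\right)
      (5)\quad \Pr\bigl(\theta(x_t) \le A\bigr) = \Phi\!\left(\frac{\sqrt{1-\rho}\, \Phi^{-1}(A) - \Phi^{-1}(\theta)}{\sqrt{\rho}}\right)
      (5.1)\quad f(A; \theta, \rho) = \sqrt{\frac{1-\rho}{\rho}}\, \exp\!\left(\tfrac{1}{2}\Phi^{-1}(A)^2 - \tfrac{1}{2\rho}\bigl[\sqrt{1-\rho}\, \Phi^{-1}(A) - \Phi^{-1}(\theta)\bigr]^2\right)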
  • 13. Statistical Models for Defaults (continued)
    - The conditional distribution of the number of defaults in each period is binomial given the period-t default probability, equation (6) in the sketch after this slide
    - From this we obtain the distribution of defaults conditional on the underlying parameters by integration over the default rate distribution (6.1)
    - By intertemporal independence we have the data likelihood across all years (7)
    - Model 2 allows clumping of defaults within time periods, but not correlation across time periods, so the next natural extension (Model 3) lets the systematic risk factor x_t follow an AR(1) process
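A sketch of (6), (6.1), (7) and the AR(1) extension, with r_t defaults out of n_t firms in year t; conditioning on x_t is equivalent to conditioning on the year-t default rate, and the unit-variance standardization of the AR(1) factor is an assumption made here:

      (6)\quad p(r_t \mid x_t, \theta, \rho) = \binom{n_t}{r_t}\, \theta(x_t)^{r_t}\, \bigl(1-\theta(x_t)\bigr)^{n_t - r_t}
      (6.1)\quad p(r_t \mid \theta, \rho) = \int \binom{n_t}{r_t}\, \theta(x)^{r_t}\, \bigl(1-\theta(x)\bigr)^{n_t - r_t}\, \phi(x)\, dx
      (7)\quad p(r_1, \dots, r_T \mid \theta, \rho) = \prod_{t=1}^{T} p(r_t \mid \theta, \rho)
      \text{Model 3:}\quad x_t = \tau\, x_{t-1} + \eta_t, \qquad \eta_t \sim N(0,\, 1-\tau^2)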
  • 14. Statistical Models for Defaults (concluded)
    - The formula for the conditional PD (4) still holds, but we no longer get the Vasicek distribution of the default rate (5), and the analogue of (6)-(6.1) must be taken with respect to the distribution of the systematic factor rather than the Vasicek-distributed default rate
    - Now the unconditional distribution is given by a T-dimensional integration, since the likelihood can no longer be broken up period-by-period - equation (8) in the sketch after this slide
    - The weighting density is the joint density of a zero-mean random vector following an AR(1) process
    - While Model 1 is a very simple example of a Generalized Linear Model (GLM; McCullagh and Nelder, 1989), Models 2 & 3 are Generalized Linear Mixed Models (GLMMs), a parametric mixture (McNeil and Wendin, 2007; Kiefer, 2009)
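A sketch of (8) in the assumed notation, where g(x_1, \dots, x_T; \tau) denotes the joint density of the zero-mean AR(1) systematic factor:

      (8)\quad p(r_1, \dots, r_T \mid \theta, \rho, \tau) = \int\!\cdots\!\int \prod_{t=1}^{T} \binom{n_t}{r_t}\, \theta(x_t)^{r_t}\, \bigl(1-\theta(x_t)\bigr)^{n_t - r_t}\; g(x_1, \dots, x_T; \tau)\; dx_1 \cdots dx_T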
  • 15. Elicitation: Example
    - We asked an expert to consider a portfolio of middle market loans in a bank's portfolio, typically commercial loans to un-rated companies (if rated, these would be about Moody's Ba-Baa)
    - This is an experienced banking professional in credit portfolio risk management and business analytics, who has seen many portfolios of this type in different institutions
    - The expert thought the median value was 0.01, the minimum 0.0001, that a value above 0.035 would occur with probability less than 10%, and that an absolute upper bound was 0.3
    - Quantiles were assessed by asking the expert to consider the value at which larger or smaller values would be equiprobable, given the value was less or greater than the median
    - The 0.25 (0.75) quantile was assessed at 0.0075 (0.0125), and he added a 0.99 quantile at 0.2, splitting up the long upper tail from 0.035 to 0.3
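The elicited statements can be collected as points on the cumulative distribution function of the PD; reading the stated minimum and absolute upper bound as the endpoints of the support, and 0.035 as the 0.90 quantile, are interpretations assumed here:

      PD value:           0.0001   0.0075   0.0100   0.0125   0.0350   0.2000   0.3000
      cumulative prob.:   0.00     0.25     0.50     0.75     0.90     0.99     1.00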
  • 16. Elicitation: Example (continued)
    - How should we mathematically express the expert information?
    - Commonly we specify a parametric distribution, assuming standard identification properties (i.e., K conditions can determine a K-parameter distribution - see Kiefer 2010a)
    - Disadvantage: there is rarely good guidance other than convenience of functional form & this can insert extraneous information
    - We prefer the nonparametric maximum entropy (ME) approach (Cover & Thomas, 1991), where we choose a probability distribution p that maximizes the entropy H subject to K constraints (see the sketch after this slide)
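A sketch of the ME problem, assuming the K constraints are the elicited quantiles q_k with cumulative probabilities \alpha_k:

      \max_{p}\; H(p) = -\int p(\theta)\, \ln p(\theta)\, d\theta
      \text{s.t.}\quad \int p(\theta)\, d\theta = 1, \qquad \int_{0}^{q_k} p(\theta)\, d\theta = \alpha_k, \quad k = 1, \dots, K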
  • 17. Elicitation: Example (continued)
    - Our constraints are the values of the quantiles, and we can express the solution in terms of the Lagrange multipliers chosen so that the constraints are satisfied; the 1st order conditions give the density in the sketch after this slide
    - This is a piecewise uniform distribution, which we decide to smooth with an Epanechnikov kernel, under the assumption that discontinuities are unlikely to reflect the expert's view
    - Here h is the bandwidth, chosen such that the expert was satisfied with the final product
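A sketch of the piecewise-uniform ME solution and its Epanechnikov-smoothed version in the assumed notation, writing q_0 and q_K for the lower and upper support endpoints and \tilde{p} for the smoothed density:

      p_{ME}(\theta) = \sum_{k=1}^{K} \frac{\alpha_k - \alpha_{k-1}}{q_k - q_{k-1}}\; \mathbf{1}\{q_{k-1} < \theta \le q_k\}
      \tilde{p}(\theta) = \int K_h(\theta - s)\, p_{ME}(s)\, ds, \qquad K_h(u) = \frac{3}{4h}\left(1 - \frac{u^2}{h^2}\right)\mathbf{1}\{|u| \le h\}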
  • 18. Elicitation: Example (continued)
    - We address the boundary problem - the kernel-smoothed density has larger support than p_ME - using the reflection technique (Schuster, 1985), sketched after this slide
    - For the asset correlation in Models 2 & 3, B2 recommends a value of about 0.20 for this segment, so given the little expert information on this, we choose a Beta(12.6, 50.4) prior centered at 0.20
    - With even less guidance on the autocorrelation in Model 3, other than from the asset pricing literature that it is likely to be positive, we chose a uniform prior on [-1,1], with the B2 value of 0 as its mean
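A sketch of the boundary reflection on the support [q_0, q_K], folding back the probability mass that the smoothing pushes beyond the endpoints:

      p(\theta) = \tilde{p}(\theta) + \tilde{p}(2q_0 - \theta) + \tilde{p}(2q_K - \theta) \quad \text{for } \theta \in [q_0, q_K], \qquad p(\theta) = 0 \text{ otherwise}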
  • 19. Elicitation: Example (continued)
  • 20. Elicitation: Example (concluded)
  • 21. Inference: Bayesian Framework
    - Let us write the likelihood function of the data generically and combine it with the prior using the rules of probability (see the sketch after this slide)
    - The joint distribution of the data R and the parameter is the likelihood times the prior p
    - The marginal (predictive) distribution of R integrates the parameter out of the joint distribution
    - Finally, we obtain the posterior (conditional) distribution of the parameter by dividing the joint by the marginal
    - Perhaps take a summary statistic such as the posterior expectation, for B2 or other purposes, which is (asymptotically) optimal under (bowl-shaped) quadratic loss
    - Computationally, high-dimensional numerical integration may be hard and inference a problem, hence simulation techniques
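A sketch of the framework in generic notation, with R the default data and \theta the (possibly vector-valued) parameter:

      \text{likelihood: } p(R \mid \theta), \qquad \text{prior: } p(\theta)
      \text{joint: } p(R, \theta) = p(R \mid \theta)\, p(\theta)
      \text{marginal (predictive): } m(R) = \int p(R \mid \theta)\, p(\theta)\, d\theta
      \text{posterior: } p(\theta \mid R) = \frac{p(R \mid \theta)\, p(\theta)}{m(R)}, \qquad E[\theta \mid R] = \int \theta\, p(\theta \mid R)\, d\theta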
  • 22. Inference: Computation by Markov Chain Monte Carlo
    - MCMC methods are a wide class of procedures for sampling from a distribution when the normalizing constant is unknown
    - In the simplest case, the Metropolis method, we sample from our posterior distribution, which is only known up to a constant
    - We construct a Markov Chain which has this as its stationary distribution, starting with a proposal distribution
    - The new parameter depends upon the old one stochastically, and the diagonal covariance matrix of the normal error is tuned to make the algorithm work well
    - We draw from this distribution and accept the new draw according to the ratio of joint likelihoods of the data and the parameter, known as the acceptance ratio
  • 23. Inference: Computation by MCMC (concluded)
    - Note that the unknown normalizing constant cancels in this ratio, which is therefore easy to calculate
    - The resulting sample is a Markov Chain with this equilibrium distribution & moments calculated from it approximate the target
    - We use the mcmc package (Geyer, 2009) for the R programming language (R Development Core Team, 2009)
    - The package takes into account that the draws are not independent when computing standard errors and confidence bounds
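A minimal random-walk Metropolis sketch in R for the Model 1 posterior, assuming the piecewise-uniform ME prior implied by the elicited quantiles and the pooled data from the next slide (r = 24 defaults in n = 2642 firm-years); the proposal scale, chain length and starting value are illustrative, and this is not the authors' code (they use the mcmc package):

      # Elicited quantile points and cumulative probabilities (support bounds assumed at 0.0001 and 0.3)
      q  <- c(0.0001, 0.0075, 0.01, 0.0125, 0.035, 0.2, 0.3)
      cp <- c(0,      0.25,   0.50, 0.75,   0.90,  0.99, 1.0)

      # Log of the (unsmoothed) maximum-entropy prior: piecewise uniform between the quantiles
      log_prior <- function(theta) {
        if (theta <= q[1] || theta >= q[length(q)]) return(-Inf)
        k <- findInterval(theta, q)                    # interval containing theta
        log((cp[k + 1] - cp[k]) / (q[k + 1] - q[k]))   # mass / width = density
      }

      # Log posterior up to a constant: binomial log likelihood plus log prior
      r <- 24; n <- 2642
      log_post <- function(theta) {
        lp <- log_prior(theta)
        if (!is.finite(lp)) return(-Inf)
        r * log(theta) + (n - r) * log(1 - theta) + lp
      }

      # Random-walk Metropolis
      set.seed(1)
      n_iter <- 50000
      draws  <- numeric(n_iter)
      theta  <- 0.01                                   # start at the expert's median
      for (j in seq_len(n_iter)) {
        prop <- theta + rnorm(1, sd = 0.002)           # normal proposal, scale tuned by hand
        if (log(runif(1)) < log_post(prop) - log_post(theta)) theta <- prop
        draws[j] <- theta
      }
      mean(draws[-(1:5000)])                           # posterior mean of the PD after burn-in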
  • 24. Inference: Data
    - We construct a segment of upper tier high-yield corporate bonds of firms rated Ba by Moody's Investors Service
    - Use the Moody's Default Risk Service™ (DRS™) database (release date 1-8-2010)
    - These are restricted to U.S. domiciled, non-financial and non-sovereign entities
    - Default rates were computed for annual cohorts of firms starting in January 1999 and running through January 2009
    - Use the Moody's adjustment for withdrawals (i.e., remove ½ of withdrawn firms from the beginning cohort count)
    - In total there are 2642 firm-years of data and 24 defaults, for an overall empirical rate of 0.00908
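A sketch of the assumed cohort arithmetic, writing n_t for the beginning-of-year cohort count and w_t for withdrawals during the year; the exact convention used in the study is not shown on the slide:

      n_t^{adj} = n_t - \tfrac{1}{2} w_t, \qquad \hat{\theta} = \frac{\sum_t r_t}{\sum_t n_t^{adj}} = \frac{24}{2642} \approx 0.00908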
  • 25. Inference: Data (continued)
  • 26. Inference: Empirical Results
    - PD estimates in the 2- & 3-parameter models are only very slightly higher than in the 1-parameter model
    - Higher degree of variability of AVC e