PERSONNEL PSYCHOLOGY
2007, 60, 475–498

A MULTIDIMENSIONAL APPROACH FOR EVALUATING VARIABLES IN ORGANIZATIONAL RESEARCH AND PRACTICE

JAMES M. LEBRETON
Purdue University

MICHAEL B. HARGIS
University of Central Arkansas

BRIAN GRIEPENTROG
Fors Marsh Group, LLC

FREDERICK L. OSWALD
Michigan State University

ROBERT E. PLOYHART
University of South Carolina

One of the most difficult tasks facing industrial-organizational psychologists is evaluating the importance of variables, especially new variables, to be included in the prediction of some outcome. When multiple regression is used, common practices suggest evaluating the usefulness of new variables by showing incremental validity beyond the set of existing variables. This approach ensures that the new variables are not statistically redundant with this existing set, but it attributes any shared criterion-related validity to the existing set of variables and none to the new variables. More importantly, incremental validity alone fails to answer directly the question of the importance of variables included in a regression model, arguably the more important statistical concern for practitioners. To that end, the current article reviews 2 indices of relative importance, general dominance weights and relative weights, which may be used to complement incremental validity evidence and permit organizational decision makers to make more precise and informed decisions concerning the usefulness of predictor variables. We illustrate our approach by reanalyzing the correlation matrices from 2 published studies.

The authors would like to thank three anonymous reviewers for their constructive comments and feedback on earlier drafts of this article. A previous version of this article was presented at the 17th Annual Conference of the Society for Industrial and Organizational Psychology in Toronto, Canada in April 2002. Correspondence and requests for reprints should be addressed to James M.
LeBreton, Department of Psychological Sciences, Purdue University, West Lafayette, IN. Copyright © 2007 Blackwell Publishing, Inc.

Multiple regression (MR) is one of the most popular statistical techniques used in the organizational sciences. As noted by a number of authors, MR can be used for two different yet related purposes: prediction and explanation (Azen & Budescu, 2003; Courville & Thompson, 2001; Johnson & LeBreton, 2004; Pedhazur, 1997). When used for the purpose of prediction, industrial-organizational (I-O) psychologists seek to identify a model with practically useful levels of explained variance (R²). In contrast, when used for the purpose of explanation, I-O psychologists are interested in the extent to which each predictor contributes to the overall R² relative to the other predictors.

In reality, most applications of MR involve elements of both prediction and explanation (Azen & Budescu, 2003). This is especially true in the organizational sciences, where I-O psychologists seek to predict an organizationally valued criterion variable but also to understand the relative importance of the variables used to predict that criterion. Examples include trying to understand the relative contributions various job attitudes make to the prediction of turnover or the relative contributions various individual difference variables make to the prediction of job performance. Although in practice there are a multitude of relevant factors external to the MR model to consider while evaluating variables within the MR model (e.g., cost, utility, fairness, reading level of participants, organizational goals, and managerial acceptance), statistical criteria often help inform such decisions (e.g., SIOP Principles, 2003). Indeed, I-O psychologists often statistically evaluate new variables by examining the importance of those new variables compared to an existing set of variables.
One definition of variable importance emphasizes the incremental validity of the new measure, which we call incremental importance. This definition of importance was suggested by Darlington (1968) with his usefulness statistic. Incremental importance is valuable because it ensures that the variable of interest is tapping unique variance in the criterion variable above and beyond that of the other variables in the regression model (Cronbach & Gleser, 1957; Sechrest, 1963).

Although I-O psychologists try to avoid introducing new predictor variables that are too highly correlated with existing variables, there may be reasons why new variables correlate with other variables due to either theoretical or measurement similarities. For instance, new organizational theories often lead to the development of new or refined constructs, and consequently new or refined predictor variables. These new predictor variables may produce some empirical overlap with the existing predictor battery, even though they have strong correlations with the criterion. For example, an I-O psychologist trying to investigate and reduce the prevalence of sexual harassment may wish to examine whether an antisocial personality characteristic, such as subclinical psychopathy (LeBreton, Binning, & Adorno, 2006; Lilienfeld & Andrews, 1996; Paulhus & Williams, 2002), can predict sexual harassment above and beyond the prediction provided by normal personality characteristics (e.g., the Big Five; Lee, Gizzarone, & Ashton, 2003; Olapegba, 2004) as well as various situational factors (e.g., organizational climate, organizational power, tolerance for harassment; Fitzgerald, Drasgow, Hulin, Gelfand, & Magley, 1997; Harned, Ormerod, Palmieri, Collinsworth, & Reed, 2002). Prior to analyzing the data using a multiple regression analysis, the psychologist would likely establish the bivariate significance of each predictor variable with the criterion variable.
Any variable lacking a significant bivariate relationship would likely be omitted from the regression model because the lack of a significant bivariate relationship is indicative of a specification error (i.e., an irrelevant variable has been erroneously included in the model; Azen & Budescu, 2003; Budescu, 1993; LeBreton, Young, & Ladd, 2005). If the predictor–criterion correlation is nonzero, and the new predictor variable is correlated with the other predictor variables, then it makes sense to determine the extent to which the predictor has a "unique" or incremental contribution by conducting a regression analysis.

In an incremental validity analysis, however, any criterion variance predicted by both the new variable and the existing set of variables is automatically "credited" toward the latter. Thus, an incremental validity analysis might lead one to make incorrect or misinformed decisions about the relative efficacy of the new variable. As such, it is possible that a new measure of subclinical psychopathy might yield relatively small increments in prediction (e.g., ΔR² = .02) but that the overall contribution that this new trait makes to the R² is as high as (or higher than) the other predictors in the model.

To address this potential problem, I-O psychologists might be interested in the new variable's relative importance, which we define as the contribution each predictor makes to the R², considering both its unique contribution and its contribution in the presence of the other predictors (Johnson & LeBreton, 2004). Although statistics are available for conducting relative importance analyses, I-O psychologists rarely concern themselves with conducting such analyses, apart from a casual inspection of standardized regression weights, which may prove confusing or misleading in the presence of multicollinearity (Darlington, 1968; Johnson, 2001a; Johnson & LeBreton, 2004; LeBreton, Ployhart, & Ladd, 2004; Wainer, 1978).
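The incremental validity logic just described can be sketched with a hierarchical regression in plain numpy. The data below are simulated for illustration only (the variable names and effect sizes are assumptions, not values from the article): the new predictor deliberately overlaps with the existing battery, yet still adds a unique increment.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
existing = rng.normal(size=(n, 3))                 # hypothetical existing battery
new = 0.5 * existing[:, 0] + rng.normal(size=n)    # new predictor, correlated with battery
y = existing @ np.array([0.4, 0.3, 0.2]) + 0.3 * new + rng.normal(size=n)

def r_squared(X, y):
    """R-squared from an OLS regression of y on the columns of X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_existing = r_squared(existing, y)
r2_full = r_squared(np.column_stack([existing, new]), y)
delta_r2 = r2_full - r2_existing   # incremental validity ("usefulness") of the new predictor
print(f"R2 existing = {r2_existing:.3f}, R2 full = {r2_full:.3f}, delta R2 = {delta_r2:.3f}")
```

Note that ΔR² can never be negative: adding a predictor cannot reduce the fitted R², which is precisely why any variance shared with the battery is "credited" to the battery rather than to the new variable.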
This is unfortunate because we believe that including information about a variable's incremental importance and relative importance will permit I-O psychologists to evaluate the statistical quality of the variables included in their regression models in a more balanced and thorough manner.

Returning to the sexual harassment example, let us assume that a regression equation containing the Big Five personality traits and several power variables yields an R² = .28. If a practitioner learned that a subclinical psychopathy scale yielded only a 2% increment in R² over this test battery (for a total R² = .30), he or she might decide that the added time and expense associated with measuring this trait was not worth the meager return on validity. In this instance, the incremental importance of subclinical psychopathy was weak, suggesting that it only accounted for 7% of the total predicted criterion variance (.02/.30 = .07). However, let us assume that this practitioner also conducted a relative importance analysis. This type of analysis, described below in detail, involves dividing the total R² = .30 into the proportion of criterion variance attributed to each predictor. If a relative importance analysis revealed a weight = .15 for subclinical psychopathy, then we would conclude that this trait, considered alone and in combination with the other predictor variables, accounted for 50% of the total predicted criterion variance (.15/.30 = .50), a more substantial contribution.
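The contrast in the example above reduces to two ratios over the same total R², which a few lines of arithmetic make explicit (the .15 relative weight is the hypothetical value assumed in the example, not a computed result):

```python
r2_battery = 0.28       # Big Five + power variables
delta_r2 = 0.02         # increment from adding subclinical psychopathy
r2_total = r2_battery + delta_r2   # total R2 = .30
relative_weight = 0.15  # assumed weight from a relative importance analysis

incremental_share = delta_r2 / r2_total         # .02/.30, about .07
relative_share = relative_weight / r2_total     # .15/.30, about .50
print(f"incremental share = {incremental_share:.2f}, relative share = {relative_share:.2f}")
```

The same trait looks negligible by one yardstick and dominant by the other, which is the article's core argument for reporting both.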
It is important to note here that the relative importance analyses did not supersede the incremental analysis. Instead, the relative importance analysis, when used in conjunction with an incremental analysis, better illustrated the overall contribution subclinical psychopathy made to the prediction of sexual harassment in the MR model.

We believe practitioners would find such methodologies very useful. The purpose of the current article is threefold: (a) to introduce two indices of relative importance; (b) to offer a framework for evaluating variables that combines hierarchical multiple regression analyses with the relative importance analyses; and (c) to illustrate the practical benefit of this general approach by reanalyzing data presented in previously published articles. We conclude by providing practical guidance for presenting and summarizing the results of a relative importance analysis and for implementing these analyses using common statistical software packages.

Traditional Measures of Relative Importance

Traditional approaches for testing or rank-ordering variables on their relative importance usually consider variables: (a) in isolation from one another, by calculating the squared zero-order correlations (r²_yj) between the dependent variable (y) and each of the j independent variables, or (b) in the presence of one another, by calculating the squared standardized regression weights (β²_j) obtained by regressing the dependent variable simultaneously on all variables (Darlington, 1968; Johnson, 2001a; Johnson & LeBreton, 2004).

When variables are uncorrelated, most measures of relative importance yield convergent, if not identical, conclusions regarding relative importance. However, this is generally not true when variables are correlated (Budescu, 1993; Darlington, 1968; Johnson, 2000; LeBreton, Binning, Adorno, & Melcher, 2004; LeBreton, Ployhart, et al., 2004), as is common in organizational research and practice.
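The two traditional measures can be computed side by side on simulated data (the predictors and coefficients below are illustrative assumptions, chosen so the predictors correlate, as is typical in organizational data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)   # x2 substantially correlated with x1
y = x1 + 0.5 * x2 + rng.normal(size=n)
X = np.column_stack([x1, x2])

# (a) squared zero-order correlations: each predictor "in isolation"
r2_zero = np.array([np.corrcoef(X[:, j], y)[0, 1] ** 2 for j in range(X.shape[1])])

# (b) squared standardized betas: predictors "in the presence of one another"
Z = (X - X.mean(axis=0)) / X.std(axis=0)
zy = (y - y.mean()) / y.std()
beta = np.linalg.lstsq(Z, zy, rcond=None)[0]

print("squared zero-order r:", r2_zero.round(2))
print("squared standardized betas:", (beta ** 2).round(2))
```

With correlated predictors the two vectors disagree: the squared correlations credit both variables with the shared variance (so they over-count it), while the standardized betas credit neither with it.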
Under moderate-to-high levels of variable collinearity, neither zero-order correlation coefficients nor standardized regression coefficients can adequately partition the joint predicted variance shared among multiple correlated variables and the criterion (Johnson, 2000; LeBreton, Binning, et al., 2004). In fact, under these conditions correlations and regression coefficients (or r²_yj or β²_j) can yield substantially different conclusions about relative importance (e.g., their values can result in different rank orders of the variables).

Two New Relative Importance Statistics

General dominance weights and relative importance weights are two newer statistics specifically designed to determine relative importance when there are correlations between the predictors.

General Dominance Weights

General dominance weights (Azen & Budescu, 2003; Budescu, 1993) address the problem of correlated variables using the logic of all-possible-subsets regression analyses. Statistically, a dominance analysis involves computing the mean of each predictor's squared semipartial correlation (i.e., ΔR²) across all possible subset regression models. Lindeman, Merenda, and Gold (1980) were the first to suggest this quantitative measure of importance. Basically, for each of the j variables, the general dominance weight (C_j) is calculated and represents the average "usefulness" of a variable across all subset regression models (Azen & Budescu, 2003; Darlington, 1968). General dominance weights sum to the model R² and can therefore be interpreted as estimates of effect size. Furthermore, rescaled dominance weights may be computed by dividing each C_j by the model R², resulting in the proportion of variance predicted in the dependent variable that may be attributed to that variable.
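The computation just described can be sketched in a brute-force way with numpy: for each predictor, average its ΔR² within each subset size of the remaining predictors, then average across sizes. This is a minimal illustration of the general idea (the simulated data are assumptions, not the article's reanalyzed matrices; real analyses would use dedicated dominance-analysis software):

```python
from itertools import combinations
import numpy as np

def r_squared(X, y):
    """R2 of y regressed on the columns of X (intercept always included)."""
    X1 = np.column_stack([np.ones(len(y)), X]) if X.shape[1] else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

def general_dominance(X, y):
    """General dominance weights C_j via all-possible-subsets regression."""
    p = X.shape[1]
    weights = np.zeros(p)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        size_means = []
        for size in range(p):  # subsets of the other predictors, size 0..p-1
            incs = [r_squared(X[:, list(S) + [j]], y) - r_squared(X[:, list(S)], y)
                    for S in combinations(others, size)]
            size_means.append(np.mean(incs))  # mean delta-R2 at this subset size
        weights[j] = np.mean(size_means)      # then average across sizes
    return weights

rng = np.random.default_rng(2)
n = 400
X = rng.normal(size=(n, 3))
X[:, 1] += 0.7 * X[:, 0]            # induce collinearity between predictors
y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(size=n)

C = general_dominance(X, y)
print("general dominance weights:", C.round(3))
print("sum of weights:", round(float(C.sum()), 3), "model R2:", round(float(r_squared(X, y)), 3))
print("rescaled weights:", (C / C.sum()).round(3))
```

The sum-to-R² property falls out of the construction, so the final line is exactly the rescaled decomposition of predicted variance described above. Note the all-subsets loop grows as 2^p, which is why published dominance analyses typically involve modest numbers of predictors.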
Thus, a dominance analysis permits a truly meaningful and interpretable decomposition of the model R² (Azen & Budescu, 2003; Budescu, 1993).

Azen and Budescu (2003) noted that in addition to a variable's general dominance, a dominance analysis also reveals more specific patterns of a variable's dominance in a regression model. Specifically, complete dominance involves comparing each pair of variables in all possible subsets of regression models to which they belong. For example, take X_CONSC and X_GMA as two predictor variables, Conscientiousness and general mental ability, and [X_OTHER], which corresponds to a set of remaining predictor variables: biodata, structured interview, and a conditional reasoning test. X_GMA "completely dominates" X_CONSC if X_GMA has a larger squared