A National Critical Loads Framework for Atmospheric Deposition Effects Assessment: IV. Model Selection, Applications, and Critical Loads Mapping

GEORGE R. HOLDREN, JR.2*
TIMOTHY C. STRICKLAND
ManTech Environmental Technology, Inc.
c/o Environmental Research Laboratory
200 SW 35th Street
Corvallis, Oregon 97333, USA

BERNARD J. COSBY
Department of Environmental Sciences, Clark Hall
University of Virginia
Charlottesville, Virginia 22903, USA

DAVID MARMOREK
DAVID BERNARD
Environmental and Social Systems, Ltd., 3rd Floor
1765 W. 8th Ave.
Vancouver, British Columbia V6J 5C6, Canada

ROBERT SANTORE
CHARLES T. DRISCOLL
LINDA PARDO
Department of Civil & Environmental Engineering
Syracuse University
Syracuse, New York 13210, USA

CAROLYN HUNSAKER
ROBERT S. TURNER
Oak Ridge National Laboratory, Environmental Sciences Division
P.O. Box 2008
Oak Ridge, Tennessee 37831, USA

JOHN ABER
Complex System Research Center, EOS
University of New Hampshire
Durham, New Hampshire 03824, USA

ABSTRACT / The critical loads approach is emerging as an attractive means for evaluating the effects of atmospheric deposition on sensitive terrestrial and aquatic ecosystems. Various approaches are available for modeling ecosystem responses to deposition and for estimating critical load values. These approaches include empirical and statistical relationships, steady-state and simple process models, and integrated-effects models. For any given ecosystem, the most technically sophisticated approach will not necessarily be the most appropriate for all applications; identification of the most useful approach depends upon the degree of accuracy needed and upon data and computational requirements, the biogeochemical processes being modeled, the approaches used for representing model results on regional bases, and the desired degree of spatial and temporal resolution. Different approaches are characterized by different levels of uncertainty. If the limitations of individual approaches are known, the user can determine whether an approach provides a reasonable basis for decision making. Several options, including point maps, grid maps, and ecoregional maps, are available for presenting model results in a regional context. These are discussed using hypothetical examples for choosing populations and damage limits.

KEY WORDS: Sulfate; Nitrate; Critical loads; Modeling; Deposition standards

1 The research described in this article has been funded by the US Environmental Protection Agency. This document has been prepared at the EPA Environmental Research Laboratory in Corvallis, Oregon, through contract #68-C8-0006 with ManTech Environmental Technology, Inc., and Interagency Agreement #1824-B014-A7 with the U.S. Department of Energy, and at Oak Ridge National Laboratory, managed by Martin Marietta Energy Systems, Inc., under Contract DE-AC05-84OR21400 with the US Department of Energy. Environmental Sciences Division Publication No. 3904. It has been subjected to the agency's peer and administrative review and approved for publication. Mention of trade names or commercial products does not constitute endorsement or recommendation for use.

*Author to whom correspondence should be addressed.

2 Present address: Battelle Pacific Northwest Laboratory, P.O. Box 999, Richland, Washington 99352, USA.

Environmental Management Vol. 17, No. 3, pp. 355-363, © 1993 Springer-Verlag New York Inc.

Predicting ecosystem responses to changing levels of deposition is an essential component of the critical loads process.
As such, the use of numerical or statistical models is an almost unavoidable step in the development of critical load limits. This is the fourth of a series of papers describing approaches for developing critical load estimates. We discuss the use of different modeling methods for defining deposition limits, and the means for presenting the results in a spatially representative manner. Previous papers in this series (Strickland and others 1992, Hunsaker and others 1992, Hicks and others 1992) discuss the processes associated with population identification, indicator and end-point selection, regionalization, and the selection of an appropriate ecosystem reference condition.

Ideally, to make reliable projections of ecosystem responses to deposition, one should be aware of the major geochemical and biological processes affecting the systems. The biogeochemical processes controlling soil and surface water chemical responses to sulfate deposition are reasonably well known (Galloway and others 1983, Reuss and Johnson 1985, Aber and others 1989). In contrast, a consensus regarding the biological consequences of surface water acidification has not yet developed, although new and better information is constantly becoming available (Schindler 1988, Baker and others 1990). Both the chemical and biological implications of elevated nitrogen deposition for terrestrial ecosystems remain a matter of debate, largely because of the effects resulting from the complex sets of biochemical transformations among nitrogen species. Similarly, the relatively large number of interacting stresses affecting terrestrial systems makes the reliable quantification of a critical load using process-level models extremely difficult for most pollutants. As such, alternative approaches to process modeling may be required if we are to make reasonable estimates of critical load values for a range of sensitive ecosystems using the information currently available.

Model Selection and Response Forecasting

Three types of approaches, reflecting a hierarchy in the levels of complexity and modeling philosophy (Schecher and Driscoll 1989), may be used to forecast ecosystem response. These are: (1) statistical and empirical analyses, (2) steady-state and simple-process models, and (3) integrated process models. The different types of models provide a natural trade-off between resolution and levels of uncertainty. Different classes of models serve different functions, and implementation requires differing levels of data and information. The application of several models offers a robust approach to the assessment of critical loads. If the predictions from different types of models are similar, confidence in the output (critical load) increases. Conversely, divergent model results suggest the need for better models or additional research. Nevertheless, because the use of different models in assessments is likely, intercomparison of prediction methods will be important in reaching consensus.

Regardless of the approach taken, the models used to estimate critical loads address, in some form, pollutant-dose/ecosystem-response relationships. Because processes are best understood at individual sites, model complexity is generally inversely related to system scale. However, the different approaches offer various strengths and limitations, and each type of model can contribute to the process of determining an estimate for critical loads.
Conceptually, models from any of the levels within the hierarchy can be used to obtain critical load estimates. However, to be useful, a model should have the pollutant of concern as an independent variable, and the output from the model should include the indicator variable or a surrogate that can be linked to the indicator (Hunsaker and others 1992). Another constraint on model selection, especially for the larger, more complex, process-level models, may arise from the input data requirements of the model. For example, in attempting to define critical loads for surface waters, a government might have voluminous water chemistry data but very little information on the bedrock, soils, and vegetation in the upland areas of the associated watersheds. Even though these latter factors may be responsible for controlling surface water compositions and quality, it might be prudent to develop statistical or empirical relations among water quality variables and deposition in order to assess the effects of deposition on the health of the aquatic ecosystems.

Empirical and Statistical Analyses

This approach is based on the use of statistical analyses to identify patterns and relationships between deposition and ecological effects. The approach is, in itself, flexible and can accommodate a range of levels of analysis. For example, simple regression analyses between deposition and surface water concentrations of nitrate can yield estimates of the deposition levels required to attain "nitrogen saturation" in forested ecosystems (Kaufmann and others 1991, Stoddard 1992). As the linkages between the pollutant variable(s) and the indicator become more complex, multivariate analyses, principal component analyses, cluster analyses, or related techniques might be required to delineate statistical relationships among deposition and ecological effects.

Use of empirical and statistical approaches offers a number of advantages. In doing these analyses, one works with the field data and, in doing so, potentially derives a better understanding of field relationships. The statistical models allow a preliminary evaluation of assumptions and hypotheses, and, while cause-and-effect relationships cannot be inferred, the models do facilitate the identification of hypotheses or mechanisms that are consistent with field observations.

One must be aware when developing statistical relationships, however, that current conditions might not be representative of the final steady-state conditions. In many cases, ecosystem characteristics adjust slowly to changes in deposition, and the current state of the system could continue to change for, perhaps, decades. These changes can occur even in the absence of additional depositional changes. For example, in the mid-Appalachian and southeastern portions of the United States, soils are continuing to retain sulfate by inorganic adsorption onto oxide surfaces and will not achieve steady state for several decades (Rochelle and Church 1987). Therefore, one would not want to use observed relationships between depositional fluxes of sulfate and the concentrations of this species in surface waters to project future surface water sulfate concentrations. Changes of this nature, however, can be modeled in many instances, so mechanisms are available for addressing long-term responses (Church and others 1989).
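As a purely illustrative sketch of the regression-based screening described above, the following example fits a simple linear relationship between hypothetical nitrogen deposition and surface-water nitrate observations and inverts it to estimate the deposition associated with a chosen nitrate limit. The data, the linear form, and the 10 µeq/L limit are assumptions for illustration only.

```python
# Minimal sketch (hypothetical data): a linear regression between nitrogen
# deposition and surface-water nitrate, inverted to estimate the deposition
# at which nitrate would exceed a chosen indicator threshold.
import numpy as np
from scipy import stats

# Hypothetical survey data: N deposition (kg N/ha/yr) and lake NO3 (ueq/L).
deposition = np.array([3.1, 4.8, 5.5, 6.9, 8.2, 9.4, 10.8, 12.1, 13.5])
nitrate = np.array([1.0, 2.0, 3.5, 5.0, 9.0, 14.0, 20.0, 27.0, 33.0])

# Ordinary least-squares fit: nitrate = slope * deposition + intercept.
fit = stats.linregress(deposition, nitrate)

# Invert the fitted relationship at an assumed indicator limit of 10 ueq/L.
nitrate_limit = 10.0
critical_deposition = (nitrate_limit - fit.intercept) / fit.slope

print(f"slope = {fit.slope:.2f} ueq/L per kg N/ha/yr, r^2 = {fit.rvalue**2:.2f}")
print(f"estimated deposition at NO3 = {nitrate_limit} ueq/L: "
      f"{critical_deposition:.1f} kg N/ha/yr")
```

In practice the deposition-nitrate relationship is unlikely to be linear, so a fit of this kind would serve only as a screening estimate, consistent with the caveats above.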
Once reasonable statistical relationships are identified, it is possible, with proper caveats, to use observed relationships as predictive models. Most commonly, this approach is applied using known field relationships. The procedure can be extended, however, to estimate input parameters for the large, integrated models when only a limited amount of field data is available. While this latter procedure can be a cost-effective means to extend a limited information base to whole regions or populations of ecosystems, the approach should be used with some caution because of the potential for compounding errors of unknown magnitude.

The advantages of using these approaches are: (1) they provide useful screening tools to determine the populations, regions, or ecosystems that require further attention; (2) they provide an initial assessment tool; (3) they typically have minimal requirements for data input; and (4) they are computationally simple. The disadvantages of these methods are: (1) they are statistical rather than mechanistic (results do not provide specific information on the relative importance of different processes); (2) if they are built on observational data, they do not prove cause-and-effect relationships; and (3) due to region-to-region variations in biogeochemical transformations, it may be difficult to extrapolate empirical relationships from one region to another. As a result, confounding factors may mask or mislead the analyst in determining the true relationship between deposition and ecosystem response.

Steady-State and Single-Process Models

Steady-state models have been used widely to predict the response of a range of ecosystems. Selection of appropriate steady-state and single-process models is directly linked to the type of ecosystem being studied, the pollutant and problem of concern, and the time frames over which the ecosystems are expected to respond. A model should be selected that addresses changes in the indicator(s), and it should either directly or indirectly characterize the association between the pollutant and modeled effects. These models work best to describe simple systems or systems in which one or two dominant biogeochemical processes effectively control the expected responses. For example, Henriksen's F-factor model (Henriksen 1979, Wright 1983) provides a good description of the response of surface waters to the effects of acid sulfate deposition in shield-type lakes. A model with this type of structure is less data-intensive than integrated (dynamic) models. However, steady-state models do not consider the time-dependent processes mediating soil and vegetation responses to deposition, so the application of this type of approach to systems that are not close to a steady-state condition is questionable.

Single-process models, in which one or a select few biogeochemical processes are examined, provide another alternative for evaluating the possible extent of deposition-induced ecological damage. While this approach has been criticized for not mimicking whole-ecosystem responses, it can provide bounding information about the magnitude of responses that might be expected. For example, the soil cation exchange model developed by Reuss and Johnson (1985, 1986) is a useful tool for identifying those systems that might be at risk of losing their buffering capacity (Church and others 1989). Although the model does not purport to yield actual water chemistries, the results can be used to identify those resources that may be at risk of acidification and are deserving of additional study. Another advantage of the single-process models is that their data requirements are usually substantially less than those for integrated models.
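To make the steady-state idea concrete, the sketch below works through an F-factor style calculation of a critical sulfur load for a single lake. It is a minimal illustration in the spirit of the Henriksen approach discussed above, not the published model; the sine form of the F factor, the 400 µeq/L scaling constant, and all input values are assumptions for illustration.

```python
# Minimal sketch (illustrative assumptions, not the published model): an
# F-factor style steady-state estimate of a critical sulfur load for a lake.
import math

def f_factor(bc_present, s_scale=400.0):
    """Empirical F factor (0-1): the fraction of an increase in strong acid
    anions compensated by increased base cation leaching. One common form is
    a sine function of present-day non-marine base cations (ueq/L)."""
    return math.sin(0.5 * math.pi * min(bc_present / s_scale, 1.0))

def critical_load(bc_present, so4_present, so4_background, anc_limit, runoff_m):
    """Steady-state critical load of acidity (meq/m2/yr).
    Concentrations in ueq/L; runoff_m is annual runoff in meters."""
    f = f_factor(bc_present)
    # Reconstruct the pre-acidification base cation concentration.
    bc_original = bc_present - f * (so4_present - so4_background)
    # The acceptable acid input is what the original base cation supply can
    # neutralize while keeping ANC at or above the chosen limit.
    return (bc_original - anc_limit) * runoff_m  # ueq/L * m/yr = meq/m2/yr

# Example: a dilute shield-type lake (all numbers hypothetical).
cl = critical_load(bc_present=85.0, so4_present=60.0, so4_background=15.0,
                   anc_limit=20.0, runoff_m=0.5)
print(f"Estimated critical load: {cl:.0f} meq/m2/yr")
```

Because the calculation is algebraic and requires only present-day water chemistry and runoff, it illustrates why this class of model is far less data-intensive than dynamic models.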
Single-process models also have limited capabilities to describe systems having multiple feedbacks. As the complexity of the linkages between the pollutant and the associated indicator variable increases, results from single-process models become more uncertain. Moreover, real problems arise in the application of these models when the user wishes to gain an understanding of the whole-ecosystem response. This difficulty occurs because there may be only a few direct linkages between pollutant concentrations in deposition and the observed effects in the biosphere. For example, due to the complexity of transformation processes among nitrogen species, single-process models will probably not prove satisfactory in attempting to understand the effects of nitrogen deposition on long-term forest productivity and surface water acidification.

While steady-state and single-process models are likely to find wide applicability in a range of systems, additional information relevant to the formulation of policy may be provided by using more complex models in some ecosystems.

Integrated Process (Dynamic) Modeling

The use of dynamic models involves the representation of multiple process transformations and assumes that most major processes relevant to the state of the ecosystem have been included. The principal advantage of these models is that they allow one to investigate more fully how interacting processes affect the overall response of complex systems. This information has considerable relevance to the formulation of schedules for abatement strategies with respect to the timing of response. The disadvantages of the approach are: (1) the integrated system models tend to have data-intensive requirements for operation; (2) in some cases, model execution requires significant computational resources; (3) because data and computational requirements are high, it is difficult to apply dynamic models on regional scales; and (4) the complexity of the models may lead one to place unfounded confidence in the predictions.

Uncertainty Analysis

A fundamental aspect of any modeling activity is that the results are, at best, only an approximation of "truth." That is to say, there will be some level of uncertainty associated with predictions of ecosystem response to changing conditions. Reckhow and Chapra (1983) define uncertainty as the inverse of reliability, or, more specifically, as not having complete knowledge about an effect or situation. Although most of the individuals who construct and employ models are well aware of model limitations, the levels of uncertainty associated with any model result are frequently poorly communicated to the users of the information, namely policy makers and the general public. Yet it is important that information regarding the reliability of specific projections be conveyed to the users. In fact, an evaluation of the uncertainty should be an integral part of any study in which the results are to be used for setting policy. Even if errors cannot be determined quantitatively, qualitative estimates should be sought.
Uncertainty analyses are assessments of the quality of information comprising the various components of the study. Uncertainty cannot be removed from analyses, but it can be estimated if sources of error can be identified and quantified. Fox (1984) has categorized uncertainty into two components: inherent and reducible error. Inherent errors are derived from the stochastic, or random, nature of the processes and interactions that occur. Stochastic variability represents the intrinsic noise in the system (e.g., random variations in meteorological input data). Reducible errors, on the other hand, are the uncertainties arising from imprecisions or inadequacies in the gathering of field measurements and in process representations in models. The reducible errors come from a wide range of sources and can be either stochastic in nature or systematic. In either case, much of the work that has been done under the guise of model development and enhancement has been done with the intent of reducing the magnitude of this type of error. Unfortunately, because it is difficult to know the "correct" value for a prediction, in many instances it is not possible to obtain quantitative estimates for all components of the error.

Field measurements used to support modeling activities can introduce errors from a number of sources. The design of the field sampling program can be a major source of uncertainty in many studies. If the samples are not typical of the population they purport to represent, extrapolations of model results to a region or population will be inaccurate. For example, in soils, the spatial variability of chemical and physical properties is such that obtaining a representative sample is a nontrivial process (Nielsen and Bouma 1985). Sampling errors, however, can be estimated based on sampling theory and accepted statistical procedures (Cochran 1977). During the process of sample analysis, both inherent and reducible errors are evident. Uncertainties arising from analytical procedures can usually be controlled by careful use of appropriate standards and through the use of comparative studies employing multiple procedures for measuring given physical or chemical parameters. Once field measurements have been obtained, the results must usually be extrapolated or interpolated to the resource population. Various statistical procedures, such as kriging, are available to accomplish this, but the techniques might not be useful for quantifying all of the sources of error (e.g., establishing proper boundary conditions).

Model-based uncertainties arise from three major sources: model parameterization, model formulation, and user errors. Errors associated with model parameters can be quantified through formal sensitivity and statistical error analysis (Beck 1983). Model formulation (structural) error and incorrect model application are more difficult to evaluate. If a major process is omitted from a model, the model results will not represent the system adequately. Without formal model evaluations and comparisons with field data, structural errors might not be detected. In addition, poor model performance can be expected if a model is inappropriately applied or an inappropriate model is applied; models applied to systems with attributes outside the range used in model development are an example of inappropriate model application.
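As an illustration of how the parameter component of model uncertainty can be quantified, the sketch below propagates assumed parameter distributions through a simple, hypothetical steady-state relationship by Monte Carlo sampling and reports a percentile interval for the resulting critical load. The model form and all distributions are assumptions for illustration, not measured values.

```python
# Minimal sketch (hypothetical model and distributions): Monte Carlo
# propagation of parameter uncertainty, yielding a distribution of critical
# load estimates rather than a single value.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

def predicted_anc(s_dep, bc_supply, runoff_m):
    """Toy steady-state model: surface-water ANC (ueq/L) from sulfur
    deposition and base cation supply (both meq/m2/yr) and runoff (m/yr)."""
    return (bc_supply - s_dep) / runoff_m

# Assumed (illustrative) distributions reflecting measurement and
# regionalization error in the inputs.
bc_supply = rng.normal(loc=55.0, scale=8.0, size=n)   # meq/m2/yr
runoff_m = rng.normal(loc=0.5, scale=0.05, size=n)    # m/yr
anc_limit = 20.0                                      # ueq/L (a policy choice)

# Invert the model for every parameter draw: the deposition at which the
# predicted ANC just equals the chosen limit.
critical_loads = bc_supply - anc_limit * runoff_m     # meq/m2/yr
assert np.allclose(predicted_anc(critical_loads, bc_supply, runoff_m), anc_limit)

lo, med, hi = np.percentile(critical_loads, [5, 50, 95])
print(f"critical load: median {med:.0f} meq/m2/yr, 90% interval {lo:.0f}-{hi:.0f}")
```

The width of the reported interval is one simple way to convey to nontechnical audiences how much confidence the modeler has in a particular critical load estimate.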
Another type of uncertainty can arise through the linkage of models. Mismatches in temporal or spatial scales between the output from one model and the input to another introduce error. For example, if a model's output is expressed as average annual fluxes or concentrations on an 80-km grid cell but the input required by another model is monthly concentrations at the individual watershed scale, the inputs will have to be interpolated or imputed, and this can introduce errors that are difficult to quantify.

In the modeling context, the inherent and reducible errors together are usually quantified in terms of precision, accuracy, and representativeness. (Completeness and comparability are two other measures of data quality, although they are characteristics of a data set, per se, rather than properties of an individual datum.) Turner (1979) has prepared a succinct summary describing the interpretation and use of these measures for evaluating the overall uncertainty associated with modeling efforts and their application to policy-based issues.

Accepting the difficulty in making quantitative estimates of the uncertainty associated with specific critical load values, it remains important to communicate the level or degree of confidence the modeler has in the results. The quality of model results forms a continuum from fairly vague or general projections about the direction(s) that a system might follow up through highly accurate and specific predictions of response. While quantitative estimates of these confidence bounds are desirable for the technical community, we have found it useful to employ a more subjective classification of the levels of uncertainty (Church and others 1989) when conveying the results to a wider audience. This system defines the following levels of certainty:

• Prediction: an estimate of some current or future condition within specified and tightly constrained confidence limits. Usually, this level of confidence is reserved for a relatively few systems that have been extensively studied and are well characterized. For example, one should be able to predict the response of a soil sample to changing sulfate loading in a laboratory setting if the soil pH, SO4 adsorption characteristics (isotherms), and fluid flow rate are known.

• Forecast: a statement that some future event or condition will occur with a characterizable probability. Forecasts can be made for systems for which there is a moderate level of confidence in the likelihood of a specific occurrence or condition coming to pass. Daily weather forecasts (e.g., a 60% chance of precipitation tomorrow) are classic examples of this level of confidence in modeling results.

• Projection: a statement that describes possible responses of an ecosystem to specified driving functions. This category represents the lowest level of confidence in modeled outcomes. Results from most ecological models continue to fall within this category, partly because we are still learning a great deal about the dynamics of these complex systems, but also because many of the driving functions, such as daily or monthly precipitation or daily temperatures, will have a significant impact on the outcome. These variables are essentially unknowable to the degree of accuracy necessary to make forecasts or predictions.

Determination of Critical Load

This section describes the synthesis of the information developed in the previous steps into a critical load.
For most countries, the policy and technical personnel will have made a priori decisions regarding the most sensitive resource to be protected. These resources may be major, high-profile components of ecosystems, such as lakes or streams, or the decision might be made using lower-visibility species that convey more specific information about the nature of the stress being imposed on the system. For example, about 15% of the plants currently listed under the Endangered Species Act are found in ombrotrophic