Monday, July 8, 2013

Should TMDL Modeling Include Uncertainty Analysis?

Will TMDL decisions be improved with knowledge of the uncertainty in outcomes from proposed pollutant load reductions? That is, will our decisions generally be better if we have some idea of the range of possible outcomes that might result? I believe that the answer is yes, and yet current practice in water quality assessment and management suggests that others may believe that decision making would be undermined by full disclosure of uncertainties, or perhaps that uncertainty is small enough to be safely ignored.

Despite these reservations, it is noteworthy that the U.S. EPA also believes the answer is ‘yes’, although their reasoning is unclear. EPA’s perspective is implicit in the technical requirement for an uncertainty-based ‘margin of safety’ (MOS) in a TMDL application; however, absent from EPA guidance is an explanation of why decisions improve with an uncertainty analysis.

Despite the requirement for an uncertainty-based MOS estimate, few TMDLs are accompanied by actual estimates of forecast uncertainty. Instead, TMDLs are typically proposed with either ‘conservative’ modeling assumptions or an arbitrarily chosen MOS (often implemented as an additional 10% pollutant load reduction). Neither approach explicitly links the MOS to TMDL forecast uncertainty. However, by hedging the TMDL decision in the direction of environmental protection, the MOS effectively increases the assurance that water quality standards will be achieved. This may seem reasonable and even desirable, but the hedging comes at a cost, and in most cases the basis for that cost is entirely arbitrary.
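To make the arbitrariness concrete, here is a minimal sketch (in Python, with purely hypothetical numbers) of how a fixed-percentage MOS is commonly applied; note that nothing in the calculation refers to the model’s forecast error:

```python
# A minimal sketch of an arbitrary fixed-percentage MOS.
# All numbers are hypothetical and chosen only for illustration.

loading_capacity = 1000.0   # assimilative capacity, kg/day (hypothetical)
mos_fraction = 0.10         # arbitrary 10% margin of safety

mos = mos_fraction * loading_capacity       # 100 kg/day reserved as the MOS
allocatable_load = loading_capacity - mos   # 900 kg/day left to split among WLA + LA

print(f"MOS reserved: {mos:.0f} kg/day; allocatable load: {allocatable_load:.0f} kg/day")
```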

The National Research Council Committee to Assess the Scientific Basis of the Total Maximum Daily Load Approach to Water Pollution Reduction has recognized the arbitrary way in which the margin of safety has been applied. Specifically, their Executive Summary contains the following recommendation (National Research Council (NRC). 2001. Assessing the TMDL Approach to Water Quality Management. National Academy Press, Washington, D.C.):

The TMDL program currently accounts for the uncertainty embedded in the modeling exercise by applying a Margin of Safety (MOS); EPA should end the practice of arbitrary selection of the MOS and instead require uncertainty analysis as the basis for MOS determination.

However, acknowledging and computing model prediction uncertainty is not without challenges, as I learned many years ago. While in graduate school, I became involved in a proposed consulting venture in New Hampshire focusing on 208 planning. As a young scientist, I was eager to apply my new scientific knowledge, so I suggested to my consulting colleagues that we add uncertainty analysis to our proposed 208 study; everyone agreed. After we made our presentation to the client, perhaps predictably the client’s first question was, ‘The previous consultants didn’t mention uncertainty in their proposed modeling study; what’s wrong with your model?’ This experience made me realize that I had much to learn about the role of science in decision making and about effective presentations!

While this story may give the impression that I’m being critical of the client for not recognizing the ubiquitous uncertainty in environmental forecasts, in fact I believe the fault lies primarily with the scientists and engineers who fail to fully inform clients of the uncertainty in their assessments. Partially in their defense, water quality modelers may not see why decision makers are better off knowing the forecast uncertainty, and they may not want to be forced to answer an embarrassing question like the one posed to me years ago in New Hampshire.

For this situation to change, that is, for decision makers to demand estimates of forecast error, decision makers first need (1) motivation—that is, they must become aware of the substantial magnitude of forecast error in many water quality assessments, and (2) guidance—ideally, they need relatively simple heuristics that will allow them to use this knowledge of forecast error to improve decision making in the long run. Once this happens, and decision makers demand that water quality forecasts be accompanied with error estimates, water quality modelers can support this need through distinct short-term and long-term strategies.
Short-term approaches are necessary because existing mechanistic water quality models are over-parameterized and thus do not support a complete error analysis. In the near term, then, procedures are needed to (1) conduct an informative, if incomplete, error analysis, and (2) use that incomplete error analysis to improve decision making. In the long term, recommendations can be made to (1) restructure the models so that a relatively complete error analysis is feasible, and (2) employ Bayesian approaches compatible with adaptive assessment techniques, which provide the best means of improving forecasts over time.

In the short term, if knowledge, data, or model structure prevents an uncertainty analysis from being complete, is there any value in conducting an incomplete one? Stated another way, is it reasonable to expect that decision making will be improved with even partial information on uncertainties, in comparison to current practice with no reporting of prediction uncertainties? Often, but not always, the answer is ‘yes’, although the usefulness of an incomplete uncertainty characterization, like the analysis itself, is limited.

Using decision analysis as a prescriptive model, we know that uncertainty analysis can improve decision making when prediction uncertainty is integrated with the utility (or loss, damage, net benefits) function to allow decision makers to maximize expected utility (or maximize net benefits). When the uncertainty analysis is incomplete (and, perhaps more likely, the utility function is poorly characterized), the concepts of decision analysis may still provide a useful guide.

For example, triangular distributions could be assessed for all uncertain model terms, and then, ignoring correlations between model parameters, limited systematic sampling (e.g., Latin hypercube sampling) from these distributions could be used to simulate the prediction error. The result could be either an over- or an underestimate of error, but it would provide some indication of error magnitude. However, this information alone, while perhaps helpful for identifying research and monitoring needs, does not aid decision making. The approximate estimates of prediction uncertainty need to be considered in conjunction with attitudes toward risk for the key decision variables.
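A minimal sketch of that incomplete error analysis is given below, assuming a simple, hypothetical steady-state model and illustrative triangular distributions; the model form, parameter names, and values are placeholders, not those of any particular TMDL model:

```python
# Incomplete error analysis: triangular distributions for uncertain model
# terms, Latin hypercube sampling, and propagation through a hypothetical
# water quality model (correlations between terms are ignored, as noted above).
import numpy as np
from scipy.stats import qmc, triang

def predicted_concentration(load, decay, flow):
    """Hypothetical steady-state model: concentration = load * exp(-decay) / flow."""
    return load * np.exp(-decay) / flow

# (min, mode, max) for each uncertain term -- illustrative values only
terms = {
    "load":  (800.0, 1000.0, 1300.0),
    "decay": (0.05, 0.10, 0.20),
    "flow":  (40.0, 50.0, 70.0),
}

sampler = qmc.LatinHypercube(d=len(terms), seed=1)
u = sampler.random(n=1000)                  # uniform LHS samples on [0, 1)

samples = {}
for j, (name, (low, mode, high)) in enumerate(terms.items()):
    c = (mode - low) / (high - low)         # shape parameter for scipy's triang
    samples[name] = triang.ppf(u[:, j], c, loc=low, scale=high - low)

conc = predicted_concentration(samples["load"], samples["decay"], samples["flow"])
print(f"median prediction: {np.median(conc):.1f}")
print(f"90% prediction interval: {np.percentile(conc, 5):.1f} to {np.percentile(conc, 95):.1f}")
```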

Implicit in this attitude toward risk is an expression of preferences regarding trade-offs. For example, are decision makers sufficiently risk-averse concerning noncompliance with a water quality standard and its associated designated use that they are willing to increase pollutant control costs in order to increase the chance of attaining certain water uses? Suppose a reasonable quantification of prediction uncertainty were available for a fecal coliform water quality criterion and its designated use of commercial shellfishing. Then alternative TMDL predictions might be expressed as ‘there’s a 40% chance of loss of commercial shellfishing with plan A, but only a 5% chance of loss with plan B.’ When the costs of the plans are considered in conjunction with these uncertain TMDL forecasts, the evaluation of the trade-off between shellfishing loss and cost can be sharpened by the awareness of risk that comes from the prediction uncertainty estimates. Since risk is not evident from deterministic (point) predictions of the decision attributes, the decision is likely to be better informed with the risk assessment made possible through estimation of prediction uncertainty.
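As a sketch of how those probabilities might enter the trade-off, the following uses the 40% and 5% figures above together with hypothetical control costs and a hypothetical damage value for losing the shellfishery; the dollar figures are placeholders, not values from any actual TMDL:

```python
# Expected-cost comparison of two plans under uncertain attainment.
# Control costs and the damage assigned to losing the shellfishery are
# hypothetical placeholders chosen only for illustration.
plans = {
    "A": {"control_cost": 2.0e6, "p_loss": 0.40},   # cheaper, riskier
    "B": {"control_cost": 6.0e6, "p_loss": 0.05},   # costlier, safer
}
shellfishery_damage = 10.0e6   # hypothetical economic loss if the use is not attained

for name, plan in plans.items():
    expected_cost = plan["control_cost"] + plan["p_loss"] * shellfishery_damage
    print(f"Plan {name}: expected total cost = ${expected_cost/1e6:.1f}M "
          f"(P(loss of use) = {plan['p_loss']:.0%})")
```

A risk-averse decision maker might weight the chance of losing the use more heavily than its expected monetary value, but even this simple expected-cost comparison makes visible a risk that point predictions would hide.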

Given the components of a TMDL, how might a requirement for uncertainty analysis change the TMDL analysis and selection process? The TMDL margin of safety is protective in one direction only: it protects the environment, but possibly at the unnecessary expense of overdesigned pollution controls. Thus, knowledge of prediction uncertainty and risk attitudes can be helpful primarily in determining the magnitude (not the direction) of the margin of safety. One strategy, therefore, is to set the MOS as a multiplier of the TMDL prediction uncertainty, with the magnitude of the multiplier reflecting the risk assessment discussed above.
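A minimal sketch of that multiplier strategy follows, assuming (purely for illustration) a roughly normal forecast error and hypothetical numbers; in practice the multiplier k would reflect the decision makers’ risk attitude:

```python
# MOS set as a multiple of the TMDL prediction uncertainty (hypothetical values).
predicted_required_reduction = 400.0   # kg/day, point forecast from the model
prediction_std_error = 80.0            # kg/day, from the uncertainty analysis
k = 1.28                               # ~ one-sided 90% level if the error were normal

mos = k * prediction_std_error
target_reduction = predicted_required_reduction + mos
print(f"MOS = {mos:.0f} kg/day; target load reduction = {target_reduction:.0f} kg/day")
```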

In the long run, the best strategy for TMDL forecasting and assessment will probably be to restructure the models, emphasizing the development of techniques that are compatible with the need for error propagation and adaptive implementation. Adaptive TMDLs, following an adaptive management format, make use of post-implementation monitoring data to assess standards compliance. Under adaptive implementation, if the data imply that water quality standards will not be met, then adjustments can be made to the TMDL implementation plan.

Of course, this raises another issue concerning uncertainty that warrants comment. Specifically, even the best compliance monitoring involves sampling a population, which implies sampling error. That does not even begin to cover compliance assessments based on no water quality data (expert judgment alone) or on narrative standards. All of these imply uncertainties in the 303(d) listing of impaired waters in need of a TMDL. For compliance assessment, the solution seems clear: states should translate any narrative water quality standards into quantitative metrics, and they should employ statistical hypothesis testing with water quality data to rigorously assess compliance.
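One simple form such a test might take is sketched below: a binomial test of whether the exceedance rate of a numeric criterion is greater than an allowable rate. The 10% allowance and the sample counts are assumed here only for illustration:

```python
# A simple statistical compliance test: is the true exceedance rate of a
# numeric criterion greater than an allowable 10%? (Hypothetical counts.)
from scipy.stats import binomtest

n_samples = 40        # monitoring observations in the assessment period
n_exceedances = 8     # samples exceeding the numeric criterion

result = binomtest(n_exceedances, n_samples, p=0.10, alternative="greater")
print(f"observed exceedance rate = {n_exceedances / n_samples:.0%}, "
      f"p-value = {result.pvalue:.3f}")
# A small p-value is evidence that the allowable exceedance rate is being
# violated, supporting a 303(d) listing; a large p-value does not prove
# attainment, only a failure to demonstrate impairment at this sample size.
```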

While acknowledging the error in monitoring data, I nonetheless believe that adaptive implementation is a prudent response to the large forecast errors expected with current water quality models. In brief, if TMDL forecasts may be substantially in error, then corrections to these TMDLs are likely to be necessary. In that situation, recognizing this at the initiation of the TMDL and allowing for refinement of the TMDL over time is a pragmatic strategy. Since analytic approaches supporting adaptive implementation are likely to be based on combining initial TMDL forecasts with post-implementation monitoring, error terms are needed for both the model forecasts and the monitoring data in order to combine forecasts with observations efficiently and adaptively update the TMDL forecast. Bayesian (probability) networks are particularly suitable for this task.
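The simplest version of that updating step can be sketched with a normal-normal Bayesian update; a full analysis might use a Bayesian network, and all numbers here are hypothetical:

```python
# Combining an uncertain TMDL-model forecast (prior) with post-implementation
# monitoring data (likelihood) via a conjugate normal-normal update.
import numpy as np

forecast_mean, forecast_sd = 30.0, 8.0               # model forecast and its error (hypothetical)
observations = np.array([38.0, 35.0, 41.0, 33.0])    # post-implementation monitoring data
obs_sd = 6.0                                         # sampling/measurement error per observation

n = observations.size
prior_precision = 1.0 / forecast_sd**2
data_precision = n / obs_sd**2

post_var = 1.0 / (prior_precision + data_precision)
post_mean = post_var * (prior_precision * forecast_mean + data_precision * observations.mean())

print(f"updated estimate: {post_mean:.1f} +/- {np.sqrt(post_var):.1f}")
# Because both error terms enter the update, the weight given to the original
# forecast versus the new data follows directly from their relative uncertainties.
```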

In conclusion, estimation of TMDL forecast uncertainty should not be a requirement merely because the margin of safety requires it. Rather, uncertainty should be computed because it results in better decisions. In the short run, this can happen when the TMDL assessment is based on considerations of risk. In the long run, adaptive implementation should improve the TMDL program, and effective use of adaptive implementation is facilitated with uncertainty analysis. Regardless of time frame, the TMDL program will be better served with complete estimates of uncertainty than with arbitrary hedging factors that simply fulfill an administrative requirement.


