Monday, December 9, 2013

Dealing Effectively with Uncertainty

Are we better off knowing about the uncertainty in outcomes from proposed actions? That is, will our decisions generally be better if we have some idea of the range of possible outcomes that might result? I have always thought so, and yet current practice in water quality modeling and assessment suggests that others feel differently, or perhaps believe that the uncertainty is small enough to be safely ignored.

Consider my experience from many years ago. While in graduate school, I became involved in a proposed consulting venture in New Hampshire. As a young scientist eager to “shake up the world” with my new scientific knowledge, I suggested to my consulting colleagues that we add uncertainty analysis to our proposed 208 study (remember the Section 208 program?). Everyone agreed, so we proposed uncertainty analysis as a key component of the water quality modeling task for the 208 planning process. After we made our presentation, the client’s first question was essentially: “The previous consultants didn’t acknowledge any uncertainty in their proposed modeling study, so what’s wrong with your model?” That experience made me realize I had much to learn about the role of science in decision making, and about effective presentations!

While this story may give the impression that I am critical of the client for not recognizing the ubiquitous uncertainty in environmental forecasts, I believe the fault lies primarily with the scientists and engineers who fail to fully inform clients of the uncertainty in their assessments. Partially in their defense, water quality modelers may fail to see why decision makers are better off knowing the forecast uncertainty, and they may not want to be forced to answer an embarrassing question like the one posed to me years ago in New Hampshire.

For this situation to change, that is, for decision makers to demand estimates of forecast error, decision makers first need: (1) motivation, meaning that they must become aware of the substantial magnitude of forecast error in many water quality assessments, and (2) guidance, meaning simple heuristics that allow them to use this knowledge of forecast error to improve decision making in the long run. Once decision makers demand that water quality forecasts be accompanied by error estimates, water quality modelers can support that need through distinct short-term and long-term strategies.

Short-term approaches are needed because most existing water quality models are overparameterized and therefore incompatible with a complete error analysis; thus, short-term strategies should be proposed for: (1) conducting an informative, though incomplete, error analysis, and (2) using that incomplete error analysis to improve decision making. In the long term, recommendations can be made to: (1) restructure the models so that a relatively complete error analysis is feasible, and/or (2) employ Bayesian approaches that are compatible with adaptive management techniques, which provide the best approach for improving forecasts over time.

In the short term, if knowledge, data, and/or model structure prevent uncertainty analysis from being complete, is there any value in conducting an incomplete uncertainty analysis? Stated another way, can decision making be improved with even partial information on uncertainties, compared to the current practice of reporting no prediction uncertainty at all? Often, but not always, the answer is “yes,” although the usefulness of an incomplete uncertainty characterization, like the analysis itself, is limited.

Using decision analysis as a prescriptive model, we know that uncertainty analysis can improve decision making when prediction uncertainty is integrated with the utility (or loss, damage, or net benefits) function, allowing decision makers to maximize expected utility (or expected net benefits). When the uncertainty analysis is incomplete (and, perhaps more likely, when the utility function is poorly characterized), the concepts of decision analysis may still provide a useful guide.
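
To make the expected-utility idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical for illustration: the chlorophyll-a distributions assigned to the two plans, the 20 ug/L criterion, and the penalty weight expressing risk aversion toward exceedance are invented, not drawn from any real study.

    # A minimal sketch of expected-utility ranking under prediction
    # uncertainty; all numbers are hypothetical illustrations.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000

    # Hypothetical predicted chlorophyll-a (ug/L) under two plans, each
    # expressed as a distribution rather than a point estimate.
    outcomes = {
        "plan_A": rng.normal(loc=12.0, scale=6.0, size=n),
        "plan_B": rng.normal(loc=13.5, scale=1.0, size=n),
    }

    def utility(chl):
        # Hypothetical utility: decreasing in chlorophyll-a, with a steep
        # extra penalty above a 20 ug/L criterion (risk aversion).
        return -chl - 10.0 * np.maximum(chl - 20.0, 0.0)

    for plan, draws in outcomes.items():
        print(plan, "expected utility:", round(float(utility(draws).mean()), 2))

    # Plan A has the better mean prediction (12.0 vs. 13.5) but a much
    # larger chance of exceeding the criterion, so plan B ranks higher
    # once prediction uncertainty enters the calculation.

The point of the sketch is that the ranking of alternatives can reverse when distributions, rather than point predictions, are carried through the utility function.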

For example, triangular distributions could be assessed for the uncertain model terms; then, assuming that parameter covariance is negligible (which unfortunately may not be the case), limited systematic sampling (e.g., Latin hypercube) could be used to simulate the prediction error, as in the sketch below. The result of this computation could either overestimate or underestimate the true error, but it does provide some indication of error magnitude. However, this information alone, while perhaps helpful for identifying research and monitoring needs, is not sufficient for informed decision making. The approximate estimates of prediction uncertainty need to be considered in conjunction with decision maker attitudes toward risk for the key decision variables.
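
Here is a minimal sketch of that calculation, assuming a simple completely mixed lake model, C = W/(Q + kV), with hypothetical triangular ranges for the loading and the first-order loss rate; none of these values come from a real study.

    # Triangular distributions for uncertain model terms, sampled by Latin
    # hypercube with no parameter covariance, then propagated through a
    # simple steady-state lake model. All inputs are hypothetical.
    import numpy as np
    from scipy.stats import qmc, triang

    def steady_state_conc(load, k, flow, volume):
        # Completely mixed lake with first-order loss: C = W / (Q + k*V)
        return load / (flow + k * volume)

    # (min, mode, max) for each uncertain term -- hypothetical values.
    params = {
        "load": (800.0, 1000.0, 1400.0),  # nutrient loading W, kg/yr
        "k":    (0.1, 0.3, 0.8),          # first-order loss rate, 1/yr
    }
    flow, volume = 5.0e6, 2.0e7           # Q (m3/yr) and V (m3), treated as known

    sampler = qmc.LatinHypercube(d=len(params), seed=1)
    u = sampler.random(n=1000)            # stratified uniforms on [0, 1)

    draws = {}
    for j, (name, (lo, mode, hi)) in enumerate(params.items()):
        c = (mode - lo) / (hi - lo)       # scipy's triangular shape parameter
        draws[name] = triang.ppf(u[:, j], c, loc=lo, scale=hi - lo)

    conc = steady_state_conc(draws["load"], draws["k"], flow, volume)
    print("median (kg/m3):", np.median(conc))
    print("90% interval:", np.percentile(conc, [5, 95]))

The resulting 90% interval is exactly the kind of rough “indication of error magnitude” described above, and the assumed-negligible covariance is the incompleteness that the short-term strategy must live with.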

Implicit in this attitude toward risk is an expression of preferences concerning tradeoffs. For example, are decision makers (or stakeholders, or other affected individuals/groups) risk averse with respect to ecological damage, such that they are willing to increase project costs in order to avoid species loss? If a reasonable quantification of prediction uncertainty were available for the decision attribute (loss of an endangered species), then the prediction might be expressed as “there’s a 40% chance of loss of this species with plan A, but only a 5% chance of loss with plan B.” When the costs of the plans are also considered, the tradeoff between species loss and cost is augmented by an awareness of risk that comes from the prediction uncertainty characterization. Risk is not evident from deterministic (point) predictions of the decision attributes, so the decision is likely to be better informed with the risk assessment that prediction uncertainty makes possible; a toy numeric version follows.
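
In the toy version below, the plan costs and the dollar-equivalent weight placed on species loss are hypothetical placeholders; in a real analysis they would come from an elicited multiattribute utility function.

    # Expected-loss comparison using the 40%/5% example above; the costs
    # and the weight on species loss are hypothetical placeholders.
    plans = {
        "A": {"cost": 2.0, "p_loss": 0.40},  # cost in $M, P(species loss)
        "B": {"cost": 6.0, "p_loss": 0.05},
    }
    species_value = 20.0  # hypothetical $M-equivalent assigned to a loss

    for name, p in plans.items():
        expected = p["cost"] + p["p_loss"] * species_value
        print(f"plan {name}: expected loss = {expected:.1f} $M-equivalent")

    # Plan A: 2 + 0.40*20 = 10.0; plan B: 6 + 0.05*20 = 7.0. The costlier
    # plan is preferred once the risk of species loss is priced in.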

In the long run, a better strategy is to restructure the models, emphasizing the development of models that are compatible with the need for error propagation and adaptive assessment/management. Bayesian (probability) networks are particularly suitable for this task (see http://kreckhow.blogspot.com/2013/07/bayesian-probability-network-models.html), as are simulation techniques that address the equifinality problem resulting from overparameterized models (see http://kreckhow.blogspot.com/2013/06/an-assessment-of-techniques-for-error.html).
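
To show the mechanics at toy scale, here is a three-node discrete Bayesian network (nutrient load -> chlorophyll -> criterion violation) computed by direct enumeration. The structure and all of the conditional probabilities are hypothetical; real applications involve larger networks and dedicated software.

    # A minimal discrete Bayesian network, computed by direct enumeration.
    # Structure and probabilities are hypothetical illustrations.
    import numpy as np

    p_load = np.array([0.3, 0.7])              # P(load = high, low)
    p_chl_given_load = np.array([[0.8, 0.2],   # P(chl hi/lo | load=high)
                                 [0.2, 0.8]])  # P(chl hi/lo | load=low)
    p_viol_given_chl = np.array([[0.9, 0.1],   # P(violation y/n | chl=high)
                                 [0.1, 0.9]])  # P(violation y/n | chl=low)

    # Forecast: marginal probability of violation, summing over the chain.
    p_chl = p_load @ p_chl_given_load
    p_viol = p_chl @ p_viol_given_chl
    print("P(criterion violation):", round(float(p_viol[0]), 3))

    # Bayesian updating: observe high chlorophyll, revise belief about load.
    posterior_load = p_load * p_chl_given_load[:, 0]
    posterior_load /= posterior_load.sum()
    print("P(load=high | chl=high):", round(float(posterior_load[0]), 3))

The second calculation illustrates why these networks suit adaptive management: a new observation (high chlorophyll) updates the belief about loading, and the same machinery updates the forecast as monitoring data accumulate.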

No one can claim that scientific uncertainty is desirable; yet no one should claim that scientific uncertainty is best hidden or ignored. Estimates of uncertainty in predictions are not unlike point estimates of the predicted response: like the point predictions, the uncertainty estimates contain information that can improve risk assessment and decision making. The approaches proposed above will not eliminate this uncertainty, nor will they change the fact that, due to uncertainty, some decisions will yield consequences other than those anticipated. They will, however, allow risk assessors and decision makers to use the uncertainty to structure the analysis and present the scientific inferences in an appropriate way. In the long run, that should improve environmental management and decision making.
