Thursday, August 13, 2015

EPA’s Approach to Decision Support is in need of a Sea Change

In the past few decades, the USEPA has widely recognized the importance of economic analysis to the EPA mission. As a consequence, EPA has hired environmental economists and supported research on benefits assessment. This has greatly enhanced EPA’s knowledge base for decision support. EPA should now make a further significant improvement to its decision support by establishing prescriptive decision analysis as the best way to present uncertain scientific knowledge for informed decision making.

Decision analysis, based on the normative model of decision theory, is a well-established discipline that is taught in many university public policy and business programs. There are two fundamental elements in a decision analysis:
  •  A utility function that characterizes the values, or perhaps net benefits, associated with outcomes of interest that result from a management action, and
  •  A probability model that quantifies the uncertainty in the outcomes of interest that result from a management action.

The economic analysis now embraced by EPA can provide the first element of a decision analysis, quantifying value; an uncertainty analysis can provide the second essential element.
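As a minimal, hypothetical sketch of how these two elements combine (the actions, probabilities, and utilities below are invented for illustration and are not drawn from any EPA analysis), the preferred action is simply the one with the highest probability-weighted, or expected, utility:

```python
# Minimal decision-analysis sketch: choose the action with the highest
# expected utility. Actions, outcomes, probabilities, and utilities are
# purely illustrative.

actions = {
    # action: list of (probability of outcome, utility/net benefit of outcome)
    "stricter nutrient controls": [(0.6, 80), (0.4, 20)],   # controls work / fall short
    "status quo":                 [(1.0, 35)],              # no change
}

def expected_utility(outcomes):
    """Probability-weighted utility: sum of p * u over possible outcomes."""
    return sum(p * u for p, u in outcomes)

for action, outcomes in actions.items():
    print(f"{action}: expected utility = {expected_utility(outcomes):.1f}")

best = max(actions, key=lambda a: expected_utility(actions[a]))
print("Preferred action under expected utility:", best)
```

The probabilities stand in for the uncertainty analysis, and the utilities stand in for the economic (benefits) analysis; neither element alone is sufficient to rank the actions.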

Why has EPA recognized the importance of economic benefits assessment to inform decision making, yet remained seemingly oblivious to the need to follow the decision analysis model that is so well established as an academic discipline? I think a major reason is that the environmental engineering and ecology programs that have provided the academic training for many scientists in EPA and in state environmental agencies do not include a course in decision analysis, nor do they recommend a curriculum that includes decision analysis taught in another academic department.

To better appreciate the role of this decision analytic framework, consider the following example from everyday life. All of us have made decisions on outdoor activities in consideration of the forecast for rain. In deciding whether to hold or postpone an outdoor activity, we typically seek (scientific) information on such things as the probability (reflecting uncertainty) of rain. Further, it is not uncommon to hear the weather forecast on the evening news, but still defer a final decision on the activity until an updated weather prediction in the morning (in other words, get more sample information).

Beyond consideration of the scientific assessment in the weather forecast, we also think about how important the activity is to us. Do we really want to participate in the activity, such that a little rain will not greatly reduce our enjoyment? Or is the activity of only limited value, such that a small probability of rain may be enough for us to choose not to participate?

Every day, we make decisions based on an interplay, or mix, of uncertainty in an event (e.g., rain) and value (enjoyment) of an activity. We are used to weighing these considerations in our minds and deciding. These same considerations--getting new information on the weather (which is analogous to supporting new scientific research, as in adaptive management), and deciding how valuable the activity is to us (which is what we determine through cost/benefit analysis)--are key features of decision analysis.
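Here is a small numerical sketch of that everyday logic (all values are invented for illustration): the worth of waiting for the morning forecast is simply the improvement in expected value from deciding after the update rather than before.

```python
# Illustrative sketch of the "wait for the morning forecast" logic: the value
# of additional information is the gain in expected value from deciding after
# the update rather than before. All numbers are made up for illustration.

p_rain = 0.4            # tonight's forecast: probability of rain tomorrow
value_dry_event = 100   # enjoyment if we hold the event and it stays dry
value_wet_event = -40   # soggy event
value_postpone = 0      # postponing is the neutral fallback

# Decide now, using tonight's forecast only.
ev_hold_now = p_rain * value_wet_event + (1 - p_rain) * value_dry_event
decide_now = max(ev_hold_now, value_postpone)

# Decide tomorrow morning, after a (hypothetically perfect) updated forecast:
# hold if dry, postpone if rain.
decide_after_update = p_rain * value_postpone + (1 - p_rain) * value_dry_event

print("Expected value deciding tonight: ", decide_now)
print("Expected value deciding tomorrow:", decide_after_update)
print("Value of waiting for the update: ", decide_after_update - decide_now)
```

In decision-analytic terms, the last line is an (idealized) expected value of sample information; in environmental management it is the analogue of deciding whether additional research or monitoring is worth the delay.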

Public sector decisions involving uncertain knowledge and uncertain forecasts should follow this same decision analytic paradigm. Given the consequences of most public sector decisions and the uncertainties in environmental modeling, it is essential that this happen. If EPA fails to adopt decision analysis as its prescriptive model for decision support, many of EPA’s assessments and models will continue to ignore uncertainty in model predictions. The result will be many unexpected management outcomes, because stakeholders are unaware of the large uncertainties in predictions from the deterministic models that EPA provides in its decision support. In my view, this situation is inexcusable.

Wednesday, August 5, 2015

Unattainable Surface Water Quality Standards may Diminish Widespread Public Support for Water Quality Improvements

Many state water quality standards were established in the early years of the Clean Water Act (CWA), when a key goal of the 1972 CWA was “to eliminate pollutant discharge to navigable waters by 1985.” Unfortunately, this admirable goal has sometimes resulted in required pollutant load reductions (e.g., TMDLs) that are based on unattainable water quality standards reflecting the environmental euphoria of the 1970s and 1980s. In my view, it is wise to consider whether we should continue to develop water quality management plans focused on achievement of those goals, or whether it is better to develop realistic goals and set attainable water quality standards.

From a pragmatic perspective, working toward unattainable water quality standards diminishes our ability to achieve widespread buy-in on pollutant load controls. I see this now in North Carolina, where an unattainable standard is leading to a backlash against pollutant reduction, due primarily to the extremely high cost of compliance with a TMDL.

Unfortunately, this backlash may be reinforced by the long lag times between implementation of nonpoint source controls and observable water quality improvements, which lead to skepticism that the required pollutant load reductions will have any effect.

For example, Falls Reservoir in North Carolina has a TMDL mandating a 77% reduction in phosphorus loading to attain the 40 µg/L chlorophyll a water quality criterion. Given the preponderance of nonpoint sources of phosphorus in the Falls Reservoir watershed, a 77% phosphorus load reduction is not feasible; even if it were, the cost of attainment would almost certainly far exceed the benefits derived from the designated use. Given that situation, Falls Reservoir is in need of a Use Attainability Analysis (which determines whether a designated use is technologically and economically feasible) or new site-specific nutrient criteria.
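To illustrate the kind of calculation that sits behind such a mandated reduction, here is a hypothetical sketch that assumes a simple empirical chlorophyll-phosphorus relationship of the form chl = a * P^b. The coefficient, exponent, and concentrations are invented for illustration; this is not the Falls Reservoir TMDL model.

```python
# Hypothetical sketch of deriving a required nutrient load reduction from an
# assumed empirical chlorophyll-phosphorus relationship, chl = a * P^b.
# All numbers are illustrative, NOT the Falls Reservoir TMDL analysis.

b = 0.9                  # assumed log-log slope of chlorophyll vs. phosphorus
chl_current = 65.0       # assumed current summer chlorophyll a, ug/L
chl_target = 40.0        # chlorophyll a criterion, ug/L

# Because chlorophyll scales as P^b, the phosphorus (and, roughly, the load)
# reduction needed to move from chl_current to chl_target is:
required_fraction = 1.0 - (chl_target / chl_current) ** (1.0 / b)

print(f"Required phosphorus reduction: {100 * required_fraction:.0f}%")
```

The point of the sketch is that the required percent reduction depends strongly on the assumed model and its coefficients, which is exactly where an honest uncertainty analysis belongs.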

I believe that realistic and achievable water quality standards, with designated use improvements (e.g., recreational fishing) that are causally linked to attainment of water quality criteria (e.g., chlorophyll a), are needed to gain widespread support for pollutant controls for water quality improvements. In Falls Reservoir, the backlash against the high cost of phosphorus load reductions has resulted in a state-sponsored plan for in-lake artificial mixing (using SolarBees). This is a waste of money: whole-lake mixing is not feasible given the large size of Falls Reservoir, and in-lake mixing will have little effect on nutrient concentrations. While I do not believe that water column mixing in Falls Reservoir is scientifically defensible, I do understand that local and state elected officials may feel desperate enough to embrace even ineffective “solutions” in the hope of reducing pollutant control costs for their constituents.

It is unfortunate that the laudable goals of the Clean Water Act are not everywhere attainable. Given that fact, I believe that the most effective way to achieve additional protection of designated uses is to adopt technologically and economically feasible water quality standards. This is likely to result in relaxation of a limited number of current water quality criteria. I wish that we could do better and eliminate pollutant discharges to navigable waters, but that is not going to happen. In my view, recognition of the need to set realistic water quality goals is the best pathway to achieve and maintain meaningful water quality improvements.

Monday, April 6, 2015

Meaningful Compliance Assessment of Water Quality Standards

The two primary components of a water quality standard are the designated (beneficial) use and the water quality criterion (or criteria). The criterion serves as an easily measurable indicator of designated use attainment. Thus an effective water quality standard must have a criterion that is causally related to the designated use, and a criterion level that best discriminates between attainment and nonattainment of the designated use. All of this seems self-evident.

An important consideration that generally is overlooked is the space/time domain for which the designated use is relevant. An example is a “swimmable” designated use when a waterbody is covered with ice. This hypothetical example is obvious, but are there others that are not so obvious? Yes, there are.

Consider a dendritic reservoir that has a nutrient TMDL, with the state agency monitoring for compliance based on the water quality criteria. In this case, the designated use is “swimmable and warm water fishery,” primarily for recreation. For this beneficial use, common water quality criteria include dissolved oxygen (DO) and chlorophyll a.

If it is causally determined that the fishery responds to chlorophyll levels in spring, but not to DO levels in winter, then the monitoring design for compliance with these water quality criteria should explicitly consider these temporal aspects. This might, for example, result in intensive monitoring of spring chlorophyll and no winter monitoring of DO.

From a spatial perspective in this dendritic reservoir, there may be reservoir segments (or discrete basins) where these designated uses are irrelevant. For example, this might be the case for shallow embayments that are subject to drought-related periods of surface area shrinkage and sediment exposure. In locations where this is the case, water quality criteria monitoring may be unnecessary; this also might be a situation where site-specific water quality criteria should be considered for different segments of the reservoir.
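As a simple sketch of what such a space/time-targeted compliance assessment might look like (segment names, seasons, criterion level, and observations are all hypothetical), only samples from the relevant segments and season are used to compute an exceedance rate against the criterion:

```python
# Sketch of a compliance assessment that uses only observations collected in
# the space/time window where the criterion is causally linked to the
# designated use. Segments, months, and values are hypothetical.

samples = [
    # (segment, month, chlorophyll a in ug/L)
    ("main pool", 4, 32), ("main pool", 5, 48), ("main pool", 12, 18),
    ("upper arm", 4, 55), ("shallow embayment", 5, 70),  # embayment excluded
]

RELEVANT_SEGMENTS = {"main pool", "upper arm"}   # where the use applies
RELEVANT_MONTHS = {3, 4, 5}                      # spring, when the fishery responds
CRITERION = 40.0                                 # ug/L chlorophyll a

relevant = [chl for seg, month, chl in samples
            if seg in RELEVANT_SEGMENTS and month in RELEVANT_MONTHS]

exceedance_rate = sum(chl > CRITERION for chl in relevant) / len(relevant)
print(f"{len(relevant)} relevant samples, {100 * exceedance_rate:.0f}% exceed the criterion")
```

Screening the data this way keeps winter DO measurements and dewatered embayments from clouding the question the standard is actually meant to answer: is the designated use attained where and when it matters?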

For effective water quality management, it is important that the resources invested to meet water quality standards are efficiently utilized. This is more likely to occur for standards in which the water quality criteria are causally connected to the designated uses, and in which the space/time monitoring of the water quality criteria is designed to minimize irrelevant data collection and thus unambiguously assess compliance with the designated use.

Friday, February 6, 2015

Lessons from the Blizzard of 2015 Concerning Uncertain Science/Weather Forecasts

The aftermath of the recent blizzard striking the northeast provides a fascinating example of decision making under uncertainty, without an available estimate of uncertainty. Meteorologists knew that there was uncertainty in their forecasts of the expected path of the blizzard, but most did not provide an estimate of that uncertainty. In contrast, meteorologists who forecast hurricane trajectories routinely provide a graphical assessment of landfall location probabilities, and local weather forecasters almost always provide a probability of rain for upcoming days.

Of course, as I have noted in previous blog posts, water quality modelers generally do not provide an estimate of the uncertainty in their forecasts. Without attempting to understand why different approaches to scientific uncertainty have emerged in these fields, it is quite clear that the deterministic forecasts of the blizzard trajectory (particularly around New York City) were believed by many to be precisely what would happen. The same people who expect daily probability-of-rain forecasts apparently were willing to accept no uncertainty in the blizzard forecasts.
 
After the blizzard, the “Monday morning quarterbacks” criticized meteorologists for their “faulty” forecasts, and they criticized decision makers for mandating extreme measures in preparation for the expected blizzard. In retrospect, meteorologists should have provided a “cone of uncertainty” in their forecasts of the trajectory of the blizzard. This would have been useful for individual and societal blizzard preparation decision making. Further, it would have set a helpful example for other fields (such as water quality modeling) by acknowledging scientific uncertainty to the public.

Yet, if the blizzard forecasters had provided a visual estimate of uncertainty in the blizzard trajectory, similar to the forecasts provided by their hurricane-forecasting brethren, what might have changed? Probably very little, other than silencing most of the Monday morning quarterbacks. I conclude this because the blizzard was a high-consequence event, and people tend to be risk-averse.

Should blizzard preparation decisions have been made by meteorologists who knew of the forecast uncertainty, as some people have suggested? No. In my blog post “The Role of Scientists in Decision Making” (http://kreckhow.blogspot.com/2013/10/the-role-of-scientists-in-decision.html), I stress the point that to inform public sector decisions, scientists provide scientific assessment, but not the values necessary for decision making. These are public values, and they are provided by elected or appointed public officials as representatives of the public.

As I concluded in a previous blog post (“Scientific Uncertainty and Risk Assessment,” http://kreckhow.blogspot.com/2013/04/scientific-uncertaintyand-risk.html), every day, we make decisions based on an interplay, or mix, of uncertainty in an event (e.g., rain) and value (enjoyment) of an activity. We are used to weighing these considerations in our minds and deciding. These same considerations--getting new information on the weather (which is analogous to supporting new scientific research, as in adaptive management), and deciding how valuable the activity is to us (which is what we determine through cost/benefit analysis)--are key features of risk assessment. So let us move from our informal, everyday risk assessment to formal, scientific risk assessment, and identify the lesson and the opportunity as they relate to environmental management.

To me, the lesson in risk assessment is to recognize that the science in support of environmental management is usually uncertain, and sometimes highly uncertain. But the opportunity that is provided by risk assessment should result in improved decision making. To accomplish this, we must first require scientists to quantify or estimate the scientific uncertainty. Then we must require our decision makers to use the estimate of uncertainty to properly weigh the scientific information (not unlike what we do in our informal, everyday risk assessment). In the long run, this should improve environmental management decisions by making better use of the available information.
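As an illustrative sketch of what "weighing" an uncertain forecast might mean in practice (the snowfall distribution, cost function, and risk-aversion parameter below are all invented), a decision maker could compare preparation decisions using both a plain expected cost and a risk-averse, certainty-equivalent cost:

```python
# Illustrative sketch of weighing an uncertain forecast rather than a single
# deterministic number. Snowfall probabilities, costs, and the risk-aversion
# parameter are all invented for illustration.

import math

# Forecast expressed as a probability distribution over snowfall, not a point.
forecast = [(0.3, 5), (0.4, 15), (0.3, 30)]   # (probability, inches)

def cost(snow_inches, prepared):
    """Hypothetical societal cost: preparation is expensive, but being
    unprepared for heavy snow is far more so."""
    prep_cost = 10 if prepared else 0
    damage = 0 if prepared else max(0, snow_inches - 10) * 3
    return prep_cost + damage

def expected_cost(prepared):
    return sum(p * cost(s, prepared) for p, s in forecast)

def risk_adjusted_cost(prepared, risk_aversion=0.1):
    """Certainty-equivalent cost under exponential disutility, which
    penalizes bad outcomes more heavily than the plain average does."""
    eu = sum(p * math.exp(risk_aversion * cost(s, prepared)) for p, s in forecast)
    return math.log(eu) / risk_aversion

for prepared in (True, False):
    print(f"prepare={prepared}: expected cost={expected_cost(prepared):.1f}, "
          f"risk-adjusted cost={risk_adjusted_cost(prepared):.1f}")
```

Under the plain average, skipping preparation may not look much worse than preparing; under the risk-averse view, the heavy-snow tail dominates, which is consistent with why decision makers mandated extreme measures even for an uncertain forecast.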
