In overly
simple terms, the surface water quality modeling community can be divided into
two camps: mechanistic and statistical. In much of my recent work, I have
sought a middle ground that is loyal to process understanding yet yields a
measure of uncertainty in predictions. I have argued for years that we should
not provide decision makers with predictions of the impact of management actions
without an estimate of the confidence we have in those predictions. To me, this point is irrefutable, and it continues to dismay me that EPA and other agencies largely ignore it in the water quality models they support.
My mechanistic modeling colleagues have tended to favor large, elaborate models, generally motivated by the assumption that a model must be sufficiently detailed for the modelers to "get the processes right." That is a goal that likely will never be achieved. The result is that these elaborate models are overparameterized, a condition known as "equifinality" that is well documented in the hydrologic sciences but has rarely been discussed in the water quality modeling literature. Among experienced hydrologic modelers, it is well recognized that many sets of parameter values will fit large simulation models about equally well. Unfortunately, this creates problems for the interpretation of sensitivity analyses, since different (equally well-fitting) parameter sets can lead to quite different causal conclusions about the effects of management actions.
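To make equifinality concrete, here is a minimal sketch in Python, using made-up numbers and a hypothetical two-pathway pollutant model in which the observations constrain only the sum of two first-order loss rates. Every pair of rates with the same sum fits the data equally well, yet each pair tells a different causal story.

    # Minimal equifinality sketch (hypothetical model, made-up numbers).
    # The data constrain only the SUM k_decay + k_settle, so many
    # parameter pairs fit equally well -- but each pair implies a
    # different response to management actions.
    import numpy as np

    rng = np.random.default_rng(0)

    t = np.arange(0, 30)                  # days
    true_total_rate = 0.30                # 1/day, sum of both pathways
    obs = 10.0 * np.exp(-true_total_rate * t) + rng.normal(0, 0.2, t.size)

    def model(k_decay, k_settle):
        """Concentration under two first-order loss pathways."""
        return 10.0 * np.exp(-(k_decay + k_settle) * t)

    def rmse(k_decay, k_settle):
        return np.sqrt(np.mean((obs - model(k_decay, k_settle)) ** 2))

    # Three very different parameter sets, identical fits:
    for kd, ks in [(0.30, 0.00), (0.15, 0.15), (0.05, 0.25)]:
        print(f"k_decay={kd:.2f}, k_settle={ks:.2f} -> RMSE={rmse(kd, ks):.3f}")
    # All three RMSEs agree, yet a manager who trusted the third set
    # would conclude that controlling settling (e.g., dredging) matters
    # five times more than controlling decay.

A sensitivity analysis run at any one of these parameter sets would look authoritative, but its causal implications depend entirely on which equally well-fitting set happened to be chosen.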
I have
discussed these points in previous blog posts. Here, I want to comment on the
use of water quality models (statistical and mechanistic) for drawing
conclusions concerning future trends in water quality over time. Defensible
conclusions about water quality trends are made in a statistical hypothesis
testing context. A mechanistic water quality model without an error term simply
cannot provide a defensible conclusion on trends. Yet in data-poor situations, some mechanistic modelers appear tempted to suggest that a mechanistic model forecast can serve as a substitute. Sure, a mechanistic model can yield a deterministic trajectory of expected outcomes under management actions, but most mechanistic models cannot provide a confidence interval. The "disconnect" that concerns me is that stakeholders and decision makers may accept this deterministic trend forecast without ever asking for a measure of its reliability, something they would expect from a statistical analysis. Unfortunately, that complacency is fostered by the very mechanistic modelers who have created this falsely deterministic modeling environment.
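For contrast, here is a minimal sketch, in Python with synthetic data standing in for a real monitoring record, of the kind of defensible trend statement a statistical analysis supports: an estimated slope with a confidence interval and a p-value, plus the nonparametric Kendall statistic that underlies the Mann-Kendall trend test widely used in water quality work.

    # Minimal trend-test sketch (synthetic annual means, standing in
    # for, say, a lake total-phosphorus record in ug/L).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    year = np.arange(2005, 2025)
    # True decline of 0.5 ug/L per year, buried in year-to-year noise:
    tp = 40.0 - 0.5 * (year - year[0]) + rng.normal(0, 3.0, year.size)

    # Parametric test: regress concentration on time.
    fit = stats.linregress(year, tp)
    half = stats.t.ppf(0.975, year.size - 2) * fit.stderr
    print(f"slope = {fit.slope:.2f} ug/L/yr, "
          f"95% CI = ({fit.slope - half:.2f}, {fit.slope + half:.2f}), "
          f"p = {fit.pvalue:.3f}")

    # Nonparametric check (the basis of the Mann-Kendall trend test):
    tau, p_mk = stats.kendalltau(year, tp)
    print(f"Kendall's tau = {tau:.2f}, p = {p_mk:.3f}")

    # A deterministic mechanistic run would return a single trajectory:
    # the analogue of fit.slope with no CI and no p-value, and thus no
    # stated probability that the apparent trend is real.

The error term in the regression is what makes the conclusion defensible: it quantifies how likely it is that the apparent trend arose by chance, which is exactly the information a deterministic forecast withholds.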