The International Stormwater Best Management Practices (BMP) Database project website (http://bmpdatabase.org) features a database that includes over 600 BMP performance studies. This is a tremendous free resource for engineers, scientists, policymakers, and others who seek to characterize the pollutant removal and hydrologic performance of various stormwater treatment systems. But, like any database, it is only as reliable as the information that goes into it. To improve reliability, the website also includes monitoring program design guidance and data input tools that standardize the format and content of summary reports. However, whether looking at an individual test of a single treatment system or a database with hundreds of results, it is important to focus not only on what the data tell us, but also on what stories we may be missing.
How many times have you seen a presentation or read a field monitoring paper that includes this familiar storyline: a project moves from the active construction phase to the field monitoring stage, only to discover obvious errors made during design or construction that need to be fixed before monitoring can begin. Once the errors are corrected, data is collected and shows less-than-ideal performance; as a result, changes are made or extra maintenance is performed until information consistent with the project goals is obtained. After this shakedown period, the “real” data collection period begins. This happens a lot. Monitored projects get a much higher level of scrutiny than non-monitored projects, and as a result, they often receive repairs, renovations, and maintenance that similar systems under a less watchful eye would not. The incentive to get it right is powerful: it’s easier to get a paper or presentation accepted, or to secure additional funding, if there’s a positive story to tell about something working well.
But it’s important to recognize that this storyline includes several forms of bias that compound to create vulnerability in stormwater management programs. Reporting bias occurs when positive results are viewed more favorably or promoted more actively than negative results. Selection bias occurs when an exemplary BMP is chosen as a test subject rather than a typical one, or when poor initial results are omitted. And, of course, any extra BMP TLC given by the research team introduces bias, however well-intentioned.
I attended the ASCE EWRI conference on the operation and maintenance of stormwater measures in Denver last November. Along with the usual academics, consulting engineers, and exhibitors in the crowd were many municipal stormwater maintenance program managers. Their stories painted a very different picture than the usual idealized story about how Green Infrastructure and Low Impact Development BMPs are almost universally applicable, are easier and cheaper to construct and maintain, and offer great improvements in stormwater infrastructure robustness. After hearing so many stories of pesky gophers, floating mulch, invasive weeds, inferior material substitutions, construction phase failures, standing stagnant water, and excessive litter control costs, I began to wonder if I had somehow ended up on the wrong side of the looking glass.
The truth, of course, spans the entire range from cost-effective, perfectly designed, constructed, and operating BMPs to those that are abject failures. So, since we are using BMP performance data to make programmatic decisions about which BMPs we accept on land development projects and to plan TMDL-based watershed retrofit strategies, it seems to me that we need to broaden our perspective when we ask questions about BMP performance. It’s not enough to look at the water quality or hydrologic performance of a particular BMP type without also asking how often these results are likely to be replicated by systems that are not being actively monitored and coaxed into optimal performance. Given the myriad potential failure points in the design, procurement, installation, inspection, and maintenance steps of a project, we need to focus as much research effort on systematic performance as on individual performance. After all, if an on-specification BMP provides 80% load reduction but in practice is only on-spec 50% of the time, from a programmatic perspective it’s probably no better than a BMP that achieves only 50% load reduction but does it reliably.
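The closing comparison can be made concrete with a rough back-of-the-envelope calculation. This sketch assumes, for simplicity, that an off-spec BMP provides no load reduction at all (an assumption not made in the article; real off-spec performance would fall somewhere between zero and the on-spec value):

```python
# Back-of-the-envelope comparison of program-level (expected) load reduction.
# Assumption (illustrative only): an off-spec BMP provides zero load reduction.

def expected_reduction(on_spec_reduction, on_spec_rate, off_spec_reduction=0.0):
    """Probability-weighted load reduction across a population of BMPs."""
    return (on_spec_rate * on_spec_reduction
            + (1 - on_spec_rate) * off_spec_reduction)

# 80% reduction when on-spec, but on-spec only half the time:
high_performer = expected_reduction(0.80, 0.50)

# 50% reduction, reliably on-spec:
reliable = expected_reduction(0.50, 1.00)

print(f"High performer, 50% on-spec: {high_performer:.0%} expected reduction")
print(f"Reliable performer:          {reliable:.0%} expected reduction")
```

Under this zero-credit assumption the high performer delivers only a 40% expected reduction at the program level, so the "unreliable 80%" BMP actually comes out behind the "reliable 50%" one; crediting off-spec systems with partial performance would narrow that gap.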