My previous post Academics getting it wrong about the role of private sector consultants in BC’s Environmental Assessment processes was started in November and sat in my “drafts” folder for over a month. The reason I finished it during my break was that I had been directed to a Narwhal article featuring a critique of Environmental Assessments (EAs) by a group of academics titled The Insignificance of Thresholds in Environmental Impact Assessment: An Illustrative Case Study in Canada by Murray et al. It was the article cited by the authors of the open letter from my previous post. In this post I want to further expand on what academics typically get wrong in their critiques of private sector EAs by looking at this paper in detail.
To begin, I want to point out something that I find somewhat odd about this article. Looking at the affiliation of the primary authors I can’t help but notice that the two primary authors present dual affiliations. Both report being associated with the “Institute for Resources, Environment and Sustainability at the University of British Columbia” but they also both report an affiliation with “WWF-Canada”. At the end of the article the authors declare that “they have no conflict of interest” but it sure has a funny smell when an article of this sort is authored by individuals with ties to an organization that has historically opposed most large resource developments in BC.
My first serious concern about this article goes, ironically, back to the major complaint (debunked in my last post) the authors of the open letter had about private sector EAs: lack of transparency. This article is completely opaque; there is zero transparency. As I mentioned in my previous post, when I do an assessment, I am required to present all the supporting material in a location where interested observers can examine it in detail. In this article the authors create spreadsheets detailing information like 1) the number of impacts evaluated, 2) the significance of the impacts and 3) the rationale for significance determination, as well as the approaches used to evaluate this significance. Unfortunately, none of this information is made available for review as supporting information. How is a reader to independently evaluate the authors’ work when there is literally no information presented to allow an evaluation to be carried out?
Another odd thing about this paper is that it uses the term “thresholds” liberally throughout the document without ever fully defining it. Having read and re-read the paper, it is clear to me they do so to simplify their analysis, but in doing so they conflate an incredibly diverse set of indicators. There is a substantial difference between a regional guideline and a regulatory standard, but both are treated identically in this analysis.
To understand why this is important, it is necessary to recognize that BC has a lot of regulations, guidelines and criteria. Some regulations come with strict standards, defined within those regulations, while others are much more flexible. Moreover, even in cases where apparently strict regulatory standards exist, these standards can be adjusted based on site-specific conditions.
To provide an example, look at Schedules 3.1 through 3.4 of the Contaminated Sites Regulation (CSR). They provide the standards for soils, groundwater and vapours at contaminated sites but are used as criteria for any site that may eventually be drawn into the CSR process. Looking at the standards, you see numbers that appear fixed in regulation; however, Section 18 of the CSR acknowledges that these fixed standards can be revised to address site-specific conditions.
The issue with creating standards under a provincial regulatory regime is that the standard necessarily has to be very generic. In order to provide a single number, the regulator has to make a lot of different assumptions. Generic standards are created to provide simple guidelines for easy cases. They will usually involve the most conservative (most protective) assumptions, with added room for conservatism on top. What does this mean? Well, for water quality guidelines regulators usually take the most sensitive receptor in the entire province and then add a safety factor (often by dividing that number by 10).
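The derivation described above can be sketched in a few lines of code. This is a minimal illustration only: the species names, effect concentrations, and the factor of 10 are made-up stand-ins, not real BC guideline inputs.

```python
# Illustrative sketch only: hypothetical effect levels (mg/L), not real data.
# One chronic effect concentration per receptor found anywhere in the province.
provincial_effect_levels = {
    "rainbow trout": 0.8,
    "daphnia": 0.05,       # most sensitive receptor in this made-up dataset
    "mayfly larvae": 0.3,
}

SAFETY_FACTOR = 10  # the common convention the post describes: divide by 10


def generic_guideline(effect_levels: dict, safety_factor: float = SAFETY_FACTOR) -> float:
    """Generic province-wide guideline: the lowest (most sensitive)
    effect level across all receptors, divided by a safety factor."""
    return min(effect_levels.values()) / safety_factor


print(round(generic_guideline(provincial_effect_levels), 6))  # → 0.005
```

The point of the sketch is that a single province-wide number is forced to be driven entirely by the single most sensitive species, however rare or localized that species may be.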
Sometimes (in cases like Schedule 3.1) the regulator provides more specific standards (like the standard for protection against toxicity to soil invertebrates and plants), while in Schedule 3.2 they simply provide a single standard for all soils. Moreover, look at that standard for toxicity to soil invertebrates and plants. It is based on a toxicity reference value (TRV) that is the most conservative (most protective) for all provincial soil invertebrates and plants. If that standard is based on the TRV for a specific plant, and that plant does not exist in the vicinity of a site, then an alternative standard based on the most conservative regionally-appropriate plant/invertebrate would sensibly be considered appropriate.
In a proper assessment the first step is to identify what receptors are in the region. That allows you to establish site-specific standards that are tailored to the area under consideration.
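The site-specific refinement described in the two paragraphs above amounts to a simple filtering step before the usual derivation. Again, a hedged sketch with invented species and TRVs, not real CSR numbers.

```python
# Illustrative sketch only: hypothetical TRVs (mg/kg soil), not real standards.
provincial_trvs = {
    "alpine sedge": 20.0,   # most sensitive province-wide, but absent from this site
    "earthworm": 120.0,
    "red fescue": 75.0,
}

# Receptors a site survey actually identified in the region.
receptors_at_site = {"earthworm", "red fescue"}


def site_specific_standard(trvs: dict, present: set, safety_factor: float = 10) -> float:
    """Drop receptors absent from the site, then apply the usual derivation:
    the most conservative remaining TRV divided by a safety factor."""
    relevant = {species: trv for species, trv in trvs.items() if species in present}
    if not relevant:
        raise ValueError("no receptors identified at the site")
    return min(relevant.values()) / safety_factor


print(site_specific_standard(provincial_trvs, receptors_at_site))  # → 7.5
```

Here the generic standard would be driven by the alpine sedge (20.0 / 10 = 2.0), while the site-specific standard (7.5) is still conservative for every receptor that actually lives near the site.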
The other thing to understand is that the science always evolves. The original soil standards for the CSR were created when the CSR was initially adopted (in 1996) and they were only just updated in 2018. During that update some of those standards were increased and some were lowered, all to reflect the latest science. That is to say, sometimes standards will increase when new and better information replaces older, less reliable information and in other cases standards will decrease.
In the technical guidance to the CSR, the Ministry of Environment & Climate Change Strategy (BC ENV) provides the information necessary for an expert to derive new risk-based standards to reflect the latest state-of-the-art in toxicology. This includes getting information from other jurisdictions, like the US, which spends a lot of research money on toxicology and generates lots of newer, more reliable numbers.
So why did I go into this level of detail? The answer is that this flexibility is literally what the academics feel is wrong about current EAs. Their major critiques involve the fact that the EAs were flexible and that the thresholds used were site-specific. Essentially the entire document argues that thresholds “should be consistent and transparent across EISs” (Environmental Impact Statements). Now this sounds logical until you translate it into plain terms. It is as if they were arguing that the entire province should have a single speed limit, with no flexibility for special cases (like, say, highways or school zones).
Now let’s look at the “six common rationales” the authors argue are not appropriate to explain cases where a generic threshold was exceeded:
(1) baseline conditions already exceeding thresholds; (2) uncertainty in the assessment models; (3) availability of different guidelines that allow for higher threshold values; (4) the scale of impact (temporal and spatial); (5) literature review contradicted threshold values; and (6) other reasoned argumentation.
Thinking back to my earlier explanation about the necessity of site-specific standards, let’s consider that list in order.
- If a site has naturally high concentrations of a metal in the water then it would be assumed that the biota in the region may not be sensitive to that metal. Relying on a generic provincial standard in that case would not acknowledge local conditions and would not be scientifically justified.
- Assessment models are typically also generic in nature and include a lot of conservatism to address their generic nature. Once you have site-specific information you can correct the work to address the generic considerations. This means the models will better reflect conditions at the site.
- When the authors say “different” what they mean is site- or regionally-specific thresholds. Good science means tuning your models for the local conditions. We don’t use the same speed limit for all roads, so why would we use the same guidelines for all conditions?
- Scale matters. If you don’t believe that then I really can’t help you. Pretending that an impact that has regional effects is the same as an effect that might affect a couple of acres and can be easily mitigated simply makes no sense.
- When the new science shows that the old rules are no longer relevant, you trust the new science; that is how science works.
- Professional judgement supported by the science is why you hire experts to do these types of analyses.
The funny part is that the authors complain that thresholds from other jurisdictions were “weaker”. Except, as I mentioned previously, if the latest science from another jurisdiction provides better, more relevant information then a good consultant will note that and incorporate that information into their EA.
As I have made clear, the primary complaint in this article is that the EAs were flexible, because they reflected site-specific conditions. Making a standard, model or “threshold” site-specific should be something we all look for in our environmental assessments. Only a bunch of academics would argue that “flexible”, site-specific thresholds are bad things when used in an EA. The reality is that a good EA has to incorporate site-specific conditions. Flexible thresholds are thus a feature of a good EA, not a sign of a deficient one.
Perhaps you should take a sniff as well at UBC’s “Institute for Resources, Environment and Sustainability” who “work extensively with…Canadian Parks and Wilderness Society, Westcoast Environmental Law, and the David Suzuki Foundation.”
The Canadian Parks and Wilderness Society, whose “vision is to keep at least half of Canada’s public land and water wild — forever”, is funded by some of the usual suspects. http://cpaws.org/about/partners
Gerry Butts, Trudeau bestie and main strategist, was CEO of World Wildlife Fund (Canada) from 2008-12, receiving US$360,000 as “severance” when he voluntarily resigned, presumably to organize Justin’s campaign. https://fairquestions.typepad.com/files/letter-to-gerald-butts-29may2017.pdf
Impartiality and objectivity all round.