I was mostly off-line earlier in the year when two publications came out on fugitive emissions in the British Columbia natural gas industry. These two publications were:
- Mobile measurement of methane emissions from natural gas developments in northeastern British Columbia, Canada by Atherton et al. (2017); and
- Fugitives in Our Midst: Investigating fugitive emissions from abandoned, suspended and active oil and gas wells in the Montney Basin in northeastern British Columbia by the Suzuki Foundation (primary author Dr. John Werring).
These two publications were hailed by opponents of the BC LNG industry as destroying the “myth” that BC LNG could be used to help reduce global carbon emissions. I discussed why I believe BC LNG is good for the planet in my previous blog post On the global climate change math supporting BC LNG. Coincidentally, these two publications came out just as a different article (Country-Level Life Cycle Assessment of Greenhouse Gas Emissions from Liquefied Natural Gas Trade for Electricity Generation by Kasumu et al., 2018) appeared in the peer-reviewed press supporting the thesis that BC LNG can help reduce global greenhouse gas (GHG) emissions. Since that paper is behind a paywall, you can get a taste of its contents in this Business in Vancouver article: BC LNG exports would have net GHG reductions: study, or this Intelligence Memo from the C.D. Howe Institute.
Given the importance of this topic I have decided to look more deeply into these two fugitive emissions studies. While Dr. Weaver (leader of our BC Green Party) likes to refer to the Atherton paper as the St. Francis Xavier Study it is important to recognize that both publications were funded, in whole or in part, by the Suzuki Foundation which is a strong opponent of BC’s LNG industry. Unsurprisingly, the results of these studies were seized on by politicians like Dr. Weaver and activists who are fighting against the BC LNG industry. My conclusion from reading these publications is that both have significant flaws and neither represents, in my mind, science that should be considered in any serious policy discussion.
Let’s start with why I called them “publications”. Both were, arguably, “peer-reviewed”. The Atherton paper was peer-reviewed as an open-access article, so the peer reviewers were 1) self-selected, 2) not all necessarily experts in the area under consideration, and 3) not able to exert any authority over the process (i.e. the authors could choose to ignore the peer reviewers). Dr. Werring’s work doesn’t even rise to that level. It was published independently by the Suzuki Foundation and doesn’t even pretend to represent a balanced report; rather, it appears to be part of an anti-LNG campaign by the organization and has to be viewed in that light.
Since it is the more serious of the two, let’s start with the Atherton paper. It presents a novel approach to estimating fugitive methane emissions: the authors installed an “ultraportable greenhouse gas analyzer” (UGGA) in a vehicle, drove it around northeastern BC, and then used their results to extrapolate an emission volume for the entire Montney formation. Now there are some incredibly significant issues with this paper that would have had any editor at a serious journal pulling out their hair, but given the limitations of this blog post I will only highlight the three most critical points.
We need to begin by considering how the authors generated their minimum detection limit (MDL). This is an incredibly important number for this assessment because it is used as a multiplier in almost all of their total emission volume calculations. In this study they took their MDL, multiplied it by the percentage of facilities where they detected emissions, and then multiplied that number by the total number of facilities in the Montney. Thus, if the MDL is flawed, the resulting conclusion is equally flawed.
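As I read it, the extrapolation boils down to a single multiplication chain, which makes the MDL's influence easy to see. Here is a minimal sketch of that logic; every number below is an illustrative placeholder, not a value from the paper:

```python
# Sketch of the extrapolation logic as I read it.
# All numbers here are illustrative placeholders, not values from the paper.

mdl_g_per_s = 0.59            # hypothetical minimum detection limit (g/s)
fraction_emitting = 0.47      # hypothetical fraction of facilities with detected emissions
total_facilities = 1000       # hypothetical facility count for the formation

seconds_per_year = 365 * 24 * 3600

# Estimated annual emissions: MDL x emitting fraction x facility count
annual_tonnes = (mdl_g_per_s * fraction_emitting * total_facilities
                 * seconds_per_year / 1e6)

print(f"Extrapolated emissions: {annual_tonnes:,.0f} tonnes/year")

# The key point: the estimate scales linearly with the MDL, so any
# error in the MDL propagates 1:1 into the headline number.
```

The design choice that matters here is the linearity: double the MDL and you double the inventory, which is why the calibration question below is so important.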
The challenge with the instrument used (the UGGA) is that it is intended to be used in a stationary position when taking a reading; the amount of air it takes in matters for calibration purposes. In this study they operated the instrument while driving around the countryside. To address the fact that the instrument was not being used according to its design specs, the authors reportedly did a bench experiment and came up with a correction factor for their instrument readings (a multiplier of 3.3 times). The problem is that they don’t present any details about how they derived that multiplier or the range of values generated in the process; they simply reported that their “mean level of dilution was about 70%”. They include no error bars or any consideration of uncertainty; they just multiply their observed values by 3.3. This number really matters. It is critical to establishing the total emission volume estimates, yet later in the publication they report numbers to six significant figures derived from an initial value of “about 70%”. Put another way, their headline volume estimate could be 3.3 times smaller and we, the readers, would have no way of knowing. Since the conclusion of the report is that total emitted volumes are 2.5 times higher than officially reported, this factor of 3.3 is pretty darn significant. By omitting the critical calibration data, the authors leave us to simply trust that they got the number right. Most confusingly, this is passed off as a minor issue barely worthy of consideration.
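For what it's worth, a 3.3 multiplier is at least arithmetically consistent with a mean dilution of about 70%: if only 30% of the true concentration reaches the analyzer, you would scale readings by 1/(1 − 0.70) ≈ 3.3. To be clear, this is my reconstruction, not a derivation the authors published, and it also shows why the missing uncertainty matters:

```python
# My reconstruction of where 3.3 likely comes from; the paper does not
# show this derivation, so treat it as an educated guess.

mean_dilution = 0.70                       # "mean level of dilution was about 70%"
correction_factor = 1 / (1 - mean_dilution)

print(f"Implied correction factor: {correction_factor:.2f}")   # ~3.33

# Sensitivity check: a modest change in the dilution estimate moves the
# multiplier (and hence every downstream volume estimate) substantially.
for d in (0.60, 0.65, 0.70, 0.75):
    print(f"dilution {d:.0%} -> multiplier {1 / (1 - d):.2f}")
```

Note how a dilution of "about 70%" spanning even 65–75% would put the multiplier anywhere from roughly 2.9 to 4.0, which is exactly the kind of range the paper should have reported.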
Taken alone, the absence of calibration information should inject serious doubt about the publication, but then you have to consider how they addressed “surveyed facilities”, which account for almost a third of all the emissions they report. In the study they use modelling to justify why we should consider their numbers valid. In doing so they acknowledge that they have uncertainty about their readings from “surveyed facilities” (collection facilities etc.) since those facilities don’t match the conditions of their model. As a consequence, they decide to ditch their established methodology altogether and use a number essentially picked out of thin air:
For this reason, instead of actual measured MDLs we used previously published natural gas facility emission volumes of 2.2 g/s (Omara et al., 2016),
Now this is a pretty big deal, because if you look at the study you see that almost 35% of the reported Montney emissions are based on this assumption. This caused me to go look at the Omara paper and I admit I am confused. I have read the Omara paper forwards and backwards a half-dozen times and do not see where they get that number. The paper doesn’t even look at production facilities; it uses that term to describe multi-well pads. The paper is also studious about not providing individual numbers; it provides ranges with confidence intervals. Finally, the paper addresses facilities in the Marcellus Basin. The natural gas in the Marcellus formation is renowned for being low-sulfur (sweet), which means that leaks from those facilities are bad but not life-threatening. BC and Alberta natural gas is known for being high-sulfur (sour), and as such the facilities have to be much tighter because leaks can be fatal to the employees who visit them. This is a huge deal: the facilities simply aren’t comparable. To add insult to injury, this issue was raised by one of the “peer reviewers”, and the response from the authors was dismissive.
They were told that one third of their total was based on an incorrect figure and, instead of pulling back the paper to confirm their estimate, they simply said they might have to offer a correction sometime later?
The third issue with this paper was raised by one of the peer reviewers (Tony Wakelin from the BC Oil and Gas Commission (BCOGC)) and essentially ignored by the authors. Yes, that is right: the BCOGC decided to have one of their people review the article. Needless to say, the BCOGC, as the regulatory body that controls the database used by the authors, could tell the authors a lot about what the coding used in that database means. For instance, this is what the BCOGC told the authors the term “cancelled” facilities means in the database:
“Cancelled” means the well permit expired without drilling commencing. So these wells do not physically exist in the field and can not be attributed to the release of methane.
Why is this important? Well, the authors attributed pretty significant emissions to those cancelled facilities, which is a physical impossibility.
The BCOGC also raised the calibration issue (the 3.3 times multiplier I mentioned above) and the authors provided this reply:
The primary purpose of the paper was to determine emission frequencies, not to create a highly accurate volumetric inventory
Read that again. The intention of the paper was not to create a “highly accurate volumetric inventory”… except that the headline conclusion of this publication is a volumetric inventory. Literally the only reason people are talking about this paper is that it provides a number (a volumetric inventory) for fugitive emissions in the Montney formation. How can the authors say that was not the purpose when it is the only reason anyone talks about the paper?
Finally, let’s look at the BCOGC’s closing comment:
The fact significant quantities of emissions were attributed to wells that do not exist (i.e. 25 per cent of cancelled wells were reportedly emitting) calls into question the accuracy and validity of the discussion paper. Also, the basis for determining emission factors used in this discussion paper is highly questionable – therefore, this study should not infer that the estimates constitute an emission inventory that could be compared with what is reported under the Greenhouse Gas Emission Reporting
Yes, you read that right: in the final table (Table 2), 11.5% of the total emissions reported in the survey were assigned to wells that WERE NEVER DRILLED! These numbers were then extrapolated so that, magically, 1,989 of these imaginary non-wells supposedly emit 12,948 tonnes of methane a year into the atmosphere. Can you imagine a professional editor letting a paper into their journal when 11.5% of the volumetric assessment is simply imaginary? The response of the authors… crickets.
To summarize, in this publication:
- 53.5% of the reported emission volume calculations could be inflated by as much as 3.3 times;
- 35% of the reported emission volume calculations result from an extrapolation based on a number (2.2 g/s) that never appears in the referenced document, and even if it did, it would represent results from a geologically different formation in a different jurisdiction that produces sweet gas and thus has very different safety requirements; and
- 11.5% of the reported emissions volume calculation is from wells that were never drilled in the first place.
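To see how much these three issues could matter in combination, here is a back-of-the-envelope sketch using only the percentages above, applied to a notional total of 100 units (not the paper's actual tonnage), and assuming the worst case for the paper on the first and third issues:

```python
# Back-of-the-envelope: how the three issues above could shrink the
# headline estimate. Uses a notional total of 100 units, not the
# paper's actual figures.

total = 100.0
mobile_share = 53.5      # share potentially inflated by the 3.3x factor
surveyed_share = 35.0    # share based on the unexplained 2.2 g/s figure
cancelled_share = 11.5   # share assigned to never-drilled wells

# Worst case for the paper: deflate the mobile share by 3.3 and drop
# the cancelled-well share entirely. The surveyed share is left as-is,
# since there is no basis to correct it in either direction.
adjusted = mobile_share / 3.3 + surveyed_share + 0.0

print(f"Adjusted total: {adjusted:.1f} of original {total:.1f}")
```

Under those (admittedly extreme) assumptions, the estimate drops to roughly half its headline value, which shows how sensitive the reported 2.5-times discrepancy is to these unexamined choices.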
I think it is pretty clear that this volume estimate represents problematic science, to say the least, and any expert reading the work would be shocked that a former scientist turned politician like Dr. Weaver would use these results to develop policy.
Now this post is already too long but I still want to touch on the Suzuki follow-up study by Dr. Werring. I really don’t have much to say about this report because it starts off so wrong that it is hardly worth digging deeper. Why do I make such a bold statement? Here is how they describe their methodology:
On each survey day, our field investigators would drive to the chosen area for investigation and access any or all oil and gas facilities (wells, batteries, compressor stations), depending solely on whether they were accessible (whether or not the well site or facility was flagged as a possible emitter was only one determining factor). Accessibility was key. If a facility was behind a locked gate or if the property housing the facility was marked “No trespassing”, the facility was passed by, unless it could be reasonably and safely inspected using the FLIR camera from public roadways
Now re-read that paragraph. Do you see the issue? The problem is called “sampling bias”. Sampling bias occurs when your sampling program over-samples a particular subset of the population relative to the population as a whole. To explain, here is a simple analogy. Suppose you decided to study the effect of drinking on incarceration. One Sunday morning you send a researcher to stand outside the municipal drunk tank just as the police release the revelers from Saturday night. Your researcher dutifully interviews each reprobate about their drinking habits and their overnight incarceration. Suppose at the same time you send a different researcher to ask the same battery of questions outside the Mormon temple in Langley. If the two researchers interviewed the same number of people, your combined results might read something like: 50% of the people interviewed in our study spent the night in jail; 100% of the people who spent the night in jail were drinkers; and 100% of non-drinkers had not spent any time in jail in the last year. Those would be some pretty inflammatory results if you were trying to argue a connection between alcohol and incarceration, wouldn’t they? Almost like there was a bias in your sampling methodology.
Now let’s look at the sampling program Dr. Werring used in his study. He only went to facilities that had zero security (not even a “No Trespassing” sign); his team did not enter facilities that had any security whatsoever. Do you honestly think that investigating only those facilities that are completely unsecured can be used to extrapolate to all facilities in the region? Do the folks from the drunk tank represent all the people in Langley? Of course not. The bias introduced by the sampling plan makes the results essentially meaningless. Certainly Dr. Werring can argue that the natural gas facilities of the cheapest producers (the producers too cheap to even put up a “No Trespassing” sign) leak at a high rate, but then you might expect that result: if you are too cheap to put out a “No Trespassing” sign, you are likely too cheap to maintain your wells effectively. Admittedly, Dr. Werring encountered a bunch of bad facilities that deserve to be inspected by the BCOGC, but branding an entire industry using the results from a subset of its worst members represents bad science and should not be the basis of a policy decision.
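The effect of this kind of accessibility-driven selection is easy to demonstrate with a toy simulation. All of the rates below are invented purely for illustration; the point is the mechanism, not the specific numbers:

```python
# Toy simulation of sampling bias; all rates are invented for illustration.
import random

random.seed(42)

# Hypothetical population: 80% secured facilities with a low leak rate,
# 20% unsecured facilities with a high leak rate.
facilities = [("secured", 0.05)] * 800 + [("unsecured", 0.40)] * 200

# The true population-wide leak rate (expected value).
true_rate = sum(p for _, p in facilities) / len(facilities)

# A survey that can only visit unsecured facilities, like one that
# skips anything behind a gate or a "No Trespassing" sign.
accessible = [(kind, p) for kind, p in facilities if kind == "unsecured"]
observed = sum(random.random() < p for _, p in accessible)
observed_rate = observed / len(accessible)

print(f"True population leak rate:  {true_rate:.0%}")     # 12%
print(f"Rate seen by biased survey: {observed_rate:.0%}")  # ~40%
```

The biased survey reports a leak rate several times the true population rate, and no amount of careful measurement at the visited sites can fix that, because the error is baked into which sites get visited.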
To conclude, fugitive emissions are an important consideration in establishing the effectiveness of BC LNG in reducing global GHG emissions, but anyone who relies on these two publications to establish an emissions inventory has made a bad choice. Both publications have easily recognizable flaws and neither should serve as a critical reference in any further policy assessment.