Are Gas Stoves Really Responsible for 12.7% of Current Childhood Asthma Cases in the US?

The news has been full recently of stories about the risk of childhood asthma caused by natural gas stoves. As someone who specializes in risk assessment and has experience with indoor air chemistry, this seemed right up my alley. As I went digging through the research, however, I discovered that it seemed less about providing a sound scientific examination of the topic and more about generating headlines and press discussion.

The furor all derives from a recent study published in an open-access journal, Population Attributable Fraction of Gas Stoves and Childhood Asthma in the United States (Gruenwald et al., 2022). The paper itself doesn’t present any new data but rather applies a rather arcane type of mathematical attribution analysis (Population Attributable Fraction, or PAF) to the results of a ten-year-old meta-analysis that summarized work from the 1980s and 1990s. Needless to say, the paper doesn’t advance the science in any useful manner and appears designed to induce political change rather than inform policy.

Two of the authors of the paper are Talor Gruenwald and Brady A. Seals. Many of us are familiar with these names, as they both work for the Rocky Mountain Institute (RMI). For those not familiar, RMI is:

an independent, non-partisan, nonprofit organization of experts across disciplines working to accelerate the clean energy transition and improve lives.

Now I’m not going to slag the RMI as it really does do good work. But it is absolutely fair to note that two authors who work for an organization that is dedicated to transforming the global energy system to secure a clean, prosperous, zero-carbon future for all might not be the totally objective scientists you want doing your research on natural gas stoves.

Before we get too deep into evaluating the data used in the paper, I think it is important to start with a little background on its critical statistical tool, the PAF. As described in the literature, PAF

is an epidemiologic measure widely used to assess the public health impact of exposures in populations. PAF is defined as the fraction of all cases of a particular disease or other adverse condition in a population that is attributable to a specific exposure.

That sounds like a pretty useful measure, but there is a hitch. PAF has been around since the 1950s, yet a Google Scholar search of the term finds fewer than 17,000 hits. From an academic perspective, this tells you a lot about the technique. A statistical tool in epidemiology (a field that publishes thousands of papers a year) that has been around for 70 years yet appears in so relatively few papers must have some issues, and PAF absolutely does. The big complaint is that PAF doesn’t work when there are multiple confounding variables. The challenge, for academics unfamiliar with the tool, is that PAF is

found in many widely used epidemiology texts, but often with no warning about invalidness when confounding exists.

So let’s consider asthma as a disease. According to the American Lung Association, asthma can be caused by: family history (genetics), allergies, viral respiratory infections in youth, occupational exposures, smoking, air pollution and obesity. Do you know what a statistician would call each of those SEVEN different causes of asthma? Confounding variables! So here we have a statistical analysis that is invalid in the presence of confounding variables applied to a disease that can be caused by at least a half-dozen other factors, none of which are controlled for in the analysis.
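For readers who want to see the arithmetic, the standard (Levin) formula computes PAF from just two numbers: the prevalence of exposure and the relative risk. The short sketch below uses purely illustrative placeholder values (not the inputs from Gruenwald et al.) and is only meant to show that none of the confounders listed above appear anywhere in the calculation unless the relative risk itself was properly adjusted for them.

```python
# Minimal sketch of Levin's population attributable fraction (PAF).
# The prevalence and relative risk below are purely illustrative
# placeholders, NOT the inputs used by Gruenwald et al.

def paf(prevalence: float, relative_risk: float) -> float:
    """Levin's formula: PAF = p(RR - 1) / (1 + p(RR - 1))."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

if __name__ == "__main__":
    p = 0.35   # hypothetical fraction of children exposed to the risk factor
    rr = 1.3   # hypothetical relative risk from an observational study
    print(f"PAF = {paf(p, rr):.1%}")
    # Note what is missing: genetics, allergies, infections, smoking,
    # air pollution, occupational exposure and obesity never enter the
    # calculation. Unless the underlying relative risk was properly
    # adjusted for them, their effects get silently attributed to the
    # single exposure being studied.
```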

Reading the Gruenwald et al. paper carefully, one discovers that the terms “confounding” and “variable” do not appear. It is thus possible the authors simply did not recognize the problem with using this statistical tool for this type of analysis, an omission that would typically result in a desk rejection at most well-respected journals.

Another challenge with this paper is the data used to derive its conclusions. The research for this paper started with an evaluation of the academic literature, and the authors started where most authors on this topic start: with the 2013 Meta-analysis of the effects of indoor nitrogen dioxide and gas cooking on asthma and wheeze in children by Lin, Brunekreef and Gehring. This is a seminal paper on the topic and I have seen it cited numerous times by those opposed to fossil fuel stoves. The major problem with the paper is that it is old. While it was published in 2013, it relies almost entirely on research articles from the 1980s and 1990s, which from the perspective of indoor air assessment is like the Stone Age. A look at the supplementary material shows that most of the included studies were, by modern standards, very small and had little statistical power.

Given that knowledge, the authors of Gruenwald et al. looked for newer work but unfortunately found no new data. Why? Because

Full manuscripts (n = 27) were independently reviewed…none reported new associations between gas stove use and childhood asthma specifically in North America or Europe.

So there were 27 major studies they could have included in their analysis, but the authors deliberately limited their inputs by requiring that the work be done entirely in North America and Europe because they were looking for “similarities in housing characteristics and gas-stove usage patterns”.

By making this editorial choice the authors managed to exclude the definitive research on the topic: Cooking fuels and prevalence of asthma: a global analysis of phase three of the International Study of Asthma and Allergies in Childhood (ISAAC). The ISAAC study was

a unique worldwide epidemiological research program established in 1991 to investigate asthma, rhinitis and eczema in children due to considerable concern that these conditions were increasing in western and developing countries. ISAAC became the largest worldwide collaborative research project ever undertaken, involving more than 100 countries and nearly 2 million children and its aim to develop environmental measures and disease monitoring in order to form the basis for future interventions to reduce the burden of allergic and non-allergic diseases, especially in children in developing countries

The ISAAC study collected data from 512,707 students between 1999 and 2004. It has incredible statistical power due to its massive sample size, and one of its signature conclusions was:

we detected no evidence of an association between the use of gas as a cooking fuel and either asthma symptoms or asthma diagnosis.

Arguably, in any study to evaluate the “Population Attributable Fraction of Gas Stoves and Childhood Asthma in the United States” a massive, recent, international study that showed that there was no evidence of an association between natural gas as a cooking fuel and asthma might be considered relevant. But no, that landmark study was ignored in this analysis.

Even worse, and I can’t believe I am saying this, even the seminal meta-analysis by Lin, Brunekreef and Gehring barely met their standard. Of the 41 papers evaluated in that meta-analysis, the Gruenwald et al. authors chose to consider only 10 (those where all subjects were from Europe or the US). The decision to rely solely on European and US data was nominally due to the “similarities” between housing characteristics in the US and Europe, but it further degraded the statistical power of their analysis.
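To see why whittling the evidence base down matters, here is a simplified illustration of my own (not a calculation from either paper): the half-width of a confidence interval for an estimated proportion shrinks roughly with the square root of the sample size, so small 1980s-era studies yield very imprecise estimates compared with a half-million-child study.

```python
# Simplified illustration (my own, not from either paper) of how the
# precision of an estimated proportion depends on sample size.
# Standard error of a proportion: SE = sqrt(p * (1 - p) / n).
import math

def approx_95ci_halfwidth(p: float, n: int) -> float:
    """Approximate half-width of a 95% confidence interval for a proportion."""
    return 1.96 * math.sqrt(p * (1.0 - p) / n)

assumed_prevalence = 0.10  # hypothetical asthma prevalence, for illustration only
for n in (500, 5_000, 500_000):
    hw = approx_95ci_halfwidth(assumed_prevalence, n)
    print(f"n = {n:>7,}: 10% +/- {hw:.2%}")
# The +/- range shrinks roughly with 1/sqrt(n): small studies give wide
# intervals, a half-million-child study like ISAAC gives a very narrow one.
```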

Now I am not speaking out of school when I point out that houses in the US are really not more comparable to European homes than to homes in Australia or Japan. Anyone who has travelled to Europe can attest to how different European housing is from US housing, and frankly American houses are not all that comparable to one another either. I would argue that the differences between houses in Nevada and New Hampshire would greatly exceed the differences between those in Nevada and Australia. Thus, it is fair to ask whether imposing this restriction was really about maintaining the internal consistency of the data or whether other factors might have played a role.

To conclude, I can only restate that the Gruenwald et al paper seems to have some clear challenges that would typically preclude it from consideration in a policy-making process.

  • Its underlying data is of low statistical power.
  • Its conclusion is directly contradicted by more recent studies with significantly greater statistical power; and
  • It relies on a statistical tool that is considered invalid in situations with confounding variables yet it is being used to analyze an association that is absolutely rife with confounding variables.

Put simply, this is not the study I would rely on to make a major policy change that will affect millions of people and will cost billions to implement. As to its conclusion: are 12.7% of childhood asthma cases in the US attributable to cooking with natural gas? Based on the points above, that conclusion is almost certainly not the case.


Understanding Risk Assessment as a form of Sustainable and Green Remediation

One of my New Year’s resolutions is to write more posts that explain, in plain language, how our environmental regime in BC protects the public with respect to contaminated sites, and to help clear up common misconceptions about them.

My area of professional expertise is the investigation and remediation of former industrial and commercial sites. My specialization is risk assessment, specifically the assessment of petroleum hydrocarbon contamination and its effects on human and ecological health. For those of you not familiar with the terminology, I have included a background section at the bottom that can help you understand the topic of risk assessment in this context as well as links to previous blog posts where I address issues surrounding contaminated sites.

There is a common fallacy in the environmental and regulatory community that risk assessment is a cop-out: a way to avoid doing “real” remediation, and thus inherently unsustainable. Nothing could be further from the truth. Often risk assessment is the greenest and most sustainable choice for remediating contaminated sites in BC.

A typical example of the negative regulatory viewpoint was presented in the BC Ministry of Environment & Climate Change Strategy (BC ENV) discussion paper Making Contaminated Sites Climate Ready put out in the fall of 2022. The document repeatedly suggests that risk-based instruments should be subject to additional scrutiny without acknowledging that risk assessment often represents a preferred green/sustainable form of remediation.

Historically, the standard approach for a “real” and “permanent” remediation at a hydrocarbon-impacted site was the “dig and dump” excavation. In a dig and dump excavation, contaminated soils are dug out of the ground, along with significant volumes of less contaminated, or even uncontaminated soils, using diesel powered excavators which deposit the soil into diesel trucks to be transported to a landfill.

Given the presence of the hydrocarbons in these soils, they typically cannot be shipped to just any landfill. Instead, they need to go to specially permitted facilities designed to receive and treat this type of waste soil. Most of these facilities are located in the Lower Mainland (in Richmond or Abbotsford). If your impacted site is in the Interior, this might require a 1,000+ km round trip for soil disposal.

The trips are carried out by diesel trucks and each trip presents a real risk on the roads. The trucks travel along community roads to the highway, then often hundreds of kilometres on the highways, before driving through more residential and busy urban communities to reach their destination. Each trip can generate hundreds of kilograms of carbon emissions as well as harmful diesel exhaust, and multiple trips are typically required to achieve numerical closure.
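As a back-of-envelope illustration (my assumptions, not figures from the BC ENV document): a heavy truck burning roughly 38 L of diesel per 100 km, at about 2.7 kg of CO2 per litre burned, gives the following per-trip estimate.

```python
# Back-of-envelope CO2 estimate for one 1,000 km soil-haul round trip.
# Fuel economy and emission factor are assumed values for illustration;
# actual numbers vary with truck, load and route.
ROUND_TRIP_KM = 1_000            # e.g., an Interior site to the Lower Mainland and back
FUEL_L_PER_100KM = 38            # assumed average for a loaded/empty highway truck
CO2_KG_PER_L_DIESEL = 2.7        # approximate combustion emission factor
CARBON_FRACTION = 12 / 44        # mass of carbon per mass of CO2

fuel_l = ROUND_TRIP_KM / 100 * FUEL_L_PER_100KM
co2_kg = fuel_l * CO2_KG_PER_L_DIESEL
print(f"~{fuel_l:.0f} L of diesel, ~{co2_kg:.0f} kg of CO2 "
      f"(~{co2_kg * CARBON_FRACTION:.0f} kg of carbon) per round trip")
# A large excavation needing dozens of truckloads multiplies this accordingly.
```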

Once the waste soils arrive at a permitted facility they generate dangerous vapours, while more diesel and greenhouse gas emissions are given off during treatment. Once treated, the soils then get sent to the main landfill for final disposal, taking up limited landfill space. But remember, you are only halfway done at this point. Having dug out the hole, you still need to fill it in.

To fill in the hole you need to excavate clean fill from somewhere else and transport it to your site, which entails further transportation emissions, transportation risk and ecological consequences, because that fill soil has to come from somewhere.

To summarize, a typical remedial excavation generates massive GHG and diesel emissions; poses transportation risks in busy communities; uses up non-renewable landfill space; and requires the excavation and transportation of clean fill, with its own further emissions, transportation risk and ecological consequences. None of this is recognized in the BC ENV document.

So what is the alternative? The Environmental Management Act (EMA) provides the legislative framework for addressing contamination in British Columbia. The Contaminated Sites Regulation  provides the specific regulatory regime for managing contaminated sites under the EMA. Both identify risk assessment as a viable mechanism to remediate a site because it is a safe, environmentally friendly mechanism of addressing contamination. The decision to remediate via risk assessment has been a standard remedial approach in British Columbia for decades and BC ENV has repeatedly supported the use of risk assessment in their protocols and guidance documents. If a risk assessment demonstrates that there are no unacceptable risks to human health and the environment at a site, that site is considered remediated to risk-based standards.

Under risk assessment a qualified professional can develop a risk management plan to ensure that a contaminated site does not pose unacceptable risks to human or ecological health. Sometimes a risk assessment cannot make that demonstration and other remedial options may be necessary, but often a series of relatively simple precautions can eliminate any real risk the contaminated site poses to the community.

This is often the case in parts of Vancouver where the deep subsurface is dominated by dense glacial tills (sand and gravel that has been compacted by glaciers until it is as hard as concrete). Glacial tills are not only as hard as concrete, they are virtually impenetrable to contamination and contain no extractable groundwater. Contamination confined by a glacial till poses no short- or long-term risk to human or ecological health and will eventually biodegrade (naturally attenuate) until it no longer exists. Building a properly designed parking structure (as part of a high-rise building, for example) over top of this type of contamination can ensure the contamination poses zero risk to the community as it attenuates over time.

Ultimately, the choice will often be to either leave contaminated soil where it poses no current or reasonable future human or ecological harm or conduct a remedial excavation which would generate massive greenhouse gas and diesel emissions, create additional traffic on the highways and in busy urban and residential corridors while taking up limited landfill space and requiring the importation of clean fill soil to replace the removed material.

From a sustainable and green remediation perspective the choice could not be any more clear. Risk assessment is often by far the best remedial option both economically and using any sustainability measure anyone can invent. A site remediated by risk assessment typically avoids significant ecological consequences, emissions and human and ecological risks associated with unnecessary dig and dump excavations or gas-fired oxidizers in vapour extraction systems while providing permanent solutions to contamination. This makes risk assessment a legitimate green approach to remediation.

Background

Because I deal with risk all the time in this blog, I have prepared a series of posts to help explain the risk assessment process. The posts start with “Risk Assessment Methodologies Part 1: Understanding de minimis risk”, which explains how the science of risk assessment establishes whether a compound is “toxic” and explains the importance of understanding dose/response relationships. It introduces the concept of a de minimis risk, that is, a risk that is negligible and too small to be of societal concern (ref). The series continues with “Risk Assessment Methodologies Part 2: Understanding “Acceptable” Risk” which, as the title suggests, explains how to determine whether a risk is “acceptable”. I then go on to cover how a risk assessment is actually carried out in “Risk Assessment Methodologies Part 3: the Risk Assessment Process”. I finish off the series by pointing out the danger of relying on anecdotes in a post titled Risk Assessment Epilogue: Have a bad case of Anecdotes? Better call an Epidemiologist.

Previous posts on Contaminated Sites topics:

A primer on environmental liability under BC’s Environmental Management Act.

On the Omnibus Changes to the BC Contaminated Sites Regulation


Understanding the role of, and opportunities for, Canadian fossil fuels in our net zero future

In my review of Seth Klein’s A Good War, I took issue with the author’s statement that in order to fight climate change we need to eliminate the fossil fuel industry. I have repeatedly pointed out how ridiculous that claim is and think it is time to put some numbers to my claims about fossil fuels and their continued role in our existence as a civilized society.

Sadly, as a start to any post of this type I have to do my climate acknowledgement:

I believe climate change is real and is one of the pressing concerns of our generation. I have spent years advancing low-carbon and zero carbon options and agree that we need to achieve a net zero economy, ideally well before 2050.

It is sad that I have to do a climate acknowledgement but, unfortunately, there are so many bad-faith actors out there who insist that any data-driven discussion of climate change and its mitigation makes me an old-school climate denier or part of the “New Climate Denialism”. I am, of course, neither, so I do the acknowledgement as a matter of rote.

My scientific area of interest has been evidence-based environmental decision-making, and seeking pragmatic and effective reductions in our greenhouse gas emissions is an expression of that interest. Why is the last part important? Because a lot of the demands from the climate NGOs and activists will not reduce our greenhouse gas emissions. Rather, as I have pointed out, many of these ill-considered demands will increase emissions, decrease air quality, and increase ecological risk.

Going back to the topic of this blog post: in his book the author insists that as part of our fight against climate change we need to eliminate the fossil fuel industry, and I argue that the claim is ridiculous. Who is right?

Absolutely no one can deny that the vast majority of fossil fuel use involves using oil and its refined products as a transportation fuel or for the generation of heat or energy. According to the International Energy Agency (IEA) world oil demand is forecast to reach 101.6 million barrels a day (Mb/d) in 2023. Of that, transportation represents about 60% of total oil demand. But that leaves 40% of oil demand that is not from transportation.

The important thing to understand is that fossil fuels aren’t just a transportation fuel or a heat source. Fossil fuels are also the raw inputs for any number of technologies that are absolutely necessary to maintain our modern society. From pharmaceuticals, to petrochemicals, to fertilizer, to synthetic rubber, to carbon fibers to asphalt, fossil fuels are simply not replaceable given our current technologies and societal and ecological expectations.

Let’s start with the biggest user: pharmaceuticals and petrochemicals. The IEA has produced an incredibly useful document detailing our reliance on petrochemicals called The Future of Petrochemicals. In this document the IEA indicates that we currently use the equivalent of 12 Mb/d for petrochemicals, and that value is increasing as we look to build lighter vehicles, stronger plastics and more items from carbon fibers. From 2020 to 2040, BP expects plastics to represent 95 percent of the net growth in demand for oil (demand increasing by almost 6 million barrels/day). That puts oil demand from petrochemicals and pharmaceuticals at approximately 18 Mb/d by 2040.

Recognize that most of this demand cannot be met through other sources. Petroleum hydrocarbons represent a massive natural bounty. They are the result of millions of years of solar energy converted into chemical form by plants and trapped in complex molecules that have been compressed to liquid form by huge geological forces. That process cannot be readily replaced with biofuels or other modern sources.

Another huge user of crude oil is asphalt. In 2019, global demand for asphalt was projected to be around 122.5 million metric tons (742.5 million barrels). That is better than 2 Mb/d of crude oil demand just for asphalt. Heavy oil is by far the best source of asphalt.

Another major demand for oil is for synthetic rubber. In 2021 the world used 26.9 million tonnes of rubber of which 53% was synthetic (derived from hydrocarbons). Rubber is another product that can be made via organic sources, but doing so increases risk to ecosystems from deforestation. The better ecological choice is via crude oil.

Adding up the various products, the demand for crude oil for non-energy, non-transportation uses will be around 20 million barrels of oil/day. That is 5 times Canada’s projected maximum production. That demand will continue to exist even once we have eliminated any transportation or energy demand.
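Here is the rough tally, using only the figures cited above (the result is only as good as those inputs, and synthetic rubber and other smaller non-fuel uses are left out of the numeric sum):

```python
# Rough tally of the non-energy, non-transportation oil demand figures
# cited above (values as quoted in the post).
petrochemicals_mbd = 12 + 6           # ~12 Mb/d today plus ~6 Mb/d of growth to 2040
asphalt_mbd = 742.5e6 / 365 / 1e6     # 742.5 million barrels/year of asphalt demand
total_mbd = petrochemicals_mbd + asphalt_mbd

print(f"Asphalt alone: ~{asphalt_mbd:.1f} Mb/d")
print(f"Petrochemicals + asphalt: ~{total_mbd:.0f} Mb/d")
# Synthetic rubber and other non-fuel uses push the total toward the
# ~20 Mb/d cited in the text -- several times Canada's total production.
```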

So why is this important? Because we know the fossil fuel industry will be generating emissions to produce those 20 Mb/d and the countries that can produce their oil for the cheapest prices (including carbon taxes) while generating the fewest emissions will have an indefinite and ongoing market all to themselves.

As I have pointed out previously, Canadian oil sands produce very low cost oil, with a high asphalt component, and our existing production has an incredibly low depletion rate. We are ideally situated to be one of the last producers standing if we can produce net zero oil (and gas) to fill the perpetual oil and gas markets.

This brings us to the second half of our data-driven policy discussion. Were we to believe the faulty claims of the anti-oil NGOs then there would be no justification for developing technologies like carbon capture and storage or direct air capture of carbon dioxide. In fact, the activist community regularly argues we shouldn’t invest in these technologies. But as I have demonstrated above, there will be a tremendous ongoing demand for net zero crude oil for the indefinite future.

But the critical consideration is the “net zero” component. We need to invest right now in the technologies to turn our fossil fuel industry to a net zero one by reducing emissions at every possible step and developing tools to sequester or trap carbon to address the emissions we can’t eliminate. At our current price point we have a significant opportunity to permanently grab a slice of that ongoing oil demand, especially the heavy oil component which cannot be supplied by our most likely net zero competitors.

I am often asked why I appear to be supporting the fossil fuel industry with posts like this one. The answer is simple. You can’t solve a problem until you identify and diagnose the problem. The activist community has advanced the idea that in order to effectively fight climate change we need to eliminate the fossil fuel industry. As I have shown above, that demand is simply not achievable. I am also an ecologist and a pragmatist and recognize that every action has a consequence. I want my kids to grow up in a society that still has healthcare, wildlands and a functioning ecosystem.

The fossil fuel industry is a necessary one and has the potential to provide reliable revenues for generations to come. But that will only happen if we ignore the anti-oil activists and develop the tools to get our oil production to net zero. Alternatively, we can do nothing and watch our industry die in the next 10-20 years and with it all the revenues that we currently use to pay for our social services and to help fight climate change.


Reviewing Seth Klein’s A Good War – An interesting historical treatise that ignores the details of climate science

I finally bit the bullet and read “A Good War” by Seth Klein. The book describes itself as an exploration of:

how we can align our politics and economy with what the science says we must do to address the climate crisis.

But as I will discuss below, in my opinion the book presents some really interesting historical information while ignoring the details, and frankly the science, of what it will take to fight climate change. The book is written in a compelling style and is meticulously footnoted when discussing the political and economic conditions of the war era; but the high quality of his historical research is juxtaposed with the absolute dearth of reliable referencing when it comes to modern day climate science.

Ultimately the book is less about fighting climate change as an energy/GHG emissions issue and more about fighting the idea of climate change, where “climate change” is used as a tool to re-align our political and economic systems to meet the author’s political ideals.

This book started out really badly for me because right from the start it was clear it was not going to rely on any peer-reviewed or reliable science. In his section on the “New Climate Denialism” the author provides the technical basis for his arguments against the Trans Mountain Expansion Project (TMX), the CGL pipeline and the fossil fuel industry in general. This should represent the critical intellectual core of his book and its quality should be consistent with his research into the war years. Instead, his understanding of these projects ends up being based on a handful of Canadian Centre for Policy Alternatives (CCPA) articles and a few Globe and Mail articles, all of which have been repeatedly debunked in the scientific literature. Let’s summarize:

He relies on a Marc Lee Globe and Mail article to claim that BC LNG has “a GHG profile very similar to coal”. This claim is demonstrably false and is contradicted by the peer-reviewed research.

His claim that the Trans Mountain will not generate better returns for oil to Asia came from another of his friends, J. David Hughes. That claim is demonstrably untrue, with more here.

His claim that the Trans Mountain will add “13 to 15 million tonnes” of carbon emissions, “equivalent to two million cars”, isn’t even referenced; rather, it is attributed to Katherine Harrison, a “UBC political science professor.” The claim comes from a National Observer article by Dr. Harrison. The problem is that the actual reference from which that range is derived said those values would only be valid for new production.

As I have written numerous times, there is no data to support the argument that the TMX will increase Canadian oil production or our carbon emissions. Rather, the information from the energy regulators is clear that the production that will move down the pipeline is not dependent on the pipeline. The only new production in development in Alberta will be completed at a price point where it is still financially viable whether the pipeline is built or not. There is no production in the development queue that has a price point where it is only viable with the completion of the TMX. As such, this production will be completed in the absence of the pipeline. In reality the pipeline will reduce transportation risk and emissions compared to the existing transportation options for that same production. The pipeline is a win for the fight against climate change.

More problematically, throughout the book the author argues we need to eliminate the fossil fuel industry. This demand is simply counter-factual. Fossil fuels are both an energy source and a source of necessary primary materials that form the basis of our modern world. As the International Energy Agency points out, petrochemical feedstock accounts for 12% of global oil demand, or between 12 and 14 million barrels a day. From pharmaceuticals, to petrochemicals, to fertilizer, to synthetic rubber, to carbon fibers, to asphalt, fossil fuels are simply not replaceable given our current technologies and societal and ecological expectations.

That 12 to 14 million barrels a day is expected to increase, driven by growing demand for plastics, fertilizers and other products. This represents 3 to 4 times Canada’s total oil production, and for many of these uses heavy oil is the preferred hydrocarbon source; Canadian heavy oil is among the lowest-emission heavy oil on the market. Similarly, his plans for eliminating nitrogen fertilizer would starve out our population. Even in a net zero future we will not be eliminating the fossil fuel industry.

As for electricity sources, anyone reading the book would totally forget that nuclear energy exists. A look in the index shows a complete lack of discussion of the topic. Similarly geothermal (which requires fracking by the way) is given short shrift.

Given all the above, I have to laugh at the author’s suggestion that the “CRTC could demand that reporting be scientifically factual” since doing so would cause them to stop his friends from publishing their faulty claims.

Now I am going to do something unexpected. I am going to point out that from a big picture perspective I think the author convinced me that only our government can mobilize the resources needed to achieve the fundamental changes necessary to reach Net Zero. No, we will not be eliminating the fossil fuel industry and yes we will be exporting LNG to Asia because both will help reduce global emissions. But we also need to acknowledge that the private sector alone is not going to achieve our goals. We need a strong government willing to strategically spend a lot of money and write good regulations to get us to Net Zero.

The author’s approach to using the power of government to force the public into converting from fossil fuel-based heating and transportation looks, to me, to be the best way to achieve our Net Zero goals. Similarly, I was convinced that the government leading in renewable and low carbon technologies would be the most efficient and likely most profitable (from a Canadian economy perspective) approach to the problem.

I was confused, however, by how a trained economist like the author could completely omit the economic and political limitations of his plans. Canada is not an island. We live in an inter-connected world of trade agreements and supply chains, and the book is incredibly light on how his approach would fare once our international trading partners (and multi-national corporations) decided to challenge it. During WWII Canada had the benefit of allies working towards the same goals, using the same means. The go-it-alone approach of A Good War is the exact opposite of that situation.

Ultimately, the quotation that absolutely typifies this book for me is one he presents from Greta Thunberg.  In the quote Greta says:

Avoiding climate breakdown will require cathedral thinking. We must lay the foundation, while we may not yet know exactly how to build the ceiling. 

Any serious thinker would instantly recognize how completely insane that statement is. A building foundation needs to be designed to handle the expected stresses associated with the building design. If you build a foundation without first designing the building you will either need to build a smaller, less effective design to address the limitations in the foundation; or you will need to massively overbuild the foundation wasting time and resources; or you will need to tear out the foundation once completed and lay a new one that reflects the needs of the final design.

Put another way, before you can come up with a solution to a problem you have to be able to diagnose the problem and to do that you need to understand the problem. Throughout this book the author talks about how to fight a problem he is unable to describe. He uses terms like “follow the science” as an alternative to describing what he actually wants done. His entire thesis misses that the fight against climate change isn’t just about carbon or methane, it is about energy and raw materials as well.

Oddly enough, even as the author mangled the energy and climate science he did a pretty reasonable job of convincing me that part of what he wanted accomplished was both possible and even necessary. I suppose that makes the book a partial success from his perspective.

To summarize, in A Good War the author makes it clear he really doesn’t understand our climate challenge from a technical and scientific perspective. To use a metaphor from the book, the author builds his cathedral using a flawed foundation, resulting in a structure unable to support his basic premise. It is worth the read for the historical perspective it provides, but sadly like many recent tomes on climate change, the book has less to do with fighting climate change and more to do with eliminating/defeating Neoliberalism.


BC’s new School Food Guidelines: an attempt by bureaucrats to squeeze the joy out of our kids’ childhoods while stripping away parental choice

I am the parent of three school-aged kids and the president of our local elementary school Parent Advisory Council (PAC). Last night our PAC looked at BC’s Proposed 2022 BC School Foods Guidelines For Food & Beverages in K-12 Schools and the accompanying Ministry’s rationale for the proposed 2022 Guidelines.

It is the opinion of our PAC that these documents represent massive bureaucratic overreach and read like they were written by bureaucrats instructed to suck the joy out of our kids’ childhoods while simultaneously using their bureaucratic power to eliminate parental choice in how we raise our kids. As a bonus, these Guidelines will kill some of our PAC’s most successful fundraising. I hope that after reading this post you will rush to your computer to fill out their feedback form to tell these bureaucrats to get out of the business of trying to parent our kids and return parental choice to parents, where it belongs.

For those unfamiliar with the 2022 School Food Guidelines, they are nominally intended

to support healthy food environments at school by increasing access to healthy food while limiting access to unhealthy food.

but what they also explicitly admit is that

The Guidelines are for adults making food decisions on behalf of students in a school setting.

They are literally telling us that this is about bureaucrats taking away parental choice about how we feed our kids.

Let’s look at some examples. These guidelines don’t just deal with the food served in cafeterias or food prepared by school staff; they also apply to hot lunch programs and bake sales. Let’s start by considering bake sales; here is a list of baked goods.

I can just imagine a bake sale under the 2022 Guidelines. No cakes or pies, no cookies or muffins, no home-made treats. Instead we can sell loaves of rye or bulgur bread, or whole wheat muffins made with low-fat milk and no refined sugar, butter or fat.

One of the most successful fundraisers for our PAC is the hot lunch. These happen at most once a month and involve fun, easy-to-prepare foods that the kids will eat: hot dogs, Subway sandwiches, pizza, even Cobb’s bread and Booster Juice. None of these options would be allowed under the 2022 draft Guidelines. Hot dogs are specifically mentioned as unacceptable, pizza has processed cheese and meat, and Subway sandwiches have deli meats and soft, processed cheese.

I have heard a number of people saying that these are only “Guidelines” and are thus not mandatory. That is not true. Once a School District chooses to put these “Guidelines” into their policy documents they become mandatory for the schools in those districts. No administrator is going to turn around and tell their District that they have decided to ignore District policies.

Let’s be clear here. I am not saying schools should feed kids donuts and pizza every day but that is not what we are talking about. The Guidelines lack proportionality and don’t provide exceptions for special events. I can understand a set of Guidelines for general use that acknowledges that there will be exceptional cases but the Guidelines make it absolutely clear they brook no exceptions. Consider the Family Fun Fair.

Before Covid our school had its annual Family Fun Fair, a community event attended by well over half of our school community. It included a concession that sold hot dogs and hamburgers. You could buy an ice cream treat and, of course, on a hot spring night the kids could get popsicles or Freezies. Besides the concession there were lots of little games where the kids could win a toffee or a sucker. This is not a weekly or monthly event, it happens once a year…and the Guidelines would make it impossible. The Guidelines explicitly identify fun fairs and say no hot dogs, no popsicles and no treats of any kind. Think I am joking? Look below at the list of allowed treats…but we can try to sell cottage cheese and whole milk…that will go over really well on a hot spring evening.

One of the teachers at our school gives children a Hi Chew as a special reward for reading success. Another will give out small packs of gummy bears or a sucker to take home. All these rewards will cease to be allowed under the new Guidelines. I think we all agree that teachers shouldn’t need to bribe kids to get them to read, but eliminating virtually every treat used as a reward takes things a step too far.

How about another example? Each year we have a regional track meet. The event occurs in late spring, when it can get pretty darn hot, and the concession will sell sports drinks, including drinks designed specifically to replace the electrolytes lost by kids exercising hard in the heat. Yet the draft Guidelines literally identify electrolyte replacement drinks as being on the naughty list. Young athletes working hard in the sun don’t get to replace their electrolytes. Instead they can have water or maybe some plain unsweetened milk, just like Olympic athletes drink at their events.

Ultimately, what these inflexible draft Guidelines completely miss is that all these PAC and school food programs are optional. Parents can opt their kids in or out of the programs. It is about parental choice and how we want to raise our kids. There are plenty of parents who don’t like treats at school and they have the right to say no to optional school food programs, but under the draft Guidelines parental choice has been utterly removed. The bureaucrats don’t trust us to feed our kids. They want to be the final arbiters of what our kids eat and what they drink.

The thing that angers me the most about these draft Guidelines is that they have been created by unelected bureaucrats who were never given a public mandate to make this significant a change. We recently had a provincial election but these draft Guidelines were kept secret until after the election. I paid attention during the election and the current education minister certainly did not run on a platform of destroying PAC fundraising and making school miserable for kids. Had the current government run on a platform of eliminating parental choice and giving this type of power over our kids to bureaucrats they would never have been elected.

The other point I have mentioned in passing, but which really matters, is that all these changes will essentially eliminate our school PAC funding structure. Virtually every major fundraiser will be affected, with most being eliminated. No hot lunches, no Christmas chocolate sales, no bake sales, no fun fairs, no concessions at sporting events.

In BC, PACs play an incredibly important role filling in the gaps left by the chronic underfunding of our education system, and the new draft Guidelines will essentially eliminate my PAC’s ability to raise the money necessary to underwrite field trips, to supply financial support for enrichment supplies and teaching aids, and even to provide more books for our school library. PACs help pay for clubs and events, and all of that depends on funding…and our government is not giving our school that funding.

To summarize, these new draft Guidelines are a power grab by unelected bureaucrats who want to take decision-making about raising our kids away from parents. They will eliminate our PAC’s most effective fundraisers and ultimately won’t make a major difference in student health. I urge my fellow parents to fill out the feedback form provided by the ministry, and remind everyone that you also might want to write or call your local MLA or the Education Minister to let them know how you feel about these draft Guidelines.


Why you needn’t fear the “Dirty Dozen” fruits and vegetables

There are certain things you can count on with the coming of spring. Two of the earliest are the arrival of the first Mexican and Californian strawberries in the produce aisle and the Environmental Working Group’s (EWG) annual “Dirty Dozen” report misrepresenting the risks of eating said strawberries. I have previously written about EWG’s reporting of risk but want to address them again because there is more to say about their approach to science communication.

For those not familiar with EWG, they are an organization partially funded by organic food trade organizations and organic producers. Absolutely coincidentally, each year they produce a list of fruits and vegetables they feel have excessive pesticide residues while simultaneously suggesting that consumers rely instead on more expensive organic alternatives for their fruit and veggie choices.

Sadly for science communication, their annual Dirty Dozen report regularly gets picked up by news outlets desperate to draw readers to their sites. This week I found over a dozen links to this report including ones from the Vancouver Sun, The Province, and The National Post.

In reading the Dirty Dozen report the first thing to understand is that analytical chemists are extremely good at identifying infinitesimally small concentrations of discrete chemicals in mixtures. As I pointed out in a previous post, analytical chemistry has become so precise that a modern mass spectrometer can distinguish down to the parts per trillion range. That would be 1 second in 30,000 years. When an activist report says they found “detectable” concentrations of a pesticide in a sample you should take that claim with a grain of salt, since that same analysis has the capacity to find a single grain of salt on a 50 m stretch of sandy beach.
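For anyone who wants to check that analogy, the arithmetic is straightforward:

```python
# Quick check of the "1 second in 30,000 years" analogy for one part per trillion.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
seconds_in_30k_years = 30_000 * SECONDS_PER_YEAR
print(f"{seconds_in_30k_years:.2e} seconds")  # ~9.5e11, i.e. roughly 1e12
# One second out of roughly 10^12 seconds is, to within round-off,
# one part per trillion.
```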

As a specialist in risk assessment, the first thing I look for in a report like the Dirty Dozen is the identified concentrations. They will tell me the true story about whether there are any real risks. The absolute tip-off about the Dirty Dozen report is that it does not present actual concentrations for the pesticides identified in the fruits or vegetables in the report. All they say is that pesticide residues were identified.

There is a simple rule of thumb in risk communication. If a toxicological report doesn’t give you the concentrations of a compound it is because the authors don’t want you to see those concentrations. This is not the sort of thing that happens by accident.

But that is not the only way in which the report keeps their readers in the dark. In toxicology, risk is dependent on exposure concentrations and professional toxicological bodies determine acceptable exposure concentrations through detailed, publicly-available, peer-reviewed research. The EWG reporting doesn’t even use toxicological terms in their reports, instead referring to their preferred concentrations as “benchmarks” without ever explaining what that term actually means.

Most importantly, they never explain the basis for their benchmarks. They don’t explain how they determine whether a concentration is safe or not safe. Their calculations have not been widely shared but they don’t appear to be based on the peer-reviewed toxicological literature. The best I can tell is that the values are arbitrary. Consider their benchmark for glyphosate. On their page How Does EWG Set a ‘Health Benchmark’ for Glyphosate Exposure? they write:

EWG calculated a health benchmark for the total amount of glyphosate a child might ingest in a day. EWG’s benchmark is 0.01 milligrams per day significantly lower than both the Environmental Protection Agency’s dietary exposure limit and California’s No Significant Risk Level.

There is no rationale provided to justify or support their benchmark.

For the record, the EPA has systematically (and publicly) reviewed the peer-reviewed toxicological research for glyphosate and has identified a safe dietary limit of 70 mg/day. California, which has a standard based on slightly different criteria, says a safe number is 1.1 mg/day. EWG’s undocumented benchmark (the one they use in their reports) is orders of magnitude lower than the levels identified as posing no significant risk based on the peer-reviewed toxicological literature. To my eye, EWG simply chose the lowest detection limit available from their research lab as the basis of their benchmark.
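To put “orders of magnitude” into perspective, here is the simple arithmetic using the three figures cited above:

```python
# Comparing the glyphosate benchmarks cited above (all in mg per day).
ewg_benchmark = 0.01      # EWG's self-declared "health benchmark"
california_nsrl = 1.1     # California No Significant Risk Level, as cited above
epa_dietary_limit = 70.0  # EPA safe dietary limit, as cited above

print(f"California NSRL is {california_nsrl / ewg_benchmark:.0f}x the EWG benchmark")
print(f"EPA dietary limit is {epa_dietary_limit / ewg_benchmark:.0f}x the EWG benchmark")
# 110x and 7,000x respectively -- i.e., two to nearly four orders of magnitude.
```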

What the above tells you is that when EWG says something isn’t safe it is not based on the peer-reviewed science. That is not how good science works. In toxicology you don’t just get to declare something is not safe without explaining how you came to that conclusion. Consider a thought experiment:

Imagine that I, a highly credentialed scientist, created my own private “benchmark” for trip hazard risks. Imagine I claimed that individual grains of sand on the sidewalk represented dangerous trip hazards to children. Now, it is generally understood that children don’t trip over individual grains of sand, but the grains are detectable on the sidewalk if you look carefully enough. Imagine I then wrote a report indicating that the presence of grains of sand on the sidewalk posed a real and dangerous tripping hazard to neighborhood children and suggesting that families buy expensive leaf blowers to protect their children from these unsafe conditions. Does anyone imagine I could get dozens of media outlets in Canada to publish a story on my report detailing the risk of individual sand grains and promoting the sale of leaf blowers? Of course not, because unlike with the toxicology, every parent in Canada would recognize that my “benchmark” was invalid.

Now I would love to write a snappy conclusion to this blog post, but happily a peer-reviewed academic journal beat me to the punch. As Winter and Katz wrote in their review of an earlier edition of the Dirty Dozen report (in Dietary Exposure to Pesticide Residues from Commodities Alleged to Contain the Highest Contamination Levels):

In summary, findings conclusively demonstrate that consumer exposures to the ten most frequently detected pesticides on EWG’s “Dirty Dozen” commodity list are at negligible levels and that the EWG methodology is insufficient to allow any meaningful rankings among commodities… our findings do not indicate that substituting organic forms of the “Dirty Dozen” commodities for conventional forms will lead to any measurable consumer health benefit.

Given the above I only wish Canadian content providers recognized when they were being played and stopped giving EWG so much free earned media coverage every year.



Why an over-budget Trans Mountain Pipeline Expansion Project will still not be a financial loser for the Federal government

Last week new details emerged about ongoing cost increases on the Trans Mountain Pipeline Expansion (TMX) Project. If the news media is to be believed, the price of the pipeline will likely exceed $17 billion, a far cry from the initial $7.4 billion price tag when the federal government bought the project. Opponents of the project will claim that at this price the TMX is a financial loser that should be abandoned. As I will demonstrate in this post, that claim is demonstrably false.

To summarize my argument, the opponents of the project will argue that the pipeline will possibly have a negative net present value (NPV) at its current $17 billion price tag. But as I will show, when it comes to government projects NPV is only part of the picture, and in this case, it is only a tiny piece of the much bigger economic picture. Except in the case of massive losses, the TMX makes absolute financial sense from a government perspective because the government has more than one way to generate revenue from this project.

I went into detail about the Parliamentary Budget Officer’s (PBO’s) report on the valuation of the TMX in a previous post (Understanding what the PBO report says about the Trans Mountain Pipeline Expansion Project). The PBO report presents numerous scenarios and depending on the cost of the project, the financing costs and other factors, the project may or may not have a positive NPV.

What does a negative NPV mean? Well, let’s think about why a company builds a pipeline. When Kinder Morgan proposed the pipeline, it had a simple plan: build a pipeline for $4 – $7 billion and then sell space (tolls) on that pipeline at a price that allowed it to recoup its costs plus generate a profit for its shareholders. The challenge Kinder Morgan faced was that its only source of revenue on the project would be the tolls on the material transported by the pipeline. For Kinder Morgan, the NPV of the pipeline would really matter. If it was unable to recoup the costs of construction over the lifetime of the project, then the project would be a money-loser and a financial drain. Companies don’t last long if they regularly build projects that generate a negative NPV.

In my earlier post I also went into detail into the concepts of “optionality” and the “WTI-WCS price differential”. To save you time I will copy some text from that post here:

Optionality refers to the availability of more pipeline export capacity to more downstream markets for Western Canadian oil producers. Optionality allows shippers more opportunities to maximize returns and reduce the netback disadvantage, reflected in the price differential between West Texas Intermediate (WTI) and Western Canadian Select (WCS)

The PBO also notes:

That analysis determined that a reduction in the WTI-WCS price differential of US$5 per barrel would, on average, increase nominal GDP by $6.0 billion annually over 2019 to 2023.

When considering optionality and the WTI-WCS price differential we are reminded that the federal and provincial governments are not private corporations with limited sources of income. Governments generate revenues from a variety of direct and indirect sources.

Consider the building of the Trans Mountain. When the government spends $17 billion building a pipeline, it generates tax revenues on that spend, and the money invested has a multiplier effect throughout the community which generates more revenue. When a crown corporation pays GST, that is direct revenue to the very government paying that crown corporation’s budget. Similarly, when a crown corporation pays staff to build a pipeline, that staff remits income taxes on all their income. Thus, a $17 billion project doesn’t actually cost the federal government $17 billion but rather $17 billion minus the taxes and other revenues the government collected from that construction. Moreover, this type of spending generates spin-off economic activity beyond the construction itself.

If taxes and direct economic spinoffs were the only benefits from the project, then even the government could only afford a small loss in NPV over the long term since they can only make up so much value in taxes. But thankfully, those are really only secondary benefits. The primary benefit of the project is in optionality and its larger effect on national GDP.

When TMX is complete, it will increase optionality and will increase the value of the oil moved down the pipeline (as described by the PBO). Line 2 is projected to move 540,000 barrels/day. If optionality increases the value of that oil by a single dollar per barrel, the pipeline would generate $540,000/day of added value to the economy at no additional cost. That multiplies to about $200 million/year per dollar of increased value. Remember, this is simply an increase in the value of existing production that would otherwise still be moving by rail to Asia, California or Texas. It is pure cream that requires no further effort once the pipeline is built. If we use the PBO estimate of a $5 increase in value, that comes out to about $1 billion a year in added direct value from the TMX. That $1 billion means substantially higher royalties and higher tax revenues. That is more money for the government.
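For anyone who wants to check the arithmetic, here is the calculation using the figures above (the 540,000 barrels/day throughput and the PBO’s US$5/bbl differential scenario):

```python
# Back-of-envelope value of a narrower WTI-WCS differential applied to
# Line 2 throughput, using the figures cited above.
BARRELS_PER_DAY = 540_000
DAYS_PER_YEAR = 365

def added_value_per_year(uplift_per_barrel: float) -> float:
    """Annual added value for a given per-barrel price uplift."""
    return BARRELS_PER_DAY * DAYS_PER_YEAR * uplift_per_barrel

for uplift in (1, 5):
    print(f"${uplift}/bbl uplift: ~${added_value_per_year(uplift) / 1e9:.2f} billion/year")
# ~$0.20 billion/year per dollar of uplift and ~$1 billion/year at the
# PBO's $5/bbl scenario -- before any spillover to the rest of Alberta's
# ~3 Mb/d of heavy oil production.
```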

Thus, even if the pipeline ends up with an NPV of minus $1.2 billion, the government, through its other revenue sources, would make up that “loss” in very short order. Moreover, if increased demand raised the price of additional production (remember Alberta produces about 3 million barrels a day of heavy oil), that increase in value might spread to the remaining oil, resulting in higher revenues off that oil as well. This is how the PBO comes up with its figure of $6 billion a year in added nominal GDP.

See how a negative NPV can still end up with a positive cash flow? Can you imagine any investment where $1.2 billion in one-time costs resulted in $1 billion to $6 billion a year in extra value? No business on the planet would say no to that proposition. And remember, this is not due to increased production; it is simply an increase in the value generated by producing the same product. It is simply getting paid more for the same product because you can now get it to a market that values it more.

Ultimately, we know the opponents of the TMX are going to make wild and unsupported claims about the project being a money loser, a financial drain, etc.. But the simple truth (as displayed above) is that their argument about NPV simply does not hold water. The federal government is not a private corporation with a single revenue stream. The federal government builds all sorts of projects that have negative NPV because they generate value through other means. From schools, to roads, to ports, to pipelines, these projects can generate either economic or social benefits. In the case of TMX both the direct and indirect revenue streams will result in the project being a big economic winner for the federal government even if it costs a bit more to build.


Do Canadians really consume the equivalent of a credit card worth of plastic every week? – Of course they don’t

This week I was directed to a factoid I had somehow missed that is currently making the rounds: that “humans consume the equivalent of a credit card worth of plastic every week”. The factoid was being used by the CEO of Friends of the Earth Canada in a Georgia Strait commentary: “Leave plastic where it belongs—in the tar sands”. Looking around, I was struck that I kept finding the same value and quote at places like CNN, Reuters and Phys.org. A Google search of the headline got 145 unique hits, almost all leading back to a World Wildlife Fund (WWF) report. This set off my chemist’s antenna and I had to discover whether the reported information was valid. Quelle surprise, it really isn’t. As I will discuss below, it is clear these sources have badly misrepresented the scientific source material and Canadians absolutely do not consume that much plastic.

The “humans consume the equivalent of a credit card worth of plastic every week” factoid is derived from a recent paper: “Estimation of the mass of microplastics ingested – A pivotal first step towards human health risk assessment” by Senathirajah et al. In the paper the authors do indeed conclude that

we estimated that globally on average, humans may ingest 0.1–5 g of microplastics weekly through various exposure pathways.

But that “may” carries a lot of weight in that sentence. The authors make abundantly clear in their text that the 5 g value is the very top of the suspected range (not the typical value, as suggested in the news articles) and, as I will show, achieving that number requires accepting a number of completely implausible scenarios. Any serious reading of the paper would leave the reader concluding that the correct value is somewhere closer to 0.1 g (which I will argue is likely still high), and even that value relies on a sequence of uncommon assumptions.

To begin, let’s start with some background on the paper. The paper is a “systematic review and analysis of the published literature” that then attempts to simultaneously estimate both the number and the mass of microplastics ingested. For those not familiar with the language, that means this is like a meta-analysis but with less strict inclusion criteria. It is a really interesting piece of foundational research but, like any research of this type, it is not terribly robust. The challenge is that the authors are trying to estimate two critical values (with all their associated uncertainties) and then use those estimates to make further estimates. When you multiply uncertain estimates together the uncertainties compound, and your accuracy and precision go way down.
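To illustrate the compounding point, here is a minimal sketch. The ±50% spreads are my own round numbers for illustration, not values from Senathirajah et al.; the point is simply that multiplying two uncertain estimates produces a much wider spread than either input.

```python
import numpy as np

# Illustrative sketch only: the +/-50% spreads are my own round numbers,
# not values from the paper. Multiplying two uncertain estimates produces
# a product that is far more uncertain than either input.
rng = np.random.default_rng(42)
n = 100_000

particle_count = rng.uniform(0.5, 1.5, n)     # relative estimate, +/-50%
mass_per_particle = rng.uniform(0.5, 1.5, n)  # relative estimate, +/-50%
product = particle_count * mass_per_particle

print(f"inputs span 0.5-1.5; product spans {product.min():.2f}-{product.max():.2f}")
print(f"5th-95th percentile of product: "
      f"{np.percentile(product, 5):.2f}-{np.percentile(product, 95):.2f}")
```

Each input spans a factor of three, yet the product spans nearly a factor of ten from low to high. The paper’s actual ranges are different, but the compounding behaviour is the same.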

Like any analysis of this type, the basic assumptions at the front end will dictate the conclusions at the back end. Thus, it is important to look at the basic assumptions at the front end. In this paper the authors assume typical individuals will drink a lot of bottled water and eat a lot of shellfish. Specifically, they assume that each person drinks 219 L of water a year with 53.2 L of that being bottled water (24%). This is important because bottled water has a LOT more microplastics than tap water. Shellfish is important because many shellfish (especially mussels and oysters) are filter feeders that are eaten whole. Now I don’t know about you, but my family relies entirely on tap water (often run through a Brita which does little to remove microplastics) derived from mountain reservoirs (with virtually no microplastics) and our oyster and mussel consumption is relatively low. Speaking of seafood:

Another key source is shellfish, accounting for as much as 0.5 grams a week. This comes from the fact that shellfish are eaten whole, including their digestive system, after a life in plastic polluted seas.

The problem is that most shellfish are not eaten whole. Shellfish include prawns and shrimp, which are cleaned and de-gutted before cooking. Even when you cook a lobster whole, you don’t feast on its digestive system or gills, where the microplastics tend to accumulate. In a typical Canadian diet this shellfish value simply doesn’t make sense.

If you are a family that doesn’t drink a quarter of your water from disposable bottles (not reusable plastic bottles but those fragile PET bottles you get from the store) or eat mounds of mussels and oysters, your ingestion numbers will be a small fraction of the total used in the article…but there is more. While the ingestion assumptions are likely a bit high, the biggest consideration in determining the mass of plastic ingested comes from the authors’ assumptions about the size and shape of those microplastics. This is what really drives the results of the report.

When scientists discuss the amount of plastic in water they do so by counting particles in the water. They typically use one of two techniques to do the job: Fourier Transform Infrared (FTIR) or Raman spectroscopy. Both do an excellent job of identifying microplastics; the problem is that they are less effective at providing the shape and size of the individual particles (particularly since there are so many particles to size). Also, different types of plastics have different densities (the same volume will weigh a different amount). In the article the authors address this issue by providing different scenarios in which they identify typical particle sizes associated with different groups of microplastics.
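To see why the size assumption matters so much, here is a minimal sketch assuming spherical particles and a polymer density of roughly 1 g/cm³. The diameters and density are my own round numbers for illustration, not values taken from the paper.

```python
import math

# Illustrative sketch: mass of a single spherical microplastic particle at
# different assumed diameters. Density and diameters are my own round-number
# assumptions, not figures from Senathirajah et al.
DENSITY_G_PER_CM3 = 1.0  # common polymers (PE, PP, PET) sit roughly around 0.9-1.4

def particle_mass_g(diameter_um: float) -> float:
    radius_cm = (diameter_um / 2.0) * 1e-4           # micrometres -> centimetres
    volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3
    return volume_cm3 * DENSITY_G_PER_CM3

for d_um in (10, 100, 1000):  # treated-water-sized up to seawater-sized particles
    m = particle_mass_g(d_um)
    print(f"{d_um:>5} um particle ~ {m:.1e} g; "
          f"need ~{5.0 / m:,.0f} of them to total 5 g")
```

Going from a 10 µm particle (drinking-water-sized) to a 1 mm particle (seawater-sized) changes the per-particle mass by roughly a factor of a million, which is exactly why the scenarios discussed below diverge so widely.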

Microplastics come from a variety of sources and get into our foodstuffs through various means. In the oceans, microplastics tend to be bigger as ocean water doesn’t get treated. So shellfish would be expected to be exposed to these bigger, heavier bits of microplastics. Water treatment facilities aren’t designed to eliminate microplastics, but the treatment process does a reasonable job of eliminating the bigger microplastics through their various filtering systems. As a result, microplastic particles in drinking water tend to be smaller.

A further note, the source of your drinking water really matters when it comes to microplastics. Our north shore reservoirs in Vancouver collect runoff from mountain streams. These streams have very low microplastic loads. Similarly, groundwater supplies from confined and unconfined underground aquifers also have very low numbers of microplastics. So, if your water supplies are coming from the ground or from clean freshwater reservoirs, then this paper really doesn’t apply to you either.  

Going back to our discussion: in the 5 g scenario the authors assume the average particle of microplastic is equivalent to the microplastics found in seawater. The authors themselves suggest that this is unlikely, but they are doing scenarios and this is simply Scenario 1. In Scenarios 2 and 3 the authors assume all the ingested particles are consistent with what comes out of water treatment facilities. Those are the scenarios that calculate ingestion between 0.1 g and 0.3 g per week, with almost all of that being derived from microplastics in table salt. Ironically, if you want to reduce that number even further you are advised to avoid sea salt and eat rock salt instead.

Finally, the authors do a “medley” where they assume shellfish are exposed to ocean plastics and drinking water to water treatment-sized particles. In that scenario the seafood contribution increases significantly, and ingestion goes up to 0.7 g/wk. This value is significantly less than the 5 g we see in the headlines and may be relevant to communities that consume a lot of mussels and oysters. It absolutely does not apply to most Canadians.

To conclude, it is clear this paper absolutely does not support the headline that “humans consume the equivalent of a credit card worth of plastic every week”. Rather, the paper suggests that individuals who rely heavily on bottled water and shellfish (which admittedly could represent a significant community in the developing world) may ingest closer to 0.7 g of microplastics a week. As for your typical Vancouverite who might eat shellfish a couple times a month and drink our beautiful, clean tap water? Your microplastic ingestion rate should be significantly less than 0.1 g/week (or less than a credit card of plastic a year). Admittedly, that result really isn’t going to make international headlines, drive donations, or convince politicians to ban plastics, so it is understandable how the qualifiers presented in the paper were mostly ignored by the activists discussing this paper in the news.
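As a quick sanity check on that closing comparison, here is the arithmetic in a minimal sketch. The 5 g credit-card mass is the value at the top of the paper’s weekly range; the rest is simple multiplication.

```python
# Quick sanity check on the closing comparison: a credit card is roughly 5 g
# of plastic (the top of the paper's weekly range), so 0.1 g/week works out
# to about one credit card per YEAR, not one per week.
CREDIT_CARD_G = 5.0
weekly_ingestion_g = 0.1

annual_g = weekly_ingestion_g * 52
print(f"{weekly_ingestion_g} g/week ~ {annual_g:.1f} g/year "
      f"~ {annual_g / CREDIT_CARD_G:.1f} credit cards per year")
```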

Posted in Chemistry and Toxicology, Risk, Uncategorized | 4 Comments

Digging into that paper that “associates” VOCs in indoor air and tap water samples with Northern BC LNG wells – a likely example of spurious correlations

This week I was directed to a new paper in Science of the Total Environment titled Volatile organic compounds (VOCs) in indoor air and tap water samples in residences of pregnant women living in an area of unconventional natural gas operations: Findings from the EXPERIVA study. The study was cited in a CBC article with the entertaining title: Homes near fracking sites in B.C. have higher levels of some pollutants, study finds. Needless to say this study was jumped on by opponents of BC LNG.

I previously blogged about the challenges of disconnecting theory from data. In that post, I discussed the concept of spurious correlations. Spurious correlations occur when two unrelated observations are incorrectly linked via a statistical analysis. The classic example is the purported link between pirates and climate change. This happens because statistics (especially the non-parametric statistics with small sample sizes used in this article) are prone to false positives. The easiest way to evaluate whether a correlation is likely spurious is to ask a simple question: what is the theoretical link between the two observations?  If no link can be established then it is likely the observed correlation is not real.
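To make that concrete, here is a minimal sketch (not the paper’s analysis, just simulated noise): run many small-sample, non-parametric correlation tests on data that are unrelated by construction and count how many come out “significant”. The sample size and number of tests are my own illustrative choices.

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative sketch only: Spearman tests on pure noise with a small sample,
# counting how many look "significant" at p < 0.05 even though the variables
# are unrelated by construction.
rng = np.random.default_rng(0)
n_samples, n_tests, hits = 30, 1000, 0   # arbitrary small sample, many comparisons

for _ in range(n_tests):
    x = rng.normal(size=n_samples)   # e.g. a fictitious "well density" metric
    y = rng.normal(size=n_samples)   # e.g. a fictitious VOC concentration
    rho, p = spearmanr(x, y)
    hits += p < 0.05

print(f"{hits} of {n_tests} pure-noise comparisons were 'significant' at p < 0.05")
# Expect roughly 5% false positives despite there being no real relationship.
```

Run enough comparisons on a small dataset and some “associations” will appear whether or not any real relationship exists.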

As I will demonstrate in this blog post, there is no viable mechanism to justify the relationship (or “association” as they call it) identified between the observed VOC concentrations and the presence/location of unconventional natural gas (UNG) wells. For those who don’t remember their statistical jargon, “associated” is the word you use when your result is not statistically significant but the correlation coefficient is slightly elevated. Put simply, any “association” identified between VOCs and UNG well proximity in this report is almost certainly spurious.

In the paper, the authors examined indoor air and tap water at the households of 92 pregnant women in the Peace River Valley. One-week indoor air and tap water samples were collected from each home. In addition, the authors used an Oil and Gas Commission database to identify LNG wells in the vicinity of each residence. The results from the air and water sampling were then compared to the LNG well data to see what nuggets might fall out. They had no specific hypothesis; they just threw all the data at the wall to see what the statistics said was relevant. The “Highlights” of the paper were:

Density of UNG wells was associated [my emphasis] with indoor air chloroform, acetone and BTEX.

Density of UNG wells was associated [my emphasis] with tap water trihalomethanes.

The Conclusion includes this line:

Our results also show that even when accounting for the region of residence and/or other known sources of exposure to VOCs, concentrations of acetone, chloroform and total trihalomethanes were associated with UNG well density/proximity metrics

For those unfamiliar with water treatment, chloroform and trihalomethanes (THMs) are generated when chlorinated water is exposed to organic material. As the organic material reacts with the free chlorine it produces THMs, including chloroform. Every household with chlorinated tap water will have these compounds at some concentration in its air and water, with siltier water, or water that travels further from its original source, generally having higher concentrations than pristine water close to the source.

Now the “associations” identified in the article surprised me because my reading of the fracking literature had never identified “acetone, chloroform or THMs” in fracking fluid or in LNG wells of any sort. Rather, anyone familiar with chloroform knows that it is avoided in most oil and gas uses because of its carcinogenic nature. Its use is typically restricted to labs where it can be handled in fume hoods.

For those who work in the industry, the bible for fracking fluid contents is the EPA’s Analysis of Hydraulic Fracturing Fluid Data from the FracFocus Chemical Disclosure Registry 1.0. The FracFocus chemical disclosure registry provides public disclosure of hydraulic fracturing chemical additives used in more than 55,000 wells by over 600 companies. So what does FracFocus say about “acetone, chloroform and THMs“? Not one of the three even makes an appearance in the document. So what does the paper say?

Hydraulic fracturing wastewaters and produce waters contain a number of VOCs, including acetone, xylenes (Lester et al., 2015) and chloroform (Akob et al., 2015), which are used as powerful solvents by the oil and gas industry (Luek and Gonsior, 2017)

Going to the reference (Luek and Gonsior), one discovers they say nothing of the sort about acetone or chloroform. Rather, chloroform is only mentioned twice, with the second instance identifying it as among a list of “suspected laboratory and field contaminants and inconsistent with contamination due to hydraulic fracturing activities”. In Lester et al., there is no mention of chloroform or THMs, and acetone is noted as being used as a cleaning solvent, not as part of the fracking solution. Akob et al. is not a primary reference but rather refers to another report, by Hays and Severin, which reported finding chloroform in only 1 of 1330 samples in one run, and in only 2 of 60 well locations where solvents were observed, and even that was potentially attributed to reuse of fracking water. Acetone was identified as a solvent used in numerous wells, one that occasionally appeared in samples but is not a standard fracking additive. Put simply, “acetone, chloroform and THMs” are not used in fracking fluids and, since fracking in BC does not involve the use of municipal water, they wouldn’t show up as an accidental byproduct either. This poses some challenges to the “associations” identified in this article.

With respect to household water sources, the authors detail that 60% of the participants were on municipal water supplies. As the authors note:

Inclusion of Dawson Creek or Fort St John as the city of residence (covariate associated with both water treatment process generating trihalomethanes, and density of UNG wells) in the models did not change the associations between well density/proximity metrics and tap water concentrations of total trihalomethanes.

According to the authors, the relationship between fracking and water quality is as follows:

It is known that an increase in organic matter entering a water treatment play may lead to an increase formation of trihalomethanes (Xie, 2003). Interestingly, wastewaters generated during hydraulic fracturing contains high concentrations of dissolved organic matter… Surface and groundwater contamination events linked to UNG operations have been documented in the scientific literature…Furthermore, increases in the concentration of trihalomethanes have been observed in drinking water facilities in Pennsylvania, potentially because of the high levels of dissolved organic matter in hydraulic fracturing wastewaters discharged into surface water resources (EPA, 2016). Given these documented events, it is therefore possible that the density and proximity of UNG wells is contributing to the high concentrations of tap water total trihalomethanes in the EXPERIVA study.

So here is the thing. The authors evaluated the proximity of the dwellings to local LNG wells. The analysis did not consider the proximity of LNG wells to the municipal supply sources. The City of Fort St. John gets its potable water from 5 shallow wells in Taylor on the Peace River. Dawson Creek gets its water from the Kiskatinaw Watershed northeast of Tumbler Ridge. The residences (and their associated LNG wells) are not related in any way to the water source locations. The authors don’t explain how an LNG well 2.5 km from a dwelling can affect tap water that is supplied to that dwelling via a piped municipal system sourced 175 km away (in the case of Dawson Creek).

So let me put this all in one paragraph to lay out the logic underlying their identified “association”. Dawson Creek gets its water from a watershed 175 km from the city. The water is pumped to a modern and highly advanced multi-stage water treatment facility. The water is then pumped into a utility system that transports it to individual dwellings. If the “association” were real, it would mean that a fracked LNG well, installed sometime in the last couple of decades and located something like 2.5 km from the dwelling, generated enough organic material during its original installation to affect that water to the extent that the dwelling’s tap water is now generating excess THMs and chloroform. For the “association” to be correct, the organic material generated by that well would have had to either migrate the 175 km to the water system’s originating watershed or migrate overland and overwhelm the Dawson Creek water treatment plant, thus causing the water to generate excess THMs and chloroform. Does anyone else think a better explanation than this fairy tale would be “spurious correlation”?

And it gets worse. As I noted earlier, LNG wells don’t generate “acetone, chloroform or THMs”, so how are these compounds getting into the indoor air from the LNG wells? There is no plausible mechanism by which an LNG well can cause an indoor air problem kilometres away…indoors no less! The observed VOCs in the indoor air are, however, typically associated with the tap water concentrations, but, as we have discussed, the tap water is not related to the proximity of any LNG wells in the study.

This post is getting a bit long so I will simply highlight some other gems from this article.

The report relies on self-reporting by the participants. Self-reporting without auditing is known for its lack of reliability. Consider that according to the paper:

It is important to note that other household products containing chlorine (e.g., toilet cleaners, bleach, detergents) can lead to VOC emissions indoor (Odabasi, 2008). In EXPERIVA, only one participant confirmed storing this type of household products in their residence.

If I read that correctly, it appears to say that, of the 92 households surveyed, only one stored chlorine-containing household cleaning products in their house? I can go around my house and find three different types of household cleaning products that include chlorine (toilet cleaner, laundry bleach and Vim household cleaner), yet only one of the 92 households had any chlorine cleaners of any kind? That seems…unlikely.

With respect to air sampling, the study includes zero duplicates and zero regional background samples. Recognize that the authors are attributing indoor air concentrations of VOCs to outdoor sources (LNG wells). Therefore these concentrations should be higher in the outside air…but the researchers chose to collect no samples to compare indoor versus outdoor air concentrations, nor did they collect duplicates from different locations within the same houses.

Were I preparing a report on indoor air for a regulator, the absence of a background sample or duplicates would result in my sampling regime being deemed deficient and I would be sent back to redo my work. That the peer reviewers failed to address this issue is simply problematic.

The report also doesn’t consider how old the LNG wells are or when they were installed. The VOCs considered in the study are all quite volatile and degrade rapidly in the natural environment. These VOCs would only be an issue immediately after a well was fracked, so a well installed a year earlier would simply not be generating them. Even if these VOCs were found in the fracking water [they aren’t], their presence would only be expected for a very short time after installation. That consideration was never included in the analysis.

I think I can stop here, because from the information I have provided above there is simply no way anyone could credibly argue that the associations observed in the article are real. The critical VOCs identified as being “associated” with “UNG well density/proximity metrics” do not appear in fracking formulations in BC, and the “UNG well density/proximity metrics” used to generate the association between tap water and dwellings do not account for the actual source of the drinking water for 60% of the sample locations. The UNG locations used to generate the “association” were simply the wrong locations to consider. How could that result in a real association?

Posted in Uncategorized | 2 Comments

Why Climate leaders sometimes build pipelines – understanding the climate implications of the Trans Mountain Pipeline Expansion Project

One of the most common refrains of the activist community during our recent federal election was the line “climate leaders don’t build pipelines“. As I will explain in this blog post, this refrain, while catchy, is wrong.

I have written numerous blog posts about the Trans Mountain Pipeline Expansion Project (TMX) debunking activist claims about heavy oil, Asian demand for heavy oil, southern resident killer whales and tanker traffic to name a few. But the biggest activist talking point about the TMX is that it will have an oversized climate impact and will increase Canada’s global greenhouse gas emissions. In this blog post I will explain why these claims are not true.

The only way the TMX can increase global GHG emissions is if it spurs enough additional production to negate its emission reductions over other transportation mechanisms. This was the argument put forth by Dr. Marc Jaccard in his submission for the City of Vancouver to the National Energy Board in 2014. The problem with that submission, and all the arguments made by activists since then, is that they fail to explain where all that new production would be coming from.

Last week I posed a pretty simple question to the activist community:

Here’s an incredibly simple question for all the anti-#TransMountain activists to answer Since you keep claiming the @TransMtn will increase GHG emissions, please identify which current/planned production that will be made viable/inviable by the new pipeline? #cdnpoli

The activist response…crickets…. I posed the question again to specific individuals and organizations who are leading the fight against the TMX and the closest to a useful reply was this:

Here’s your specific proof It’s basic economics that facilitating cheaper/ faster transport is going to support an increase in production.

The problem with that answer is that it is wrong. The economics of the TMX project are not “basic”. Rather they are complex and driven largely by factors outside of a simplistic supply/demand model.

Unlike a light oil producer in the Permian Basin, an oil sands producer that wants to increase production can’t spend a couple million dollars to hire a rig and have a new well producing oil two weeks later. Oil sands projects are long-term investments. As such, they don’t respond to the same short-term incentives as other production.

Oil sands projects undergo an incredibly long and complex approval process. Consider the now-cancelled Teck Frontier Mine. It began its regulatory journey in 2008 and still had not completed that journey when it was cancelled in 2020. During its regulatory run it required multiple filings:

An environmental impact assessment was submitted to Alberta Environment and Parks (AEP), the Canadian Environmental Assessment Agency, and the Alberta Energy Regulator (AER).

Applications were submitted to the AER under the Oil Sands Conservation Act (OSCA), and to AEP under the Environmental Protection and Enhancement Act (EPEA) and the Water Act, for provincial approvals.

Approvals were required under the federal Fisheries Act and the Navigation Protection Act for activities that may affect fish and fish habitat and navigable waters.

Approval from the Alberta Utilities Commission was required for the cogeneration facilities, and from the Regional Municipality of Wood Buffalo for parts of the camp.

Ancillary approvals under the Public Lands Act, the Municipal Government Act, and the Historical Resources Act were also required.

Recognize that the Teck mine was not the exception; that is the typical requirement to get an oil sands project up and running. What this means is that it is quite easy for outsiders to establish which projects are in the regulatory process, because all of this regulatory information is publicly available.

It also means that we know EXACTLY what production is in the planning pipeline [pun intended].

Oil sands projects are also massively expensive and are only viable in select financial conditions. Returning to the Teck Frontier Mine: it was a $20.6 billion project. As a big project, it needed big oil prices to be viable. Estimates for a break-even West Texas Intermediate (WTI) oil price for Frontier ranged from US$65 a barrel to more than US$80.

This brings us to another major misconception from the activist community: that because new oil sands production costs are high, existing production must be equally expensive. As was reported in a recent article:

“Canada’s resources are really expensive to extract, in addition to having a super high carbon intensity,” said Caroline Brouilette, domestic policy manager at Climate Action Network Canada.

This couldn’t be further from the truth. Existing oil sands projects produce exceptionally inexpensive oil. Let’s look at the top three producers:

Suncor identifies two breakeven prices: an “operating breakeven” of US$30 WTI, which covers operating costs plus asset sustainment and maintenance capital, and a “corporate breakeven” of US$35 WTI, which adds the full dividend.

CNRL has an operating breakeven of US$28 per barrel WTI, with a free cash flow (full dividend) breakeven of US$30.

Cenovus has a free funds flow break-even of US$36/bbl WTI.

Much of the oil sands production pays for itself and generates a generous dividend at WTI-equivalent prices around US$36/bbl…and WTI has been below that value for only a couple of months in the last 10 years.
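Here is a minimal sketch of that margin arithmetic, using the full-dividend breakeven figures quoted above. The WTI price points are my own round numbers for illustration, not forecasts.

```python
# Per-barrel margin above each producer's quoted full-dividend breakeven at a few
# illustrative WTI prices. Breakevens are the US$/bbl figures quoted above; the
# WTI price points are my own round numbers.
breakevens_usd = {"Suncor": 35, "CNRL": 30, "Cenovus": 36}

for wti in (40, 55, 70):  # illustrative WTI prices, US$/bbl
    margins = ", ".join(f"{name}: ${wti - be}/bbl" for name, be in breakevens_usd.items())
    print(f"WTI ${wti}/bbl -> margin above full-dividend breakeven: {margins}")
```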

These aren’t projects that live and die on the $2 to $3 per barrel savings made by the presence or absence of the TMX. They are projects that can afford to keep pumping out product and shipping it by rail or US pipeline for decades to come…even as the US Permian and other global sources get completely priced out of the market.

So I can imagine a lot of you are asking why, if these projects are still making money, the TMX matters. The quick answer is that the TMX allows producers to get better value for the same production and will generate billions in additional revenues for our federal and provincial governments via increased royalties and tolls.

I go into detail about this topic in an earlier post but, put simply, even if the pipeline went massively over budget and cost a lot more than proposed, it would still be a financial boon to the Canadian economy by generating more revenue and royalties from the same production.

Going back to my initial point: the one thing the TMX does not do is influence production. Why? Because while it reduces transportation costs, it doesn’t make any marginal project viable, and its absence does not make any existing project less viable. And when it comes to the oil sands, we have to talk about specific projects; we can’t just wave our hands and say “supply and demand”.

Essentially, the Alberta oil industry faces a strict dichotomy: there are existing projects and upgrades that will be financially viable irrespective of whether the TMX is built, and there are more expensive projects that will never be viable given our new regulatory environment and carbon tax structure.

There are no marginal projects in the works that this pipeline will make viable. Thus the argument that the TMX will generate more production because it slightly reduces the cost of transportation is nullified by the particular realities of the Alberta oil sands industry. The activists’ claim is not supported by the data.

This brings us back to the refrain I discussed at the start of this piece, that “climate leaders don’t build pipelines“. My response is that, unless an activist can show me a specific project that will go ahead based solely on the existence of the TMX, that refrain is false with respect to the TMX. As I have shown numerous times on this blog, the TMX will move oil more safely while generating lower emissions than the alternatives (oil-by-rail, or oil-by-pipeline to Texas and then by ship to Asia).

For politicians and climate leaders the TMX is thus a no-brainer. It will move the same production more safely while generating fewer emissions, and in doing so will generate more revenues and royalties for our federal and provincial governments. The reason Mr. Trudeau supports this project couldn’t be clearer. It makes absolute sense from an environmental, climate and financial perspective. The activists who claim otherwise either haven’t made themselves familiar enough with the Canadian oil industry or simply don’t care about the truth because it disrupts their preferred narratives.

Posted in Canadian Politics, Pipelines, Trans Mountain | 2 Comments