More on the Trans-Pacific Partnership and the Environment

As many of my blog readers know, I also write regularly at the Huffington Post Canada. There I post shorter versions and updates of my A Chemist in Langley posts, along with "short takes" on recent issues in the world of evidence-based environmental policy and renewable energy. The good thing about that venue is that it draws more eyeballs to the screen than my own little blog here. The downside is that it imposes a pretty strict limit on post length (about 800 words). A word limit can be a good thing: writing for that blog has forced me to refine my writing and find shorter ways to say things. But the topics I blog about are not always black and white, and a word limit can restrict my ability to expand on an important point or give useful examples to clarify topics of interest.

My post yesterday Why the TPP Doesn’t Spell Doom for the Environment is a really good example of where a post would be improved with the addition of a few hundred more words. So today, I am going to add those words. In doing so I will take some others out and mix it up a bit.

As I wrote in my Huffington blog post, early Monday negotiations on the Trans-Pacific Partnership (TPP) concluded with an Agreement in Principle. While details of the agreement are still under wraps, the Canadian government has provided a Technical Summary of the Agreement. An Agreement in Principle is not a final deal ready to be ratified by governments, so we don't have to go rushing about and panicking yet, but it is important to understand what the deal means from an environmental perspective.

The thing to understand about these trade agreements is that they are not the black/white story that the activists claim they are. Rather from an environmental perspective, trade agreements have both positive and negative aspects. They can have the negative effect of slowing down the development of unilateral environmental regulations, but they can have a positive effect by forcing environmental laggards to catch up with the pack.

It is quite true that trade agreements typically include provisions that prevent individual countries from developing their own distinct environmental policies. One of the important features of these trade agreements involves knocking down or eliminating non-tariff barriers (also called technical barriers to trade). The problem is that environmental regulations have historically been used by the bad actors of international trade to disguise simple protectionism. Like the wolf in Little Red Riding Hood, the protectionism is dressed up to look like it is intended to enhance environmental performance, but under the covers hide regulations intended to harm foreign competitors, often without improving environmental performance in the least. A recent example is the case of Korean emissions standards, which did nothing to improve the emission characteristics of cars on Korean roads but did a wonderful job of stopping the export of North American autos to Korea.

To further explain how these trade agreements can hurt individual action on the environmental front, imagine that Canada implemented a national carbon tax. Every pound of steel produced in Southern Ontario would be subject to that tax. This would make Ontario steel more expensive than the steel from an identical steel plant with otherwise identical cost structures elsewhere in the TPP zone. Under the National Treatment and Market Access (NTMA) chapter of the TPP, Canada would not be allowed to put a tariff on imported steel (that was not subject to the carbon tax) to address the difference in price and our steel industry would suffer. In this case the TPP would slow down the development of innovative, Canada-first policies to fight climate change.
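
To put rough numbers on the argument above, here is a back-of-the-envelope sketch. The steel emission intensity, carbon tax rate and base price are illustrative assumptions of mine (not figures from the TPP text or from any particular Ontario mill), so treat the output as an order-of-magnitude illustration only.

```python
# Back-of-the-envelope sketch of how a national carbon tax could open a price gap
# that the TPP's tariff rules would prevent Canada from closing.
# All three input figures below are illustrative assumptions, not official numbers.

CO2_PER_TONNE_STEEL = 1.9   # tonnes of CO2 per tonne of steel (assumed, typical blast-furnace value)
CARBON_TAX = 30.0           # dollars per tonne of CO2 (assumed)
BASE_PRICE = 700.0          # dollars per tonne of steel before any tax (assumed)

tax_per_tonne_steel = CO2_PER_TONNE_STEEL * CARBON_TAX
canadian_price = BASE_PRICE + tax_per_tonne_steel
foreign_price = BASE_PRICE  # identical plant elsewhere in the TPP zone, no carbon tax

print(f"Carbon cost added to Canadian steel: ${tax_per_tonne_steel:.0f}/tonne")
print(f"Canadian price ${canadian_price:.0f}/tonne vs. untaxed TPP competitor ${foreign_price:.0f}/tonne")
print(f"Gap a tariff could not be used to offset under the NTMA chapter: "
      f"${canadian_price - foreign_price:.0f}/tonne ({(canadian_price / foreign_price - 1) * 100:.1f}%)")
```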

Now the example above ignores some very important points put forward by the environmental community. The first is that a Canadian plant is almost never identical to one in, say, Vietnam. Canadian plants tend to depend more on automation to offset the differences in our labour cost structures. If we are running more efficient factories, the addition of a carbon tax can actually have a positive effect. By driving up the cost of carbon, it forces Canadian producers to improvise and adapt, and this is what fuels innovation. A factory dependent on cheap labour is less likely to make those adaptations, so Canadian companies have the potential to adapt their way out of the higher costs.

A further important consideration is what happens to the money generated by the carbon tax. If that money is reinvested into new energy-efficient processes or renewable energy projects then, once again, a country like Canada can thrive under a carbon tax regime. This is why I, among many, don't like the idea of purely "revenue neutral" carbon taxes. I think a percentage of the money generated by a carbon tax should be funneled back into rapid transit, environmental infrastructure and research. In a perfect world a carbon tax should only be a temporary thing as we move away from carbon. Making our government dependent on the revenues from a carbon tax only ensures that we will never move away from carbon, because I have yet to see a government give up a revenue source once it has figured out how to tap into it. Earmarking carbon tax revenue, rather than throwing it into general revenue, ensures that the government doesn't rely on it to keep the lights on and thus has an incentive to eventually get us off the carbon train.

The most important feature of a low-carbon, high environmental value economy comes down to consumer choice. The TPP will not force Canadians to buy foreign products; it only says that we cannot deny other countries the opportunity to sell their products in our market. We, as consumers, can make a conscious choice to pay a little bit more to get better, greener products. This is where the Greens of our world have to actually start living by their words.

I live in the community of Walnut Grove in Langley. Walnut Grove has a very good selection of local stores that provide high-quality products, often at a price slightly higher than it would cost to buy at a big box store or an American warehouse outlet. My family has made a choice that we are willing to pay the 5%-10% more to be able to walk to our local baker, vegetable market, butcher, wine store and grocery store. We value the fact that everyone at our local stores knows us and our kids by name. We like the fact that Mr. Lee from IGA gives our kids hugs when they walk into his store and that his niece shares a classroom with our son at the local elementary school. We love that profits from our meat purchases at Meridian Meats go to their head office in Port Coquitlam and that our money for vegetables goes to local farmers and not to some corporate head office in Arkansas. We appreciate the Overwaitea Food Group, whose corporate head office I can see on my walk to work. Sure we could, and sometimes have to, shop at places like Costco and Walmart, but that makes up a very small percentage of our shopping dollar. The TPP does not take any of that away from us.

It is only when consumers demand low prices above all else that we as a country will suffer. Even then, those low prices often come at the cost of lower quality or greater inconvenience. I'm not sure about you, but spending 20 minutes each way to drive to a big box store is often a false economy, both in lost time and in travel costs.

On the positive side of the ledger, under the TPP multilateral environmental agreements (MEAs) are further strengthened. Enforcement of the Montreal Protocol, the International Convention for the Prevention of Pollution from Ships and the Convention on International Trade in Endangered Species of Wild Fauna and Flora will all be strengthened for exactly the same reason that individual action is discouraged. In order for competition to be considered fair, every country is expected to live up to its international environmental obligations. MEAs set a baseline that every member of the TPP must meet; to do otherwise results in penalties. A country trying to shirk its environmental duties would be punished and forced to improve its environmental performance to group norms. Thus, in this case, the environment benefits from the agreement. If the TPP had been in effect when Kyoto was signed, Canada might not have been able to drag its heels in implementing the plan because its trade partners would have been there to force Canada to do its part or suffer the consequences of failing to act.

In following the debate on the TPP I find it particularly odd that the people who most dislike these international trade deals are often the same people who demand that we, as a country, involve ourselves in international environmental deals and regulations. You either trust in international cooperation or you don't. Somehow the leaders of the environmental movement want us to believe that international cooperation is good when it comes to the environment and bad when it comes to trade.

The other thing to recognize is that most of the really big problems of the world today cannot be handled by individual governments. Climate change, loss of biodiversity, destruction of coral reefs, the virtual elimination of the upper trophic levels in the world's oceans: these are topics that have to be handled by a community of nations. Our shared global future is one where MEAs are going to be a necessity, and trade agreements like the TPP are going to provide one of the few dependable mechanisms to enforce those MEAs.

Thus international trade agreements like the TPP can discourage independent action but strongly encourage international cooperation and movement towards common international goals. To discourage foot-draggers from stopping all environmental advances, typically once an agreed-upon percentage of the trade partners sign on to an MEA everyone has to jump on board or suffer the consequences. From these examples you can see the issue. When a single country wants to make a unilateral advance in environmental regulation, the TPP is going to slap it down, or failing that, the industries in the affected country are going to become less competitive. However, when the global community agrees on a common environmental goal, the foot-draggers and slow movers are punished.

Finally, a lot of the naysayers have popped up to ask how NAFTA helped the environment. My response to that question is that it is complicated. NAFTA was one of the first really big trade deals, and when it was written politicians didn't really understand how important locking environmental performance into a trade deal was to ensuring fairness in international trade. As such, the environmental components of NAFTA were tacked on at the end of the negotiations. Even so, NAFTA ended up improving environmental performance in Mexico without a commensurate decrease in Canada or the US. I welcome my readers to go to the literature on this topic, because the references are many and surprisingly weighted in the positive direction.

So are trade deals like the TPP perfect? Absolutely not, but from an environmental perspective they are far more nuanced than the anti-free trade activists would have you believe.


Why the West Coast’s gas prices are so high and who is to blame

Early in my blogging career I wrote a piece discussing the factors that affect gasoline and diesel prices on the West Coast. The post was called A Primer: Why Cheap Oil Doesn't Mean Cheap Gasoline or Diesel and dealt mostly with how gasoline is created in refineries. Well, the topic has come up again and once again we have people complaining about gasoline and diesel prices on the West Coast in a world of low oil prices. Most recently the National Observer ran a piece on the subject, "Canadians get ripped off at the pumps", written by local economist Robyn Allan. Having read that article I suppose it is about time I updated my earlier post and addressed some of the obvious shortcomings in Ms. Allan's piece in the Observer.

The first thing you need to know to understand gasoline prices on the West Coast is that it is all about supply and demand and has very little to do with the price of oil. The reason for this is simple: it is not oil that you put in your gas tank; it is gasoline and diesel, both of which are refined products. In my earlier post I gave a description of how we convert oil into gasoline and diesel and pointed out that there is a limit to how much gasoline and diesel can be generated from a barrel of oil. This is especially problematic with respect to diesel fuel, since the component of the crude oil mixture used to generate diesel fuel is the same one used to make kerosene and fuel oils (for household heating). The diesel market is, thus, heavily affected by the current and future market for fuel oil (especially in central and eastern Canada where fuel oil is heavily used for home heating).

As I note above, we can't use crude oil in our fuel tanks; we need to use refined petroleum products, and we all know where refined petroleum products come from: refineries. So it is not just the amount of oil on the market that defines the price of gasoline but also the ability of refineries to convert that oil into useful things like gasoline and diesel. That is not all, however; once we have refined the oil into gasoline we still have to transport it to market. All the refined gasoline in the world does you no good if it is stuck on the east side of the Rockies. This is where Ms. Allan's analysis in the National Observer goes off the rails. In her piece she pretty much ignores the two critical bottlenecks in the progression from oil in the ground to gasoline in your tank: refinery capacity and transportation capacity. Today I am going to deal with refining capacity.

As anyone who follows the oil industry knows, we on the Canadian West Coast have allowed our refining capacity to wither on the vine. Historically, there were several oil refineries on the West Coast, including the Chevron refinery (still open), the Imperial Oil Ioco refinery and the Shell refinery. Thanks to regulatory hurdles and market forces we are now down to a single refinery (Chevron), which is able to handle about 57,000 barrels/day (b/d) of oil. To put that number into perspective, the Chevron refinery supplies only about 25% of B.C.'s commercial fuel supply and 40% of YVR's jet fuel needs. As a consequence we import a LOT of fuel from refineries in Alberta (mostly around Edmonton). According to Natural Resources Canada, we import almost 60% of our petroleum product needs from Alberta via pipeline and gasoline tanker cars (by rail). Unfortunately, even that is not enough, so we are also dependent on the big refineries in the Puget Sound for things like aviation fuel (from Cherry Point) and additional volume when the prairie market gets too tight. Because that additional fuel is bought on an irregular schedule it is subject to the whims of supply and demand. This makes US supplies a critical consideration in any gasoline price discussion in BC.
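
Pulling those percentages together gives a sense of how exposed B.C. is. The sketch below uses only the figures quoted above (57,000 b/d covering roughly 25% of commercial fuel demand, with roughly 60% of products imported from Alberta), so the derived numbers are rough, order-of-magnitude estimates rather than official statistics.

```python
# Rough supply-share arithmetic built only from the figures quoted in the text above.
chevron_output_bpd = 57_000   # Chevron refinery throughput, barrels per day
chevron_share = 0.25          # ~25% of B.C.'s commercial fuel supply
alberta_share = 0.60          # ~60% of petroleum product needs imported from Alberta

implied_bc_demand = chevron_output_bpd / chevron_share
alberta_supply = implied_bc_demand * alberta_share
remainder = implied_bc_demand - chevron_output_bpd - alberta_supply

print(f"Implied B.C. refined-product demand: ~{implied_bc_demand:,.0f} b/d")
print(f"Supplied by Alberta refineries:      ~{alberta_supply:,.0f} b/d")
print(f"Left to source from Puget Sound and spot purchases: ~{remainder:,.0f} b/d "
      f"({remainder / implied_bc_demand:.0%} of demand)")
```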

The United States has broken its petroleum market up into five Petroleum Administration for Defense Districts (PADDs). This was originally done during the Second World War to ensure energy supplies but is still in effect to this day. The West Coast of the US, including California, Oregon and Washington, makes up PADD V. PADD V is a rather unusual district because of its geography (it is mostly bordered on the east by mountains). Unlike the other districts, which are linked internally with lots of pipelines and combined capacity, PADD V is pretty much stuck on its lonesome and has to be self-sufficient. There are some minor cross-PADD connections, but mostly when something goes wrong in PADD V it hits the entire region. Well, this year has been a tough one for PADD V. In February, a major fire shut down the Torrance refinery in California. Torrance is the third largest refinery in California and supplies about 10% of California's gasoline (remember, the California gasoline market is essentially equivalent to the entire Canadian gasoline market). The loss of Torrance meant that all of the other refineries in PADD V had to make up the difference. All of a sudden the Puget Sound didn't have an excess of fuel to sell to British Columbia because that fuel was being sold into California.

In addition to the Torrance issue, the American mid-west was also having a bad time. For most of August the BP Whiting refinery in Indiana was shut down. This created a huge crunch in the prairie market as mid-west suppliers were offering top dollar for gasoline from Alberta. That left BC in a pickle. Alberta didn't have any cheap gasoline because it was all going to fill a need in the US mid-west, and PADD V didn't have any cheap gasoline because of the fire at Torrance and another disruption in April. We were the equivalent of the lonely traveler wandering into town, during a nasty storm, in the middle of convention season and demanding a room. Without a reservation (firm, regular, fixed-rate contracts for gasoline) and without any alternatives (since Edmonton couldn't help us) we ended up having to pay top dollar for our gasoline. Thus we had $1.20+ gasoline in a world where the oil price was below $60/bbl.

Of course the piece in the National Observer completely ignored these conditions. In the Observer it was all the greedy oil companies' fault that we could not get cheap gas. No mention was made of the red tape, fuel access restrictions (pipeline capacity) and bad political climate that scared all but one of the local refineries out of the market. No mention was made of the work to block expansion of the pipelines that would have allowed more refined gasoline to move east-west across the country. No mention was made of the protestors who blockaded the Chevron refinery, further curtailing supply.

The truth is that we as Canadians have brought this upon ourselves. We made it uncomfortable for refineries to exist in BC by limiting the supply of crude (by fighting pipelines) and adding red tape. In doing so, we have made ourselves utterly dependent on refineries in Alberta and the Puget Sound to keep our cars and buses running. As in so many other environmental fields (see my post on rare earth metals), we have off-loaded the environmental costs to other jurisdictions and lived like environmental free-loaders, letting others take the risks while we reap the rewards. Well, now our chickens have come home to roost. We are not getting "ripped off at the pump" as Ms. Allan would claim; rather we are getting a well-justified comeuppance. We made a politically expedient decision to limit the production and transportation of a critical component of our economy (refined fuels) and now have to pay the price for that decision when regional supplies are low. The ironic part of all this is that from an environmental point of view this is a good thing: by making fuel more expensive we will force people to use less of it. Why is that ironic? Because a media outlet like the Observer, which has long worked towards exactly this outcome, is the one complaining most loudly now that it has come to pass. That is just rich!


Debunking the Leap Manifesto – Demand #9: Local agriculture is not always better

I have been asked numerous times in the last couple of days what I have against "The Leap Manifesto". My answer is simple: the Leap Manifesto is of particular interest to me because it touches so close to my intellectual home, and it annoys me because it is demonstrably lacking in scientific rigour. As I have written numerous times before on this blog, I am a Pragmatic Environmentalist who believes in evidence-based environmental decision-making. My personal goal is to help make demonstrable and tangible improvements in our country's environmental performance. As a pragmatist I am not the type of person who would suggest that it is sensible to "leap and the net will appear", nor am I a newbie in this field. As I noted in an earlier post, Environmentalism and Pragmatism, the two aren't mutually exclusive – A blast from my past, I wrote about my own personal "Pragmatist's Rules of Engagement" back in 1995. So to further answer those questioners: having worked a lot, read a lot and seen a lot, I figure it is up to people like me to inject some science and defensible data into a debate that seems mostly about politics and emotions. If we waste all our built-up moral capital on emotionally-charged and scientifically-indefensible projects (like the Leap Manifesto) then we won't have any to spend when it comes to making real changes that can bring tangible improvements locally, regionally and nationally.

Having addressed the Manifesto's Demands #2, #3 and #6 in my previous post, I thought I should take another shot at this document by looking at another environmental fairy tale, Demand #9:

We must develop a more localized and ecologically-based agricultural system to reduce reliance on fossil fuels, absorb shocks in the global supply – and produce healthier and more affordable food for everyone

The "smaller is better", "local is better", "organic is better" memes in agriculture are some of the most pernicious myths to come out of the modern environmental movement and show a profound lack of understanding of how food is grown and energy is used. I would argue this goes back to the urban nature of most of our modern environmental activists, but that is more of a personal opinion than a statement based on defensible facts. In a previous post, Modern Environmental Fairy Tales: "Moving Back to the Land" and the 100 Mile Diet, I discussed the modern "Arcadians" described by Martin Lewis in his 1992 book "Green Delusions". These modern Arcadians seek to return us to a more pastoral time when we lived with a "more localized and ecologically-based agricultural system". What they and their more recent confreres, the Degrowthers and the authors of "The Leap Manifesto", seem to have forgotten is why we migrated from that "pastoral" lifestyle in the first place. The reason is simple: during those "pastoral" times in our ancestral past people lived lives that were "solitary, poor, nasty, brutish, and short". Given our current human population density any attempt to move back to the land would be devastating to both the human population and to the ecosphere.

As I quoted in my post Ecomodernism and Degrowth: Part II Future Scenarios:

The minimum amount of agricultural land necessary for sustainable food security, with a diversified diet similar to those of North America and Western Europe (hence including meat), is 0.5 of a hectare per person. This does not allow for any land degradation such as soil erosion, and it assumes adequate water supplies. Very few populous countries have more than an average of 0.25 of a hectare. It is realistic to suppose that the absolute minimum of arable land to support one person is a mere 0.07 of a hectare–and this assumes a largely vegetarian diet, no land degradation or water shortages, virtually no post-harvest waste, and farmers who know precisely when and how to plant, fertilize, irrigate, etc.. In India, the amount of arable land is already down to 0.2 of a hectare; in Philippines, 0.13; in Vietnam, 0.10; in Bangladesh, 0.09; in China, 0.08; and in Egypt, 0.05. By 2025 the amount is expected to fall to: India, 0.12 of a hectare; Philippines, 0.08; China, 0.06; Vietnam, 0.05; Bangladesh, 0.05; and Egypt, 0.03 (ref).

As of the year 2000, the US Northeast had a population of 49.6 million people living at a population density of 359.6 people/km2. This translates to about 0.69 acres per person. If we returned to the land there would barely be enough land to support the population of the US Eastern Seaboard on a minimal vegetarian diet. Moreover, this "pastoral" lifestyle would not be conducive to centralized services like sewage and water. Without modern sewage treatment and water supplies the population would undergo massive "Degrowth" as diseases and weather slowly eliminated the majority of the population. As for electrical supply, under the 0.44 acre scenario power would be supplied by solar panels. Solar panels will certainly supply a house in South Carolina with reliable power in summer, but the same cannot be said about those same panels in a northern winter. Consider the "Snowpocalypse of 2015" and think about how those solar panels would provide power in the middle of one of the coldest winters on record while buried under two meters of snow.
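
The arithmetic behind those figures is easy to check. The sketch below simply redoes the unit conversion from the population density quoted above and compares the result to the per-person land thresholds in the quoted passage; note that it counts all land, farmable or not.

```python
# Redo the land-per-person arithmetic for the US Northeast using the figures quoted above.
ACRES_PER_KM2 = 247.105
ACRES_PER_HECTARE = 2.471

population_density = 359.6  # people per km^2 (US Northeast, 2000)
land_per_person_acres = ACRES_PER_KM2 / population_density
land_per_person_ha = land_per_person_acres / ACRES_PER_HECTARE

print(f"Total land per person: {land_per_person_acres:.2f} acres ({land_per_person_ha:.2f} ha)")

# Thresholds from the passage quoted above
diversified_diet_ha = 0.50  # ha/person for a North American/Western European style diet
bare_minimum_ha = 0.07      # ha/person for a largely vegetarian diet, no degradation or waste

print(f"Share of the 0.5 ha 'diversified diet' requirement available: "
      f"{land_per_person_ha / diversified_diet_ha:.0%}")
print(f"Multiple of the 0.07 ha bare-minimum vegetarian requirement "
      f"(before discounting non-arable land): {land_per_person_ha / bare_minimum_ha:.1f}x")
```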

As for nature, once you discount the areas where humans cannot farm (bogs, lakes, etc.) there would not be an unallocated acre on the Eastern Seaboard. There would be no room for growing crops for profit and, more importantly, there would be no room for nature of any sort. I don't see that existence as being in harmony with nature so much as being utterly antithetical to nature.

As for the importance of “localized” food and food security, as I wrote in another blog post:

From an environmental perspective regional self-sufficiency in food is a loser. Large-scale farming, with its ability to maximize crop yields and thus reduce land needs, is a necessity in a world of 7 billion souls. Anyone really interested in this topic should read The Locavore’s Dilemma by Desrochers and Shimizu. They comprehensively deconstruct the environmental arguments for the 100 mile diet and the concept of “food miles”.

Activists point out that imported food then needs to be moved by ship or airplane, but Desrochers and Shimizu note that 82% of the estimated 30 billion food miles associated with U.K.-consumed food are generated within the country, with car transport from shop to home accounting for 48% and transport to stores/warehouses representing 31% of food miles. As for carbon dioxide equivalents, as Tamsin McMahon notes in Maclean's, research from the U.K. comparing local tomatoes with those imported from Spain showed the U.K. tomatoes, which had to be grown in heated greenhouses, emitted nearly 2,400 kg of carbon dioxide per ton, compared to 640 kg for the Spanish tomatoes, which could grow in unheated greenhouses.
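
A couple of lines of arithmetic on the figures above show why counting "food miles" is a poor proxy for carbon footprint; the inputs are simply the numbers quoted from Desrochers and Shimizu and the Maclean's piece.

```python
# Arithmetic on the UK food-miles and greenhouse-tomato figures quoted above.
total_food_miles_bn = 30.0    # estimated food miles for UK-consumed food (billions)
domestic_share = 0.82         # fraction generated within the UK
shop_to_home_share = 0.48     # car trips from shop to home
store_warehouse_share = 0.31  # transport to stores/warehouses

print(f"Food miles generated inside the UK: {total_food_miles_bn * domestic_share:.1f} billion")
print(f"Share from the shop-to-home leg plus store/warehouse moves: "
      f"{shop_to_home_share + store_warehouse_share:.0%}")

uk_tomato_kg_co2_per_ton = 2400      # grown in heated UK greenhouses
spanish_tomato_kg_co2_per_ton = 640  # grown in unheated Spanish greenhouses

print(f"'Local' UK tomatoes emit {uk_tomato_kg_co2_per_ton / spanish_tomato_kg_co2_per_ton:.1f}x "
      f"the CO2 of the imported Spanish tomatoes per ton")
```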

As for the line from the Manifesto about this food being "healthier", the research is definitive on that score as well: organic foods are no healthier than food from non-organic farms. Meanwhile, the widespread use of "natural" fertilizers on organic farms can lead to the contamination of groundwater supplies with nitrates and, in exceptional cases, animal wastes and E. coli. While factory farms have their own fertilizer and waste issues, they tend to be much more tightly regulated and have the financial wherewithal to invest in the most efficient treatment systems. Not to mention that, in sufficient quantities and of sufficient quality, their waste outputs can actually have some value on the open market.

As for the suggestion that local food would be more affordable than commercially bought food, that can be demonstrated to be false on its face. The primary driver of food prices is input costs, and small, inefficient farms have higher costs per bushel for virtually every foodstuff known to mankind. For proof I suggest you go to your local community market and compare the cost of the market vegetables against those at your local grocery store. Alternatively, look at the charts in Desrochers and Shimizu or go look on the shelves of your local "Whole Foods" outlet.

As I describe above, locavores, 100-mile dieters, modern Arcadians and Degrowthers all continue to suggest that local is better for you and better for the environment. The problem is that the research on the topic says exactly the opposite. Local food may make you feel better about yourself, but it uses more energy and fertilizer per bushel to produce and deliver to your table; it is no healthier than the alternatives; it is less efficient, necessitating more land per bushel; and every acre of nature carved out for a small, inefficient hobby farm is one less acre where nature can be allowed to flourish. For the authors of the Manifesto to suggest that localized food production be a goal runs exactly contrary to their own idea that agriculture be ecologically-based. Modern agricultural practices are the only reason the earth can feed 7+ billion souls while still leaving any room for nature to do its thing with minimal interference from humans.


A Chemist looks at the Leap Manifesto and finds it wanting

This morning as I was enjoying a well-earned coffee break a fascinating announcement lit up my Twitter feed. It was about “The Leap Manifesto”. By the breathless tweets I expected a highly-researched document full of insight and new ideas, maybe like An Ecomodernist Manifesto that I blogged about earlier this year. To my disappointment I found a minimalist web page almost completely free of useful references or critical details.

Looking deeper, I went to the "sign the manifesto" section where I found "The 15 Demands" which apparently form the meat of this Manifesto. These demands range from the somewhat reasonable to the ridiculous to the sublime and would take numerous blog posts to address individually. Happily for me, I have been writing this blog for almost a year and the Manifesto addresses a number of topics I have previously covered in detail. That being said, no one is going to sit and read 5,000+ words on this topic, so tonight I will stick to my area of blogging expertise and address Demands #2, #3 and #6.

Demand #2 says the following:

The latest research shows we could get 100% of our electricity from renewable resources within two decades; by 2050 we could have a 100% clean economy. We demand that this shift begin now.

This demand is the only one of the lot that actually has any references associated with it since it is discussed on the cover page of the web site. The statement references two documents:

Sustainable Canada Dialogues. (2015). Acting on climate change: Solutions from Canadian scholars. Montreal, QC: McGill University

Jacobson, M., et al. Providing all global energy with wind, water, and solar power, Part I: Technologies, energy resources, quantities and areas of infrastructure, and materials. Energy Policy 39:3 (2011).

Regular readers of this blog will know well how I feel about these two documents. The first is a feel-good document written in a policy-oriented style that fails to impress. The major problem with the document is that it appears to have been written by urbanites completely unaware of the scale of our transportation issues in Canada. I will come back to that point later (under Demand #6). Instead I will hit the bigger target: the Jacobson paper.

I have already written a couple of very detailed blog posts on the Jacobson paper. The two posts are nominally about a follow-up paper but both primarily detail shortcomings in the Jacobson 2011 paper. The first, Deconstructing the 100% Fossil Fuel Free Wind, Water and Sunlight USA paper – Part I Why no nuclear power?, addresses serious shortcomings in the Jacobson model with respect to nuclear power. The second, Deconstructing the 100% Fossil Fuel Free Wind, Water and Sunlight USA paper – Part II What about those pesky rare earth metals?, points out that renewable energy technologies depend heavily on rare earth metals. As I point out in another blog post, On renewables and compromises Part II Rare earths in renewable technologies (and a follow-up blog post at the Huffington Post which I will discuss later), we simply do not have the supply of rare earth metals necessary to build the facilities suggested in Demand #2. It is lovely to demand that the government do something, but before you make a demand you might try to determine whether accomplishing it is even possible.

Arguably the first half of Demand #2 (100% renewable electricity within two decades) may conceivably be possible, with a Herculean effort, but the part about achieving a 100% clean economy by 2050 (i.e. 100% fossil fuel-free energy status) is simply a pipe dream. I did an intellectual exercise detailing what it would take to achieve a fossil fuel-free British Columbia; the short version is here: Dispelling Some Myths About British Columbia's Energy Picture and the more detailed version is here: Starting a Dialogue – Can we really get to a "fossil fuel-free BC"? The take-home message from those pieces: in order to achieve a "fossil fuel-free B.C." we would need to somehow replace, through alternative sources, the almost 60 per cent of our energy needs currently being met with fossil fuels. Given that BC, which is incredibly rich in hydro, cannot reasonably achieve fossil fuel-free status in the timeline presented, the idea that Saskatchewan or Ontario could achieve similar results without a heavy investment in nuclear power is simply inconceivable.

This brings us to Demand #3:

No new infrastructure projects that lock us into increased extraction decades into the future. The new iron law of energy development must be: if you wouldn’t want it in your backyard, then it doesn’t belong in anyone’s backyard.

Demand #3 is a typical NIMBY/BANANA demand and reflects a common misconception about energy amongst the non-technically inclined. I address the problem in detail in another blog post, On Renewables and compromises, Intermission: Energy Density and Power Density, which points out that while our modern society is very power-hungry and uses a lot of energy, most renewable energy sources have very low energy and power densities. Energy density is the amount of energy stored in a unit of mass or volume. The thing that makes fossil fuels so attractive to our society is that they represent a very dense energy source. The reason fossil fuels are so energy dense is that Mother Nature has done the all-important job of converting the power of the sun into a biological form, and geology has since compressed that material into an even denser form. Large energy projects cannot, by their nature, reasonably be put in every person's backyard. If we are going to survive in a renewable energy future we will need a lot of energy from hydro and geothermal sources, and you simply can't put a commercial-scale geothermal or hydro facility in anyone's backyard.

To put it into perspective, solar, the highest-density renewable, has a theoretical power density of up to 200 W/m2, but the best solar collection systems seldom do better than 20 W/m2 (in desert solar photovoltaic farms). The further north (or south) you go, the lower the theoretical maximum and thus the lower the output of the resulting systems. A truly exceptional visualization of this is presented by David MacKay at http://withouthotair.blogspot.co.uk/2013/06/david-mackays-map-of-world-update.html. As for the remaining renewables, the best biofuels can achieve about 2 W/m2 while wind can achieve a maximum of about 3 W/m2. As Dr. Wilson points out, since Germany and the United Kingdom consume energy at a rate of approximately 1 W/m2, in order to supply either country with power using wind alone they would need to cover roughly half of their total land mass with wind turbines, which is not a realistic option in a country with cities, farms and forests. Even then, the country would be powerless during any extended calm, whether in the dead of winter or on a wind-free evening.
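
A quick calculation shows where the "half the land mass" figure comes from. The demand figure (roughly 1 W/m2) and the power densities are the ones quoted above; everything else is just division, so treat this as an illustration of scale rather than an engineering estimate.

```python
# Land-area arithmetic behind the power-density comparison quoted above.
demand_w_per_m2 = 1.0  # approximate energy demand of Germany or the UK per unit of land area

# Power densities quoted above (W per m2 of land occupied)
sources = {
    "wind (2-3 W/m2 range, lower bound)": 2.0,
    "best biofuels": 2.0,
    "desert solar PV farm (not achievable at northern latitudes)": 20.0,
}

for name, density in sources.items():
    fraction = demand_w_per_m2 / density
    print(f"Land fraction needed to meet ~1 W/m2 of demand with {name}: {fraction:.0%}")
```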

As for these energy systems, as I mentioned above, they cannot function without rare earth metals, and as I point out in my blog post Our Demand For Renewable Energy Comes With Canada's Dirty Little Secret, rare earth metal facilities are neither small nor clean, and they certainly do not fit under the "new iron law". The activists who prepared these demands appear to be unaware of where the wood, metal, concrete and aluminum needed to create their infrastructure actually come from. None of these can be scaled down to something you would build in your backyard.

NIMBY only works if you are rich enough to be able to import your raw materials from somewhere else. While I agree that most of the initial signers of the Manifesto might be that rich, the rest of us aren’t and so we will continue to need to hew wood and draw water.

I must say of all the demands the one I find most amusing is Demand #6:

We want high-speed rail powered by just renewables and affordable public transit to unite every community in this country – in place of more cars, pipelines and exploding trains that endanger and divide us.

I cannot imagine greater proof that this list was written by a bunch of urbanites than the suggestion that we connect the country (and all cities) by high-speed rail powered by renewables. As I wrote in my blog post Dispelling Some Myths About British Columbia's Energy Picture:

With improved transit and smart planning we should be able to reduce our energy needs for transportation; but the vast majority of British Columbia cannot be served by mass transit. There is simply not enough money available to give every driver from Creston to Fort Saint John and from Invermere to Prince Rupert an alternative to driving. That means that for most of British Columbia, we will still need personal vehicles.

Moreover, all the transit in the world will not address the need for panel vans and light trucks. Contractors, suppliers and salespeople cannot rely on the transit system. Try to imagine a plumber attempting to transport a new sink or toilet and all her supplies/tools to a job site on a bus?

Finally, no amount of transit will reduce the need for the transport trucks that bring the groceries to market and supply the boutiques of Vancouver. The last time I looked it was pretty much impossible to move a pallet of milk or apples on SkyTrain.

Given our current technological state we are nowhere near a position where British Columbia can achieve 100 percent fossil fuel-free status. Any plan that ignores that fact is simply magical thinking.

I think that last line pretty much summarizes my opinion of the 15 Demands and The Leap Manifesto. They ignore the laws of physics and show a profound misunderstanding of energy science. As such they represent nothing more than the magical thinking of a group of activists who have never actually had to hammer out how a system like the one they "demand" would be sourced, built and paid for. The authors of the Manifesto are well-meaning but appear to lack the real-world experience to understand that Canada is a HUGE country and that building a trans-continental railway was an incredible achievement. The thought of connecting every community in Canada by rail (powered by renewables, no less) doesn't even warrant the description "pie in the sky"; it is simply delusional.


On Wi-Fi, Electromagnetic Hypersensitivity and the Nocebo Effect

One of my fears when I wrote my previous post about Wi-Fi was that I was opening a Pandora’s Box on the whole field of electromagnetic fields and health. As I expected, shortly after I posted that blog a number of people tweeted to me explaining how wrong I was about Wi-Fi, with many describing stories of Electromagnetic Hypersensitivity (EHS). Well as my dad used to say: in for a penny, in for a pound. I may as well cover that topic as well. In this post, therefore, I will look into the topic of EHS and in doing so will re-visit the concept of the Nocebo effect.

The World Health Organization defines EHS as:

a variety of non-specific symptoms, which afflicted individuals attribute to exposure to EMF [electromagnetic fields]. The symptoms most commonly experienced include dermatological symptoms (redness, tingling, and burning sensations) as well as neurasthenic and vegetative symptoms (fatigue, tiredness, concentration difficulties, dizziness, nausea, heart palpitation, and digestive disturbances). The collection of symptoms is not part of any recognized syndrome.

Before I can go into a discussion of EHS, however, I will need to introduce a couple of topics I have not yet covered in my blog: the double-blind study and the concept of a systematic review or meta-analysis.

In my previous post, Risk Assessment Epilogue: Have a bad case of Anecdotes? Better call an Epidemiologist, I describe the field of epidemiology, which the World Health Organization defines as the study of the distribution and determinants of health-related states or events (including disease), and the application of this study to the control of diseases and other health problems. In the field of epidemiology the most reputable testing is carried out through clinical trials. A clinical trial is a prospective study in which humans are exposed to "something" at the discretion of the investigator and followed for an outcome. The biggest problem with clinical trials is that they are conducted on humans and by humans. This is a problem because humans are not machines; we are a very social species who give off any number of non-verbal cues every time we interact. This matters in epidemiology because in order to confirm that an outcome of a study is due to the "something" in the study, we have to ensure that those very things that make us human do not influence the outcome. As a consequence, in the field of epidemiology randomized double-blind placebo-controlled (RDBPC) studies are considered the "gold standard" of studies.

In an RDBPC study both the subjects participating in the study and the researchers carrying out the study are unaware of when the experimental medication or procedure has been given. In drug tests this means splitting the participants into groups where half of the participants get the active ingredient (or medicine) and the other half are given a placebo (historically a sugar pill made to look like the medicine being tested), and ensuring that the treating physicians are not aware of which subjects got the real pill and which got the sugar pill. In the testing of EHS this means that neither the scientist running the experiment nor the subject of the test actually knows when the subject is being exposed to an EM field. As I will describe later, a lot of testing has been done on EHS using either double-blind or single-blind (the person being tested does not know) methodologies, and the results have been entirely consistent.
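
To make the "no better than chance" criterion concrete, here is a minimal sketch of how a single blinded provocation session might be scored. The trial counts are invented purely for illustration; the point is that a subject has to beat the binomial guessing distribution, not merely report symptoms.

```python
# Minimal sketch: scoring a blinded provocation trial with an exact binomial test.
# The trial counts below are invented for illustration only.
from math import comb

def p_at_least(correct: int, trials: int, p_chance: float = 0.5) -> float:
    """Probability of getting at least `correct` answers right out of `trials` by pure guessing."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(correct, trials + 1))

trials = 20    # blinded exposures: field ON or OFF, unknown to both subject and experimenter
correct = 13   # times the subject correctly reported the field's state (hypothetical result)

p_value = p_at_least(correct, trials)
print(f"{correct}/{trials} correct; chance of doing at least this well by guessing: {p_value:.3f}")
# Only a very small probability (e.g. < 0.05), replicated across sessions, would suggest
# genuine detection rather than guessing.
```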

As I have mentioned previously on this blog, the statistics we use in science are very sensitive to population size (the number of subjects tested). The more subjects tested, the more likely you are to be able to identify a small signal or weak effect in a large population. The problem with clinical trials is that each individual study is limited by its budget, its geography and the number of subjects it can test. In a field like EHS there are hundreds of people all over the globe who claim to be particularly sensitive to EM fields. They can't all be tested at the same time or in the same study, so instead the literature is full of small studies of a handful of individuals. In order to take advantage of the strength of population statistics, scientists have developed the tools of the meta-analysis. A meta-analysis or, alternatively, a review article represents an attempt by one or more authors to summarize the current state of the research on a particular topic. In a meta-analysis the authors will often combine the findings from independent studies in order to enlarge the sample size in the hope of identifying an effect that might have been missed in the individual studies included in the analysis.
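
For readers unfamiliar with how a meta-analysis actually "combines" studies, the sketch below shows the simplest version: inverse-variance (fixed-effect) pooling of effect estimates. The three study results are invented purely to illustrate the mechanics and are not taken from the EHS literature.

```python
# Minimal sketch of fixed-effect (inverse-variance) pooling, the core of many meta-analyses.
# The study results below are invented for illustration only.
from math import sqrt

# (effect estimate, standard error) for three hypothetical small studies
studies = [(0.10, 0.20), (-0.05, 0.15), (0.02, 0.10)]

weights = [1.0 / se**2 for _, se in studies]  # more precise studies get more weight
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = sqrt(1.0 / sum(weights))

print(f"Pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% interval half-width)")
# The pooled standard error is smaller than that of any single study, which is how a
# meta-analysis can reveal a weak effect (or its absence) that small studies would miss.
```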

This long introduction is intended to save me a lot of time because, like the study of RF in humans, there is a broad literature on EHS, and numerous reviews and meta-analyses have been carried out. This is fortunate for me because it means someone else has done all the work. So let's see what the literature says.

In 2005, Rubin, Munshi and Wessely conducted a Systematic Review of Provocation Studies on Electromagnetic Hypersensitivity. Their conclusion:

The symptoms described by “electromagnetic hypersensitivity” sufferers can be severe and are sometimes disabling. However, it has proved difficult to show under blind conditions that exposure to EMF can trigger these symptoms. This suggests that “electromagnetic hypersensitivity” is unrelated to the presence of EMF, although more research into this phenomenon is required.

Also in 2005, Seitz et al. prepared a paper titled: Electromagnetic hypersensitivity (EHS) and subjective health complaints associated with electromagnetic fields of mobile phone communication—a literature review published between 2000 and 2004. Their conclusion:

based on the limited studies available, there is no valid evidence for an association between impaired well-being and exposure to mobile phone radiation presently. However, the limited quantity and quality of research in this area do not allow to exclude long-term health effects definitely.

In 2007, Oftedal et al. conducted an RDBPC study on mobile phones titled: Mobile phone headache: a double blind, sham-controlled provocation study. The results of that study:

The study gave no evidence that RF fields from mobile phones may cause head pain or discomfort or influence physiological variables. The most likely reason for the symptoms is a nocebo effect.

In 2008, Roosli conducted a systematic review on radiofrequency electromagnetic field exposure and non-specific symptoms of ill health. His conclusion:

This review showed that the large majority of individuals who claims to be able to detect low level RF-EMF are not able to do so under double-blind conditions. If such individuals exist, they represent a small minority and have not been identified yet. The available observational studies do not allow differentiating between biophysical from EMF and nocebo effects.

Between 2005 and 2010 there was a lot of hype on the topic of EHS and, as a consequence, a lot more research was carried out. So in 2010 Rubin, Nieto-Hernandez and Wessely carried out an updated systematic review of provocation studies on Idiopathic Environmental Intolerance Attributed to Electromagnetic Fields (formerly "Electromagnetic Hypersensitivity"). Their conclusion:

No robust evidence could be found to support this theory. However, the studies included in the review did support the role of the nocebo effect in triggering acute symptoms in IEI-EMF sufferers. Despite the conviction of IEI-EMF sufferers that their symptoms are triggered by exposure to electromagnetic fields, repeated experiments have been unable to replicate this phenomenon under controlled conditions.

Most recently, in 2012, Kwon et al. published another paper titled: EHS subjects do not perceive RF EMF emitted from smart phones better than non-EHS subjects. Their conclusion was in line with all the rest:

In conclusion, there was no indication that EHS subjects perceive RF-EMFs better than non-EHS subjects.

As you can see, the academic literature is essentially unanimous. In every case where a supposedly EHS-sensitive individual was tested under a double-blind procedure the result has been the same: the supposedly sensitive individual was unable to perceive an EM field at a rate higher than would be expected purely by chance. Don't even try to ask me about Dr. Havas and her study in the European Journal of Oncology. As described quite clearly at Skeptic North, that was not a blind study and was clearly a case of someone not reading the warning pamphlet that came with her heart rate monitor.

You will notice above that most of the reviews attribute the symptoms of EHS to the "nocebo effect". I have written about the nocebo effect before on this blog, but to summarize: the nocebo effect is the opposite of the placebo effect. While the placebo effect has the ability to make you feel better in the absence of any active ingredients, the nocebo effect has the ability to make you feel poorly in the absence of any active stimuli. As described in this review paper, the nocebo effect is not as well studied as the placebo effect, but it has been demonstrated to be real.

It is important to recognize a couple of things about the nocebo effect. First and foremost, people who "feel bad" or claim to be "sick" via the nocebo effect are neither lying nor faking; rather they are doing one of two things. They are either attributing real symptoms from other causes to the "nocebo", or they are experiencing phantom symptoms based on their minds playing tricks on them. There are any number of celebrated cases where people have been shown that their "illnesses" were all in their minds. By far the most entertaining one is described in this article from Daily Tech. In that case a community complained of EHS symptoms even though the radio towers supposedly causing the symptoms had been turned off during the time the community members claimed they were being made ill by them.

So to summarize: as I described in my previous blog post, untold thousands of studies have been conducted on Wi-Fi and the results are clear, RF is not a serious human health risk. Rather, it is almost a perfect example of a de minimis risk (which I discuss in another blog post), a risk that, while it may exist, is too small to be of societal concern. EHS, meanwhile, is a real problem, but not one related to the presence or absence of RF fields. Instead it is related to the fear of Wi-Fi that has been spread by individuals who ignore the mountains of peer-reviewed research, meta-analyses and systematic reviews demonstrating that Wi-Fi is not a risk to human health. EHS has, quite literally, become the textbook example of the nocebo effect. When a whole community can claim to be made sick by a transmission tower that has been turned off, you have a classic case of people quite literally scaring themselves and their children sick.


On Wi-Fi in Schools and the Precautionary Principle

I knew this day was coming. I wasn’t sure when, but I knew that at some point as a promoter of evidence-based decision-making I would have to take on the topic of Wi-Fi in schools at this blog. Well the new school year is here and the topic has started to bubble up to the surface in the local press and I have been asked to comment on it. Right off the top I want readers to know that this blog post will not go into detail about the research. I will provide links to lots of resources but want to look at this topic from a policy perspective, with a special emphasis on how the Precautionary Principle is misused in activist arguments.

Let's start with a common misconception: Wi-Fi is not a new technology. Rather, Wi-Fi is a new twist on an old technology: transmitting information via radio frequency (RF). Humans have been broadcasting radio and microwave transmissions across the planet for over a century.

As for health studies, according to the World Health Organization, over the past 30 years approximately 25,000 articles have been published on the biological effects and medical applications of non-ionizing radiation. RF is just another form of non-ionizing radiation.

If you are looking for detailed discussions of the science behind RF and cancer then there are a lot of good resources out there. The U.S. National Cancer Institute has a very good Questions and Answers page, Health Canada has a Frequently Asked Questions page and the BC Centre for Disease Control has a radio frequency toolkit that everyone reading on the topic should look at.

As for the peer-reviewed science, the United Kingdom Advisory Group on Non-Ionising Radiation (AGNIR) put out a hefty report in 2012 that is worth a read if you have time to digest 300+ pages of detailed discussion and references on the topic. A very readable investigation of Wi-Fi activism in Canada was conducted by Bad Science Watch, and further Canadian resources include Skeptic North's take on the issue. As for Wi-Fi in schools and public places, Skeptic Blog has a good summary and Health Canada has a good video on the topic.

Anti-Wi-Fi activists will point out that the International Agency for Research on Cancer (IARC) has investigated radio frequency electromagnetic fields as possibly carcinogenic to humans. The IARC Monograph on the Evaluation of Carcinogenic Risks to Humans provides a comprehensive examination of the topic, and the IARC classified radio frequency electromagnetic fields as a Group 2B possible carcinogen. The critical thing to understand is that Group 2B agents are, by their very definition, not known to be carcinogens.

Group 2B is a category used when a causal association looks like it might be possible, but when other factors cannot be ruled out with reasonable confidence. Group 2B is, thus, a placeholder for compounds that haven’t been shown to cause cancer but are of further interest for study. Some of these compounds, like acetaldehyde and benz[a]anthracene, will likely be determined to cause cancer but others like coffee, pickled vegetables and talc-based body powder, are much less likely to do so. My opinion, based on a mountain of peer-reviewed research, is that radio frequency electromagnetic fields will be in the latter group and not the former.

I briefly mentioned Wi-Fi on Twitter yesterday and immediately an activist brought out their big gun: The BioInitiative Report. This report is very official-sounding but it has been debunked and discounted by every scientific body that has looked at it, from Australia to the European Union. The EMF & Health website has a whole section dedicated to it. I mentioned this fact and got directed to a single study that indicates the possibility that RF can cause a particular type of cancer. That study didn’t really bother me either.

Scientific research uses the 95% confidence level (p<0.05) as its gold standard. What this means is that if there were no real effect and you ran a study 100 times, about five of those tries would give you a false positive (saying that a compound causes an effect when it really does not). Given the approximately 25,000 articles published on the topic, it would be statistically improbable not to see at least a few apparently harmful outcomes reported purely by chance.
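
The expected number of chance positives is easy to estimate. The sketch below treats the literature as if it were 25,000 independent tests of a true null hypothesis at the 0.05 level, which is a deliberate oversimplification (studies examine different endpoints and are not independent), but it shows why a scattering of positive results is exactly what you would expect even if RF were perfectly safe.

```python
# Rough expectation for chance "positive" findings if RF truly had no effect and every
# study were an independent test at the 0.05 level (a deliberate simplification).
import math

alpha = 0.05
n_studies = 25_000

expected_false_positives = alpha * n_studies
log10_p_no_positives = n_studies * math.log10(1 - alpha)

print(f"Expected false positives from chance alone: ~{expected_false_positives:,.0f}")
print(f"Probability that not a single study comes up positive: about 10^{log10_p_no_positives:.0f}")
```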

What is important is to look at the number of positive studies compared to the number of negative ones. Moreover, a careful examination of the handful of positive studies shows that almost every one involved a particularly rare type of cancer and a minimal effect. This is the ideal scenario for a false positive: population statistics break down when sample sizes are small, and in most of these studies the number of affected individuals is tiny with respect to the general population. As I pointed out above, there is a copious body of literature that says that Wi-Fi is safe.

To back up that copious body of literature, consider that from an epidemiological perspective we have been engaged in a massive human trial for the last 70+ years. From radar operators during the Second World War to children with cell phones in 2015, billions of humans world-wide have been exposed to varying intensities and doses of microwave and radio wave radiation. Just look at your cell phone right now: almost anywhere you go you are in range of a Wi-Fi router, and you are almost always in range of a radio signal. The fact is, we have not seen spikes in any of those rare cancers purportedly related, via these questionable studies, to exposure to RF.

Certainly we hear about a single police officer here or a woman there who got a suspicious cancer, but as I point out in my post Risk Assessment Epilogue: Have a bad case of Anecdotes? Better call an Epidemiologist, that is why we have epidemiologists. Epidemiologists look at all the anecdotes and see if there is some underlying trend. The results are categorical: RF is not a serious human health risk. Rather, it is almost a perfect example of a de minimis risk (which I discuss in another blog post, Risk Assessment Methodologies Part 1: Understanding de minimis risk). As I point out in that blog post, a de minimis risk is a risk that, while it may exist, is too small to be of societal concern.

So how does an activist try to sell you on making a societal change when dealing with a de minimis risk? The answer is almost always: the Precautionary Principle. Activists use the Precautionary Principle because it sounds good and most people don't actually know what it says. In a previous post, Risk Assessment Methodologies Part 2: Understanding "Acceptable" Risk, I introduced readers to the real definition of the Precautionary Principle. The actual Precautionary Principle is defined as Principle 15 of the Rio Declaration, which states:

“In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.”

The Precautionary Principle does not say that all risk is bad risk and that all risks must be avoided because that is not a realistic way to run a society. Getting out of bed in the morning poses a non-zero risk of slipping and breaking your neck. Using the Wi-Fi activist view of the Precautionary Principle we would have to ban all beds to avoid that potentially fatal risk. Instead of requiring “no risk” in the real world we ask: what is an “acceptable risk”?

As I have written previously, since we live every day in a world full of risk, we need to figure out how to deal with and understand that risk. That is why we (Canadians) hire epidemiologists and other scientists at places like Health Canada: to help us understand and differentiate between acceptable and unacceptable risks. The reason smart meters, Wi-Fi and cell phones are of little concern to Health Canada is that these technologies are not some mysterious things for which the Precautionary Principle may apply. Transmitting information on the microwave spectrum is a mature technology that we have used for almost 100 years. The Precautionary Principle does not apply because we have almost 25,000 scientific studies that each individually say that RF exposure is probably safe; but when you repeat that “probably” 25,000 times, what it really means is that you are safe.

The Precautionary Principle also considers the consequences of actions. In the case of Wi-Fi in schools you have a real and obvious benefit. Students with access to Wi-Fi have access to more teaching resources and a better educational experience than students without. If you want to eliminate Wi-Fi in the classroom you either accept that you are going to give your kids a lower standard of education or you have to hardwire every classroom in every school. The hardwiring of schools is often thrown out as if it were a viable alternative, but the costs of hardwiring every classroom in British Columbia are simply unaffordable. Moreover, it is not as if the schools are RF-free zones to begin with. An informative report on CTV demonstrated quite categorically that schools without Wi-Fi set-ups can have higher levels of Wi-Fi signal running through them than schools with Wi-Fi. Heck, any parent who has attended a Christmas concert in their kid’s gym knows that they can typically find at least a dozen Wi-Fi networks on their cell phone.

So to respond to the obvious activist rebuttals to this piece: Wi-Fi is not some brand new technology that we must fear; it is simply a new spin on an old technology. Wi-Fi is not a carcinogen; rather, after 25,000 studies on the RF spectrum and its effect on humans, and after the exposure of billions of humans to RF, the best scientists can say is that it might be responsible for a handful of rare cancers. If RF is a cancer risk, it is one below the de minimis threshold: one that may exist but is too small to be of societal concern. As for applying the Precautionary Principle, that is just a red herring. The Precautionary Principle does not say that you accept no risk, only that you factor risks and rewards into your calculations, and in this case the risks are negligible and the rewards significant. Applying the Precautionary Principle, it is an easy call to keep Wi-Fi in the classrooms.

Posted in Wi-Fi | 4 Comments

Lessons learned from the BC Wind Storm

Like many of my readers I spent much of the weekend dealing with the consequences of the big windstorm that hit the west coast on the weekend. For those of you not aware, what was supposed to be a pretty typical rainstorm ended up being a massive windstorm which, at its peak, knocked out power to over 500,000 people in Metro Vancouver. Given our population (about 2.5 million) that means roughly one in five people was affected by the power outage. Our household was among those affected and, unfortunately, we are served by one of the last of the big substations to be re-energized, so many individual houses in our area still don’t have power 48 hours after the end of the storm. This post is a bit of a post-mortem or, as we say in my field, a “post-incident analysis” where I will share some of the things I learned from this storm to help prepare our household for “the Big One” (the predicted earthquake that we all know is coming on the west coast). It also ends with some unsolicited advice for our friends at BC Hydro about their communications strategy for the storm.

In my work the way we improve our safety performance is through post-incident safety assessments. Every negative safety incident is accompanied by a post-incident analysis. This involves looking at the incident and asking the question: “what is the worst thing that could have happened?” We then do a root-cause analysis in order to establish and address the root cause of the incident. Ideally, in doing this, similar incidents can be avoided in the future. In addition to incidents we also track and investigate every “near miss”. A near miss is an event that could have resulted in an incident but did not. Usually the difference between a near miss and an incident is simply good luck (i.e. a trip that caused a bump but didn’t break a bone). In our industry a near miss is seen as a “free learning”: an opportunity to catch a problem before someone gets hurt.

Without belittling the cost this windstorm had in human hardship and financial losses, it pretty much represents a near miss when compared to the Big One. In this case only about one in five of us was hit, in daytime, on a weekend, in summer, and only power was affected. We have been warned that in the event of the Big One we have to be in a position to take care of ourselves without outside help for a minimum of 72 hours. That means assuming that the entire Lower Mainland is affected; that power, water and natural gas supplies will be offline; and that we can expect no help of any kind (except from our neighbours) for at least three days.

Looking at how our family emergency plan held up during the power outage, it was clear that while we did a lot of things right, we have some serious holes to address. We have a reasonable store of water and dried goods and, while we would be uncomfortable, we would not starve nor lack for water for three to five days. Now for the biggest holes.

My plan for cooking during an emergency involves using the bar-b-que. However, it being the end of summer, instead of having a lot of fuel on hand I had been working on the bottom half of my one propane tank. For emergency preparedness I should have taken my father-in-law’s advice to have at least one full tank in reserve at all times. Since the roads to Langley City (which had power) were open (as was Costco) I was able to rectify that problem on Sunday morning. Had I waited much longer, though, I would have been out of luck. When I showed up at the Husky (the only place in our area that had power and sold propane) I discovered that they had about an hour’s supply of propane left in their tank (at the rate they were selling it) and they had already sold out of both gas and diesel.

Speaking of gasoline, we have a four-tiered plan for shelter depending on what happens to our house in a big earthquake. Tier two is to shelter in the minivan. Once again I failed to take my father-in-law’s advice. He never lets his fuel tank get to less than half-full so he has a reserve in case of an emergency. I, meanwhile, had let my tank get to almost empty as I was waiting for a chance to visit my in-laws in Aldergrove (where I can buy cheaper gas). Fortunately, I was able to get $20 of gas from the local Chevron (apparently the only gas station in Walnut Grove with an emergency back-up generator). I was later able to fill up in Aldergrove but, as I mentioned, the gas station with propane in Langley City had long run out of gas and diesel, so in the case of the Big One finding an operating gas station may not be an option for me.

As for paying for the gasoline, I only got $20 of gas from the local Chevron because I didn’t have much cash on hand. My wife never carries cash (she likes debit) and it is only by habit that I make sure to have a few actual bills in my wallet. The problem was that during the power outage Interac was down (no power), so everyone was accepting cash only. When the BC preparedness people say to keep a couple hundred dollars in small bills on hand, it is for this very reason. Our area has power back but the phone lines are still down, so it looks like we could be back to a cash-only society for a few days still.

Part of my plan for time without power is having a supply of ice available. But we learned another lesson and this one I want to share with the folks at BC Hydro. We were lucky that we were able to get enough ice to save many of our perishables from the fridge (by putting them in coolers) and our deep freeze was okay, but due to the communication policy of BC Hydro we lost a lot of food we did not need to lose. As most locals know, BC Hydro (our government-owned utility) had an almost complete collapse of its public communication system during the storm. Their web site crashed, their phone lines were jammed and it took quite a while for even their Twitter feed to come to life. Once up, the Twitter feed (and good old-fashioned AM radio) was what we used to make our plans, and this is where my issue with BC Hydro comes into play.

My one big complaint about BC Hydro (whose employees have worked incredibly hard this weekend to restore power) is that they did not come close to giving us any reliable information for most of the time we were without power. We lost power just after noon on Saturday. By late Saturday BC Hydro had their Twitter feed running and was reassuring us that we would get power back by midnight. Using that as our guide we decided to leave the fridge freezer and fridge unopened, counting on insulation and retained cold to keep everything okay until the power came back that night. Waking up Sunday morning we were shocked that the power was not yet on. We went back online and were informed on Sunday morning that power in our area would be back by noon. Come noon we still had no power and had not had power for 24 hours, which I was taught is the cut-off for trusting your fridge without power. If BC Hydro had been honest with us at the outset we could have triaged our fridge/freezer and saved a lot of good food by moving the more expensive meat etc. from the fridge freezer to the deep freeze and by being more aggressive with our use of ice and coolers. The problem with triaging is that it means opening the freezer and losing a lot of the less expensive stuff, which we thought we might be able to save by simply being prudent about fridge use (and which we would have saved had the power only been off for 6 to 12 hours).

I know, I know: BC Hydro was not in a position to give exact estimates, but surely they must have known pretty early into Saturday afternoon that this was not a problem they were going to be able to address in six hours. All they would have had to do is simply announce: “this is too big to handle right away; expect to be without power for at least 24 hours” and we could have acted accordingly. Instead we trusted BC Hydro’s unrelentingly optimistic estimates and lost many hundreds of dollars worth of groceries, much of which could have been saved with better information.

As an outsider I have no sense of how BC Hydro comes up with their repair estimates, but I am informed that until the local power sub-station has been energized they are not going to know all the problems down-line from that sub-station. I only learned at around 4 pm Sunday that the sub-station that powers our entire area had been de-energized and was not going to be re-energized until 5 pm Sunday. In our case it was a further 10 hours after the sub-station was energized before we got power. I know the organization wanted to put a good spin on the situation, but they must have known that if a sub-station is de-energized, then telling me at 6 pm that power from that de-energized sub-station will be back by midnight is simply not realistic.

This situation reminds me of an episode of Star Trek: The Next Generation (yes, I am a nerd) in which Commander Scott (Scotty) guest-starred. In the episode Scotty pointed out to Geordi (the Chief Engineer of the Enterprise) that he always over-estimated how long it would take to fix a problem. His logic was that if something went wrong he still had time to meet his original estimate, but if everything went right he would be done early and would be praised as a “miracle-worker”. By giving us overly optimistic predictions BC Hydro did the exact opposite; instead of reassuring us they made us resent them. In effect BC Hydro wasn’t helping us and in doing so was actually hurting their brand. Throughout the weekend they repeatedly gave cheery predictions which they, time after time, failed to meet. Each time they did so it got us more and more angry. Had they given us a less optimistic (more realistic?) estimate right up front (and they must have known pretty early that it was going to be more than 24 hours) we might have grumbled, but then we could have planned accordingly.

I have a client who gave me some words of advice early in my career that I remember to this day. She said:

never lie to me or try to say something is clean when it is not. This is my job and I know you didn’t make the mess and that you are just the guy figuring out how to clean it up. I won’t hold bad news against you as long as you tell me the truth no matter how hard it may be for me to hear. With the truth I can make plans, allocate budgets and make promises to my bosses. I will, however, definitely hold it against you if you mislead me or don’t tell me the truth because then I can’t make good decisions, I will mis-spend my budget and I am likely to make promises to my bosses that I cannot meet. If I do that because you misled me then I wouldn’t want to be in your shoes.

If BC Hydro learns only one thing from this event it should be that people will be disappointed with bad news but will be furious if they think (even wrongly) that they have been knowingly misled.

Posted in Uncategorized | 2 Comments

On the misleading use of toxicology in discussions about fracking chemicals?

Last night I was forwarded a tweet that absolutely demanded a response. It was from that friend of science Robert F. Kennedy Jr. and said “New Study: CA frak chemicals are linked to cancer, mutations and hormone disruption”. The study in question provides a case study for science communicators and journalists alike on how activist scientists can misconstrue and miscommunicate scientific risks in order to achieve political aims. The report is titled California’s Fracking Fluids: The Chemical Recipe and was prepared by the Environmental Working Group (EWG). I invite readers who are unwilling to wade through the entire turgid text to browse the Executive Summary at the EWG web site. Having done so, I welcome you to come back and join me as I look into the claims in a much more nuanced manner and consider the actual information provided in context.

While the others are away reading that stupefying Executive Summary I will remind the rest of you that I have spent a reasonable amount of time blogging about the investigation and communication of risk. Unfortunately, due to the nature of my blogging platform (read: free and simple, since I am a chemist and not a web designer) it is not terribly easy to figure out what I have written in the past, so I will summarize here. I prepared a series of posts to help me out in situations like this. You see, talking about how the authors have messed up the science is very hard if my audience doesn’t understand the language of the field. The posts started with “Risk Assessment Methodologies Part 1: Understanding de minimis risk” which explained how the science of risk assessment establishes whether a compound is “toxic” and explained the importance of understanding dose/response relationships. It also explained the concept of a de minimis risk: a risk that is negligible and too small to be of societal concern. The series continued with “Risk Assessment Methodologies Part 2: Understanding “Acceptable” Risk” which, as the title suggests, explained how to determine whether a risk is “acceptable”. I then went on to explain how a risk assessment is actually carried out in “Risk Assessment Methodologies Part 3: the Risk Assessment Process”. I finished off the series by pointing out the danger of relying on anecdotes in a post titled: Risk Assessment Epilogue: Have a bad case of Anecdotes? Better call an Epidemiologist. Now anyone who has read all those previous posts can probably figure out what I am going to write next, but that would be less fun for me so I will continue here.

Let’s get something straight right away. Fracking fluids are generally not good for human consumption. The reason for this is simple: fracking fluids are industrial mixtures intended to be used under controlled conditions. No one wakes up in the morning and asks themselves: “what shall I have for breakfast this morning: a nice chia smoothie or a glass of fracking fluid?” That being said, fracking fluid can sometimes be released into the environment and so it is useful to understand its toxicity. Based on this (and political considerations) California sought to identify what was in the fracking mixtures through its law SB 4. Well, the EWG report takes this disclosure and ramps up the hype (quite impressively) in order to frighten readers and sway public opinion.

The EWG report looks at the entire list of 197 chemicals that have been reported in California fracking fluids and highlights those that appear the most often. The Appendices present the entire list, and some of the compounds on the list are pretty clearly not stuff you want to encounter in high concentrations: compounds like #76, toluene. A couple of points should be made clear here. Fracking fluid is intended to be forcefully blown into geological formations rich in petroleum hydrocarbons. If the target geology is rich in hydrocarbons, then using hydrocarbons shouldn’t be a big deal, right? It would be like complaining when someone used a hose to blast water into the ocean. The ocean is not likely to get much wetter. Moreover, toluene is reported as being used in only 3.6% of the fluid mixes and is likely used in very low concentrations, kind of like it is used in things we use every day, like glues. Thus, while it might represent a risk, it would appear to pose an exceedingly low one. For the purposes of this blog post we will ignore these trace compounds and stick to the top 40 fracking chemicals which the EWG report highlights in Table 2. Table 2 of the EWG report presents:

The top 40 fracking chemicals used in California, Dec. 2013-Feb. 2015, compared to national data from U.S. EPA’s March 2015 report, “Analysis of Hydraulic Fracturing Fluid Data from the FracFocus Chemical Disclosure Registry 1.0.”

This is the list that the authors intend to use to frighten readers and maybe if you were a non-chemist you might be frightened by the list. As a chemist I look at the list and want to yawn. It is filled with a bunch of innocuous compounds, some pretty common household-type chemicals and a handful of petroleum hydrocarbons. The actual red meat of the report is Appendix 2 which details “The environmental and human health effects of fracking chemicals used in California (2014 to February 2015)”. Clearly this is the part of the report used to cover any number of sins from the earlier text by comparing the various compounds to various regulations and health effects.

In the report they place a particular emphasis on the California Proposition 65 List of “chemicals known as causes of cancer or reproductive harm”. To demonstrate why I have so little respect for the report let’s compare the top 40 list from Table 2 with the California Proposition 65 list. Of the 40 compounds, 5 appear on the California Proposition 65 list. You would imagine that this frightening five must be chemicals so problematic as to make you want to attend a protest and lie down in front of a fracking rig, so let’s look at these terrifying carcinogens:

  • #1 crystalline silica quartz (SiO2)
  • #2 diatomaceous earth, calcined
  • #7 crystalline silica, cristobalite
  • #23 methanol
  • #27 hydrated magnesium silicate (talc)

As a chemist looking at this list, I can’t help but wonder what the EWG authors are actually worried about. Admittedly, each one of these compounds has a scary technical name (scary enough that someone may want to call the Food Babe) and each has been linked with cancers (often only tangentially), but certainly not in the manner and form encountered when used in a fracking fluid. This is one of the points I have tried to pound into my readers in my earlier posts: a compound’s toxicity depends on the mechanism of exposure and the dose. In the case of each of the compounds above, the mechanism by which it can cause harm is simply not one you would encounter in a fracking fluid, whatever the dose. Let’s look at the chemicals a bit more closely to help understand.

Chemicals #1 and #7 are two types of silicon dioxide, which you might know better as “sand”. Chemical #1 is the type of sand preferred for use in children’s sand boxes. Chemical #7 is a slightly more exotic version of sand that has been exposed to high temperatures and crystallized in a fancier form, but it is still just sand. To be clear, very finely ground sand, when inhaled, can raise your risk of cancer, so the authors of the EWG report aren’t technically lying, but they are massively exaggerating the risks. As hundreds of generations of desert Bedouin will tell you, it is possible to live a lifetime exposed to sand (including sand blown in the air) without your entire population being felled by cancer. Were sand really a worrisome cancer risk we might be less likely to use it in children’s sandboxes. To make the comparison even more misleading, with respect to fracking the sand is encountered as part of a liquid solution/suspension. Having spent many happy days at the beach I can attest to the fact that wet sand is not easily inhaled. Anyone who looks at an inhaled carcinogen risk and compares it to fracking solution exposure either has no understanding of toxicology or is trying to mislead you.

Chemical #2 is diatomaceous earth, calcined. That is, the crushed shells of diatoms that have been heat-treated to make them more crystalline. Diatomaceous earth, like sand, is a possible carcinogen when inhaled as a fine dust. It is used industrially as an organic pesticide (it works against slugs because slugs don’t like to crawl across what is effectively broken glass), in water treatment systems and, interestingly enough, as a toxicologically safe source of gritty material in toothpaste. So once again the EWG scientists are trying to convince the public that a chemical that we, as consumers, feel is safe enough to stick in our mouths on a daily basis may be a danger when in a fracking solution?

Chemical #23 is methanol. Yes, methanol, that ubiquitous chemical used in so many products as to make it hard to know where to start. If you drink methanol you will indeed get very ill but, since we aren’t about to drink fracking fluid (I prefer the chia smoothies myself), it too can be discounted from the list.

Finally we come to #27 hydrated magnesium silicate (talc). This is a classic case of “the Food Babe effect” where a chemical sounds terrifying using its scientific name but less so by its common name. You probably have heard hydrated magnesium silicate’s common name: talcum powder, used by generations of mothers and fathers to keep their newborn babies’ bottoms dry. Yes that is what the EWG scientists are trying to make you fear. An innocuous, familiar compound that most every family in America has bought and used. However, in this report it represents one of the California Proposition 65 cancer risks?

I think you get my point by now. The authors of the EWG report have taken a list of chemicals which, when used in a very different manner, have been linked (or associated) with cancer. They have then tried to use that link/association to make these chemicals sound frightening when discussed in the context of fracking. I really find it hard to take a report like this seriously. The work is so clumsily done as to almost not be worth discussing, except that I have already seen this report cited a half-dozen times on my Twitter feed. As discussed, the people tweeting it aren’t exactly known for their scientific smarts. The first person was that tangentially famous son of a famous father who refuses to accept the toxicological research that demonstrates that Thimerosal is not a cause of autism. The second was one of my favourite science-blind progressives. I could go on, but the problem is that these people have a lot of followers, most of whom have no serious science background either and are likely to continue the stream of re-tweets. As communicators of science we have to keep shooting down this bad science as soon as it appears, because to do otherwise would leave the public policy morass dominated by credulous discussions of reports like this one.

Posted in Chemistry and Toxicology | 9 Comments

On Linda McQuaig’s comments, Carbon budgets, and keeping oil sands “in the ground”

NDP candidate Linda McQuaig has been taking a lot of flak in the last couple of days for a comment on CBC’s Power and Politics where she suggested that “a lot of the oil sands oil may have to stay in the ground.” To justify her statement she has directed critics to some recent scientific literature as well as the outputs of the Intergovernmental Panel on Climate Change (IPCC), a well-respected international organization. The IPCC has stated that in order to meet its mandate (more on that below) the full extent of the oil sands cannot be exploited, and Ms. McQuaig has correctly cited the IPCC. The problem is that a number of conclusions derived from the IPCC reports move away from the scientific and into the socio-economic and the political. In doing so they ignore many of the complexities of the topic. In particular, a number of activists have been claiming that 85% of the oil sands must remain in the ground as unburnable. As I will demonstrate below, this claim is not a scientific fact, but rather a political one. The rest of this post will hopefully provide a bit of clarity on this topic and maybe help eliminate some of the painful nattering we have heard so far.

Let’s start with a bit of background. As I discussed in a previous post (on RCP8.5 and “the Business as Usual” Scenario – Different beasts not to be confused), the IPCC derived a number of potential scenarios called Representative Concentration Pathways (RCPs) to help model a future earth based on how we, as a planet, develop in the next several decades. As part of the modelling exercise the IPCC Working Group III on Mitigation of Climate Change (WGIII) took the step of trying to establish what level of cumulative carbon dioxide emissions would be likely to result in exceeding the global 2°C goal in the 21st century. For those of you familiar with my writings, you will remember that I wrote a previous post describing the 2°C goal titled What is so Special about 2 degrees C in the Climate Change Debate? where I pointed out that the IPCC’s goal of trying to keep climate change below 2°C is a relatively arbitrary one with little actual scientific foundation. That being said, 2°C is the number that the IPCC was tasked to consider and they are nothing if not consistent in that respect.

To return to the point, the IPCC ran the RCPs and came up with a big table (Table SPM.1) that provided a range of carbon dioxide concentrations and the associated likelihoods of exceeding the 2°C goal. Now to be clear, the RCPs represent complex models that include conditions of population, levels of development, rates of deforestation etc. in addition to carbon dioxide emission characteristics. As a consequence, predicted carbon dioxide concentrations have pretty wide ranges and in some RCPs a lower carbon dioxide range will result in a larger temperature change due to features completely unrelated to carbon dioxide concentration (typically having to do with deforestation etc.). Out of this massive jumble of numbers the IPCC managed to come up with a nice round number: 1,000 gigatonnes of carbon dioxide (Gt CO2). If you are like me, then you are inherently suspicious of any complex modelling exercise that generates a nice big round number, but that is a story for another day.

The 1000 Gt CO2 value represents the amount of carbon dioxide the IPCC scientists felt we could afford to put into the atmosphere while still retaining a high likelihood (over 75%) of not overshooting the 2°C goal. This 1000 Gt CO2 target thus represents our planetary “carbon budget”. Since the IPCC report came out a number of authors have worked further on the topic and one of the more reasonable estimates of our “remaining emissions quotas” (also called our carbon diet) was presented in the journal Nature Geoscience in a paper titled “Persistent growth of CO2 emissions and implications for reaching climate targets”. The problem with the IPCC carbon budget is that, as suggested, it is a bit of a fudge. The 1000 Gt CO2 carbon budget appears to be very much a conservative estimate and the 2°C goal might be too conservative as well. More problematically, from a science perspective the IPCC models are well known to run hot; that is, they use climate sensitivity estimates that are relatively high. For details on the topic of climate sensitivity see my post: Why I think Climate Sensitivity is Essential for Developing Effective Climate Change Policy. Suffice it to say that the IPCC was limited in the literature it could use (it could only use literature published before a fixed date) and since the most recent IPCC report came out the consensus estimate for climate sensitivity has decreased markedly. For those of you unwilling to read my earlier piece, essentially this means that it may take more carbon dioxide than originally envisioned to generate a commensurate temperature increase. What this means is that theoretically our carbon budget could be closer to 1500 Gt CO2 than 1000 Gt CO2. To be clear, I’m not saying we don’t need to be put on a carbon diet, I just mean that we may be able to ingest more calories (emit more carbon) on the new diet than was previously believed under the old diet.
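For the arithmetically inclined, here is a minimal sketch of why a lower sensitivity estimate stretches the budget. This is my own back-of-the-envelope illustration, not a calculation from the IPCC or the papers cited here: it assumes, to first order, that peak warming scales linearly with cumulative emissions, and the “warming per 1000 Gt CO2” values are placeholders chosen only to show the direction of the effect.

    # Illustrative only: how a carbon budget scales with an assumed
    # "warming per 1000 Gt CO2" factor (the factor values are placeholders).
    TARGET_WARMING_C = 2.0  # the 2 degree C goal discussed above

    def carbon_budget_gt(warming_per_1000_gt: float) -> float:
        """Cumulative CO2 budget (Gt) for the target, assuming warming is
        proportional to cumulative emissions."""
        return 1000.0 * TARGET_WARMING_C / warming_per_1000_gt

    print(round(carbon_budget_gt(2.0)))   # higher sensitivity: the familiar ~1000 Gt
    print(round(carbon_budget_gt(1.3)))   # lower sensitivity: ~1538 Gt, much closer to 1500

The exact numbers depend entirely on the sensitivity estimate you plug in, which is precisely my point: the size of the carbon diet rests on contested estimates, not settled arithmetic.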

Now, since it is generally accepted in the climate field that we need to stay within our carbon budget (whether the higher or lower figure), the next question we need answered is: what does that mean in a global sense with respect to our fossil fuel reserves? Well, coincident with the work of the IPCC, the International Energy Agency (IEA) produced a World Energy Outlook in 2012. The IEA World Energy Outlook provides a best scientific projection of energy trends through to 2035 and includes a detailed assessment of global energy reserves. Based on the numbers in the IEA report, the current global fossil fuel reserves, if all burned, would represent approximately 2860 Gt CO2. So if we are to meet the IPCC budget of 1000 Gt CO2, approximately 1860 Gt CO2 of our fossil fuel reserves will have to stay in the ground unburned. At this point I could stop, but this is where the debate really gets interesting.
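As a quick sanity check on that arithmetic, using only the IEA and IPCC figures quoted above:

    # Back-of-the-envelope check of the unburnable-reserves arithmetic.
    reserves_gt = 2860   # CO2 embodied in current global fossil fuel reserves (IEA 2012 figure)
    budget_gt = 1000     # the IPCC carbon budget discussed above

    unburnable_gt = reserves_gt - budget_gt
    unburnable_fraction = unburnable_gt / reserves_gt

    print(f"Unburnable: {unburnable_gt} Gt CO2 ({unburnable_fraction:.0%} of reserves)")
    # -> Unburnable: 1860 Gt CO2 (65% of reserves)

Note that roughly 65% of global reserves being unburnable is a very different claim from 85% of the oil sands specifically being unburnable; how the unburnable share gets allocated between fuels and countries is where the politics, discussed next, comes in.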

Having established that some large percentage of our fossil fuel reserves must remain unburnable to meet our (admittedly conservative) IPCC carbon budget of 1000 Gt CO2, the unaddressed question is: how do we allocate those 1000 Gt CO2? This is where the politics comes into play. Ever since the IPCC report came out, different groups of activists and politicians have argued about topics such as whether we should stop using coal (due to its high CO2 emissions relative to its energy density) and move to natural gas, and whether developed nations should be allocated less of the remaining carbon budget because they have already contributed so much to existing emission levels. Most of the battles in the upcoming conference in Paris will center on these topics. In preparation for Paris a number of academics have gotten into the mix. The first serious attempt to describe the carbon diet necessary to stay within our carbon budget came out in 2009 (before the most recent IPCC report) in the journal Nature, in a paper titled Greenhouse-gas emission targets for limiting global warming to 2 °C authored by Meinshausen et al. Not so coincidentally (want to guess who was on the IPCC author list?), this article, written several years before the most recent IPCC report was released, also came up with a proposed carbon budget of 1000 Gt CO2. The paper pointed out that the vast majority of the world’s proven fossil fuel reserves consist of coal, which most policy folks accept must be quickly moved out of our primary energy mix. Meinshausen et al. concluded that less than half of the proven, economically recoverable oil, gas and coal reserves could be burned while staying within a carbon budget of 1000 Gt CO2.

Since 2009 more research papers have been published, and the paper currently all the rage in the environmental community is actually a “Letter” (essentially a short paper) published this January, also in Nature, titled “The geographical distribution of fossil fuels unused when limiting global warming to 2°C” and authored by McGlade and Ekins. The McGlade and Ekins paper presents a detailed carbon diet to keep global warming under 2°C. The authors, two professors from the University College London Institute for Sustainable Resources, have gone several steps further than Meinshausen et al. by looking for an “economically optimal” distribution of the carbon budget. In doing so they discount unconventional fuels (like oil sands) and show a strong preference for existing producers. Under their model 85% of the oil sands become unburnable while only 60% of Middle Eastern oil becomes unburnable. So to be entirely clear here for any reporters reading this article: the IPCC does not say that 85% of our oil sands have to be left in the ground to meet the 2°C goal. Two mid-level academics from University College London are making that demand. So when an activist says that the 85% number is from the IPCC, the correct response is (in keeping with the origin of the two authors): “bollocks”.

As a Canadian, I look at this paper with a good deal of skepticism. As discussed earlier, I believe that our carbon budget to avoid 2°C is likely closer to 1500 Gt CO2 than 1000 Gt CO2. In this I am not alone, as I get that number from the Nature Geoscience paper (a peer-reviewed piece by non-conflicted scientists). I also don’t necessarily believe that 2°C is a reasonable number, because the current literature doesn’t appear to support the 2°C goal (please read my older post on the topic). But even if I did accept the 1000 Gt CO2 budget I would not accept the carbon diet presented by McGlade and Ekins. Instead, I would look to identify how much of the budget is available to Canada and ensure that 100% of that budget was made up using Canadian oil. I know that the concept of “Ethical Oil” has become something of a hot potato because of issues surrounding the origins of the term, but I do believe in the concept behind the term. I want my personal gasoline purchases to go towards subsidizing Medicare and not subsidizing a despot or paying for a tyrant to build another palace. I want to know that the oil used in my car was not generated using slave labour in a country without a free press and where environmental regulations are noted by their absence rather than their application. I want my oil produced by well-paid Canadians, in a country with a demonstrably free press, strong government oversight and a strong tradition of NGOs to watch over the regulator’s shoulder.

So to answer the critical questions about this entire piece:

1) Was Linda McQuaig correct that some of our oil sands will need to be left in the ground to meet our climate change commitments? Yes, if we are to meet our goal of limiting our greenhouse gas emissions then there are some coal and oil sand resources that will have to stay in the ground.

2) Is that number 85% of the resource as suggested by some activists and trumpeted on television and radio? Absolutely not. The amount left in the ground should be based on the economics of the resource and a desire to optimize Canadian content and minimize our use of non-ethical fossil fuel sources.

3) Do I know what percentage of our oil sands will have to stay in the ground to meet our climate change commitments? No I do not. I also don’t know how much of our oil sands resource can be extracted in an environmentally sound manner. What I do know is that Canadian oil helps support Canadian jobs, Canadian institutions and provides the funds to pay for our education and medical systems while subsidizing transfer payments. As such, in my mind, it is preferable to oil from virtually every other source world-wide for Canadian use.

Posted in Canadian Politics, Climate Change, Oil Sands | 6 Comments

More on "Professionalism" in the Climate Change debate

I am back from a brief blogging hiatus as I took some time off-line to have a holiday with my family. During my holiday I was mostly out of electronic contact except for a brief period last week when I had Wi-Fi and got into another one of the typical climate change arguments. The discussion included one of my most ardent foils, a gentleman well-known to this blog: the blogger known as andthentheresphysics (ATTP). He is reportedly Dr. Ken Rice, a Reader in Astronomy and Public Relations Director at the Institute for Astronomy, within the School of Physics & Astronomy at the University of Edinburgh (UK). I was responding to another well-known blogger, “Sou from Bundanga”, the proprietor of a blog called HotWhopper. Sou, reportedly Miriam O’Brien, a management consultant in Australia, was berating another blogger about using “stolen” emails from the now-famous “hack” at Skeptical Science.

The basis for the disagreement was my taking on the role of Devil’s Advocate in the discussion. I, personally, think that it is rather rich when a group that was willing to broadcast material taken under false pretences from the Heartland Institute by Dr. Gleick (an affair now known colloquially as “Fakegate”) complains bitterly when the shoe is on the other foot. Dr. Gleick has publicly admitted to having misrepresented himself and used illegal methods (one might even use the word “stolen” if one were so inclined) to access and then distribute the Heartland documents. The distribution included the addition of at least one critical document that Heartland claims was composed entirely of deliberate misinformation…source unknown but presumed. Sou’s outrage was a classic case of the pot calling the kettle black, and my discussion with Sou demonstrated that she was entirely satisfied with doing just that.

ATTP interposed himself into the conversation to complain, once again, about a previous post of mine, The implication of “Professionalism” in Climate Change discussions, which he feels impugned his professionalism and misrepresented him. This has been an ongoing discussion about which we continue to disagree. As anyone who has read the cited blog post can see, I quoted ATTP directly; I did not edit any of his comments and I included our entire exchange. I’m not sure how quoting someone correctly, completely and in context represents a misrepresentation, but hey, that’s just me. Admittedly, I also included a discussion of my personal interpretation of his comments with specific relationship to the concept of professionalism in the field of climate change. My concern at the time was the absence of any significant repercussions for the authors of Climategate and Dr. Gleick following their respective revelations. That an academic could do what Dr. Gleick admitted to having done with no professional repercussions continues to amaze me. ATTP’s insistence that university ethics oversight programs are sufficient to address ethical shortcomings of senior academics is laughable in light of the Fakegate and Climategate affairs. ATTP may have intended to get one point across, and to a certain population (his fellow academics) he may have; but to me his words were explicit and clear. While I added my own commentary, ATTP’s own words spoke for themselves quite eloquently.

This discussion reminded me of similar conversations I have had over the last few years. As many of you know, my wife is a teacher and I spend a lot of time socializing with teachers. For those of you not familiar with British Columbia politics, the British Columbia Teachers’ Federation (the BCTF, our version of a teachers’ union) has spent the last decade engaged in an all-out war with our current right-of-center government. As part of the battle, the BCTF has been arguing quite strenuously about how hard its members work relative to other employees in British Columbia. Now I have seen the classes my wife has taught and agree that teachers can have ridiculously bad classes with way too many students and way too few resources. I also recognize that teaching can be an incredibly tough job and most teachers go above and beyond to help their students succeed. Where I disagree with the BCTF is when they complain about their long hours of work. In a moment of bad judgment, I actually calculated and presented the numbers, which demonstrated that an average teacher’s work year is substantially shorter than that of virtually all other public employees and well below that of private sector employees. This placed me firmly in the familial doghouse, so let’s pretend I didn’t bring that topic up.

What I have come to recognize from my discussions with teachers is that most teachers are not really in a position to have a reasonable conversation on this topic. This is not because they are irrational; it is just that most teachers have never left the education system and so have little understanding of how the rest of us live and work. Most teachers went to elementary school, then high school, then university, then teacher’s college and finally into a teaching position. Certainly many of them worked after-school and summer jobs, but most have never actually worked five days a week, 48 weeks a year, for year after year after year. During the 10 months of the year they work, they do work very hard, but for their entire lives they have been given a spring break, at least two weeks off at Christmas and almost two months off as a summer break. As such, they have no basis for understanding how the rest of us live with only 10 statutory holidays (in Canada) and two to three weeks of paid holidays a year.

You may be wondering what this has to do with the climate change debate, but in talking to ATTP I had a moment of clarity where I began to understand the division between the academic activists in the climate change world and the rest of us. What I had missed in my “professionals” post (and its sequel post On Appeals to Authority, “Climategate” and the Wizard of Oz: a Personal Journey from “Trust Me” to “Show Me”) was an additional feature of the people from the “trust me” crowd. They are mostly senior and career academics and as such have lived their entire lives in an entirely different world than the rest of us. The old expression about working in “ivory towers” didn’t just fall from the sky; it is based on a basic recognition that academics’ work lives differ markedly from those of the rest of us.

During their early school days most current academics were likely among the smartest kids in their classes and got exceptional grades. To have succeeded in the academic sphere they had to have studied hard in university, typically placing in the top percentiles of their classes. That allowed them to get into grad school, where their current status is likely the result of once again finishing in the top percentiles of their grad school classes. They have thus lived their entire lives as the crème de la crème of their academic disciplines and peer groups. Even the lowliest “second-rate academic” did better in school than 90% of their peers and ranks amongst some of the most academically gifted members of our society. Finally, to succeed they had to put in a lot of individual work, often with little requirement for teamwork but rather a lot of time working with individual mentors and individual supervisors. Derived from all this is the fact that they are used to thinking of themselves as the smartest person in the room and have come to believe that this means their opinions (even on topics outside their area of expertise) mean more than those of the rest of us.

Given the nature of the academic enterprise they have also flourished in an inherently hierarchical system where they now sit at the top of that hierarchy. Having spent their entire careers in this hierarchy they seem to find it hard to hear their opinions challenged by people who do not fit within that hierarchy. When they speak of the “show me crowd” as being full of “engineers” and “other professionals”, it is not necessarily meant as an insult but rather shows a lack of understanding about how the other half of the professional world works. Every one of those engineers has experienced the “university experience”. These engineers have had an opportunity to see, albeit very briefly, how the other side lives but now live and work in a world where they are required to work in teams and accept criticism from their peers and from their clients on an almost daily basis. Most importantly they have worked in an environment where their actions are overseen by ethics boards and their success is dependent on the stressors of the private sector.

As for the hardships of being an academic, I have to laugh when I read them complain about facing the dilemma of “publish or perish”. Every private sector worker I know lives under the same cloud. Ask a plumber what happens if they can’t consistently find work. Show me a thriving consultant who consistently fails to achieve results for their clients or who fails to meet the standards of their profession. The big difference between private sector workers and tenured academics is that we don’t have tenure, so if we screw up we can’t fall back on a comfortable teaching position. In the private sector, if you were caught fiddling with the process, like the scientists fiddling with peer review in the Climategate files, you would be summarily fired. Were he subject to a professional ethics board, the actions admitted to by Dr. Gleick in his Huffington Post blog would have resulted in him being censured and possibly stripped of his professional designation and unable to work in his chosen field.

As I pointed out in my post Type I and Type II Error Avoidance and its Possible Role in the Climate Change Debate and further discussed in my post Does the Climate Change Debate Come Down to Trust Me versus Show Me? – Further thoughts on Error Avoidance, these academics live in a world where the emphasis is on Type I error avoidance and where review by a limited number of peers is the norm. What I missed in those posts is that an academic’s career trajectory will necessarily have limited their interactions with professionals in other fields. It is in light of this fact that we should reconsider how the two “sides” in the Show me versus Trust me debate should interact. I admit to having failed to sufficiently recognize and acknowledge how fundamental the differences are between the two groups…and I spent over a decade working for team “Trust me”. Now that I have worked on the other side of the fence I accept the inherent value of the “Show me” approach. In my previous comments I may have failed to recognize how this fundamental difference in views colours our daily actions and reactions. I must learn not to feel insulted when an academic talks down to me. It is not intended as an insult but rather is simply a “feature” of their upbringing. That being said, I will not kowtow to academics like ATTP, nor show them the deference they feel is their due. I can, however, acknowledge the difference and, like the ambassadors to the Chinese Imperial Court of the 1800s, I will endeavour to work out mechanisms that avoid unnecessarily damaging their pride while insisting that they recognize legitimate differences in our worldviews.

Posted in Climate Change Politics | 9 Comments