Risk Assessment Methodologies Part 2: Understanding “Acceptable Risk”

In my last post I wrote about the basic concepts of toxicology, including dose-response relationships and the concept of a de minimis risk. Today I am going to expand on that concept by discussing what represents an "acceptable risk" in the risk assessment world and how that information is miscommunicated on a daily basis.

If you are like me, you often watch the news and get frustrated by the NIMBYs on the screen describing why this development should be abandoned and why that technology should not be trusted. Be it WiFi, oil sands or radio towers, nine times out of ten, one of the terms they will pull out of their hat is "the Precautionary Principle". In their minds the Precautionary Principle is some magic bullet that will silence all opposition. My response is to slightly misquote Inigo Montoya from "The Princess Bride": "You keep using that term. I don't think you know what it means". You see, the Precautionary Principle does not actually say what most of these activists seem to think it does. The actual Precautionary Principle was defined as Principle 15 of the Rio Declaration, which states:

“In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.”

The Precautionary Principle does not say that all risk is bad risk and that all risks must be avoided, because that is not a realistic way to run a society. Getting out of bed in the morning poses a non-zero risk of slipping and breaking your neck. Using the activist view of the Precautionary Principle we would have to ban all beds to avoid that potentially fatal risk. Instead of requiring "no risk", in the real world we ask: what is considered an "acceptable risk"? Not surprisingly, this question has been posed by many over the years and a great deal of research has been done on the topic. After much debate and discussion, a consensus has been built as to what would actually be considered an "acceptable risk". The generally accepted guidance is provided by the USEPA and shared by Health Canada and the BC Ministry of Environment:

For known or suspected carcinogens, acceptable exposure levels are generally concentration levels that represent lifetime cancer risk to an individual of between 10⁻⁴ (1 in 10,000) and 10⁻⁶ (1 in 1,000,000) using information on the relationship between the dose and response. The 10⁻⁶ risk level shall be used as the point of departure for determining remediation goals (ref)

So for a carcinogen, the acceptable level is a concentration that, if administered to one million people continuously (24 hours per day) over 70 years (an assumed lifetime), would be expected to result in one additional case of cancer beyond those that would occur anyway in an unexposed population. Frankly, I can't imagine getting much more conservative than that. The activists say that the health authorities are being insufficiently cautious, but by that definition they are in fact being incredibly cautious. For non-carcinogens we can calculate a "Hazard Quotient", which the BC Government defines as

Hazard quotients are calculated for substances that do not cause cancer. A hazard quotient is the dose of a substance received from a site (the estimated daily intake) divided by the safe dose for the substance (the reference dose). (ref)

A reference dose (RfD) is a concentration or dose of the compound in question to which a receptor may be exposed without adverse health effects (i.e. a dose that is considered "safe" or "acceptable"). Using the terminology from our last post, it is derived from the no observed adverse effect level (NOAEL). Under the current British Columbia regulatory regime a hazard quotient of 1 or lower is considered to pose an acceptable risk. That is, if a compound has a reference dose of 10, then any estimated daily intake below 10 gives a hazard quotient below 1 and would be considered acceptable, as it would not be expected to expose a receptor to unacceptable risk. So once again we have a concentration that is demonstrated not to cause harm.
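
To make both of these screening measures concrete, here is a minimal sketch of the arithmetic; every intake, slope-factor and reference-dose value below is invented purely for illustration, not a value for any real compound.

```python
# Minimal sketch of the two screening calculations described above.
# Every number here is invented for illustration only.

# 1. Carcinogen: incremental lifetime cancer risk = intake x slope factor.
chronic_daily_intake = 2.0e-6   # mg per kg body weight per day (hypothetical)
cancer_slope_factor = 5.0e-2    # risk per (mg/kg/day) (hypothetical)
incremental_risk = chronic_daily_intake * cancer_slope_factor
print(f"Incremental lifetime cancer risk: {incremental_risk:.1e}")
print("Within the 1e-6 to 1e-4 acceptable range:", incremental_risk <= 1e-4)

# 2. Non-carcinogen: hazard quotient = estimated daily intake / reference dose.
estimated_daily_intake = 0.004  # mg/kg/day received from the site (hypothetical)
reference_dose = 0.01           # mg/kg/day considered safe (hypothetical)
hazard_quotient = estimated_daily_intake / reference_dose
print(f"Hazard quotient: {hazard_quotient:.2f}")
print("Acceptable (HQ <= 1):", hazard_quotient <= 1)
```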

Let's consider what all these numbers mean, and to do so let's use one of my favourite examples: WiFi units in our schools. Not too long ago there was an intense lobby to try to remove WiFi units from selected schools. The activists played all the familiar cards. First they argued that WiFi, as a "new" technology, had not been well studied and thus posed an unknown (and therefore potentially excessive) risk. This, of course, is baseless. WiFi is based on microwave technology, and microwave technology is neither special nor unique. It has been widely used for decades and the research in the field is deep and varied. The next approach was a "big number versus small number" attack: we all know that microwaves are dangerous at high power (after all, we use them to cook all the time), so they must be dangerous at lower doses too. As we've discussed previously, it doesn't work that way. Microwaves are non-ionizing radiation that can be absorbed by water molecules, producing thermal effects. They do not have the energy to remove electrons from atoms or molecules and thus cannot damage DNA. Microwaves work by heating. At high power they can indeed damage human tissue, which is why it is a bad idea to climb an active microwave tower. But a WiFi unit lacks the power to heat a thimbleful of water, let alone a human body. Putting on a hat provides orders of magnitude more thermal energy to your head than a WiFi unit could. The final complaint is that Health Canada's "safe" levels might be too high. But as we just learned, those levels are actually incredibly conservative, and exposures to concentrations much higher than Health Canada allows would still not cause an issue.

This is the playbook: it is used on WiFi, on pipelines, on bitumen, essentially on any topic where doubt can be sown. In my next post, I'll go into how risk assessors actually work to establish whether a risk is something to worry about.

Posted in Risk Assessment Methodologies | 8 Comments

Risk Assessment Methodologies Part 1: Understanding de minimis risk

In my last post I talked about big numbers and how they can cause confusion in the minds of the media and the public. In this post I want to discuss the other side of the coin: extremely small numbers and how they can be misconstrued in risk communication. You have all seen headlines like "Dangerous Levels of Radiation Recorded in Canada as Fukushima Radiation Dangers Continue" and "Oilsands tailings ponds emit pollutants into the air, study confirms". When I see one of these headlines, the question that comes to my mind is: do the reported results represent real risks, or do they represent interesting science that has been badly miscommunicated either by ill-informed reporters or by activists with an agenda? You might ask why I am such a cynic. Well, I know that in my business we have instruments that allow us to identify compounds at extremely low concentrations. As I mentioned in my previous post, our mass spectrometer was able to "see" to the part per trillion (ppt) range. This is the equivalent of a single drop of liquid in a large lake. The problem is, just because I have an instrument that can identify a drop of poison in a lake doesn't necessarily make that poison a health hazard. Just because a compound is considered "toxic" at high concentrations doesn't necessarily mean it poses a risk at much lower concentrations. Unfortunately, in risk communication this understanding does not appear to be widespread. There appears to be a population out there who believe that any concentration of a toxin is too high a concentration.

Looking at the hack writer's handbook, I am reminded that before I can write any post on toxicity I have to quote Paracelsus, who said: "all substances are poisons; there is none which is not a poison. The right dose differentiates a poison from a remedy". Anyone knowledgeable in the field recognizes that this is true in the general case, but we must add some qualifiers. The critical qualifier is that every individual and species has unique characteristics, and as such a dose that may be fatal to one species or member of a community may be relatively benign to another. This can manifest in a number of ways. For instance, zinc is harmful to viruses at lower concentrations (it inhibits rhinovirus replication), so taking a zinc lozenge during a cold will slow down the replication of the cold virus and help your immune system fight the cold. But zinc is also toxic to humans at high concentrations, so regularly taking mega-doses of zinc to avoid getting a cold could instead put you in hospital. Similarly, an alcoholic who has developed a metabolic tolerance for alcohol can ingest a dose that would kill a person without that tolerance. In toxicology this differentiation is addressed by a concept called the LD50 (median lethal dose). An LD50 represents the dose (typically in mg/kg) required to kill 50% of a population of test organisms, be they humans or fruit flies. Because our endpoints in toxicology are not always death, we also have other measures, the most common being the ED50 (median effective dose), which represents a dose expected to produce a defined effect in 50% of test organisms, and the "no observed adverse effect level" (NOAEL), which, as the name suggests, represents a dose that would not be expected to produce observable adverse effects in any test organism.

In toxicology, the goal is to establish the dose-response relationship for a compound of interest. Below some low dose you would expect no effects (the NOAEL). Eventually a threshold is reached where effects are seen. This is referred to as the linear portion of the curve, where increases in concentration are expected to produce commensurate increases in damage. Eventually you reach a plateau, called the maximal response level, where 100% of test organisms are expected to show the maximum response. For our risk communication discussion the two really important measures are the NOAEL (the threshold below which you would expect no effects) and the "de minimis zone". In toxicology the term de minimis is used to refer to a risk that is negligible and too small to be of societal concern (ref). In the de minimis zone we have exposures that could theoretically have some effect, but the effect does not represent a general concern to society. To explain: toxicology is an extremely conservative discipline. The aim is to be safe, and in order to be safe we add layers of conservatism on top of layers of conservatism. Think of it like designing an elevator. In designing an elevator to hold 6 people you want to build a degree of safety (conservatism) into the design. You would not want the elevator to collapse if seven people happened to push their way in, nor would you want it to collapse if six sumo wrestlers decided they wanted a ride. So elevator engineers add an order or two of conservatism into their calculations. While their elevator may be rated for 1,000 kg it might actually be designed to hold 10,000 kg without failing. Similarly with toxicology: since we cannot test compounds on humans, a lot of the testing is done on animals. Once an ED50 is established in an animal species, a layer of conservatism is added when the result is extrapolated to humans. Usually we start with an animal-to-human uncertainty factor of 10, so a dose that had an ED50 of 10 in a mouse would be reduced to 1 for humans. This gives us a safety fudge-factor. Going back to the concept of de minimis: given all the layers of conservatism built into toxicity benchmarks, there will be exposures to chemicals that, even though they are detectable, are so small that they pose no significant risk to the population at large.
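
To make those layers of conservatism concrete, here is a rough sketch of how a "safe" dose might be derived from an animal study; the NOAEL is invented and the uncertainty factors shown are typical defaults rather than values for any particular compound.

```python
# Rough sketch: deriving a human reference dose from an animal NOAEL by
# stacking uncertainty (safety) factors. The NOAEL is invented and the
# factors are typical defaults, not values for any specific compound.

noael_animal = 10.0          # mg/kg/day, hypothetical mouse NOAEL
uf_animal_to_human = 10      # interspecies extrapolation (as in the text)
uf_human_variability = 10    # protects sensitive individuals within the population

reference_dose = noael_animal / (uf_animal_to_human * uf_human_variability)
print(f"Reference dose: {reference_dose} mg/kg/day")  # 0.1 mg/kg/day
```

Regulators can apply still more factors (for data gaps, short study durations and the like), which only pushes the final "safe" number lower.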

As a quick side note, there exists a second class of compounds that do not follow this classic dose-response curve; they are called "non-threshold carcinogens". Non-threshold carcinogens are compounds that have no lower bound of safety (no NOAEL). Benzene is one such compound: every incremental exposure to benzene increases your likelihood of getting cancer by some small but non-zero amount. I will not address them further in this article for fear of going overlong, but I didn't want anyone complaining that I was unaware of their existence.

So let's go back to the headlines presented above. The second headline dealt with a study on a class of compounds known as polycyclic aromatic hydrocarbons (PAHs). PAHs are petroleum hydrocarbon constituents and have been in the news a lot because tests have found them in the air, rivers and lakes of Northern Alberta (where they are typically assumed to be associated with bitumen extraction activities). But PAHs are also generated every time you burn fresh wood or overcook your steak on the barbecue. On our planet the single biggest non-anthropogenic sources of PAHs are uncontrolled fires (forest fires and biomass burning). Some PAHs are human carcinogens at high concentrations, but curiously enough, at lower concentrations we simply shrug them off. It has been posited that over the course of human evolution we, as a species, were naturally exposed to regular, low doses of PAHs from cooking and forest fires. The result is that our bodies have evolved mechanisms to essentially ignore low concentrations of PAHs in our bloodstreams. It is only when concentrations cross a threshold (which is different for each PAH) that a negative response occurs.

In the report above, the tailings pond was responsible for potentially releasing as much as a TONNE!!! of PAHs a year into the atmosphere. Now a tonne sounds pretty terrifying for any "toxic" compound until you put that number into perspective. It has been estimated that in North America forest and prairie fires produce around 19,000 tonnes of PAHs every year (ref). Since the oil sands are found in the boreal forest region, where massive forest fires are a yearly occurrence, this tonne of PAHs suddenly becomes much less significant from a human and ecological health perspective. Similarly, the news was full of frantic headlines when the report "Legacy of a half century of Athabasca oil sands development recorded by lake ecosystems" came out last year. In this study PAHs were found in a number of Alberta lakes. A careful reading of the report, however, demonstrated that concentrations of PAHs in the lakes reported as "oil sands lakes" were entirely comparable to the concentrations in the "control" lakes that were highly isolated from the oil sands. This finding was not particularly unexpected, but that fact was not advertised by the activist community, who trumpeted the "oil sands" lake results. Why wasn't it surprising? Well, recognize that the oil sands represent a massive regional feature. The reason strip mining of oil sands was initiated back in the 1960s and 1970s is that the material literally sits on the ground surface in that area. For millennia, rainfall has been washing this material into the rivers and lakes of the region. Also, being boreal forest, yearly forest fires have been liberally spreading PAHs into the airshed. The reason we are only hearing about PAHs in the air and water of Northern Alberta now is the presence of oil sands extraction (and the associated research funds), and the fact that only recently has anyone been able to build mobile mass spectrometer units that can measure air concentrations around Fort McMurray. In my mind, had anyone been able to find a lake in the region that did not have detectable concentrations of PAHs, now that would have been a noteworthy discovery.

Since this post is getting awfully long, I will stop here. My follow-up post will pick up from this point to discuss how we decide whether a compound is "toxic", which risks we should be able to ignore, and which risks need immediate action.

Posted in Risk Assessment Methodologies | 14 Comments

How Big and Small Numbers Influence Science Communication: Understanding Fuel Spill Volumes

This weekend I got a tweet from a friend who wanted everyone on her Twitter list to be deeply concerned about the remaining oil from the BP oil spill in the Gulf of Mexico (more on that later). In reading the tweet and the cited news report, it struck me that one of the major roadblocks to the comprehension of science by non-scientists is that we, as a species, have a hard time understanding really big and really small numbers. This lack of a feel for big and small numbers often causes us to struggle badly with relative risks. Since this idea relates closely to my previous posts on Fukushima radiation, I thought I would write a bit more on the topic today.

As the parent of small children, I am amazed at how their young and inexperienced minds handle complex concepts like time and numbers. For my youngest daughter, time consists of today, yesterday and a nebulous time unit called "a week". In her mind anything that happened outside her little window of understandable time happened either "last week" or will happen "next week". So her visit to the Halloween store (where she was terrified by a jumping spider display) still only happened "last week". My son, as he was growing, went from that perspective to relating to time via an understanding of sleeps. Since he was a napper we had small sleeps (afternoon naps) and big sleeps (overnight), and upon waking up, an event tomorrow afternoon was two little sleeps and one big sleep away. Over most of human evolution the concept of time was almost that simple. Add in the seasons and that is how our pre-historical ancestors operated. Similarly, mathematics is at best 50,000 years old, and formalized numbering systems didn't come into play until as recently as 5,000 years ago. We, as mammals, thus evolved in a world where there were one, two, three… a fist-full… more than a fist-full, and so on. This has left us poorly equipped to deal with numbers that are really big or really small.

Most everyone I know has a pretty reasonable grip on numbers up to the thousands. Thereafter it starts getting nebulous, and by the millions and billions we end up working by analogy. On a regular basis I hear people describe volume based on an "Olympic-sized swimming pool" (OSP). The OSP, like many analogies, is actually a pretty poor one, because very few people have actually seen a true OSP. We are used to our local community pool (likely a 25 m pool) with shallow and deep ends and a diving area. An OSP, on the other hand, is 50 m long by 25 m wide and only 2 m deep (ref). That gives it a volume of 2,500 cubic metres, or a liquid volume of 2.5 million litres. That is about 660,000 US gallons, and in oil terms it comes out to approximately 15,725 bbl.
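
The conversion is easy to check; here is a quick sketch with rounded conversion factors:

```python
# Volume of an Olympic-sized swimming pool (OSP) and the same volume
# expressed in US gallons and oil barrels. Conversion factors are rounded.

length_m, width_m, depth_m = 50, 25, 2
volume_m3 = length_m * width_m * depth_m          # 2,500 cubic metres
volume_litres = volume_m3 * 1000                  # 2.5 million litres

litres_per_us_gallon = 3.785
us_gallons_per_barrel = 42

volume_gallons = volume_litres / litres_per_us_gallon    # ~660,000 US gal
volume_barrels = volume_gallons / us_gallons_per_barrel  # ~15,700 bbl

print(f"{volume_m3:,} m3 = {volume_litres:,.0f} L "
      f"= {volume_gallons:,.0f} US gal = {volume_barrels:,.0f} bbl")
```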

Similarly, when dealing with very small numbers we often get stumped. We are good at hundredths (pennies) but have a hard time instinctively going much smaller. In my work on contaminated sites we deal in the parts per million and parts per billion range, with our instrumentation able to "see" to the parts per trillion range. But what does that mean? Going back to our OSP model, a part per billion works out to about half a teaspoon of water in an OSP, and a part per trillion to a small fraction of a single drop. My old supervisor, while explaining the precision of his new mass spectrometer, would point out that one part per trillion represented a single grain of sand on the entire length of Willow's Beach (a long, local beach in Victoria).
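
Expressing those trace concentrations as volumes of that same pool shows just how small the fractions are; a quick sketch, assuming a single drop is roughly 0.05 mL:

```python
# What parts-per-million/billion/trillion look like as volumes of an
# Olympic-sized pool (2.5 million litres). A "drop" of ~0.05 mL is assumed.

osp_litres = 2.5e6
drop_ml = 0.05  # assumed volume of a single drop

for name, fraction in [("ppm", 1e-6), ("ppb", 1e-9), ("ppt", 1e-12)]:
    volume_ml = osp_litres * fraction * 1000  # litres -> millilitres
    print(f"1 {name} of an OSP = {volume_ml:g} mL (~{volume_ml / drop_ml:g} drops)")
```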

So how does this all relate to the BP oil spill? Well, according to the tweeted story in Salon, researchers at Florida State University identified some 6 to 10 million gallons of oil buried in the sediment at the bottom of the Gulf, covering a 9,300 square mile area southeast of the Mississippi Delta. From an outside perspective 6 million gallons sounds pretty huge. But to put it into perspective, the Gulf of Mexico has a volume of 2,434,000 cubic kilometres of water (6.43 × 10¹⁷, or 643 quadrillion, gallons) (ref). So the remaining oil these scientists attribute to the BP oil spill represents about 10 one-trillionths of the volume of the Gulf (or 10 grains of sand on Willow's Beach). The second version sounds a lot less menacing than the first, but both represent the same value. To add further complexity, what the story did not mention is that there are more than 600 different seeps (areas where oil oozes from rocks) underlying the Gulf of Mexico. These oil seeps act like underwater springs for oil and release between 560,000 and 1.4 million barrels of oil annually (according to the National Research Council, ref or ref). Going back to our conversions, 1 million barrels is approximately 42 million gallons of oil. So it is not unexpected that the researchers found oil in the sediment of the Gulf, since the natural geology is spewing some 42 million gallons of oil a year into it. Certainly the BP oil spill was a local disaster, but on a regional scale it now represents only one of many sources of petroleum hydrocarbons in the Gulf sediments.
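
Running the figures quoted above through a rough check shows why the answer comes out on the order of ten parts per trillion (conversion factors rounded):

```python
# Rough check: residual oil as a fraction of the Gulf of Mexico's volume,
# using the figures quoted in the post.

gulf_volume_km3 = 2.434e6
gallons_per_km3 = 2.642e11           # 1 km3 = 1e9 m3, ~264.2 US gal per m3
gulf_volume_gal = gulf_volume_km3 * gallons_per_km3   # ~6.4e17 US gal
print(f"Gulf volume: {gulf_volume_gal:.2e} US gallons")

for residual_oil_gal in (6e6, 10e6):   # low and high ends of the estimate
    fraction = residual_oil_gal / gulf_volume_gal
    print(f"{residual_oil_gal:,.0f} gal of oil = "
          f"{fraction * 1e12:.0f} parts per trillion of the Gulf")
```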

In a similar vein, last year there was concern about a "Russian tanker full of fuel" (subsequently re-labelled a "fuel-laden cargo ship" (ref)) that was going to run aground off Haida Gwaii after it lost power during a winter storm. What caused all the confusion for the press and public at the time was the volume of fuel carried by the cargo ship. It was reported to carry about 500 metric tonnes of bunker oil and 50 metric tonnes of diesel (ref). Using densities of 0.90 and 0.83 kg/L, respectively, that represents about 555,000 litres of bunker oil and 60,000 litres of diesel. To reporters this sounded like an awful lot of fuel, and so for the first few days the ship was mistakenly reported as a tanker, even though many older cargo ships carry similar volumes of fuel (older cargo ships are fuel hogs). A failure by the reporters to understand big numbers resulted in them getting confused and subsequently confusing their readers. The confusion continues: every time a ship has an incident it is called "fuel-laden", yet when a car has an accident we never read about "a fuel-laden automobile" colliding at 4th and Columbia. To further put this number into perspective, the BC Government West Coast Spill Response Study (ref) estimates that cargo ships operating along the coast transport approximately 42 billion litres of bunker fuel in their fuel tanks. This is more than the 38 billion litres a year that is shipped to the Puget Sound refineries in US oil tankers. This second number puts the lie to the belief that there is some form of "West Coast tanker ban". That ban is on the Canadian side of the ledger only, and doesn't affect our American cousins.
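
The mass-to-volume conversion is straightforward; a quick sketch using the densities quoted above:

```python
# Converting fuel mass (metric tonnes) to volume (litres) using density.
# Densities are the ones quoted in the post (kg/L).

def tonnes_to_litres(mass_tonnes, density_kg_per_litre):
    """Convert a fuel mass in metric tonnes to a volume in litres."""
    return mass_tonnes * 1000 / density_kg_per_litre

bunker_litres = tonnes_to_litres(500, 0.90)   # ~555,000 L of bunker oil
diesel_litres = tonnes_to_litres(50, 0.83)    # ~60,000 L of diesel

print(f"Bunker oil: {bunker_litres:,.0f} L")
print(f"Diesel:     {diesel_litres:,.0f} L")
```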

From a risk perspective, we have been trained to fear oil tankers even though they are highly regulated and have strict maintenance, piloting and tugboat requirements. Meanwhile, we are essentially oblivious to all those container ships travelling without tugs through our "narrow and dangerous" straits. Even more frightening are all those barges being towed along the coast. Few people ask how coastal BC communities get their fuel supplies. Well, most are supplied by barges towed to their destinations by tugs. According to the spill response study, 48 billion litres a year of fuels are transported by barge in coastal BC. Much of this material is considered "non-persistent" because it consists of refined fuels that do not last as long in the environment once spilled. A lack of persistence does not, however, mean the fuel is risk-free, and it must be weighed against the fact that these barges operate in inshore waters, so spills are more likely to reach land and damage marine and coastal ecosystems. For volume comparisons, the biggest barges can carry 8 to 21 million litres of fuel. Going back to our OSP analogy, that represents between 3 and 8 OSPs of fuel per barge. The scary part is that the study had to use estimates, because unlike oil tankers, the barges do not even have to be fully reported.

Looking back at all these big numbers I have thrown around, a lot of people might become pretty panicky. After all, according to the Dogwood Initiative, we are only one "Exxon Valdez" away from Vancouver becoming a desolate hellscape. Of course that scenario conveniently ignores the fact that technology has advanced tremendously since the time of the Exxon Valdez, and the fact that Vancouver Port regulations would not allow a tanker to traverse below the Lions Gate Bridge without a local pilot and accompanying emergency tugs. I, on the other hand, am reassured by the numbers. Considering how much traffic has been moving, and for how long, it is a testament to our current system that we have not had a major spill to date. I do not deny that we can do more, but I also recognize that these really big-sounding numbers aren't nearly as scary as some would have us believe. Moreover, by re-framing the numbers (improving our analogies) we can do a better job of giving non-scientists a reasonable understanding of the relative risks of moving this material.

So, having taken a brief look at how big numbers can confound and frighten, my next post will look at how very small numbers (and our difficulty making estimates at low concentrations) are also used to scare us. But that is, as they say, the work of another day.

Posted in Oil Sands, Pipelines, Risk Assessment Methodologies | 7 Comments

Carbon Offsets: a Basilica to Bad Policy

Last week's ridiculous display of private jets in Davos, Switzerland brought back to mind a topic I have been meaning to discuss in detail: carbon offsets. For those of you in the back row, a carbon offset is simply a credit, based on a current or proposed activity that reduces or sequesters CO2, that is sold to someone who emits CO2. Essentially, the seller's reduction or sequestration of CO2 makes up for the buyer's emissions, and for that the seller is paid in real dollars.

Back when they were initially proposed (in the late 1980s and early 1990s) the idea of carbon offsets seemed like a very good one. The thinking was that controlling carbon emissions couldn't be done right away, so during the transition we needed a bridge that would allow existing firms to continue to function as they moved toward carbon-free operations. If you ran a company that absolutely couldn't do without emitting carbon, then rather than shuttering your doors and causing immediate economic hardship, buying an offset might make up for the net carbon emitted in the necessary operation of your business. At the time the supporters of carbon offsets battled the supporters of "cap-and-trade" to determine which would be the dominant way to reduce regional, national and international carbon emissions. Since "cap-and-trade" needed full government buy-in and regulation, it was easily bested by the speed and apparent efficiency of carbon offsets.

From their beginnings carbon offsets have had their problems, the biggest being ensuring that the buyer gets what they paid for. In the early days of carbon trading, con men saw the opportunity for a quick buck, and many a company paid to preserve South American or Central American rainforest only to discover that all they had done was enrich a middleman. One of the best known of these scams was the "Vatican forest", which started as a plan to plant forests in Hungary and ended with no forests and the Vatican losing both face and money. The oversight these days is much improved, but as I have written elsewhere, even the best-laid plans (like, say, biofuel plantations) can fail to meet their goals of reducing carbon emissions.

From its well-meaning roots, the idea of carbon offsets has fallen out of favour in the environmental community. The most scathing attack I have read to date was penned by noted author and journalist George Monbiot, who likened it to the purchasing of indulgences during the Middle Ages. For those of you not familiar with the concept: during the Middle Ages the Catholic Church sold an item called an indulgence, which was essentially a "get out of hell free" card. If you had the bad luck to die out of a state of grace (without having given final confession, and thus with unforgiven sins still on your ledger), then your family could pay the church and your sins would be retroactively forgiven. Once your sins were forgiven your soul would, theoretically, be able to ascend into heaven. Soon the selling of forgiveness became quite the thing, and in some precincts you could pre-pay for your sins. So before you went off to murder someone you dropped the right number of pennies in the jar, and if you should happen to be killed in the process you still got a promise of heaven. Needless to say, many saw the selling of indulgences as ethically and morally wrong, the most famous being Martin Luther (of Lutheran fame), whose "Ninety-Five Theses" did a pretty reasonable job of demolishing the moral and intellectual support for the concept.

These days I view carbon offsets with the same level of disdain that Martin Luther viewed the purchasing of indulgences in 1521. Were I to prepare my own 95 environmental theses, carbon offsets would be right up there at the top of my list. The point of the carbon offset was to move us away from our love affair with carbon, and the method of doing so was the pain of payment. Nowadays they are used to avoid having to make hard choices or any personal sacrifices in lifestyle. These indulgences are now simply a sop to the conscience, a rich person's way of saying "Not only am I rich enough to fly on a private jet emitting carbon to my heart's content, I am also rich enough to buy myself some salvation at the other end." It is bad enough when a billionaire businessman flies his personal jet to Davos, Switzerland to lecture the world on profligacy, but to then claim he didn't really emit carbon because he paid some cash is just the icing on the cake. It is no different than the cut-purse in 1520 dropping a penny in the jar so he can go about robbing old women with a clean conscience and the apparent blessing of the church. The point of the exercise is not to emit carbon unnecessarily (or, in the case of the cut-purse, not to steal). The intention was never to emit profligately and then throw the equivalent of a couple of pennies in the jar.

While I think it is clear how I view the use of carbon offsets to excuse profligate lifestyles, there is another level of environmental insult I find even more obnoxious: hypocritical protesters who not only buy indulgences for their carbon sins but then have the gall to protest the safest means of transporting their carbon indulgences to market. In my local region there is a battle going on about pipelines. As everyone knows, pipelines represent the safest way of transporting petroleum hydrocarbons across long distances and over perilous and environmentally sensitive terrain. Well, one of the leading lights of the anti-pipeline campaign in my region has a habit of traipsing around the world by commercial jet. As anyone informed in environmental action knows, commercial jets are some of the worst emitters of carbon per kilometre travelled. In any world where we want to reduce carbon emissions we have to reduce the number of fossil-fuel-powered jetliners cruising the skies. This individual argues that he needs to travel to do his good works, but the trips always seem to include a component of fun and relaxation at the featured destinations. The individual excuses his sins by pointing out that he buys carbon offsets and thus his carbon debt has been paid. Of course this completely ignores the fact that real fossil fuels had to be used to fly those airplanes, and those real fossil fuels had to get to the refinery and then to the airport to allow the airplanes to take off. Being rich enough to fly around the world and then buy carbon offsets doesn't mean you use less fuel in your travels, only that you have a sop to your conscience while doing so. The original goal of carbon offsets was to help reduce demand for carbon-intensive fuels, but as we see these days carbon offsets do nothing to reduce that demand. Arguably, because of their relatively low cost, carbon offsets actually exacerbate the problem by allowing people who should know better to believe they are actually helping the environment as they live their lifestyles of the rich and famous.

To be clear, there are many who will disagree with my views on carbon offsets, and after this posting I am pretty sure I will hear from many of them here. What I want to hear from these people is: what does that offset do to reduce demand for fossil fuels? What does that offset do to reduce the environmental risks associated with extracting those fossil fuels from the earth? What does that offset do to reduce the risks in transporting those fossil fuels to refineries? What does that offset do to reduce the risks of moving those refined fuels to market? What does that offset do to reduce the secondary environmental concerns associated with air travel? Answer me these questions and prove to me that carbon offsets don’t represent a Basilica to bad policy.

Posted in Climate Change Politics | 8 Comments

On Science Communication and the Difficulty Relaying Scientific Information to the Public

This blog posting is a reminder of the difficulties of communicating good science both in the media and to our fellow scientists, and of how challenging it is to communicate to both audiences simultaneously. It is derived from a three-way Twitter discussion I had with Dr. Jay Cullen (a marine chemist, oceanographer and Associate Professor at the University of Victoria, School of Earth and Ocean Sciences) and Suzy Waldman (a PhD student studying risk communication at Carleton University). The discussion related to an article in the Victoria Times Colonist titled B.C.'s citizen scientists on alert for radiation from Japan. The topic of our conversation was the negative initial perceptions that both Ms. Waldman and I formed on our independent readings of the article. As many of you know, I wrote a blog post in December on bad representations of risk in reporting of the Fukushima plume, so I have been sensitized to the topic. Ms. Waldman, also being in the field, likely has very similar sensitivities.

Our discussion started off on a rancorous note (okay, maybe even an accusatory one on my part). Both Ms. Waldman and I had read the article, and I directed a couple of tweets at the scientists quoted in the story suggesting that they might be responsible for overblowing the risks associated with the identified cesium isotope (134Cs) plume. Dr. Cullen rightly took exception to my tweets because, as he pointed out, a careful reading of the article pretty clearly demonstrates that he had gone out of his way to explain why the radiation identified did not actually pose a significant risk to human health or the environment. Our discussion eventually turned to why Ms. Waldman and I had both experienced the same apprehension in spite of the article's contents.

Ms. Waldman posited that the formulation of the article was key. She pointed out that the first half of the article was full of really loaded words: "fallout, disaster, Chernobyl, peak, plume, radiol'l health risk". For those of you unfamiliar with risk communication, these types of words are commonly called "dread" words. Dread is a term used by Dr. Paul Slovic in his seminal article on risk communication, "Perception of Risk". For those of you without access to journal articles, Dr. Roger Pielke discusses the topic in this blog posting. Essentially, dread words draw a visceral reaction from the reader irrespective of their positioning or formulation in an article. They typically represent risks or fears over which the reader has no control, and often risks that cannot be perceived using ordinary human senses. In re-reading the article it is clear that Ms. Waldman has a good point: the sheer number of dread words in the first couple of paragraphs would be enough to get most readers feeling uneasy. I have previously suggested that any reader who wants a non-technical but compelling read on the subject of risk should get the book by Dan Gardner of the same name. In the book Mr. Gardner discusses how fear and risk are used by organizations and writers to advance their causes. In this case, it is likely that the language was recognized by the writer (or his/her editors) as useful in pulling eyes to the page and keeping readers reading.

My experience in science communication is different from Ms. Waldman’s and as such the one thing that really drew my eyes in the article was a line about the health risk of the cesium radioisotope:

“It can pose a radiological health risk because it tends to concentrate in organisms,” Cullen said. “[But] health physicists suggest the exposure of consumers to these fish don’t pose a danger to anybody’s health.”

My first response to this statement was strong annoyance. Surely what Dr. Cullen meant was that health physicists would view the risk as negligible (the de minimis principle again). To put it in perspective, in my previous posting I pointed out that 5 becquerels represents approximately the radioactivity of a little less than half a banana. Oddly enough, my annoyance at that sentence caused me to miss that Dr. Cullen went on to relate that the exposure "is expected to be three to five becquerels per cubic metre of water. Canadian guidelines for safe drinking water impose a limit of 10,000 becquerels per cubic metre, he said". After re-reading the piece I realized that I had been caught in the "good news/bad news" trap. For those unfamiliar with the concept, research shows that when good news is presented immediately after bad news (as in this case) the reader or listener can be unsettled by the bad news and become less receptive to the ensuing good news (ref).
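
The margin between the expected plume and the guideline is easy to put in perspective using the numbers quoted in the article:

```python
# How far the expected plume concentration sits below the Canadian
# drinking-water guideline, using the figures quoted in the article.

expected_bq_per_m3 = 5        # upper end of the 3-5 Bq/m3 expected plume
guideline_bq_per_m3 = 10_000  # Canadian drinking-water guideline

margin = guideline_bq_per_m3 / expected_bq_per_m3
print(f"The guideline is {margin:,.0f} times higher than the expected concentration.")
```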

So by my count it was Scientists – 2, readers – 0, but I also realized that Dr. Cullen may have unconsciously been partially to blame for my unease. I say unconsciously because I feel that in his interview he made the mistake of speaking "in Science" to a non-scientist. As a non-academic scientist, I am reminded to assume my audiences are at least as smart as I am and to speak to them accordingly; speaking down to an audience almost always ends badly. The corollary is that the public is not always familiar with the way scientists communicate. As scientists we are trained in a language (jargon) all our own. When my wife says something is "significant", she means "important"; when I use the word I mean that it met a pre-determined level of statistical significance (i.e. p<0.05). The risk professionals in our office will never say that something poses "no risk". Even if the risk is only one in ten billion it still represents a risk. Instead, a risk professional will say it poses an "acceptable risk", where the level of acceptability is based on the conclusions of a reputable scientific or health organization. Similarly, when discussing other people's work I will often use correlation words like "suggest" and "indicate", as they give me wiggle room when I don't have exact figures (like exact levels of statistical significance) at hand. For me these words tell my fellow technical people: "I don't have the numbers at hand but can assure you the information I am relaying to you is correct". I tend to use causation words only when speaking in my area of expertise.

I am guessing that this is what happened to Dr. Cullen in this case. Using typical scientific diligence he used the softer term “suggest” rather than a stronger term or phrase. Had he said “Health Physicists will tell you (or assure you) that exposures….don’t pose a danger” it would have left the reporter with no doubt (or ability to manufacture doubt). Unfortunately, when a reporter hears the word “suggest” he/she is more likely to understand it in a literal sense and not the common scientific usage. That is: health physicists “propose” or “put forward for consideration that exposures….don’t pose a danger“. This literal definition completely misconstrues the level of doubt associated with the risk. Moreover, if the writer’s editor was looking for a quotation that could be interpreted broadly (perhaps to move the article closer to the front of the paper) this would be the line to fixate on.

Ultimately, the confusion comes down to a common problem we, as technically trained, practicing scientists, face every day. We are trained to avoid errors of over-confidence (there's that Type I error thing coming back to haunt us). When working and communicating with our peers we tend to use language that is less than definitive, understanding that our meaning will be understood by all. Unfortunately, we operate in a world full of people living blissfully in a Dunning-Kruger state who are willing to express a level of certainty that an expert cannot ethically match. As specialists we have to remember to take off our science blinders and think about how our words will be read, interpreted and possibly misinterpreted. Certainly we can use correlation words, but we should stick to the ones that have comparable usage in the vernacular. I cannot reasonably say the radiation poses "no risk", but I can certainly say it poses "negligible risk".

Coming from the other side, as scientists we also have to give our colleagues some slack. We have to recognize when our colleagues are trying to relate complex technical topics in a manner that is understandable to the public. Give your colleagues the benefit of the doubt and start with the assumption that they are acting in good faith, even when (especially when) you disagree with them. Piling on and picking nits when a colleague is trying to make complex technical information understandable for the general public does nothing to expand our knowledge and only hurts the cause of good science in public.

Author’s note: please don’t interpret my use of the names Ms. Waldman or Dr. Cullen as an attempt to give one greater credibility over the other. I was brought up in the old school where one did not use another’s first name in discussions unless one knew the individual well enough to justify the use. I apologize in advance if my traditional style of writing may have offended.

Posted in Risk, Risk Communication | 3 Comments

Black Carbon, a Climate Change Topic We Should all be able to Agree on

One feature of the climate change debate I find particularly troubling is the extent to which CO2 has come to dominate the narrative. Certainly CO2 is a critical component of the climate change discussion, but there are other important areas of potential advancement that have been essentially ignored in policy debates dominated by talk of CO2. One aim of this blog is to form an intersection between climate science and climate policy. My general goal has been not to recommend global policies but to evaluate the information out there and figure out what can actually be accomplished at a regional level. I am going to make an exception in this case, as there are some topics upon which even the most dedicated denialists and the most excitable catastrophists should be able to agree; one such topic is black carbon. Black carbon has been all but ignored to date in the climate change debate, but the release of a major multi-author paper in June 2013, combined with some very compelling observational data on its effects on Arctic ice and glaciers, has brought the topic out from under the covers.

Since this topic has received so little play in the past, let's start with a quick definition to help set the stage; this one is from the US EPA:

Black carbon (BC) is the most strongly light-absorbing component of particulate matter (PM), and is formed by the incomplete combustion of fossil fuels, biofuels, and biomass. BC is emitted directly into the atmosphere in the form of fine particles (PM2.5). BC is the most effective form of PM, by mass, at absorbing solar energy: per unit of mass in the atmosphere, BC can absorb a million times more energy than carbon dioxide (CO2). BC is a major component of “soot”, a complex light-absorbing mixture that also contains some organic carbon (OC).

In the IPCC reports, black carbon has historically been more of a footnote. It was recognized as an issue to be addressed but was mostly given short shrift, due to the nature of the IPCC process and its reliance on older peer-reviewed articles. The 2007 IPCC Working Group I estimate of black carbon forcing was 0.2 ± 0.15 W m⁻². This should change with the publication of Bond et al. 2013 (and I really mean et al.: it has more high-profile authors than you can shake a stick at) in the Journal of Geophysical Research: Atmospheres (abstract, full report (big file)). As described in the paper:

We estimate that black carbon, with a total climate forcing of +1.1 W m⁻², is the second most important human emission in terms of its climate-forcing in the present-day atmosphere; only carbon dioxide is estimated to have a greater forcing.

This new estimate really changes the game with respect to black carbon's importance. A particularly troubling aspect of black carbon is its association with decreases in Arctic ice cover and with accelerating glacier retreat. Black carbon is produced all over the world, but it can deposit on ice and snow, and microscopic quantities can have pretty significant effects. As described in a paper in The Cryosphere, an addition of 8 ng g⁻¹ of black carbon reduces the albedo of first-year sea ice to 98.7% of its original value, compared with 99.7% for multiyear sea ice, at a wavelength of 500 nm. Thus, instead of the ice reflecting solar radiation, the black carbon absorbs it. In doing so, the black carbon melts the ice, and once melted the ice loses its albedo and the underlying surface (be it soil or water) absorbs still more radiation, resulting in more ice loss. Whether you accept the anthropogenic nature of climate change or not, accelerating glacier retreat and enhanced loss of Arctic and Greenland ice should be a concern: they will contribute to rising ocean levels, with ensuing risk to coastal communities, and will reduce the availability of fresh water in regions dependent on glacier runoff for their water supplies.
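
To get a feel for what a seemingly tiny albedo change means in energy terms, here is a back-of-the-envelope sketch; the baseline albedo and incident sunlight values are illustrative assumptions of mine, not figures from the Cryosphere paper.

```python
# Back-of-the-envelope: extra solar energy absorbed when black carbon
# reduces sea-ice albedo to 98.7% of its original value (per the quoted
# Cryosphere result). The baseline albedo and incident sunlight below are
# assumed, illustrative values only.

baseline_albedo = 0.7          # assumed albedo of clean first-year ice
incident_solar_w_m2 = 200      # assumed average summertime insolation (W/m2)

dirty_albedo = baseline_albedo * 0.987
extra_absorbed = incident_solar_w_m2 * (baseline_albedo - dirty_albedo)
print(f"Extra absorbed energy: ~{extra_absorbed:.1f} W/m2")
```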

As described above, black carbon seems to be a pretty bad actor in the field of climate change and is damaging to the cryosphere, but it also has another issue that should concern even those people uninterested in these two topics. A large body of scientific evidence links exposures to fine particles (i.e., ambient PM2.5 mass concentrations) to an array of adverse health effects, including premature mortality, increased hospital admissions and emergency department visits for cardiovascular and respiratory diseases, and development of chronic respiratory disease ref. So even if you have no interest in climate change you should still want to address black carbon for its human health concerns.

The next question to ask is: where is this black carbon coming from? Well according to the EPA’s report on black carbon, most U.S. emissions come from mobile sources (52%), especially diesel engines and vehicles. In fact, 93% of all mobile source emissions came from diesels in 2005. The other major source domestically is open biomass burning (including wildfires), although residential heating and industry also contribute.

[Figure: pie charts of black carbon emissions by source category, from the EPA report]

So the problem in North America is mostly associated with wildfires (over which we have little control) and diesel emissions (over which we have a lot of control). The US has plans for emission controls on diesel engines, but by climate change standards the funds allocated to those efforts are pretty minimal. Accelerating the cleanup of diesel emissions would have an effect on climate change while also improving human health outcomes; this seems like something we can all get behind. One way would be to direct serious research dollars into finding alternatives to diesel fuel for moving heavy trucks and trains. In British Columbia we have a company called Westport Innovations Inc. that has developed a mechanism to convert diesel engines to run on natural gas. Similar efforts should be jump-started to speed the movement away from diesel as the primary fuel for heavy engines (conflict note: I have absolutely no Westport stock in my portfolio and no financial interests in the company).

While North American and Northern European emissions of black carbon are due primarily to wildfires and the transport of goods, sources in developing countries are substantially different than in the United States: mobile sources (19%) and open biomass burning (35%) represent a smaller portion of the global inventory, while emissions from residential heating and cooking (25%) and industry (19%) are larger. The area that really jumps out is the "domestic/residential" category or, as the EPA reports it, cooking fires. Those of us lucky enough to live in North America are used to being able to use natural gas, electricity or propane to cook our meals, but globally over 1.3 billion people are without access to electricity and 2.6 billion people are without clean cooking facilities. More than 95% of these people are in either sub-Saharan Africa or developing Asia, and 84% are in rural areas (ref). These people are left to cook over open fires or in wood stoves using brush, animal dung and any wood they can get their hands on. From a human health perspective, the World Health Organization estimates that indoor smoke from solid fuels is among the top ten risk factors globally, contributing to approximately 2 million deaths annually, with women and children particularly at risk (ref). But human health is not the only concern: all that woody material has to come from somewhere, and in sub-Saharan Africa and Southeast Asia that means deforestation. I've written elsewhere about how important intact forests are for ecological protection, but they also have serious implications for mitigating and slowing the rate of climate change. So not only are these families putting their lives at risk just to cook their meals, they are also contributing to deforestation and increased releases of CO2. In these countries, enhancing access to electric power grids and alternative energy sources should be an aim that we can all agree upon.

Looking at black carbon we have a major potential forcing agent for climate change; a serious risk to the cryosphere; and a human health risk of the first order. By targeting black carbon I feel we can get out of our mutual trenches and start working together in a way that will improve the condition of the planet. In doing so we can identify those people actually interested in having a perceptible effect on improving the planet and smoke out the rent seekers and hangers on who seek only to extend the debate for their own purposes.

Posted in Climate Change | 5 Comments

What is so Special about 2 degrees C in the Climate Change Debate?

In my last post I promised to take a bit of time to discuss the 2°C target. You would expect that such an important target was picked through a detailed scientific process with input from the brightest minds on the planet. In this you would be wrong. The acceptance of the 2°C target is actually one of the most deliberately muddied topics in the climate change debate. The reason is that one of the not-very-secret secrets of the climate change debate is that the 2°C target was originally chosen on the thinnest of technical rationales. It was then backstopped with a thin veneer of research and buffed up to look like it had a stamp of approval marked "SCIENCE" before being trotted out for the public. Most distressingly, unlike a true science-based target, which would be refined as our knowledge base improved and we became more capable of understanding the climate system, this target has held absolutely steady since it was suggested in the late 1990s. Admittedly, since the target was not chosen based on the science, advances in the science really should not affect its value.

Now that I have written an introduction that should get every climate activist on the planet in a tizzy, let me tell you a second secret. As targets go, the 2°C target ends up being a pretty reasonable first guess. While a lot of the science supporting the target is pretty flaky (and created using some pretty interesting premises), a lot of the research indicates that the most likely value is indeed somewhere around 2°C (estimates seem to range from 1°C to 4°C). I'll go into that more later, but I wanted to reassure readers that the policy folks aren't completely off their rockers.

For those of you really interested in reading deeper into the debate around the 2°C target, there was a fascinating run of articles and blog postings in October 2014 that will provide far more detail than I have room for here. The discussion started with a comment in Nature by Drs. David Victor and Charles Kennel titled Climate Policy: Ditch the 2°C Warming Goal. This was countered almost immediately at RealClimate.org (a blog for and by some of the big names in climate science) with an article by Dr. Stefan Rahmstorf titled Limiting global warming to 2 °C – why Victor and Kennel are wrong. The New York Times Dot Earth blog (another biggie in the climate blogosphere) then provided Victor and Kennel an opportunity to respond, which they did at length. For those of you used to policy discussions in the political sphere, this sounds like a pretty normal state of affairs: informed professionals present their thoughts in a major journal, interested observers and experts provide counter-arguments, and then the authors reply in kind. Done consistently, this type of discussion would give interested parties an opportunity to become informed without having to drown themselves in the primary literature. Sadly, in the field of climate science this quality of back-and-forth is seldom seen. Of note, while I present this exchange as a model of how debate might be carried out in the climate field, my more sensitive readers will notice that the RealClimate posting starts with a pretty nasty run of ad hominems and mean-spirited prose before the author actually gets down to discussing the topic at hand. Believe it or not, behaviour that would get you shunned in virtually any non-academic, professional environment is actually considered a step up from the norm in this field. Having read all the documents, I feel that Drs. Victor and Kennel made the far better case.

To go back to my promise, let me quickly summarize the history of the 2°C target. At the 1992 Rio Summit it was agreed that there was a need to "stabilize greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system" (ref). The problem is that at the time no one really knew what that would entail. It was understood that "global warming" was the next serious environmental challenge, but no one knew exactly what that meant or even whether we might already be too late to make a difference. Coming out of the Summit, a number of meetings were held and technical documents produced to establish what it would actually take to "prevent dangerous anthropogenic interference in the climate". The problem was that at that point the climate field was really in its infancy. The first integrated assessment global climate models were still being developed, and the computer power needed to do the necessarily complex calculations was not readily available. Just consider that an iPad 2 would have made the list of the world's speediest supercomputers until 1994; the models of the day were still pretty basic. In the absence of the tools we take for granted these days, a number of organizations and governmental bodies went to work formulating approaches that could be used in subsequent assessments. As described at the RealClimate blog, the critical one ended up being from the German government's Advisory Council on Global Change. This document took a very qualitative look at the problem. It identified a "tolerable temperature window" based on historic reconstructions of temperature regimes and came up with the following argument:

This geological epoch has shaped our present-day environment, with the lowest temperatures occurring in the last ice age (mean minimum around 10.4°C) and the highest temperatures during the last interglacial period (mean maximum around 16.1°C). If this temperature range is exceeded in either direction, dramatic changes in the composition and function of today’s ecosystems can be expected. If we extend the tolerance range by a further 0.5°C at either end, then the tolerable temperature window extends from 9.9 °C to 16.6 °C. Today’s global mean temperature is around 15.3°C, which means that the temperature span to the tolerable maximum is currently only 1.3 °C.

So, starting with the approximately 0.7°C of warming that had already been observed by 1995, they arrived at a number of 2°C on which to hang their hat. As described at the RealClimate blog, the rest was history. The German position was adopted by the European Union and it became the de facto number we see and love. Kinda scary, eh! The 2°C target is simply a sensible-sounding qualitative number based on approximate temperature ranges for interglacial periods, buffered with a fudge factor.
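
For readers who like to see the arithmetic laid out, here is a minimal sketch of the derivation as I read it. The temperature values are the ones quoted above; the framing of the calculation is my own back-of-the-envelope reading, not the Council’s worked example:

```python
# Back-of-the-envelope reconstruction of the "tolerable window" arithmetic
# quoted above. The temperatures come from the German Advisory Council text;
# the framing of the calculation is my own reading of it.

interglacial_max = 16.1   # degC, mean maximum of the last interglacial
buffer = 0.5              # degC, the Council's added tolerance at each end
tolerable_max = interglacial_max + buffer          # 16.6 degC

global_mean_today = 15.3  # degC, the mid-1990s global mean used in the report
headroom = tolerable_max - global_mean_today       # 1.3 degC still "allowed"

observed_warming = 0.7    # degC, warming already observed by ~1995
target_above_preindustrial = observed_warming + headroom   # 2.0 degC

print(f"Tolerable maximum: {tolerable_max:.1f} degC")
print(f"Remaining headroom: {headroom:.1f} degC")
print(f"Target above pre-industrial: {target_above_preindustrial:.1f} degC")
```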

The next obvious question is: what is inherently wrong with the 2°C target? Well, assuming you are okay with its qualitative derivation, the obvious next issues are: 1) is the number the right one? and 2) does it make sense to use a lagging indicator as your target? Let’s deal with the second issue first. As everyone involved in the climate change debate understands, temperature is a lagging indicator. Moreover, as demonstrated by the “pause”, temperature can be a seriously delayed lagging indicator. Since we don’t have a good handle on climate sensitivity, 2°C could happen at 450 ppmv, it might occur at 800 ppmv, or, in an even less likely scenario, it could take until 1200 ppmv. As we have seen with the pause, temperature has essentially held still for the better part of 20 years while carbon emissions have continued pretty much unabated. This point seems to be completely missed by Dr. Rahmstorf in his article. He harps on the political value of the target without ever acknowledging that it is less a target than a shimmering mirage that may or may not exist out there somewhere, if you squint just right. Under the current approach we will only be sure we have reached the edge of the chasm after we have started falling…essentially the Wile E. Coyote approach to policy. If you truly believe that 2°C represents a danger point then it would be much better to establish what emissions will get you to that point and ensure we do not exceed those numbers. This is the approach suggested by Meinshausen in his letter to Nature, where he proposes carbon budgets as a much better policy tool. While I cannot attest to the validity of his actual numbers, the whole idea of negotiating carbon budgets makes a lot more sense than negotiating to keep ourselves below some nebulous post hoc target.
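
To illustrate why a budget is a more actionable target than a temperature, here is a deliberately simplified sketch. It leans on the common assumption that warming scales roughly linearly with cumulative carbon emissions (the so-called TCRE); every number below is a rough placeholder of my own choosing, not Meinshausen’s published figures:

```python
# Illustrative only: a toy carbon-budget calculation in the spirit of a
# budget-based target. The TCRE value and emissions figures are rough
# placeholders, not Meinshausen's numbers.

tcre = 1.6              # assumed degC of warming per 1000 GtC of cumulative emissions
warming_target = 2.0    # degC above pre-industrial
emitted_so_far = 545    # GtC, rough cumulative emissions to date (placeholder)
annual_emissions = 10   # GtC per year, rough current rate (placeholder)

total_budget = warming_target / tcre * 1000       # GtC allowed in total (~1250)
remaining_budget = total_budget - emitted_so_far  # GtC still available (~705)
years_left = remaining_budget / annual_emissions  # ~70 years at current rates

print(f"Total budget: {total_budget:.0f} GtC")
print(f"Remaining budget: {remaining_budget:.0f} GtC")
print(f"Roughly {years_left:.0f} years at current emission rates")
```

The point of the exercise is not the particular numbers but the structure: a budget gives negotiators a quantity they can actually allocate and monitor, rather than a temperature that only confirms success or failure after the fact.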

Another consideration in the debate is whether 2°C even represents an appropriate target. Since this post is already getting quite long, instead of going into extreme detail on the topic I would simply direct you to a very interesting paper by Dr. Richard Tol of the University of Sussex. Since the article is paywalled, I will link to a figure from the paper which I found online.

[Figure from Tol’s manuscript: estimated welfare impact of warming, expressed as an effect on % GDP.]

What the paper indicates (and what is illustrated in the figure) is that minor amounts of heating (less than 2°C) may actually result in improvements in quality of life (as indicated by the effect on % GDP). The conclusions of Dr. Tol’s paper were not surprising to me; as any plant biologist will tell you, plants grow better and have more drought tolerance under conditions of higher atmospheric CO2 concentrations. While the exact point at which the baseline is crossed remains under debate, the positive effects of small increases in atmospheric CO2 concentrations are undeniable.
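
For readers who want a feel for what such a curve looks like, here is a purely illustrative sketch of a hump-shaped impact function. The coefficients are invented to mimic the general shape described above (a small benefit at low warming, turning negative later) and are emphatically not Dr. Tol’s fitted values:

```python
# Purely illustrative: a hump-shaped welfare-impact curve of the general form
# described in the text. The coefficients are invented (chosen to cross zero
# at 2 degC) and are NOT Dr. Tol's fitted values.

def gdp_impact(delta_t, a=1.0, b=-0.5):
    """Welfare impact (% of GDP) modelled as a quadratic in warming (degC)."""
    return a * delta_t + b * delta_t ** 2

for dt in (0.5, 1.0, 2.0, 3.0, 4.0):
    print(f"+{dt:.1f} degC -> {gdp_impact(dt):+.2f} % of GDP")
```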

An interesting consideration of the Tol graph, one I have not seen discussed, is how the improvement in quality of life derived from the first 1°C of heating might actually be hindering attempts to slow the growth in carbon emissions. Every week we see another attempt by alarmists to link higher CO2 concentrations to wild and woolly weather and unexpected global events. As I’ve written elsewhere, all this crying wolf does little to help their cause and runs contrary to the peer-reviewed research. Perhaps the policy people out there might want to acknowledge that increased CO2 concentrations are actually expected to improve things in the short run. Framing the current conditions as a “calm before the storm” would better allow the public to understand the risk. Instead they keep talking about Hurricane Katrina, which only reminds us of how long it has been since Hurricane Katrina actually happened.

Posted in Climate Change, Climate Change Politics | 6 Comments

Why I think Climate Sensitivity is Essential for Developing Effective Climate Change Policy

For those of you new to this blog, my primary readership is not typically made up of experts in climate change science but rather of people interested in the policy implications of climate change science. This includes people with interests in renewable energy technologies and governmental decision-making. What this means is that occasionally I need to step back and provide some more detail about the terminology used in the discussion. I normally do this in response to questions, and in this case the one question I have received again and again is: why do you keep harping on climate sensitivity?

Personally, I think refining climate sensitivity estimates might be one of the most important things climate scientists can do to help establish a consensus on the policy and political implications of climate change. Admittedly, my opinion is not universally accepted. One of my regular commenters, Dr. Michael Tobis, takes an entirely opposite view on the topic at his blog Planet 3.0. Other blogs present views similar to Dr. Tobis’s; most of them are based on the argument that sensitivity is reported as a range and that as long as a risk of a higher-end sensitivity exists we have to take action now. My response to this will be detailed below, but first, for my non-climate-scientist readers out there, let’s go back to the basics (and in this discussion I mean basics: I’m not looking at transient climate sensitivity, etc.).

In plain English, climate sensitivity represents the warming anticipated in response to changes in CO2 concentrations in the atmosphere. Climate sensitivity is formally defined as a measure of the equilibrium temperature change anticipated in response to a change in the radiative forcing associated with Tyndall gases. Radiative forcing is the measure of the capacity of a gas or other forcing agent to affect the Earth’s energy balance, thereby contributing to climate change; it expresses the change in energy in the atmosphere due to Tyndall gas emissions (reference).

What many of my readers may not know/understand is that, due to the underlying physics, increases in CO2 concentrations do not have a linear relationship with the anticipated heating. Put another way, 1 unit of CO2 does not result in 1 unit of anticipated heating. Rather, each molecule of CO2 added to the atmosphere has a tiny bit less heating effect than the molecule added before it. Specifically, rather than being linear the relationship is logarithmic. As detailed at the Skeptical Science web site (yes, many of my readers dislike that web site, but while it has a deeply political bent it does serve as a useful information source), “this logarithmic relationship means that each doubling of atmospheric CO2 will cause the same amount of warming at the Earth’s surface. Thus, it takes as long to increase atmospheric CO2 from 560 to 1120 parts per million by volume (ppmv) as it did to rise from 280 to 560 ppmv”.
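
In equation form, the standard simplified expression (where S is the sensitivity per doubling, C is the CO2 concentration and C0 is the pre-industrial reference of about 280 ppmv) is:

\[ \Delta T \approx S \cdot \log_2\!\left(\frac{C}{C_0}\right) = S \cdot \frac{\ln(C/C_0)}{\ln 2} \]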

So why do I care about climate sensitivity? Well, it becomes a bit of a math game. If climate sensitivity is determined to be 1.5°C, then each doubling in atmospheric CO2 concentrations would result in an increase of 1.5°C. Using the Skeptical Science numbers, an increase in CO2 from 280 ppmv to 1120 ppmv would represent two doublings, so with a sensitivity estimate of 1.5°C you would expect an increase of 3°C (two times 1.5°C). If your sensitivity estimate is 4.5°C, then the same 3°C would occur below 500 ppmv. So you ask why sensitivity is important? Well, the math is pretty easy here. If we only have until 500 ppmv to avoid 3°C (which in this thought experiment we will define as a less-than-scientific “point of disaster”), then we need to act immediately. If we have until 1120 ppmv then we can wean our society off fossil fuels more gradually and decrease the pain.
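
To make the math game concrete, here is a minimal sketch that reproduces the numbers above under the simple logarithmic relationship. It is a deliberate simplification that ignores transient effects, other forcings and feedback dynamics:

```python
import math

# Minimal sketch of the "math game": equilibrium warming under the simple
# logarithmic relationship. This deliberately ignores transient response,
# other forcings and feedbacks - it is an illustration, not a climate model.

def warming(co2_ppmv, sensitivity, baseline_ppmv=280.0):
    """Equilibrium warming (degC) for a given CO2 level and sensitivity per doubling."""
    return sensitivity * math.log2(co2_ppmv / baseline_ppmv)

def co2_for_warming(delta_t, sensitivity, baseline_ppmv=280.0):
    """CO2 level (ppmv) at which a given equilibrium warming is reached."""
    return baseline_ppmv * 2 ** (delta_t / sensitivity)

print(warming(1120, 1.5))          # two doublings at S = 1.5 -> 3.0 degC
print(co2_for_warming(3.0, 4.5))   # ~444 ppmv, i.e. below 500 ppmv
print(co2_for_warming(3.0, 1.0))   # three doublings -> 2240 ppmv
```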

Now that we understand the concept, it becomes clear how fundamental differences in climate sensitivity estimates will affect political and policy positions. In one corner we have extreme Lukewarmers who think that climate sensitivity is around 1°C (note: I am not from that group). For them, getting to 3°C (which for our thought experiment is really bad) would entail three doublings, or atmospheric CO2 concentrations of 2240 ppmv. Even at our current rate of atmospheric CO2 accumulation that is not happening anytime soon. So trying to convince them that we should divert trillions of dollars to completely re-tool our planetary energy/power systems would be a pretty hard sell. At the other extreme are people who believe climate sensitivity might be closer to 6°C. For them we are not merely approaching the cliff but have already crossed over it and are starting the dive to collapse. For them immediate action is essential to avoid a hard landing at the bottom of the chasm. In both cases we have good, honest people who have a genuine disagreement about the science, but their opinions on the science lead to dramatically differing policy positions. One side says that as long as we get this done in the next 50 years or so we are fine; the other thinks that every day’s delay will result in more pain and hardship. The first group will deeply resent being asked to make serious sacrifices to address climate change and the second group will think that action is so necessary that maybe governments should be forcing people to make changes involuntarily.

Adding complexity to the discussion is the point made above by Dr. Tobis and his colleagues: there are lots of things we still do not understand about our climate. Right now we have a pretty good thing going. We live on a planet in an interglacial period with a relatively mild climate, where storms and disasters come at infrequent intervals. Any change could tip that balance towards a world with fewer good days and more bad days (or maybe even the other direction). Since our civilization has thrived under current conditions, isn’t it a good idea to try and keep it in this current place? Also, for the gamblers out there, we are reminded that sensitivity represents a range, and while there may be a 90% chance of a small sensitivity (this is a made-up number as no one actually has a clue what the real numbers are), that means there is still a 10% chance of a bigger number. Given any non-trivial uncertainty, isn’t it better not to roll the dice?
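
For the gamblers’ argument, a toy expected-value calculation shows how even a small probability of a high sensitivity pulls the risk picture upwards. The probabilities below are made up, exactly as acknowledged in the paragraph above:

```python
import math

# Toy risk-weighting of sensitivity outcomes. The probabilities are made up,
# exactly as acknowledged in the text; the point is only that a small chance
# of a high sensitivity pulls the expected outcome upwards.

scenarios = [(1.5, 0.9),   # (sensitivity in degC per doubling, assumed probability)
             (4.5, 0.1)]

co2_ppmv = 560.0           # one doubling over the 280 ppmv baseline

expected_warming = sum(p * s * math.log2(co2_ppmv / 280.0) for s, p in scenarios)
print(f"Expected warming at {co2_ppmv:.0f} ppmv: {expected_warming:.2f} degC")
# 0.9 * 1.5 + 0.1 * 4.5 = 1.8 degC, noticeably higher than the "likely" 1.5 degC
```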

The most important thing to recognize in all this is that any uncertainty in the science is going to be magnified by people at both ends of the spectrum who have political and/or financial stakes in either maintaining or eliminating the status quo. The best way to get these people off message is to reduce the uncertainty. From a policy perspective, uncertainty also makes for bad policy, as the more contingencies you have to consider the more complex (and thus the less practical) the policy will have to be. Finally, as we work to reduce the uncertainty it would be nice if scientists stopped feeding denialists and catastrophists all that red meat. As a knitter myself, I can say this: stick to your knitting and, to mix my metaphors horrendously, stop running around stamping on each other’s toes.

Author’s note: at this point in the posting at least a couple of readers are going to be apoplectic over my use of 3°C in my thought experiment. Please accept that the choice was made simply to keep the math easy for the discussion. I know that the IPCC has a 2°C target but that number is very much a topic for a future posting.

Posted in Climate Change, Lukewarmers | 26 Comments

Does the climate change debate need a reset? – on name-calling in the climate change debate

The purpose of this post is to address an area that I think is incredibly badly served in the climate change debate and that damages all resultant policy debates: name-calling. It is a common ploy in debating to label your opponent with a name intended to lower their appeal and thus degrade the reception of their technical arguments. The classic example of this is the abortion debate. Supporters of “a right-to-choose” call themselves “Pro-Choice” (after all, choice is a good thing, isn’t it?), which automatically labels their opponents as being “Anti-Choice”. Supporters of “fetal rights” call themselves “Pro-Life”, with the obvious label for their opponents. You will note that in this paragraph I have almost used up my normal quota of quotation marks for a post. I want to be careful here and use the terms and expressions used by the members of the debate themselves, without indicating any opinions on that front, because for the purpose of this discussion my opinions on that topic are not a matter of concern. The point is that by labelling yourself in a positive manner your opponents, by definition, get labelled in a negative manner.

Before I go too much further, I’d like to digress slightly and give you a little more of my personal background. I was a young boy when Ernst Zündel published the pamphlet “Did Six Million Really Die” (in 1974). I grew up in a time of the quiet growth of the Holocaust denial movement in my home province of British Columbia. I was a young activist while the Keegstra case worked its way up to the Supreme Court of Canada and did my small part to support groups who fought anti-Semitism and the rise of Holocaust denial. I watched as a tremendous effort was made to link a relatively benign word “denier” with the concept of Holocaust denial. This linking worked and for many of my generation this term has a power like few others. Happily, my kids are growing up in an era where (at least where I live) Holocaust denial is restricted to those with recognizably bad intentions.

Given this background, you can imagine my anger when the term “denier” was misappropriated by a core of activists who, recognizing its power (a power soaked in the blood, sweat and tears of people I knew and respected), decided to use it to label their opponents. I have even less time for the apologists who say, “well, look it up in the dictionary” and thus excuse themselves of the implied slander associated with using the term. When I was a young man the “joke” used to be that calling a homosexual a “faggot” was not an insult because, if you looked the word up in the dictionaries of the time, the definition simply read “a bundle of sticks”. Everyone knew that the word had an incredibly evil use intended to degrade the person being addressed, but for some the fact that the dictionaries had not caught up with the common usage meant it was okay to use this vile term. So given my history, I will not use the term (except in quotations) and I give short shrift to those who attempt to use it in polite discussion.

So now that I have addressed the most denigrating term in the vernacular of the climate change debate, I suppose the question arises: what label should be used to describe someone who disagrees with the fundamental science of climate change? To my mind, labels almost always detract from a discussion, but I admit to using them a lot in my posts to date. I am open to suggestions, but if I had to choose I would suggest that the people who legitimately disagree with the physics of climate change are “sky dragons”. This term is, of course, not mine but refers to the book “Slaying the Sky Dragons”, which presents a “full volume refutation of the greenhouse gas theory of man-made global warming”. For detailed refutations of the book, look elsewhere, but in my mind Sky Dragons are people who legitimately believe that the science is on their side. I tend not to attempt to debate them as I lack the time and energy to do so. Perhaps they represent Galileo striking out against the dogma of the church, but in this debate I’m on the side of the church. At that end of the spectrum there also exists a different group: those who, for reasons other than the strict science, deny the science of climate change. Looking at the numbers, this group is made up of a surprisingly small number of individuals and could easily be described as “denialists”. My goodness, apparently there is a word in the dictionary, with deny at its root, that does not carry with it the toxic scent of Holocaust denial? I wonder why it was never chosen by the activists for use in the debate. Denialists, as I note, are few in number but are used as the boogeyman (or straw man) in any debate on the topic.

On the other end of the spectrum are people I describe as catastrophists. This group, like the denialists, tends to be made up of individuals who have less interest in the science and more interest in the political and financial ramifications of the issue. I know, I know, it is really hard to get hard-earned dollars out of donors’ pockets, but to suggest that “Santa Claus is going to drown” or that our civilization will collapse if we cannot return atmospheric CO2 concentrations to 350 ppm both seem pretty far-fetched. Much like the denialists, I tend to tune out the catastrophists in my day-to-day endeavours. The problem lies in the fact that most really are “true believers” and, in my neck of the woods, are willing to go to almost any end to avoid the catastrophe. Most are individuals I describe as “science blind”: the people who chain themselves to the Chevron refinery in Vancouver, even though we currently have no option besides fossil fuels to run our fire trucks or ambulances in BC. These are the people who are so unwilling to compromise that they are willing to ignore the environmental and human health risks of sending oil by rail because upgrading a 60-year-old pipeline might increase oil exports. My earliest posts on this blog address these people so I will say no more of them here.

So the title of my post includes the idea that the climate change debate needs a reset and you are probably wondering when I will get to that point. Well, having dismissed the fringes of the debate, we come down to the meat of the matter, and what I find most interesting is that on an examination of the issues the actual differences are really quite small, but the antagonism and bad blood sure are not. There was an old saw in our department that “politics in universities are so vicious because the stakes are so small”. This can be re-purposed for the climate change debate as “the politics of climate change are so vicious because the actual differences in the science are so small”. If you actually look at the debate you have lukewarmers, like myself, who suggest that climate sensitivity is likely to be at the lower end of the IPCC range; you have “warmists” who think it is going to be in the middle of the range; and you have “alarmists” who think the higher end of the range is most likely. Notice the strange coincidence? Virtually all the combatants are actually still within the current range of the scientific “consensus”. Certainly some lukewarmers might think that negative feedbacks will dominate and foresee little in the way of global effects from Tyndall gas emissions but, at a minimum, all are working within the general framework of climate science. So when a certain high-strung professor calls his opponents “deniers” or “delayers”, what he is really saying is: “I disagree with your interpretation of the currently available scientific literature, but am unwilling or unable to explain why my interpretation is better than yours”. To put it another way, we are not talking about black and white here; we are talking about a palette of greys that is strongly supported by a robust scientific literature. Frankly, to my mind, most of the real debates end up being about adaptation versus emissions reductions and how quickly we should move away from fossil fuels as primary energy sources.

The critical point that appears to be completely missed by the alarmists is that their language actually hurts their cause. As noted elsewhere, there are numerous debates out there; the two primary forms are debates about the science (climate sensitivity, aerosols, etc.) and debates about the politics and policies. Moreover, as I note above, there are real denialists out there whose aims appear to be to delay and befuddle in order to maintain the status quo and/or to provide cover for industries that would be hurt by political action. What really detracts from our ability to develop good policy is when alarmists call lukewarmers “deniers”! Why is this the case? Because by conflating lukewarmers and denialists, the alarmist is actually giving unearned respectability to the denialists. Consider Sen. Inhofe: by the standards of the climate change debate he would appear to represent a denialist (I make no claim to knowing the man’s heart and mind and if I am mistaken I apologize in advance). Now don’t you think that a politician like Sen. Inhofe prefers to be lumped in with a group that includes highly credible and well-respected scientists like Dr. Judith Curry and Dr. Richard Muller rather than left in his true position outside the ranks of mainstream science? From a political perspective, you shoot yourself in the foot when, in trying to denigrate an opponent, you actually strengthen an enemy. So whenever a warmist calls a respected colleague a “denier” he/she is actually adding a mantle of respectability to the actual denialists out there. The alarmists wrongly make the denialists look as if they are many in number and possess a scientific expertise which they do not.

On the other end of the spectrum, by not calling out the catastrophists for who they really are, warmists also lose. We have Chicken Littles screaming that every drop of rain represents a clear sign of “climate change”. The IPCC SREX clearly states that the science is not in: while the potential exists that climate change will increase the likelihood of major weather events, there is insufficient evidence that it has done so to date. Warmists and alarmists have to call out the catastrophists because, as in the story of “The Boy Who Cried Wolf”, the wolf eventually showed up, but by the time it did the townsfolk had stopped listening. I sometimes wonder if the reason the warmists appear to have thrown in their lot with the catastrophists is that they are tired of trying for incremental changes and want to win it all on one roll of the dice. The problem is that, continuing the analogy, you can also lose it all on one roll of the dice. In science and politics, incrementalism is how most advances are made. By pushing the alarmism ahead of the science, the warmists risk alienating the public, and that makes it easier for the denialists to do their thing.

Posted in Climate Change, Climate Change Politics | 22 Comments

On “Trust” and the Role of Renewable Energy Sources in “Climate Science”

Over the course of the last few weeks, my readings in the field of climate change have strayed from the technical end of the spectrum to the “discussion” end more than usual. Certainly, I have tried to keep abreast of the blogosphere, but the surfeit of sites, and my busy family and work commitments, have limited me to reading a select number of higher-impact websites. In the last few weeks I have expanded my reading based on suggestions from commenters on my blog and tweets of interest. Ironically, some of the most interesting links used to come from Dr. Mann, before he blocked me for politely asking why he conflated “lukewarmers” with “deniers”. At the website of a frequent commenter, I read the following text, which pretty much typifies the views of the “trust us” school in the climate change debate. The author writes:

My point is that it doesn’t really matter; I’m not going to trust a scientific result more if scientists learn to behave in a more trusting way. There’s a scientific method which requires that before we accept a result we check it and confirm it again and again. We probe and investigate in as much detail as possible. It’s only accepted when it becomes clear that the evidence is largely overwhelming. It’s the evidence itself that matters, not the behaviour of the scientists involved.

On its surface this statement sounds pretty reasonable, until you acknowledge the interdisciplinary nature of the research field known as “climate science”. As a chemist I am in a position to discuss a number of areas within the field but I am unable to effectively fact-check geologists or atmospheric physicists (to name just two of many fields). I have to “trust” that the peer review process will result in reliable and reproducible results. If I fear (based on purloined emails) that someone has a finger on the scale in the peer review process, then I am less likely to have faith in the resulting balance of articles in the peer-reviewed press. Earlier in the same post the author did acknowledge:

It really just means that you don’t simply accept something because someone tells you to. It means that you don’t simply accept something because the person presenting it is trustworthy or a high-profile scientist. It means that you check and consider what someone presents. You, or others, collect more data, do more calculations, or run more models to try and understand something in more detail and to check and confirm (or not) what others have presented before.

But this statement once again ignores the reality of research endeavours in climate science. The global climate models are proprietary “black boxes”, so there is no practical possibility that an outsider could repeat their runs. Meanwhile, the biggest complaint of the “show me” crowd is the practice of not providing the data necessary to replicate studies. It is fine to tell the public that they are free to attempt to replicate work (on their own dime) when the original work was carried out at the taxpayer’s expense; but when a taxpayer-funded scientist threatens, “If they ever hear there is a Freedom of Information Act now in the UK, I think I’ll delete the file rather than send to anyone”, you are really left to wonder. Ultimately, the trust has evaporated in the field, and an effort has to be made by those scientists working on taxpayer-funded grants at taxpayer-funded universities in taxpayer-funded jobs to provide taxpayers and decision-makers with the information necessary to evaluate the quality of the conclusions on which literally billions of taxpayer dollars will ultimately be spent.

In the discussion thread of the same posting, the blog proprietor expressed another sentiment which is surprisingly common in the field of climate science. I was discussing my interest in renewable energy technologies and got the following response:

What has that got to do with climate science? Are you suggesting that how our climate will respond to changes in anthropogenic forcings will depend on whether or not renewables are a viable alternative to fossil fuels?

This seems to be a common refrain amongst the “climate scientists” in the climate discussion. They are so fixated on their own little worlds that they have no understanding of the larger picture. Put another way, they are so fascinated with the bristlecones that they fail to see the forest. I suppose my response to these people would be to ask the question: What is the point of research in the field of climate science?

If their answer is “to keep a lot of academics in grant monies and travelling to exotic locations on the government’s dime”, then I suppose that renewable energy technologies really don’t matter. If the answer is “to understand anthropogenic forcings on the planet and how humans can, if necessary, address/reduce these forcings to avoid potential mass human suffering and ensure the continuing health of the ecosphere”, then renewable energy technologies, and an understanding of their influences, are a critical component of the field.

As I have mentioned previously, my area of thesis research was the intersection of environmental data quality and environmental decision-making. My research involved identifying the strengths and limitations of environmental datasets and providing tools to allow for the effective re-use of the data in alternative settings, including that of decision-making. Thus, understanding the fundamentals of policy development and decision-making was a component of my studies. In this I am something of an anomaly in the field. To explain, I had two supervisors (due to the interdisciplinary nature of my studies). One of my supervisors was a senior scientist who had left his active research program to dedicate that time to policy development and enhancing decision-making. My other supervisor, meanwhile, was a firm believer in ensuring that his students had well-rounded educations, and early in my career we agreed that my interest in interdisciplinary studies was going to be a serious hindrance in the academic community (where hiring was done by “Departments” based on departmental needs). I’m not sure if it is still the same now, but when I was a grad student every department talked about the importance of interdisciplinary scholars yet all wanted some “other” department to hire and fund them. As a consequence, my graduate education was, from its inception, aimed at a non-academic future.

The one thing I have noticed in the field of climate science is that, for many academics, the absence of formal training in policy development and environmental decision-making does not hinder their willingness to provide unsolicited, and often wrong-headed, advice in the field. One of the areas where the advice is most often wrong-headed is renewable energy technologies. It is accepted political reality that no first-world government on the planet is willing to go into energy poverty in order to meet climate change goals. The State of Illinois is not going to accept regularly scheduled blackouts to compensate for the fact that 47% of its electricity supply currently comes from coal power plants. So if you want to get Illinois off coal you are going to need an alternative to “turning off the heat in the dead of winter”. I have written previously about the Energiewende policy in Germany and how its “success” is heavily dependent on off-shoring energy supplies (and thus the Tyndall gas emissions associated with those energy supplies).

In my least-read post ever, I even discussed the concepts of “energy density” and “power density” and how they influence which renewable technologies may be an appropriate option to address future energy needs in a region. One of the reasons I started to blog was that so many of the people demanding renewable energy most loudly did not seem to recognize that the wrong choice of renewable energy source can actually result in increased Tyndall gas concentrations. As I wrote in a previous post on biofuels and bioenergy, research demonstrates that “converting rainforests, peatlands, savannas, or grasslands to produce food crop–based biofuels in Brazil, Southeast Asia, and the United States creates a “biofuel carbon debt” by releasing 17 to 420 times more CO2 than the annual greenhouse gas reductions that these biofuels would provide by displacing fossil fuels”. I also pointed to the research which indicates that corn-based ethanol, “instead of producing a 20% savings in greenhouse gases, nearly doubles greenhouse emissions over 30 years and increases greenhouse gases for 167 years”. What this means to the lay reader is that such a fuel would take 167 years to become carbon neutral. If our intention is to address CO2 concentrations in the atmosphere within the next 8 or 9 generations, then that is the renewable fuel for you, but to me it looks like a poor candidate to replace Illinois’ coal power plants if I am looking for improvements in a shorter time frame.
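
As a quick illustration of what a “carbon debt” implies in practice, here is a minimal payback sketch. The annual-saving unit is an arbitrary placeholder of mine; the multipliers simply echo the figures quoted above (a debt of N times the annual saving takes N years to repay):

```python
# Minimal sketch of a "carbon debt" payback calculation. The annual-saving
# unit is arbitrary; the multipliers echo the figures quoted in the text.

def payback_years(carbon_debt, annual_saving):
    """Years before cumulative annual savings repay the up-front conversion debt."""
    return carbon_debt / annual_saving

annual_saving = 1.0                    # one unit of CO2 avoided per year
for multiplier in (17, 167, 420):
    debt = multiplier * annual_saving  # debt expressed as a multiple of the annual saving
    print(f"Debt of {multiplier}x the annual saving -> "
          f"{payback_years(debt, annual_saving):.0f} years to break even")
```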

So when a “climate scientist” says, “I don’t see what that [renewable energy] has to do with climate science”, I suppose the nicest response I can come up with is: renewable energy is only important if you are interested in understanding changes in the sources and sinks of CO2 in the atmosphere, which I see as an important component of the discussion (and rather important for modelling), don’t you think?

mea culpa: as a reader has pointed out, the earlier version of this post had an issue with the CO2 subscript. My blogging platform doesn’t have fancy fonts and a copy and paste error on my part was not caught by my copy-editing process (reading over the post late at night while drinking a glass of wine). The mistake was all mine and I apologize for any confusion this might have caused.

Posted in Climate Change, Climate Change Politics, Renewable Energy | 11 Comments