What are the real marine risks of the Trans Mountain Pipeline Expansion?

Every morning, starting around 7 am, the Spirit of Vancouver Island leaves its berth in Swartz Bay for its first run to Tsawwassen. On board the Spirit are tens of thousands of liters of diesel fuel to run the ship for the day. On her car decks the Spirit carries around 400 cars and a dozen or more transport trucks, each carrying tanks of gasoline or diesel fuel. From Swartz Bay, the Spirit sails through the incredibly tight shipping lanes of the Gulf Islands, through Active Pass (a notoriously treacherous passage) and through the active shipping lanes of the Strait of Georgia (all home to the endangered J-pod of resident BC orcas) to the Tsawwassen ferry terminal, situated near environmentally fragile eelgrass beds that provide habitat for countless small fish and lie along the protected Pacific Flyway. The ferry, carrying hundreds of cars and trucks and thousands of people, makes this trip numerous times daily without the support of any rescue tugs. Even scarier are the hazardous goods runs made late at night, when, in the dark and through this treacherous route, the ferries transport tens of thousands of liters of goods too dangerous to move with civilian passengers on board.

You might ask why I am talking about ferries. The answer is that, from a marine risk assessment perspective, this route is a nightmare. The potential risks to human health and the environment are almost countless: spills, collisions, narrow passages, charted and uncharted rocks and engine loss are all possible outcomes of any given trip, and yet, given the tremendous risk to human health and the environment, our government has not cancelled this run to evaluate its continued safety to the coastal marine ecosystem. Just look at this link a colleague provided me about the slicing of the Queen of Victoria by the Soviet freighter Sergey Yesenin in Active Pass in August 1970. Meanwhile, this week our provincial government announced that it is proposing a freeze on increases in the transportation of diluted bitumen (dilbit), based in part on the risks associated with the project. This got me thinking about risk and marine transport.

As I mentioned in my previous post, my job involves investigating and remediating contaminated sites. As part of my job, I also carry out due diligence risk assessments to evaluate the risks posed by contaminated sites to human and ecological health. I evaluate risks every day, but not the way most people look at risk: I am responsible for putting a number on risk, or more specifically, putting a number on the hazard a chemical poses to ecological health in order to determine whether the risk is acceptable or unacceptable. There is an entire science to this task and I have spent a lot of time at this blog explaining how we do it. At the bottom of this post is a summary I have prepared that gives readers a chance to go through those posts at their leisure.

One of the first things you learn in studying risk assessment is that there is no such thing as an activity with zero risk. In everything we do we encounter risks. When we get in the car we put on our seat-belts; before our kids get on the ice they put on their helmets; before my daughters play soccer they put on their shin-pads. All of these are tools used to reduce the risks of typical day-to-day activities. Industrial activities are no exception. Pipelines run the risk of leaks, tankers run the risk of spills, and that is something we have to accept as part of living in a modern industrialized country. Using safety processes and procedures we work hard to minimize those risks, but we cannot eliminate them entirely. Unless our government has a plan to eliminate the use of fossil fuels virtually overnight, we will need to transport them, and pipelines are the safest way to move fossil fuels overland. If the government succeeds in stopping the pipeline, all it will have done is increase our risk of a major fossil fuel spill. As for the absolute safest way to transport fossil fuels, that would be modern, double-hulled tankers.

Going back to the BC coast: while the BC Ferries pose a pretty significant risk, far more frightening, from a marine spill perspective, are the daily barge runs that move fuels from Vancouver and the refineries in Puget Sound to keep Vancouver Island supplied with the diesel and gasoline necessary to keep its communities alive. These barges run on odd schedules, through good weather and bad, and are never accompanied by marine rescue tugs. Has our provincial government blocked the movement of these barges? Of course not! Even worse, look at the fuel barges going up the coast. Does the Nathan E. Stewart ring a bell? As I have written previously, the provincial government has essentially ignored this risk for decades and failed to put in the money necessary to ensure a reasonable spill response. So when our current government says it wants to investigate expanded dilbit transportation (a hypothetical future risk) while ignoring a real, pressing and much more significant existing risk, you are left to wonder whether it is really politics, rather than concern for the environment, that is driving the decision.

On another front, as I write this blog post the Port of Vancouver is engaged in a public consultation process about plans to increase the size of Delta Port. This is at a port that currently handles approximately 23,000 ship movements a year and is looking to add an estimated 5,000+ more ship movements if all the future upgrades are included. That dwarfs the 720 additional ship movements associated with the Trans Mountain Expansion (TMX).
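For scale, here is a back-of-the-envelope comparison of those traffic figures, a minimal sketch using only the approximate numbers cited above:

```python
# Rough scale comparison of added ship movements, using the approximate
# figures cited in the paragraph above.
current_movements = 23_000  # current ship movements per year at the port
delta_port_added = 5_000    # estimated additional movements, all upgrades
tmx_added = 720             # additional movements from the TMX

print(f"Delta Port upgrades: +{delta_port_added / current_movements:.0%}")  # ~+22%
print(f"TMX tanker traffic:  +{tmx_added / current_movements:.1%}")         # ~+3.1%
```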

Now unlike the Port, the fuel barges or the BC Ferries, the TMX was required by the NEB to undergo a detailed risk analysis. The critical document on this topic is the report Termpol 3.15 – General Risk Analysis and intended methods of reducing risk, which evaluated the risks of the project. It concluded that "with effective implementation of risk reducing measures most of the incremental risk resulting from the project can be eliminated". To put a number on it:

  • Without the project the risk of a credible worst case oil spill is estimated in 1 in every 3093 years….If all the risk reducing measures discussed in this report are implemented the frequency will be one in every 2366 years.
  • This means that after the Project is implemented, provided all current and future proposed risk control measures are implemented, the increased risk of a credible worst case oil spill in the study area from the Trans Mountain tanker traffic will be only 30% higher than the risk of such an occurrence if the Project did not take place.

By increasing the number of tankers seven-fold, while also implementing the changes that were ultimately mandated by the NEB, the risk of a spill remains at less than one event every 2,000 years. So no, the risk does not increase by 7 times; it increases by barely 30%, and 30% more of almost zero remains almost zero. Essentially they are saying that the project provides no significant increase in risk over the risks we accept every day (what I refer to as a de minimis risk below). In exchange for a negligible increase in risk we get economic prosperity and the economic health and goodwill of our neighbouring provinces. The dollars generated by this project are what pay for our health care and social services.
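To see where that 30% figure comes from, here is a minimal sketch converting the Termpol return periods quoted above into annual probabilities, using only the report's two numbers:

```python
# Converting the Termpol 3.15 return periods into annual spill
# probabilities and comparing them.
years_without_project = 3093  # one credible worst-case spill per 3093 years
years_with_project = 2366     # one per 2366 years, with all risk measures

p_without = 1 / years_without_project  # ~0.00032 per year
p_with = 1 / years_with_project        # ~0.00042 per year

increase = p_with / p_without - 1
print(f"Relative increase in annual spill risk: {increase:.0%}")  # ~31%, not 600%
```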

Certainly the government could try to make the case that the risks posed by the TMX (one accident every 2,000+ years) are too high for the benefits received. But that is not the argument they, or the opponents of the pipeline, have been making. They argue that BC should not incur any risk in exchange for our current level of prosperity. The problem is that our current level of prosperity is a direct result of our national union. To suggest that we accept no risk, in a world where we balance every other risk out there, is simply not a legitimate argument to make. Arguing that the TMX poses too much risk while simultaneously refusing to fund improved spill response on the Central BC Coast is the epitome of hypocrisy. It shows that the ban is not risk-based but simply political in nature. The opponents of the pipeline need to enumerate the risks and explain why the de minimis increase in risk associated with the pipeline is not worth the improvement in quality of life it provides to British Columbians and Albertans alike.

Addendum on Risk and Toxicity

I have written a lot at this blog about how risk is communicated to the public and I have prepared a series of posts to help me out in situations like this. The posts start with "Risk Assessment Methodologies Part 1: Understanding de minimis risk", which explains how the science of risk assessment establishes whether a compound is "toxic", the importance of understanding dose/response relationships, and the concept of a de minimis risk: a risk that is negligible and too small to be of societal concern (ref). The series continues with "Risk Assessment Methodologies Part 2: Understanding 'Acceptable' Risk" which, as the title suggests, explains how to determine whether a risk is "acceptable". I then go on to cover how a risk assessment is actually carried out in "Risk Assessment Methodologies Part 3: the Risk Assessment Process". I finish off the series by pointing out the danger of relying on anecdotes in a post titled "Risk Assessment Epilogue: Have a bad case of Anecdotes? Better call an Epidemiologist".


On dilbit, oil spill response and political gamesmanship

As many of my readers know, my day job involves investigating and remediating contaminated sites. My particular specialty is the investigation and remediation of petroleum hydrocarbon impacts [and before anyone asks, no, I have never worked for Kinder Morgan, nor do I have any conflicts of interest associated with the Trans Mountain file]. I have a PhD in Chemistry and Environmental Studies and have spent the last 18 years learning how hydrocarbons behave when spilled into the natural environment. This post comes out of my surprise at the provincial government's announcement that it is proposing a freeze on increases in the transportation of diluted bitumen (dilbit) until it can create an independent scientific advisory panel to help address the scientific uncertainties outlined in the Royal Society of Canada (RSC) Expert Panel report The Behaviour and Environmental Impacts of Crude Oil Released into Aqueous Environments. The reason for my surprise is that, unlike most British Columbians, I have read the RSC report, and the uncertainties it expresses are not going to substantially change how spill response is planned or carried out on the West Coast of BC.

Before we go further I am going to make a blanket statement. It is my belief, informed by my years of study and practical experience in the field, that we know enough about how diluted bitumen (dilbit) behaves when spilled to design a world-class spill response regime. Why do I make such a statement? Let's start by dispelling some myths. The first thing to understand is that virtually everything the activists (and certain politicians) tell you about dilbit is wrong. I have previously described the state of the research on dilbit and its behaviour in marine environments. To summarize, the research shows that dilbit behaves much like other heavy oils in a spill scenario. In the marine environment dilbit floats, which is understandable given that it is a non-polar liquid with a density less than that of seawater. In freshwater environments, the dilbit blend and the length of time after the spill will define how it behaves. Sometimes it will float for days on end and sometimes it will float for only a few days and then sink, as this graph from the National Academies of Sciences (NAS) report on the subject displays:

[Figure: NAS report, Figure 3-2, showing the float/sink behaviour of two different dilbit blends over time after a spill]

Whether it sinks or floats is something you can predict once you know the blend of the dilbit (see the difference between the two blends in the figure) and the conditions in the freshwater environment. If dilbit spills into really silty water (fresh or marine) it will form oil-particle aggregates (OPAs) which, under certain conditions, will sink to the bottom. In other cases it becomes entrained in the water column (and thus harder to clean up). Once again, this is almost exactly what heavy crude oils do under the same conditions. To summarize: dilbit is not some existential threat to humankind; it behaves like many other heavy crude oils. When spilled it will cause a mess, but no bigger a mess than a similar heavy oil. The world community has a lot of practice and knowledge in handling heavy oil spills. This is a topic to which a huge amount of time, money and intellectual energy has been directed, and that expertise can be used to inform our spill response.
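To illustrate the float/sink logic, here is a small sketch. The density values are representative figures I have assumed for illustration, not measurements from the NAS report: fresh dilbit blends typically sit around 0.92-0.94 g/cm3, creeping upward as the light diluent weathers off.

```python
# Illustrative float/sink check. Densities are assumed, representative
# values, not figures taken from the NAS report.
SEAWATER = 1.025    # g/cm3
FRESHWATER = 1.000  # g/cm3

def floats(oil_density: float, water_density: float) -> bool:
    """An oil floats when it is less dense than the receiving water."""
    return oil_density < water_density

blends = {
    "fresh dilbit (assumed 0.93 g/cm3)": 0.93,
    "weathered dilbit (assumed 1.01 g/cm3)": 1.01,
}
for label, rho in blends.items():
    print(f"{label}: seawater={floats(rho, SEAWATER)}, "
          f"fresh water={floats(rho, FRESHWATER)}")
```

Note how the weathered blend still floats in seawater but can sink in fresh water, which is exactly the marine/freshwater distinction described above.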

At this point I can hear the activists saying: but what about the RSC report? To them I say: try actually reading it. The report covers all crude oil spills, so very little of its content refers to dilbit per se. This is an important distinction because most of the recommendations for further research proposed by the RSC are general in nature and apply to all oils, not just dilbit. Let's look at what the Executive Summary says are the "high-priority research needs":

High-Priority Research Needs Identified by the Expert Panel
1. Research is needed to better understand the environmental impact of spilled crude oil in high-risk and poorly understood areas, such as Arctic waters, the deep ocean and shores or inland rivers and wetlands.
2. Research is needed to increase the understanding of effects of oil spills on aquatic life and wildlife at the population, community and ecosystem levels.
3. A national, priority-directed program of baseline research and monitoring is needed to develop an understanding of the environmental and ecological characteristics of areas that may be affected by oil spills in the future and to identify any unique sensitivity to oil effects.
4. A program of controlled field research is needed to better understand spill behaviour and effects across a spectrum of crude oil types in different ecosystems and conditions.
5. Research is needed to investigate the efficacy of spill responses and to take full advantage of ‘spills of opportunity’.
6. Research is needed to improve spill prevention and develop/apply response decision support systems to ensure sound response decisions and effectiveness.
7. Research is needed to update and refine risk assessment protocols for oil spills in Canada.

Notice the terms used: "better understand", "increase the understanding", "improve", "update". These are not the terms of a group that knows nothing about a topic; they are the terms of a group that wants to see incremental gains to our knowledge base. Why is this important? Because our provincial government has not banned the movement of ALL oil products, even though the RSC report expresses concerns about all oil products. The provincial government has only expressed concern about one type of oil: dilbit. There is a serious disconnect here. It is almost as if the ban is not based on science but is instead a bit of political gamesmanship.

That being said, research has advanced since the RSC report was published. What does the current research say? As Natural Resources Canada Research Scientist Dr. Heather Dettman points out:

light crude, a low-viscosity oil, may actually be more hazardous [than dilbit]. When light crude hits water, it’s “like adding cream to coffee. That’s it. It’s all mixed in, it gets stuck in the sediment.”

Dr. Dettman pointed to the 2010 dilbit spill in Michigan’s Kalamazoo River.

“It looks ugly and it’s not good for the fish. But because it’s there you can see it, you can pick it up, and then it’s gone,” she said. “We get a very high recovery rate.”

Here is a longer discussion with her on the radio. As Dr. Dettman points out, we have practical experience handling a dilbit spill in Burrard Inlet and the results were heartening. Virtually no OPAs were formed and most of the dilbit was recovered. It was a lucky spill, however, as it occurred in an area with limited wave action and no storms. Now let's go back to Dr. Dettman.

Dr. Dettman said she and her team have built substantially on the body of dilbit research since the Royal Society report was released three years ago. Their experiments – performed in an open tank filled with fresh North Saskatchewan River water – show that various blends of diluted bitumen won’t sink until the sludge has been left alone for at least 21 days, she said.

Even then, she added, only one type of bitumen found its way to the tank floor, even in warm conditions.

She said the data seem to indicate diluted bitumen tends to form a hardy slick on the water’s surface – a spill that can be somewhat contained – rather than dissolve into the water and end up coating riverbeds and marine life.

“The misinformation is that diluted bitumen will sink,” Dr. Dettman said. “But it’s not sinking.”

The reality of the situation is that any oil spill, be it crude oil or diluted bitumen, represents a tragedy and a catastrophe. It will harm the natural environment, will kill some marine organisms, and will be very hard to clean up. The point of this blog post is that a diluted bitumen spill would not be a uniquely catastrophic situation. It would be comparable to a spill of any other heavy crude, you know, the products that have been safely shipped in and through the Salish Sea for the last 50+ years. Banning the transport of dilbit until we have done more research has no basis in science. It is a political game. Any "independent scientific advisory panel" will end up concluding that we have the information to design a world-class spill regime. Anyone who says otherwise is either not aware of the state of research in the field of spill response or has a political axe to grind. You can decide where our current government and the anti-pipeline activists stand on this topic.


On that UBC Report comparing job numbers and the Site C Dam

My Twitter feed has been alive with news of a "new UBC Report" that, according to its author, Dr. Karen Bakker from the UBC Program on Water Governance, "concludes stopping Site C will create a larger number of sustainable jobs in the province". This report has been cited in multiple locations in the last week, so I had been hoping to get a chance to dig into it. Having done so, I am, once again, surprised at how easily questionable research can dominate the conversation on the Site C file. It was even cited in the Legislature. The selling point of the report (at least from what I have seen on Twitter) was that it showed the BCUC Alternative portfolio producing 5 times more jobs in renewable energy than are expected to be produced by Site C. What I found was that the majority of the jobs "created" by the BCUC Alternative portfolio would likely never actually appear, and that the costs of the renewables portfolio promoted by the anti-Site C activists would likely be much higher than previously suggested. The rest of this blog post will examine why, in my opinion, the results of Dr. Bakker's study are unreliable.

The report itself is actually an unsigned briefing note and an accompanying spreadsheet analysis (link opens an Excel file). In the briefing note we are presented with an employment table containing three columns of employment numbers: one for the BCUC Alternative portfolio, one for the BC Hydro Alternative portfolio and one for Site C Continued. The take-home message is that the cumulative person-years, or "jobs", modeled for the BCUC Alternative portfolio total 208,498 person-years by 2094, while Site C generates only 40,578 person-years by 2094. This is where the "5 times the jobs" headline comes from (actually 5.13, but who is counting).

Looking at the spreadsheet we find the basis for this table in the tab "Comparison-LLF". The BCUC numbers are derived from the tab "BCUC-LFF". On that tab we find the "Employment Summary-All Resources (by year)" table starting at column T. Looking at this spreadsheet we can see where all the jobs are coming from. Adding up the columns, we discover that the 208,498 person-years for the BCUC Alternative portfolio is made up of the following subtotals:

  • 10,296 person-years from decommissioning Site C  (5% of the total)
  • 183,600 from demand-side management (DSM) programs (88% of the total)
  • 14,602 person-years from the various wind projects  (7% of the total)

Looking at this you immediately recognize that, contrary to what the people on Twitter have been claiming, renewables on their own do not generate more jobs than Site C. Rather, according to the briefing note, Site C generates 40,578 person-years, almost 3 times as many as the actual wind projects (a quick check of the arithmetic follows below).
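```python
# Re-adding the briefing note subtotals (person-years to 2094) to see
# where the "5 times the jobs" headline actually comes from.
decommissioning = 10_296  # decommissioning Site C
dsm = 183_600             # demand-side management programs
wind = 14_602             # wind projects
site_c = 40_578           # Site C Continued

bcuc_total = decommissioning + dsm + wind
print(f"BCUC Alternative total: {bcuc_total:,}")      # 208,498
print(f"Headline ratio: {bcuc_total / site_c:.2f}x")  # ~5.13x
print(f"Site C vs wind alone: {site_c / wind:.1f}x")  # ~2.8x in Site C's favour
```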

Looking at this table one immediately recognizes an obvious flaw in the model with respect to the wind jobs. According to the model, the wind facilities will generate a lot of jobs during their initial construction phase (for the Wind-PC18 column (Column X) that would be 310 person-years per year from 2034 through 2038). The authors then, confusingly, have those facilities providing a constant number of maintenance jobs (52 person-years per year) between 2039 and 2094, a span of over 55 years.

Anyone who has studied wind energy projects understands that wind facilities are not designed to operate for 55+ years. The typical wind project has a 20-25 year lifespan, at which point the turbines have to be decommissioned and new turbines installed if the power is still needed. From a jobs perspective this would significantly increase the number of jobs generated by the wind projects: tearing down old turbines and building new ones would generate a lot of person-years of work, and yet these jobs are completely missing from the analysis. While this necessary component of a wind project life cycle would represent a good thing for the people touting renewables as a job builder, it poses a bit of a quandary for the anti-Site C activists.

The problem is that many of their cost calculations omit the need to decommission and rebuild facilities 3+ times over the period Site C will be in service (see the sketch below). In order to effectively replace Site C you have to triple the number of turbines AND include the cost to decommission each generation of obsolete turbines at the end of each life cycle. Triple the turbines, incorporate the decommissioning costs, and suddenly the cost of those replacement renewables roughly triples. This is a fatal flaw in the cost calculations produced by the anti-Site C activists. I have been banging this drum for a while but, confusingly, have heard little from the supporters of Site C on this obvious error in price calculations. A single generation of wind turbines cannot replace a dam intended to operate for 70-120 years, yet this is what the modelers assume in their analysis.
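The replacement arithmetic is simple enough to sketch. The lifespans below are the round figures used in the text, not project-specific values:

```python
import math

# How many generations of wind turbines fit inside the dam's service life?
turbine_life = 25           # years, typical wind project lifespan
for dam_life in (70, 120):  # Site C service-life range cited above
    generations = math.ceil(dam_life / turbine_life)
    print(f"{dam_life}-year dam life: {generations} turbine generations")
# 70 years -> 3 generations; 120 years -> 5 generations
```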

The BCUC's actual Alternative Portfolio spreadsheet does partially account for this cost but makes a number of odd assumptions, including reducing the costs for refurbishing facilities (by 30% of original costs); omitting any decommissioning costs; and, most strangely, assuming that operations and maintenance costs go down over the lifespan of the facilities. I'm not sure about you, but I've found that as systems age they need more maintenance, not less.

Going back to the briefing note, I have no issue with the person-years associated with decommissioning Site C in the BCUC Alternative portfolio, but I have a very serious issue with those 183,600 person-years attributed to demand-side management (DSM).

You might ask where Dr. Bakker and her colleagues got that huge number. Well according to the Briefing Note:

According to a study carried out for BC Hydro, spending on conservation or demand-side management (DSM) programs creates 30 jobs per $1M spent. 1

That "1" identifies Footnote 1, the Power Smart Employment Impacts DSM Programs, Rates and Codes and Standards, F2008 to F2037 (citing p. iv). Any time a footnote references a Roman numeral, the authors are sending you to the Executive Summary of the cited report. A good scientist never relies on the Executive Summary of a report, because the Executive Summary typically does not contain the provisos from the body of the work itself. Looking at the Power Smart report we do indeed see in the Executive Summary that the "employment intensity" for Power Smart DSM was an estimated 34.4 person-years per million dollars spent. The question is: where does that number come from?

Reading the report, the 34 person-years of employment per million dollars spent on DSM includes two components: Investment Effects and Re-Spending Effects. Investment Effects are described as:

These expenditures are required to implement the energy saving measures and are comparable to the construction expenditures and employment from supply-side projects. As depicted in Figure 4-1, investment employment ramps up over the initial years of the DSM Plan and achieves a plateau until F2028. Thereafter, it falls fairly rapidly with expenditures, but the decline is mitigated because some projects will be completed and paid out after F2028. Overall, the pattern of investment employment directly follows the expenditure pattern.

That is not to say these are all direct jobs associated with spending all those millions. As described in the glossary, Investment Effects include:

Direct, indirect and induced employment estimated from the initial DSM investment expenditures in programs, rates, codes and standard measures.

Thus the original employment boost involves direct, indirect and induced employment. The other half is the "Re-Spending Effects", which are described as:

The employment impacts from re-spending activity are estimate at 50,900 PYs [between 2008 and 2037] and are created as a byproduct of the economic benefits associated with the DSM expenditures (see Figure 4-1). Since these employment benefits continue, driven by the ongoing energy bill savings, they can be likened to the operation and maintenance employment from supply-side projects.

Did you catch that second type? If not, let's look at the glossary [Definitions], which describes "Re-Spending Effects" as:

Direct, indirect and induced employment estimated from consumers’ re-spent electricity bill savings in the economy.

These two definitions should be raising red flags all over the place. First and foremost, the Site C numbers are direct employment numbers, not "induced or indirect employment"; the comparison is apples to oranges. Moreover, Re-Spending Effects can only happen if the DSM program delivers substantial reductions in consumers' hydro bills, and that is a huge assumption on the Site C file. As we have repeatedly been told, if the project is scrapped the sunk costs have to be recouped, reportedly via a 10% surtax placed on everybody's hydro bills, which would also help cover the decommissioning costs. That surtax will eat up any initial savings the consumer might see from shutting down the project. It is unclear how people will re-spend savings they never receive in the first place.

Moreover, as I described in my previous blog post, the only way for DSM to get us where we need to be [dramatically reducing our electricity demand] is by substantially increasing hydro bills. That is how DSM works: you make electricity more expensive to encourage consumers to use less. This brings up a rule of thumb I was taught as a student about DSM programs. Consumers are generally accustomed to how much they are willing to spend on household bills like hydro. If you increase the price of a household budget line item (electricity in this case), consumers will work to drop demand until they are spending about the same amount as before. If the price rises substantially, they are often willing to spend a bit more to maintain their quality of life, but will not make massive changes unless there is a big upside in savings. What this means is that the price increase will likely drop demand but will not have a major effect on consumer hydro bills. Consequently the DSM measures will not result in the reduced hydro bills needed to generate Re-Spending Effects.

The absence of re-spending of non-existent savings becomes a serious consideration in the BCUC Alternative model because, according to the Power Smart document, the direct jobs (the Investment Effects) associated with continued investment in DSM quickly disappear. Figure 2-1 of the Power Smart reference shows that under a steady investment state, the employment increases associated with Investment Effects essentially disappear about ten years into the spending cycle. By their measure only 50,900 person-years are generated regardless of how long the money is spent, because after that time the monies go to established programs rather than to the many people needed to set those programs up. For the UBC modelers this is a problem: they assume that 183,600 person-years will be generated by DSM between now and 2094, but if the Re-Spending Effects don't appear then about 132,700 of those person-years simply vanish from the equation. Take those 132,700 person-years out and all the talking points go away. Instead of generating 208,498 person-years, the BCUC Alternative portfolio produces only 75,798, barely double what Site C generates. Given the mushiness of the assumptions used in this report, those two numbers may as well be the same.
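Checking that arithmetic with the briefing note's own figures (all numbers in person-years):

```python
bcuc_total = 208_498         # BCUC Alternative portfolio, to 2094
site_c_total = 40_578        # Site C Continued, to 2094
dsm_claimed = 183_600        # DSM person-years in the BCUC portfolio
investment_effects = 50_900  # Power Smart estimate of Investment Effects

re_spending = dsm_claimed - investment_effects  # 132,700 PYs that may never appear
adjusted_total = bcuc_total - re_spending       # 75,798 PYs

print(f"Re-spending person-years at risk: {re_spending:,}")
print(f"Adjusted BCUC portfolio total: {adjusted_total:,} "
      f"({adjusted_total / site_c_total:.2f}x Site C)")  # ~1.87x, "barely double"
```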

Looking at the briefing note I remain amazed at how easily the public can be swayed by someone with a fancy title and a complicated spreadsheet. As with the Swain model, once you dig into the numbers it becomes increasingly clear that the output of the model is entirely dependent on the input assumptions and that, in this case, the input assumptions are demonstrably faulty. Wind farms don't last 55+ years; they last closer to 25, and if wind is going to replace Site C then you need to account for the equivalent time frame to make a fair comparison. On the DSM front, a modeling exercise that assumes 64% of your total jobs will be derived from people spending savings they never obtained is not a good thing. Finally, for the anti-Site C folks who keep proclaiming that renewables will make up for the jobs from Site C, the briefing note makes it clear that the actual renewable job numbers don't come close to the number of jobs generated by Site C. On the positive front, as I have pointed out numerous times at this blog, Site C will not come close to supplying the energy we need to electrify our transportation sector. As such we will need DSM (and all the ensuing jobs) in addition to Site C if we are to meet our Paris Agreement commitments.


Reviewing the demand estimates used by the opponents of Site C

This week Business in Vancouver (BiV) printed an article about the Site C project titled "B.C. might not need any additional wind power either", which included a number of quotations from Dr. Harry Swain, a gent with whom I have disagreed on the topic of Site C. In the article Dr. Swain stated that BC has all the power-generating capacity it needs for the next 20 years and therefore does not need Site C. He indicated that the basis for his claim was his modelling on the topic. This led me to wonder what was in that model and how the opponents of the Site C dam were able to generate numbers that run completely contrary to both my findings and those of BC Hydro. This blog post examines that question and demonstrates, once again, the importance of looking at the underlying data used in environmental decision-making. By the time you finish reading I think you will agree with me that the modelling used by the opponents of Site C is flawed and not worthy of consideration in the Site C debate.

To begin I had to get the model described in the article. As many of you know Dr. Swain et al. presented a forecast model to the BCUC (link downloads an Excel model from BCUC website). I downloaded that model a while back and discovered that the critical inputs were not included in the spreadsheet but rather referred back to a secondary spreadsheet called 2016-2036-Forecast-w-Revised-Trade-BCUC-RB-Eoin-Finn-Oct18.xlsx. At that point I was stymied, since I didn’t think that Dr. Swain or his colleagues would give me a copy of their model. However, when the BiV article came out I asked the article’s author if he had been given any information to support Dr. Swain’s claims. The response was a copy of Dr. Swain’s updated model titled BC-Hydro pro-forma 2017-37Rev5SiteC.xls. Moreover, much to my surprise I was informed that Dr. Swain had agreed to the release of the file to me. This represents a level of professional courtesy that was much appreciated and hopefully represents a step towards working together to meet BC’s continued energy challenges.

Now as a first note, I will point out that the 2017-37 model differs in output from the 2016-36 model used in the BCUC submission. As I do not have the earlier model I cannot identify the differences between the two, but I can point out what I view as limitations of the 2017-2037 model (called the Model hereafter) and explain why I believe it does not present a reasonable estimate of future demand in BC.

In the BiV article Dr. Swain stated:

With the modelling that I did, I assumed – as BC Hydro did – that the population is going to increase, that GDP will increase

While that is strictly true (the Model has a correction for inflation), it does not tell the whole story. According to BC Statistics, British Columbia's population is expected to rise from 4.8 million in 2017 to 5.9 million in 2037, an increase of 23% over the time covered by the Model. The problem is that the Model does not address that population growth directly. Rather than looking at per capita demand, it simply assumes that demand will grow or decline at the average rate of historic residential demand growth between 2006 and 2016. The choice of dates is very significant since it includes the 2008-2009 market crash, which caused a contraction in our economy and in the associated energy use. Including the recession in the input number results in lower growth in residential demand for the entire time frame covered by the Model.

Moreover, looking at the residential sector demand estimates I identified a number of further critical flaws. The period under consideration (2006-2016) was one in which BC Hydro carried out intense demand-side management activities that temporarily de-coupled residential energy growth from population growth. To further lower future residential demand, the Model includes an elasticity factor (addressed later) that is sufficiently high to essentially eliminate demand increases associated with population growth over the last 10 years of the run (2027-2037). According to projections, the population of BC is supposed to grow by 550,000 souls between 2027 and 2037, yet under the Model residential energy use stagnates during that period. In total, the Model projects residential demand to increase by 7% over the entire 20 years. Not 7% per year, but 7% over the period from 2017 to 2037, even as the population increases by 23%.
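The implied per-capita trend falls straight out of those two numbers (a minimal sketch):

```python
# What the Model's 7% residential demand growth implies per capita,
# given BC Statistics' population projections.
pop_2017, pop_2037 = 4.8e6, 5.9e6
pop_growth = pop_2037 / pop_2017 - 1  # ~23%
demand_growth = 0.07                  # Model: +7% residential demand, 2017-2037

per_capita_change = (1 + demand_growth) / (1 + pop_growth) - 1
print(f"Population growth: {pop_growth:.0%}")
print(f"Implied change in per-capita residential demand: {per_capita_change:.0%}")  # ~-13%
```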

On the commercial demand side the Model has similar flaws. As I noted in my previous blog post on electricity demand, commercial demand in BC pretty much mirrors GDP growth. The Model has commercial demand increasing by only 3% between 2017 and 2037. In a province with 23% population growth, the service and commercial parts of our economy are only going to grow by 3%? This is simply not a reasonable assumption.

On the industrial side it gets even worse. I don't look forward to living in the British Columbia the Model projects for 2037, as we will have no industrial base to pay the taxes that fund our services. The Model has industrial use dropping by 66% over the 20-year period, projecting total industrial demand of 4,431 GWh in 2037. According to BC Hydro statistics, mining alone used 3,800 GWh in 2017. The forestry sector used around 6,800 GWh, with pulp and paper accounting for about 4,400 GWh of that. Think of it: under the Model, in 2037 BC's entire industrial base will use the same amount of power that it currently uses for pulp and paper alone. Is that a reasonable number? If I told you that BC's mining industry would be completely gone by 2037 and its forestry sector cut by more than half, would you believe me? The funny thing is, Swain et al. said that very thing to the BCUC and no one called them on it. Looking at how BC Hydro generated its projected demand, you discover that BC Hydro assessed each of its large industrial customers individually. I think I will trust BC Hydro on this topic.

Continuing our look at demand, we have electric vehicles (EVs). On this topic the Model once again ignores population changes and assumes that the personal vehicle fleet will remain static, with the same number of passenger vehicles on the road in 2037 as in 2016. This decreases the number of EVs needing electricity. The Model assumes there will be no attempt to electrify commercial or transport trucks, so there is no demand there. Moreover, the increase in EV uptake is extremely back-loaded, with a net total increase of only 29,310 EVs between 2017 and 2024. [The Model applies compound growth in its rows, so the big increases on the demand end land in the later years of the run.] This allows the Model to minimize electricity demand in the early years while still claiming larger EV numbers at the end of the run. Interestingly, based on the numbers from FleetCarma.com, by early Q3-2017 EV sales in BC had already surpassed the Model's projections for the entire year. Right now EV uptake is running at about twice the rate projected by the Model, which will bring a commensurate increase in electricity demand.
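To see how compound growth back-loads the additions, consider this illustration. The growth rate and starting fleet here are hypothetical values of my own choosing, not the Model's actual parameters; the point is only the shape of the curve:

```python
# With any fixed percentage growth rate, most of the new EVs arrive in
# the final years of the run, keeping early-year demand low.
rate = 0.25    # hypothetical annual growth rate (illustration only)
fleet = 5_000  # hypothetical starting EV count (illustration only)

for year in range(2018, 2038):
    fleet *= 1 + rate
    if year in (2020, 2025, 2030, 2037):
        print(f"{year}: fleet ~{fleet:,.0f}")
```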

Summing up what we have found so far: the Model presents unreasonably low numbers for every significant column on the demand side of the ledger. No wonder its totals come out so low. Now I could stop here, but there are two other aspects of the Model that should be exposed. The first is the price elasticity component.

As we know, price elasticity describes how demand goes down as price goes up. BC Hydro was criticized for using a relatively low elasticity value; its research indicates that price elasticity should range between -0.08 and -0.13 (and it used an even lower number). The Model uses a residential elasticity of -0.15. This produces a larger reduction in demand from hydro rate increases than is typically observed.

On top of this, the Model assumes rate increases in the residential, commercial and industrial sectors of 3.8% per year, every single year, between 2017 and 2037 (no rate freezes here). The result is an unseemly residential power rate of $234/MWh and a commercial rate of $201/MWh in 2037. Needless to say, those huge numbers drive down residential and commercial demand via price elasticity. Admittedly, they make that $88/MWh Site C power look pretty good. Combining extremely high power rates with an extremely high elasticity produces massively suppressed demand numbers in 2037. These basic assumptions are the reason the Model's demand is so low in 2037.
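Here is how those two assumptions compound (a sketch assuming a constant-elasticity demand response, which is my simplification rather than the Model's exact formula):

```python
# Compounding the Model's 3.8%/year rate increases over 20 years and
# applying its -0.15 residential elasticity.
annual_increase = 0.038
years = 20
elasticity = -0.15

price_multiplier = (1 + annual_increase) ** years   # ~2.11x by 2037
demand_multiplier = price_multiplier ** elasticity  # constant-elasticity form

print(f"Price multiplier by 2037: {price_multiplier:.2f}x")
print(f"Demand suppressed to {demand_multiplier:.0%} of baseline")  # ~89%
```

Working backwards, $234/MWh divided by that 2.11x multiplier implies a starting residential rate of roughly $111/MWh, so the Model's endpoint is internally consistent; it is the relentless compounding, paired with the inflated elasticity, that does the demand-suppressing work.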

Amusingly, while the Model projects incredibly expensive power for residential and commercial customers, it assumes virtually no increase in the price received for electricity sales. The Model has the trade price for electricity rising to $40/MWh in 2025 with no further increases thereafter. So in 2037 it has Hydro selling electricity to California at $40/MWh while simultaneously selling it to residential customers at $234/MWh. Why would the Model do such a thing? Because that way it can minimize the income generated by the dam. The incongruous combination of stratospheric domestic rates and negligible export rates results in both a decrease in demand and a minimization of the dam's income: the best of both worlds if you don't want the dam built, but absolutely unsupportable if you care about reliable data being used in decision-making.

Ultimately what this blog post shows is that the model used by Dr. Swain and his colleagues is so thoroughly flawed that it is simply not a reasonable tool for any decision-making process. What I find most confusing is why I haven't read about this from anyone else. The assumptions for this model were all out there, and yet no one went through the effort of examining them. The people pushing for the dam need to shake their heads. So much misinformation is coming out about this project, and the people supporting it simply shrug and move on. Wouldn't it be nice if the people supporting the project put in the sweat equity that the opponents of the dam have been contributing? Then, maybe, we might have enough useful information on the table to make a good decision as to whether we complete or scrap the dam.


Why efforts to fight Climate Change will change the conclusions of the BCUC Site C Inquiry Report

I have been incredibly busy at work over the last few weeks and so was not able to get involved in the public consultation portion of the BCUC Site C Inquiry process. Such are the downsides of not being a paid activist: when my work calls, my activism (a hobby for which I receive no compensation) suffers. Happily my work deadlines have passed, leaving me time to read the BCUC Site C Inquiry Final Report (the Report). What I read left me completely flabbergasted. The conclusions of the Report depend entirely on its load forecasts, and the load forecast upon which all the major assessments are based completely ignores the overriding environmental issue of our age: fighting climate change. What also astounds me is that this incredibly important fact has not been highlighted in any of the analyses of the Report I have read to date.

Since it is such an important point let’s evaluate it immediately. In the conclusion of the Report (on page 187) the Summary states:

We take no position on which of the termination or completion scenarios has the greatest cost to ratepayers. The Illustrative Alternative Portfolio we have analyzed, in the low-load forecast case, has a similar cost to ratepayers as Site C. If Site C finishes further over budget, it will tend to be more costly than the Illustrative Alternative Portfolio is for ratepayers. If a higher load forecast materializes, the cost to ratepayers for Site C will be less than the Illustrative Alternative Portfolio.

Let's unpack that statement. Throughout its submissions BC Hydro suggested that the Panel consider a mid load forecast in carrying out subsequent cost assessments. The Panel reports that it found the mid load forecast "excessively optimistic" and chose to use the low load forecast in its subsequent analyses. This is the critical point. Under the low load forecast, the alternative portfolio of renewables and demand-side management (DSM) is comparable in price to completing Site C, so the decision whether to cancel or complete Site C is not clear. This matters because, as the Panel itself points out, under the mid and high load forecasts building Site C is clearly the better decision for ratepayers. The entire conclusion of the Report therefore depends on which forecast the Panel chose.

The question arises, therefore: why did the Panel decide to rely on the low load forecast (which makes the decision a toss-up) rather than the mid or high load forecasts (which make Site C a slam dunk)? The answer is simply mind-blowing to me. As detailed on page 81:

Given the uncertainty, the Panel finds additional load requirements from potential electrification initiatives should not be included in BC Hydro’s load forecast for the purpose of resource planning. Although available information indicates that the effects of electrification on BC Hydro’s load forecast could potentially be significant, the timing and extent of those increases remain highly uncertain.

As someone who has been active on the climate change file, this almost knocked me off my chair. The Panel decided that one of the primary tools for fighting climate change (reducing reliance on fossil fuels via electrification) should be completely omitted from consideration in assessing future electricity demand in BC. My regular readers will recall that the entire basis of my submission to the BCUC Inquiry was the need to consider the electricity demand associated with reducing our dependence on fossil fuels. When the preliminary report came out I was a bit surprised that the Panel had omitted any discussion of the Paris Agreement and our climate change goals. I saw that as an oversight and commented on it in a follow-up submission. Now I realize it was a deliberate decision. It is as if the Panel lives in a world where Canada never signed the Paris Agreement.

That left me to wonder why the Panel would make such an extraordinary decision. Well, Mr. Morton, the Chair of the Panel, explained it this way in a radio interview on CKNW:

We can’t make any predictions about what government policy would be in the future so our analysis did not include potential changes of government policy. They included what government policy is today and we pointed out that government policy would certainly change things…..if government electrification policy changed that would change demand. Again we couldn’t really make assumptions about what policy may or may not be in the future.

Reading and re-reading that quote, I cannot believe that the Chair of the Panel (a regulator) could make that statement in light of what we already know about climate change. What is even more disconcerting is that page 129 of the Report includes text from Section 2 of the Clean Energy Act, which defines British Columbia's energy objectives and enumerates the requirements to reduce our emissions by 2050, referring back to the Greenhouse Gas Reduction Targets Act. To be clear, the province has a whole slew of climate action legislation on the books. One of the primary ways of decarbonizing in a manner consistent with the Clean Energy Act (and the other applicable Acts) is via electrification, and yet the BCUC suggests it cannot foresee policy implications that include electrification?

You might ask how the Panel came to this conclusion. The answer goes back to a critical weakness of this process: the rush to complete it, and the resulting absence of time for the Panel to weigh the evidence presented to it against the body of scientific research in the public realm. Benjamin Disraeli is credited with the quotation: "History is made by those who show up". In this case the people who showed up to talk to the Panel were the activists who want the project cancelled, and they brought their paid consultants with them. The people who did not show up (with some limited exceptions) were the scientific community of British Columbia. The result was that the Panel was flooded with misinformation and anecdotes and lacked the time (and possibly the expertise) to weed out the bad information.

Since electrification represents the key difference between the low load and high load forecasts, let's consider the Panel's analysis and findings on electrification. On page 81, under "Potential disrupting trends", the Panel indicates that it leans heavily on the work of Hendriks et al. (Hendriks). This raises the question: who is this Hendriks fellow? According to his online CV, Richard M. (Rick) Hendriks is the Director of Camerado Energy Consulting Inc., which has been working for the Treaty 8 Nations against the Site C project since at least 2010. He is, or until recently was (I really don't know), being paid to oppose the project.

Hendriks' submission includes at its core the details from a paper I have previously addressed at this blog. In my original blog post I note that the previous work by Hendriks and Dr. Karen Bakker of UBC attempts, and in my opinion fails, to discount the research from the Deep Decarbonization Pathways Project (DDPP) and the Trottier Energy Futures Project (TEFP). That work was produced for, and ultimately reviewed and published by, Environment and Climate Change Canada (ECCC) in their assessment report on Canadian energy needs under various climate change scenarios.

In a practical sense what we have is a difference of professional/scientific opinion. On one side we have research groups from leading research institutions in 16 of the world's largest greenhouse-gas-emitting countries and a team of more than a dozen energy experts from the Canadian Academy of Engineering, all overseen by subject matter experts from the federal government. On the other side we have a consultant "trained in engineering, science and social science" who has spent the better part of a decade working for a group opposed to the dam, and a water governance expert. Would anyone care to guess which side the Panel believed? It was the gent who showed up and talked to them in person (Mr. Hendriks). Thousands of hours of analysis by dozens of the world's top subject-matter experts were dismissed by the Panel, which chose instead to rely on the guy who showed up to give a presentation and answer questions.

The Panel also mysteriously chose to trust Mr. Hendriks over the far more qualified Dr. Jaccard (and his research group) when it comes to the electrification of British Columbia's vehicle fleet. Once again the choice is hard to explain. Dr. Jaccard and Associates prepared an Electrification Potential Review that included estimates of electricity demand under a number of scenarios and assumptions. The report concluded that electric vehicles would generate terawatt-hours of demand, which would, once again, drive us from the low load to the mid or high load forecasts. Hendriks dismissed that detailed analysis by pointing back to a truly horrendous BC Hydro load forecast which suggested that by 2030 a little over 8% of British Columbia's automobile fleet would be electric vehicles. [The link is to my analysis demonstrating why that load forecast is so poor.] Recognize that some analysts are claiming we won't even be able to buy internal combustion engines in 2030, yet the BC Hydro forecast used by Hendriks (and thus the Panel) assumes that electric vehicles will still be little more than a novelty at that point. To put it another way, if that is the case we will surely have failed in our fight against climate change. So once again, on one side we have a respected expert providing a detailed analysis, supported by references to peer-reviewed research, that shows high demand for electricity in 2030; on the other we have Mr. Hendriks citing a back-of-the-envelope calculation from BC Hydro that pre-dates our signing of the Paris Agreement. Anyone want to guess whom the Panel chose to believe? The guy who showed up to the meeting, of course.

I can't repeat it enough because this point is so important: the Panel's conclusion that building or not building Site C is a toss-up rests entirely on the assumption that the BC and federal governments will do nothing to fight climate change. This in a province and a country where both governments have dedicated massive resources to fighting climate change. Were the efforts to fight climate change through electrification included in the analysis, then in the Panel's own words "the cost to ratepayers for Site C will be less than the Illustrative Alternative Portfolio." Looking at the Panel's own report, the basis for discounting electrification is a couple of paper-thin analyses that run exactly opposite to the massive consensus of scientific and regulatory opinion in Canada. Essentially we are making a $10 billion bet that Canada will do nothing significant to fight climate change, and the sole basis for that bet is an analysis done by a consultant working against the project and a low-quality BC Hydro analysis completed before Canada signed the Paris Agreement.


Agriculture near Site C: confronting mythology with facts

This blog is about evidence-based environmental decision-making. I strive to present facts supported by references and emphasize the importance of using reliable data in decision-making. This is why I have spent so much time on the Site C project: many of the arguments against the dam have been built on a structure of anecdotes, exaggeration and bad information. Nowhere has this been more evident than in the discussions around the agricultural potential of the Peace Valley and the spurious arguments about food security.

My last dive into this topic dealt with the now thoroughly debunked claim that the area to be flooded by Site C could feed 1 million people. This claim was started by a retired Professional Agrologist named Wendy Holm and was repeated by her supporters and anti-Site C activists. Happily, thanks to this blog and others like it, Ms. Holm has backed down from that ridiculous claim and become much more circumspect in her language. She no longer claims that the area flooded by Site C will feed 1 million people; instead she has adjusted her claim to:

this land is capable of producing sufficient vegetables to meet the nutritional needs of more than one million people a year, in perpetuity

The claim is based on a single, uncorroborated study written by a consultant in the 1980s. Yes, you read that right: a consultant wrote a non-peer-reviewed report in the 1980s, and since that time no other researcher or source has presented any data to support the claim. In real science a claim is made and then examined, studied and compared against real data, but that is not how Ms. Holm works. She has taken a historic report that supported her general world-view and treated it like the word of God. She then used this uncorroborated assessment to extrapolate wildly (as I will discuss below), producing a fantastically inflated number that every anti-Site C activist seems to repeat like gospel. So let's look at her claims a bit more closely.

Let's start with her extrapolations. Ms. Holm insists:

All 3,816 hectares of alluvial soils to be flooded are extremely high capability land (Class 1-3, improved ratings).

Juxtapose this with a previous article in the Times Colonist, "Reports of lost Site C farmland simply not true", which states:

 the loss of valley bottom land with agricultural capability is closer to 3,800 ha, of which only 1,600 ha has actual potential. I would also point out that little of the land — less than 400 ha being flooded — was actually being cropped, and then mainly for forage, not food crops.

So the question arises: how can they both be right? The answer is simple. To help you visualize it, a kind researcher has posted a map of the area to be flooded. As you can see, much of the area to be flooded consists of islands in the middle of the river that are inaccessible to industrial farming equipment. Yes, the land is Class 1-3, but if you can't access the land (because it is in the middle of the river) then it doesn't represent useful farmland. Mr. Anderson (author of the first article) only includes land that can actually be reached with farm machinery, which makes sense if you plan on intensively farming an area. Ms. Holm has used the results of a GIS exercise that counted every square centimetre of land on every isolated little island in the middle of the river. This allows her to extrapolate wildly, and few have called her to task on the subject.

So who should you trust on the topic? Ms. Holm's supporters are quick to cite her expertise (she is a retired Agrologist); however, it would appear Mr. Anderson has a wee bit of expertise in this area as well. Specifically:

James D. Anderson was director of farmland resources for the Ministry of Agriculture, Forests and Fisheries from 1980 to 1985 and involved in the first environmental review and agricultural assessment done of Site C in 1982.

On the face of it, I would argue that Mr. Anderson has a strong claim to be a credible voice on this topic. He was, after all, the gent in charge of the whole shebang the last time this assessment was carried out. It is funny how the activists who are fighting the dam continue to highlight Dr. Swain's expertise as Chair of the Joint Review Panel but give short shrift to the man who was actually in charge of the department when Ms. Holm's famous vegetable study was written.

The next question arises: who is right about the potential of the land? Well, the proof of the pudding is in the eating. In 1980 a claim was made that the Peace Valley could serve as a vegetable Mecca, so let's look at how that prediction has turned out. This being a blog that relies on data, let's look at some data. Every five years Statistics Canada conducts a Census of Agriculture, the results of which are posted online. While the most recent census results are not yet reported, the results from 2006 and 2011 are online for the Peace River Regional District. Let's see how the actual facts line up with the mythology being portrayed by Ms. Holm.

According to the Census of Agriculture, in 2006 there were 26 hectares (in the entire Peace River Regional District) dedicated to the commercial growing of vegetables. By 2011 that number had dropped to ZERO. Yes, you read that right: in this supposed Mecca of vegetables, not a single hectare was dedicated to commercial vegetables in 2011. Not just in the Site C-affected valley bottom, but in the entire Peace River Regional District. Almost 824,000 hectares of farmland and none, nada, nil dedicated to the commercial growing of vegetables. Sure, some backyard gardens certainly grew carrots and lettuce, but no agricultural land was dedicated to vegetables. That represents a pretty thorough debunking of Ms. Holm's hypothesis.

Ms. Holm argues that much of the best land in the valley has been held in the legal flood reserve since the 1950s. What this fails to note is that the valley has been farmed since the 1920s and no one bothered to set up a commercial vegetable operation in the first 30+ years the valley was farmed. Moreover, as both Ms. Holm and Mr. Anderson point out, roughly 400 hectares of this prime land are currently being farmed, and yet in 2011 exactly none of that area was used for vegetables; the 26 hectares that grew vegetables in 2006 had stopped being used for that purpose.

Let's be absolutely clear here: Ms. Holm insists that Site C is so important because it represents most of the Class 1-2 land in the valley. Yet according to the people who actually track this information, the Peace River Regional District has over 5,000 hectares of Class 1 soils and almost 121,000 hectares of Class 2 soils. This means that literally thousands of hectares of Class 1 soils exist outside the legal flood reserve and would, under Ms. Holm's hypothesis, represent ideal locations for vegetables and fruit. Yet, as the statistics demonstrate, none of those thousands of hectares are being used to grow fruit or vegetables. Moreover, over 100,000 hectares of Class 2 soils exist in the Peace River Regional District, which would supply ample growing area under a climate change scenario.

Now this seems a bit strange. Ms. Holm claims that the Peace Valley is the ideal location to grow vegetables, and the entire farming community of the Peace Valley disagrees with her. In science we call that testing a hypothesis. A hypothesis was proposed in 1980 that the Peace Valley would be an excellent location for vegetables. Farmers tried it out and ultimately stopped growing vegetables, sticking instead with grains and forage. That is a pretty definitive debunking of the hypothesis.

As a further note, in her most recent letter to the editor in the Times Colonist, Ms. Holm expanded her repertoire of crops to include fruits, which she mentioned three times. Anyone care to guess how many hectares in the entire Peace River Regional District were dedicated to commercial fruit production? If you guessed zero in both 2006 and 2011 you would be right. It is almost as if farmers have more sense than to risk their livelihoods on tree fruit and vegetable crops that are susceptible to frost. That far north, an early or late frost can destroy an entire crop, so farmers have decided to avoid those crops.

As this post is getting long (and it is getting late) I want to briefly touch on a final piece of mythology being put forward by the anti-Site C activists: that we need to preserve the floodplain affected by the Site C Dam for food security purposes. According to the official numbers, the Site C Dam will flood approximately 0.4% of the agricultural land in the Peace District, or 0.2% of the agricultural land in BC. Doesn't this put the food security arguments into perspective? It is ridiculous to claim that flooding the land required for Site C will put our food security at risk. We currently have almost 2 million hectares of ALR that we aren't even bothering to farm (including 426,000 hectares in the Peace District), and the activists claim we will go hungry if we flood around 5,000 hectares of it in the Peace?
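For readers who like to check the arithmetic, here is a quick back-of-envelope sketch in Python using only the figures cited in this post; every input is approximate, and the derived percentages are my own arithmetic, not official statistics.

```python
# Back-of-envelope check using the figures cited in this post (all approximate)
flooded_ha = 5_000            # land flooded by Site C, roughly
class1_3_flooded_ha = 3_816   # Ms. Holm's Class 1-3 alluvial figure
district_class1_ha = 5_000    # Class 1 soils in the Peace River Regional District
district_class2_ha = 121_000  # Class 2 soils in the district
unfarmed_alr_bc_ha = 2_000_000   # BC ALR not currently being farmed
unfarmed_alr_peace_ha = 426_000  # of which, in the Peace District

# The flooded Class 1-3 land as a share of comparable soils in the district
share = class1_3_flooded_ha / (district_class1_ha + district_class2_ha)
print(f"Flooded Class 1-3 land vs district Class 1-2 soils: {share:.1%}")  # ~3.0%

# The flooded area as a share of ALR we aren't even farming
print(f"vs unfarmed BC ALR: {flooded_ha / unfarmed_alr_bc_ha:.2%}")        # 0.25%
print(f"vs unfarmed Peace ALR: {flooded_ha / unfarmed_alr_peace_ha:.1%}")  # ~1.2%
```

However you slice those numbers, the flooded area is a rounding error against the land we already have and don't use.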

Moreover, when it comes to the production of fruit and vegetables, we don't necessarily need to depend on Class 1-3 lands in the north, because I have a secret to tell you: our future food security in BC for fruit and vegetables is actually going to come from greenhouses. Anyone who has been to my neck of the woods has seen the greenhouse industry springing up left, right and center. Greenhouses are able to produce incredible-quality produce on lands of all classes (even commercial and former industrial lands). As for the question of where the soil for the greenhouses will come from: that would be municipal organics management and composting. Composting facilities in the Lower Mainland are producing more high-quality organic soil than we know what to do with. Access to good soil will not be the limiting factor in the growth of the greenhouse vegetable industry.

Now let's look at how greenhousing has flourished in recent decades. Going back to the Agricultural Census, let's look at the Metro Vancouver stats: greenhouse space for vegetable production almost quadrupled from 1996 to 2011, from 500,000 m² to 1.8 million m². Our food security for vegetables in British Columbia does not depend on a small portion of a northern valley prone to unexpected frosts but rather on using the resources we have at hand (agricultural land and greenhouse technologies) far closer to the vast majority of consumers in the Lower Mainland. The Peace Valley, meanwhile, will retain its character as our breadbasket, and it can do that with Site C in place.
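Running those census figures through a quick calculation shows just how fast that growth is; note that the implied annual rate is my own arithmetic, not a census statistic.

```python
# Metro Vancouver greenhouse vegetable space, per the census figures above
m2_1996 = 500_000
m2_2011 = 1_800_000
years = 2011 - 1996

factor = m2_2011 / m2_1996          # 3.6x growth over the period
annual = factor ** (1 / years) - 1  # implied compound annual growth rate
print(f"{factor:.1f}x overall, roughly {annual:.1%} per year")  # 3.6x, ~8.9%/yr
```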

To conclude, our food security is not at risk from building Site C; rather, the energy produced by Site C will help provide clean power to greenhouses that can produce higher yields closer to the population base of our province. Contrary to the oft-repeated claims from Ms. Holm, the Peace Valley is not a fruit and vegetable Mecca; commercial fruit growers have avoided the area for the last 100 years, while the few farms that tried out vegetables have abandoned the effort. Put simply, just because Ms. Holm and the anti-Site C activists keep repeating a myth doesn't make it any more real. The data make it clear that her fabulous report from 1980 was simply a case of wishful thinking that, combined with political activism, has created a mythos that desperately needs to be exposed to the light of evidence-based decision-making.



Some ideas to help teach Evolution under BC’s new Grade 7 Science curriculum

I am going to take a break from writing about tame topics like pipelines and Site C to try my hand at a truly contentious topic: teaching evolution in the BC classroom.

As any Grade 7 teacher (or Grade 6 teacher on an A/B year schedule) knows, the new BC curriculum has made evolution one of the "Big Ideas" for Grade 7 Science. Needless to say, a lot of teachers have not had to teach the subject before and are at a loss as to how to address the many challenges associated with this potentially hot-button topic, especially given the age of the children being taught. As a practicing scientist, I do a lot of science outreach in schools, and this year my wife asked me to come down to her school to help a couple of her colleagues who were looking for assistance teaching the subject to their students. In this post I would like to share some ideas I have gleaned on how to make this challenging topic understandable for elementary-aged kids and how to avoid or side-step some of the landmines associated with teaching it.

The BC Curriculum Guide breaks down the topic into three areas:

  • changes in traits over time,
  • survival needs, and
  • natural selection.

The problem is that these are not intuitive divisions, so instead I will offer some simple ideas about the important topics you may wish to cover.

What’s DNA? – let’s talk LEGO

In order to really understand evolution, kids need a basic understanding of DNA. The problem is that DNA is not an easy topic to teach or understand in Grade 7. When I was in high school we were taught that DNA was like a recipe book: follow the recipe correctly and you make a living creature; make a mistake following the recipe and you might have a delight or a disaster.

While that may fly for older kids, for younger kids I find that LEGO makes a better analogy. Think of DNA as the instruction booklets that come with a LEGO set and the bricks as the proteins etc. that make up a cell or body. Since my son is a LEGO fan, I use a Super Star Destroyer as an example. It comes with multiple little books and multiple little bags of parts, and using the books you assemble the parts. Because the project is large, the chance exists for a mutation where one brick is put in the wrong spot. It might be 15 steps later before you discover that the error (a mutation) means another brick can't be placed where it is supposed to go. Maybe your saucer unit won't fit on the top and you have to move it. The change may be good or bad, but it is a change. If you are lucky, the change makes your LEGO creation cooler. If you are unlucky, your Super Star Destroyer loses its sensor array and the Millennium Falcon sneaks up behind it and blows it up.
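For teachers comfortable with a bit of code, the analogy even works as a toy Python simulation; the step names, brick list and error rate below are invented purely for illustration.

```python
import random

# Toy version of the LEGO analogy: "DNA" is a list of build steps, and
# copying it (reproduction) occasionally swaps in the wrong brick (mutation).
BRICKS = ["hull", "sensor_array", "bridge", "engine", "saucer"]
instructions = ["hull"] * 10 + ["sensor_array", "bridge", "engine"]

def copy_with_mutations(steps, error_rate=0.02):
    """Copy the instruction booklet with a small chance of error per step."""
    return [random.choice(BRICKS) if random.random() < error_rate else step
            for step in steps]

child = copy_with_mutations(instructions)
mutations = [(i, old, new) for i, (old, new)
             in enumerate(zip(instructions, child)) if old != new]
print(f"{len(mutations)} mutation(s): {mutations}")
# Most copies come out identical; once in a while a brick changes, and whether
# that change is a delight or a disaster depends on the finished model.
```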

Natural Selection – Mold and bacteria fight it out

Building on the idea of DNA, we can talk about natural selection. To introduce the topic I like to talk about molds and bacteria. I use molds because every kid has seen moldy bread, and we all know about bacterial infections. I explain how molds and bacteria have been at war since forever, fighting over, for example, the surface of an old rotten orange. There is only so much orange to go around, so the slow-growing molds really have to work hard to beat the fast-growing bacteria. I explain how one lucky penicillium had a mutation that caused it to make a compound called penicillin. That compound killed bacteria, and because it did, the lone penicillium was able to out-compete the bacteria, take over the orange and reproduce. Thanks to that success, kitchens everywhere now have penicillin-producing molds ready to eat old, stale bread.
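The penicillium story also makes a nice toy simulation for a class that wants to see selection in action; the population size and the 20% reproductive edge below are invented numbers, not real penicillium biology.

```python
import random

# Toy model of the penicillium story: one mutant gains a survival advantage
# and, over generations, its descendants can take over the population.
POP = 1000
population = ["normal"] * (POP - 1) + ["penicillin_maker"]
fitness = {"normal": 1.0, "penicillin_maker": 1.2}  # assumed 20% edge

for generation in range(60):
    weights = [fitness[mold] for mold in population]
    # Each slot in the next generation is filled by a parent chosen in
    # proportion to its fitness -- that is all natural selection is.
    population = random.choices(population, weights=weights, k=POP)

share = population.count("penicillin_maker") / POP
print(f"After 60 generations, mutants make up {share:.0%} of the population")
# Run it a few times: sometimes the lone mutant is lost to dumb luck before
# selection can act, which is an honest part of the story too.
```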

Time permitting, you can then segue to the idea of antibiotic resistance, as that ties back to the ideas of survival needs and the need to adapt to survive.

Natural Selection – Survival of the fittest

This is a relatively easy topic. Most kids have seen documentaries where a pack of wolves or a pride of lions stalks a herd of prey animals, identifies the weakest member of the herd and attacks it. In this scenario the weak are killed and the strong get away to have kids.

The other side of the coin can also be interesting for kids. Many of the kids at our school fish and crab, and many have measured crabs and thrown back the little ones. I point out that throwing back the little ones means the little ones may actually be the fitter ones in that case. Fitter doesn't always mean bigger, just more likely to make it to the next generation.

Adaptive Radiation – Dump the finches, let's talk dogs

Having talked about natural selection, we can move on to adaptive radiation: how organisms diversify from an ancestral species. Darwin had his eureka moment looking at finches in the Galapagos, and science teachers have used that model ever since, to the great boredom of their students. Little brown birds are exciting to bird lovers like me but bore my kids to death. What my kids do love is dogs, and dogs are a great way to introduce the topic of adaptive radiation.

I ask the students to identify the dogs they have and then go from there. In one class we had two boys, both with chihuahuas, and I asked them to imagine whether, 100,000 years ago, packs of chihuahuas roamed the plains taking down buffalo. We all agreed that this was likely not the case. I explained that when the first wolves decided that being friends with humans was a better way to get a meal than trying to eat humans, they looked nothing like the dogs of today. Our ancestors bred dogs based on traits, and we ended up with the breeds we have today. I then provided some typical traits we have bred for: protection (mastiffs); the ability to catch vermin (terriers); the ability to fetch downed wildfowl (retrievers); and so on. This provides an easy-to-understand example of the essence of adaptive radiation. It also helps that so many families now have mixed breeds like the Labradoodle.

A Common Ancestor – No, we didn't evolve from apes; think long-lost cousins

Eventually every class has to deal with the common-ancestor problem. Now, we know we share a common ancestor because DNA is a pretty complicated way to keep track of your proteins, and yet we all have DNA. But how do you explain that to kids? The best way I have found is to talk about families. I have brothers, and they have kids, so my kids have cousins. My kids are not descended from their cousins; they are descended from a common ancestor (their grandparents). Extrapolate backwards and we can infer that everything with DNA must trace back to a common ancestor. Working back down the tree, we come to recognize that we are all related but not necessarily descended from one another.
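The cousin analogy even boils down to a few lines of code: trace each lineage upward and find where they first meet. The little family tree below is, of course, invented.

```python
# Each person maps to their parent; the tree is a made-up illustration.
parent = {
    "my_kid": "me", "me": "grandma",
    "cousin": "my_brother", "my_brother": "grandma",
}

def lineage(person):
    """Walk upward, collecting everyone in this person's direct line."""
    line = [person]
    while line[-1] in parent:
        line.append(parent[line[-1]])
    return line

def common_ancestor(a, b):
    """Return the first person who appears in both lineages."""
    b_line = set(lineage(b))
    return next(p for p in lineage(a) if p in b_line)

print(common_ancestor("my_kid", "cousin"))  # grandma -- neither descends from the other
```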

About that whole religion thing

The part of teaching evolution that really had our teachers on edge is the fact that we live in a very religious community, with adherents of many faiths, and the teachers really wanted to avoid stepping on religious toes. While teaching evolution will anger some parents, it can be done in a way that reduces the likelihood of negative feedback. As a first step, it is important to point out that everyone has their own way of looking at the world and no one way is necessarily better than another.

Science is one way in which many people look at the world. As a scientist, I was taught the scientific method: an individual sees something; makes a hypothesis; collects data against which to test the hypothesis; and revises or discards the hypothesis based on the information collected. The process is iterative. Do this enough times and the hypothesis becomes more robust (a Theory), and maybe, given enough time, it becomes a Law. Scientists, being conservative, will often call a well-tested theory a "Theory" long after it really should be called a Law. Such is the case with evolution. Science has done enough work to demonstrate that evolution happens, and we even understand, to a great extent, how it happens. Does this, therefore, mean that evolution contradicts religious teaching? I don't think it has to in the elementary classroom.

It is important to point out that, at this point in time, science doesn't have all the answers. We still don't have a handle on consciousness, and while the basics of evolution are in place, we are still ironing out how the first life arose: how naturally occurring simple organic molecules came together to form proteins and, eventually, a functioning cell with self-replicating DNA. This provides a lot of wiggle-room for the elementary-level educator.

To help explain, I have previously described the topic this way. I was brought up in the Catholic tradition, and in that tradition the original Bible was written in Hebrew, Aramaic and Greek, with the Genesis story written in ancient Hebrew. Ancient Hebrew and ancient Aramaic had limited vocabularies, and as such the Genesis story was similarly limited in how it could explain the origins of humanity. As an example, neither language has words for really big numbers (like a billion), and they certainly lack the vocabulary to describe an accretion disk coalescing to form a planet. As such, the language used in the Bible is necessarily simplified. A "day" in Genesis could mean anything from a solar day to a billion years, and so evolution doesn't have to contradict your students' religious heritage. Rather, evolution can snuggle up in that gray zone, which gets the lesson taught and avoids lots of angry letters and phone calls from parents.

I apologize that I tried to fit a lot into a single post, so a lot of detail is missing. I gave two presentations this week, and each took about an hour, of which at least 20 minutes was kids asking questions and teachers clarifying details that they (or their students) were not clear on. Given our school (we live in a very religious community) and the fact that last night was meet-the-teacher night, I expected my wife to relay some parental feedback from the classes I visited, but I have yet to hear any. That is a good thing in my books.
