Integrating Renewable Energy Part 2: Electricity Market & Policy Challenges

Written by Kasparas Spokas

The rising popularity and falling capital costs of renewable energy make its integration into the electricity system appear inevitable. However, major challenges remain. In part one of our ‘integrating renewable energy’ series, we introduced key concepts of the physical electricity system and some of the physical challenges of integrating variable renewable energy. In this second instalment, we introduce how electricity markets function and relevant policies for renewable energy development.

Modern electricity markets were first mandated by the Federal Energy Regulatory Commission (FERC) in the United States at the turn of the millennium to allow market forces to drive down the price of electricity. Until then, most electricity systems were managed by regulated vertically-integrated utilities. Today, these markets serve two-thirds of the country’s electricity demand (Figure 1), and wholesale electricity prices in these regions are historically low due to cheap natural gas and subsidized renewable energy deployment.

The primary objective of electricity markets is to provide reliable electricity at least cost to consumers. This objective can be further broken down into several sub-objectives. The first is short-run efficiency: making the best of the existing electricity infrastructure. The second is long-run efficiency: ensuring that the market provides the proper incentives for investment in electricity system infrastructure so that future electricity demand can be met. Other objectives are fairness, transparency, and simplicity. This is no easy task; there is uncertainty in both the supply and demand of electricity, and many physical constraints need to be considered.

While the specific structure of electricity markets varies slightly by region, they all provide a competitive market structure in which electricity generators compete to sell their electricity. The governance of these markets can be broken down into several actors: the regulator, the board, participant committees, an independent market monitor, and a system operator. FERC is the regulator for all interstate wholesale electricity markets (all except ERCOT in Texas). In addition, reliability standards and regulations are set by the North American Electric Reliability Corporation (NERC), to which FERC granted authority in 2006. Lastly, markets are operated by independent system operators (ISOs) or Regional Transmission Organizations (RTOs) (Figure 1). In tandem, regulations set by FERC, NERC, and system operators drive the design of wholesale markets.

Figure 1. Wholesale energy market ISO/RTO locations (colored areas) and vertically-integrated utilities (tan areas). Source: https://isorto.org/

Before we get ahead of ourselves, let’s first learn how electricity markets work. A basic electricity market functions as follows: electricity generators (i.e. power plants) submit bids to a centralized market to generate a given amount of electricity. In a perfectly competitive market, the price of these bids is based on an individual power plant’s cost of generating electricity. Generally, costs are grouped by technology and organized along a “supply stack” (Figure 2). Once all bids are placed, the ISO/RTO accepts the cheapest assortment of generation bids that satisfies electricity demand while also meeting physical system and reliability constraints (Figure 2a). The price of the most expensive accepted bid becomes the market-clearing price and sets the price of electricity that all accepted generators receive as compensation (Figure 2a). In reality it is a bit more complicated: the ISO/RTOs operate day-ahead, real-time, and ancillary services markets and facilitate forward contract trading to better orchestrate the system and lower physical and financial risks.
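To make the merit-order idea concrete, here is a minimal sketch of how a single-period market clearing could be computed. The plant names, capacities, bid prices, and demand figure are hypothetical, and the sketch ignores transmission limits, reliability constraints, and the separate day-ahead, real-time, and ancillary services products a real ISO/RTO runs.

```python
# Minimal merit-order market clearing sketch; all plants and numbers are hypothetical.
bids = [
    ("solar_farm",    500,   0.0),   # (name, capacity in MW, bid in $/MWh)
    ("wind_farm",     800,   1.0),
    ("nuclear_unit", 1000,  25.0),
    ("gas_cc",       1200,  35.0),
    ("coal_unit",    1000,  40.0),
    ("gas_peaker",    400,  90.0),
]

def clear_market(bids, demand_mw):
    """Accept the cheapest bids until demand is met; the most expensive accepted
    bid sets the market-clearing price paid to every accepted generator."""
    accepted, remaining, clearing_price = [], demand_mw, 0.0
    for name, capacity, price in sorted(bids, key=lambda b: b[2]):
        if remaining <= 0:
            break
        dispatched = min(capacity, remaining)
        accepted.append((name, dispatched, price))
        clearing_price = price
        remaining -= dispatched
    if remaining > 0:
        raise ValueError("Demand exceeds total offered capacity")
    return accepted, clearing_price

accepted, price = clear_market(bids, demand_mw=3200)
for name, mw, bid in accepted:
    print(f"{name:12s} dispatched {mw:6.0f} MW (bid ${bid:.0f}/MWh)")
print(f"Market-clearing price: ${price:.0f}/MWh")
```

With these invented numbers, the gas combined cycle plant is the marginal (last accepted) unit, so every accepted generator, including the zero-bid solar and wind plants, is paid the $35/MWh clearing price.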

Figure 2. Schematics of electricity supply stacks (a) before low natural gas prices, (b) after natural gas prices declined, (c) after renewable deployment.

Because real electricity markets are not completely efficient and competitive (due to a number of reasons), some regions have challenges providing enough incentives for the long-run investment objective. As a result, several ISO/RTOs have designed an additional “capacity market.” In capacity markets, power plants bid for the ability to generate electricity in the future (1-3 years ahead). If the generator clears this market, it will receive extra compensation for the ability to generate electricity in the future (regardless of whether it is called upon to generate electricity) or will face financial penalties if it cannot. While experts continue to debate the merits of these secondary capacity markets, some ISO/RTOs argue capacity markets provide the necessary additional financial incentives to ensure a reliable electricity system in the future.

Sound complicated? It is! Luckily, ISO/RTOs have sophisticated tools to continuously model the electricity system and orchestrate the purchasing and transmission of wholesale electricity. Two key features of electricity markets are time and location. First, market clearing prices are time dependent because of continuously changing demand and supply. During periods of high electricity demand, prices can rise because more expensive electricity generators are needed to meet demand, which increases the settlement price (Figure 2a). In extreme cases, these are referred to as price spikes. Second, market-clearing prices are regional because of electricity transmission constraints. In regions where supply is low and the transmission capacity to import electricity from elsewhere is limited, electricity prices can increase even more.

Several recent developments have complicated the economics of generating electricity in wholesale markets. First, low natural gas prices and the greater efficiency of combined cycle power plants have resulted in low electricity bids, restructuring the supply stack and lowering market settlement prices (Figure 2b). Second, renewable power plants, which have almost-zero operating costs, submit almost-zero bids into electricity markets. As such, renewables fall at the beginning of the supply stack and push other technologies towards the right, where they are dispatched only during higher-demand periods and utilized less, further depressing settlement prices (Figure 2c). A recent study by the National Renewable Energy Laboratory expects these trends to continue with increasing renewable deployment.

In combination, these developments have reduced revenues and challenged the operation of less competitive generation technologies, such as coal and nuclear energy, and elicited calls for government intervention to protect financial investments. While the shutdown of coal plants is welcome news for climate advocates, nuclear power provided about 60% of U.S. carbon-free electricity in 2016. Several states have already enacted credits or subsidies to prevent these low-emission power plants from going bankrupt. However, some experts argue that the retirement of uneconomic resources is a welcome indication that markets are working properly.

As traditional fossil-fuel power plants struggle to remain in operation, the development of new renewable energy continues to thrive. This development has been aided by both capital cost reductions and federal- and state-level policies that provide out-of-market economic benefits. To better achieve climate goals, some have argued that states need to write policies that align with wholesale market structures. Proposed mechanisms include in-market carbon pricing, such as a carbon tax or stronger cap-and-trade programs, and additional clean-energy markets. Until now, however, political economy constraints have limited policies to weak cap-and-trade programs, investment and production tax credits, and renewable portfolio standards.

While renewable energy advocates support such policies, system operators and private investors argue these out-of-market policies could distort wholesale electricity markets by suppressing prices and imposing regulatory risks on investors. Importantly, they argue that this leads to inefficient resource investment decisions and reduced competition that ultimately increases costs for consumers. As a result, several ISO/RTOs are attempting to reform capacity market rules to address these complaints but are having difficulty finding a solution that satisfies all stakeholders. How FERC, system operators, and other stakeholders will handle future policies remains to be resolved.

As states continue to enact new renewable energy mandates, and as technologies that are not yet well integrated with wholesale markets, such as battery storage, continue to evolve and show promise, wholesale market structures and policies will need to adapt. In the end, the evolution of electricity market rules and policies will depend on a complex interplay between technological innovation, stakeholder engagement, regulation, and politics. Exciting!

 

Kasparas Spokas is a Ph.D. candidate in the Civil & Environmental Engineering Department and a policy-fellow in the Woodrow Wilson School of Public & International Affairs at Princeton University. Broadly, he is interested in the challenge of developing low-emissions energy systems from a techno-economic perspective. Follow him on Twitter @KSpokas.

Integrating Renewable Energy Part 1: Physical Challenges

Written by Kasparas Spokas

Meeting climate change mitigation targets will require rapidly reducing greenhouse gas emissions from electricity generation, which is responsible for a quarter of all U.S. greenhouse gas emissions. The prospect of electrifying other sectors, such as transportation, further underscores the necessity to reduce electricity emissions to meet climate goals. To address this, much attention and political capital have been spent on developing renewable energy technologies, such as wind or solar power. This is partly because recent reductions of the capital costs of these technologies and government incentives have made this strategy cost-effective. Another reason is simply that renewable energy technologies are popular. Today, news articles about falling renewable energy costs and increasing renewable mandates are not uncommon.

While capital cost reductions and popularity are key to driving widespread deployment of renewables, there remain significant challenges for integrating renewables into our electricity system. This two-part series introduces key concepts of electricity systems and identifies the challenges and opportunities of integrating renewables.

Figure 1. Schematic of the physical elements of electricity systems. Source: https://www.eia.gov/energyexplained/index.php?page=electricity_delivery

What are electricity systems? Physically, they are composed of four main interacting elements: electricity generation, transmission grids, distribution grids, and end users (Figure 1). In addition to the physical elements, regulatory and governance structures guide the operation and evolution of electricity systems (these are the focus of part two in this series). These include the Federal Energy Regulatory Commission (FERC), the North American Electric Reliability Corporation (NERC), and numerous state-level policies and laws. The interplay between the physical and regulatory elements has guided electricity systems to where they are today.

In North America, the electricity system is segmented into three interconnected regions (Figure 2). These regions are linked by only a few low-capacity transmission wires and often operate independently. These regions are then further segmented into areas where independent organizations operate wholesale electricity markets and areas where regulated vertically-integrated utilities manage all the physical elements (Figure 2). Roughly two-thirds of U.S. electricity demand is now served by wholesale electricity markets. Lastly, some of these broad areas are further subdivided into smaller balancing authorities that are responsible for supplying electricity to meet demand under regulations set by FERC and NERC.

Figure 2. Left: North American Electric Reliability Corporation Interconnections. Right: Wholesale market areas (colored area) and vertically-integrated utilities areas (tanned area). Source: https://www.energy.gov/sites/prod/files/oeprod/DocumentsandMedia/NERC_Interconnection_1A.pdf & https://isorto.org/

Electricity systems’ main objective is to orchestrate electricity generation, transmission and distribution to maintain instantaneous balance of supply and continuously changing demand. To maintain this balance, the coordination of electricity system operations is vital. Electricity systems need to provide electricity where and when it is needed.

Historically, electricity systems have been built to suit conventional electricity generation technologies, such as coal, oil, natural gas, nuclear, and hydropower. These technologies rely on fuel that can be transported to power plants, allowing them to be sited in locations where electricity demand is present. The one exception is hydropower, which requires that plants are sited along rivers. In addition, the timing of electricity generation at these power plants can be controlled. The ability to control where and when electricity is generated simplifies the process by which an electricity system is orchestrated.

Enter solar and wind power. These technologies lack the two defining features of conventional generation technologies, control over where and when electricity is generated, which makes the objective of instantaneously balancing supply and demand even more challenging. For starters, solar and wind technologies are dependent on natural resources, which can limit where they are situated. The areas that are best for sun and wind do not always coincide with where electricity demand is highest. As an example, the most productive region for on-shore wind stretches along a “wind-belt” through the middle of the U.S. (Figure 3). For solar, the sparsely populated southwest region presents the most attractive sunny skies (Figure 3). As of now, long-distance transmission infrastructure to transport electricity from renewable resource-rich regions to high-demand regions is limited.

Figure 3. Maps of wind speed (left) and solar energy potential (right) in the U.S. Source: https://www.nrel.gov/

In addition, the timing of electricity generation from wind and solar cannot be controlled: solar panels only produce electricity when the sun is shining, and wind turbines only function when the wind is blowing. Therefore, scaling up renewables alone would result in instances where the supply of renewables does not equal customer demand (Figure 4). When renewable energy production suddenly drops (due to cloud cover or a lull in wind), the electricity system must coordinate other generators to quickly make up the difference. In the inverse situation, where renewable energy generation suddenly increases, system operators often curtail the renewable output to avoid dealing with the variability. The challenge of forecasting how much sun and wind there will be in the future adds more uncertainty to the enterprise.

Figure 4. Electricity demand and wind generation in Texas. The wind generation is scaled up to 100% of demand to emphasize possible supply-demand mismatches. Source: http://www.ercot.com/gridinfo/generation

A well-known challenge in solar-rich regions is the “duck-curve” (Figure 5). The typical duck-curve (named after the fact that the curve resembles a duck) depicts the electricity demand after subtracting the amount of solar generation at each hour of the day. In other words, the graph depicts the electricity demand that needs to be met with power plants other than solar, called “net-load.” During the day, the sun shines and solar panels generate electricity, resulting in low net-loads. However, as the sun sets and people turn on electric appliances after returning home from work, the net load increases quickly. Electricity systems often respond by calling upon natural gas power plants to quickly ramp up their generation. Unfortunately, natural gas power plants that can quickly increase their output are less efficient and have higher emission rates than slower natural gas power plants.
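As a rough illustration of how the net-load (“duck”) curve arises, the sketch below subtracts a stylized solar generation profile from a stylized demand profile hour by hour; the midday dip and steep evening ramp fall out of the arithmetic. The hourly numbers are invented for illustration and are not CAISO data.

```python
# Stylized net-load ("duck curve") calculation; all numbers are illustrative, not CAISO data.
# Hypothetical hourly system demand (GW): modest overnight, peak in the evening.
demand = [22, 21, 20, 20, 21, 23, 26, 28, 29, 29, 29, 29,
          29, 29, 29, 29, 30, 32, 34, 35, 33, 30, 26, 24]
# Hypothetical hourly solar output (GW): zero at night, peaking around midday.
solar  = [ 0,  0,  0,  0,  0,  0,  1,  3,  6,  9, 11, 12,
          12, 11,  9,  6,  3,  1,  0,  0,  0,  0,  0,  0]

# Net load = demand that must be met by resources other than solar.
net_load = [d - s for d, s in zip(demand, solar)]

# The evening "ramp" is the rise in net load as the sun sets while demand peaks.
ramp = max(net_load[17:21]) - min(net_load[12:17])
print(f"Midday minimum net load:  {min(net_load)} GW")
print(f"Evening maximum net load: {max(net_load)} GW")
print(f"Approximate evening ramp: {ramp} GW over a few hours")
```

The size and speed of that evening ramp, rather than the absolute peak, is what forces the system to lean on fast-ramping (and less efficient) gas plants.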

 

Figure 5. The original duck-curve presented by the California Independent System Operator. Source: http://www.caiso.com/

These challenges result in economic costs. A study about California concluded that increasing renewable deployment could result in only modest emission reductions at very high abatement costs ($300-400/ton of CO2). This is because the added variability and uncertainty of more renewables will require higher-emitting and quickly-ramping natural gas power plants to balance sudden electricity demand and supply imbalances. In addition, more renewable power will be curtailed in order to maintain stability (Figure 6), reducing the return on investment and increasing costs.

Figure 6. Renewable curtailment (MWh) and cumulative solar photovoltaic (PV) and wind power capacity in California from 2014 to 2018. Source: CAISO

Although solar and wind power do pose these physical challenges, technological advances and electricity system design enhancements can facilitate their integration. Several key strategies for integrating renewables will be: the development of cost-effective energy storage that can store energy for later use, demand response technologies that can help consumers reduce electricity demand during periods of high net-load, and expansion of long-distance electricity transmission to transport electricity from resource-rich (sun and wind) areas to areas of high electricity demand (cities). Which solutions succeed will depend on the interplay of future innovation, state and federal incentives, and improvements in electricity market design and regulation. As an example, regulations that facilitate long-distance electricity transmission could significantly reduce the technical challenges of integrating renewables using current-day technologies. To ensure efficient integration of renewable energy, regulatory and energy market reform will likely be necessary. For more about this topic, check out part two of our series here!

 

Kasparas Spokas is a Ph.D. candidate in the Civil & Environmental Engineering Department and a policy-fellow in the Woodrow Wilson School of Public & International Affairs at Princeton University. Broadly, he is interested in the challenge of developing low-emissions energy systems from a techno-economic perspective. Follow him on Twitter @KSpokas.

Carbon Capture and Sequestration: A key player in the climate fight

Written by Kasparas Spokas and Ryan Edwards

The world faces an urgent need to drastically reduce climate-warming CO2 emissions. At the same time, however, reliance on the fossil fuels that produce CO2 emissions appears inevitable for the foreseeable future. One existing technology enables fossil fuel use without emissions: Carbon Capture and Sequestration (CCS). Instead of allowing CO2 emissions to freely enter the atmosphere, CCS captures emissions at the source and disposes of them at a long-term storage site. CCS is what makes “clean coal” – the only low-carbon technology promoted in President Donald Trump’s new Energy Plan – possible. The debate around the role of CCS in our energy future often includes questions such as: why do we need CCS? Can’t we simply replace fossil fuels with renewables? Where can we store CO2? Is storage safe? Is the technology affordable and available?

Source: https://saferenvironment.wordpress.com/2008/09/05/coal-fired-power-plants-and-pollution/

The global climate-energy problem

The Paris Agreement called the globe to action: limit global warming to 2°C above pre-industrial temperatures. To reach this goal, CO2 and other greenhouse gas emissions need to be reduced by at least 50% in the next 40 years and reach zero later this century (see Figure 1). This is a challenging task, especially since global emissions continue to increase, and existing operating fossil fuel wells and mines contain more than enough carbon to exceed the emissions budget set by the 2°C target.

Fossil fuels are abundant, cheap, and flexible. They currently fuel around 80% of the global energy supply and create 65% of greenhouse gas emissions. While renewable energy production from wind and solar has grown rapidly in recent years, these sources still account for less than 2.1% of global energy supply. Wind and solar also face challenges in replacing fossil fuels, such as cost and intermittency, and cannot replace all fossil fuel-dependent processes. The other major low-carbon energy sources, nuclear and hydropower, face physical, economic, and political constraints that make major expansion unlikely. Thus, we find ourselves in a dilemma: fossil fuels will likely remain integral to our energy supply for the foreseeable future.

Figure 1: Global CO2 emissions (billion tonnes of CO2 per year): historical emissions, the emission pathway implied by the current Paris Agreement pledges, and a 2°C emissions pathway (RCP2.6) (Sources: IIASA & CDIAC; MIT & UNFCCC; IIASA)

CO2 storage and its role in the energy transition

CCS captures CO2 emissions from industrial sources (e.g. electric power plants) and transports them, usually by pipeline, to long-term storage sites. The ideal places for CO2 sequestration are porous rock formations more than half a mile below the surface. (Target rocks are filled with water, but don’t worry, it’s saltwater, not freshwater!) Chosen formations are overlain, or “capped,” by impermeable caprocks that do not allow fluid to flow through them. The caprocks effectively trap buoyant CO2 in the target rocks (see Figure 2).

Figure 2: Diagram of a typical geological CO2 storage site (Source: Global CCS Institute)

Scientists estimate that suitable rock formations have the potential to store more than 1,600 billion tonnes of CO2. This amounts to 70 years of storage for current global emissions from capturable sources (which are 50% of all emissions). Large-scale CCS could serve as a “bridge,” buying time for carbon-free energy technologies to develop to the stage where they are economically and technically ready to replace fossil fuels. CCS could even help us increase the amount of intermittent renewable energy by providing a flexible and secure “back-up” with low emissions. Bioenergy combined with CCS (BECCS) can also deliver “negative emissions” that may be needed to stabilize the climate. Furthermore, industrial processes such as steel, cement, and fertilizer production have significant CO2 emissions and few options besides CCS to reduce them.
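As a rough consistency check on that 70-year figure (assuming, for illustration, total global greenhouse gas emissions of roughly 46 billion tonnes of CO2-equivalent per year, of which half is considered capturable; the emission rate is a round number supplied here, not one quoted in the text):

```latex
% Storage capacity divided by an assumed capturable emission rate
\frac{1600\ \mathrm{Gt\,CO_2}}{0.5 \times 46\ \mathrm{Gt\,CO_2/yr}} \approx 70\ \mathrm{years}
```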

In short, CCS is a crucial tool for mitigating the worst effects of global warming while minimizing disruption to our existing energy infrastructure and buying time for renewables to improve. Most proposed global pathways to achieve our targets include large-scale CCS, and the United States’ recently released 2050 decarbonization strategy includes CCS as a key component.

While our summary makes CCS seem like an obvious technology to implement, important questions about safety, affordability, and availability remain.

 

Is CCS Safe?

For CCS to contribute substantially to global emissions reduction, huge amounts of emissions must be stored underground for hundreds to thousands of years. That’s a long time, which means the storage must be very secure. Some worry that CO2 might leak upward through caprock formations and infiltrate aquifers or escape to the atmosphere.

But evidence shows that CO2 can be safely and securely stored underground. For example, the Sleipner project has injected almost 1 million tonnes of CO2 per year under the North Sea for the past 20 years. (For scale, that’s roughly a quarter of the emissions from a large coal power plant.) The oil industry injects even larger amounts of CO2 – approximately 20 million tonnes per year – into various geological formations in the United States without issue in enhanced oil recovery operations to increase oil production. Indeed, the oil and gas deposits we currently exploit demonstrate how buoyant fluids (like CO2) can be securely stored in the subsurface for a very long time.

Still, there are risks and uncertainties. Trial CO2 injections operate at much lower rates than will be needed to meet our climate targets. Higher injection rates require pressure management to prevent the caprock from fracturing and, consequently, the CO2 from leaking. The CO2 injection wells and any nearby oil and gas wells also present possible leakage pathways from the subsurface to the atmosphere (although studies suggest this is likely to be negligible). Leading practices in design and maintenance can minimize well leakage risks.

Subsurface CO2 storage has risks, but experience suggests the risks can be mitigated. So, if CCS has such promise for addressing our climate-energy problem, why has it not been widely implemented?

 

The current state of CCS

CCS development has lagged, and deployment remains far from the scale required to meet our climate targets. Only a handful of projects have been built over the past decade. Why? High costs and a lack of economic incentives.

Adding CCS to coal and gas-fired electricity generation plants has large costs (approximately doubling the upfront cost of a new plant using current technology). Greenhouse gases are free (or cheap) to emit in most of the world, which means emitters have no reason to make large investments to capture and store their emissions. In order to incentivize industry to invest in CCS, we would need to implement a strong carbon price, which is politically unpopular in many countries. (There are exceptions – Norway’s carbon tax incentivized the Sleipner project.) In the United States, the main existing economic incentive for capturing CO2 is for enhanced oil recovery operations. However, the demand for CO2 from these operations is relatively small, geographically localized, and fluctuates with the oil price.

Inconsistent and insufficient government policies have thwarted significant development of CCS (the prime example being the UK government’s last-minute cancellation of CCS funding). Another challenge will be ownership and liability of injected CO2. Storage must be guaranteed for long timeframes. Government regulations clarifying liability, long-term responsibility for stored CO2, and monitoring and verification measures will be required to satisfy investors.

 

The future of CCS

The ambitious target of the Paris Agreement will require huge cuts in CO2 emissions in the coming decades. The targets are achievable, but probably not without CCS. Thus, incentives must increase, and costs must decrease, for CCS to be employed on a large scale.

As with most new technologies, CCS costs will decrease as more projects are built. For example, the Petra Nova coal plant retrofit near Houston, a commercial CCS project for enhanced oil recovery that was recently completed on time and on budget, is promising for future success. New technologies also have great potential: a pilot natural gas electricity generation technology promises to capture CO2 emissions at no additional cost. A technology that could capture CO2 from power plant emissions while also generating additional electricity is also in the works.

Despite its current troubles, CCS is an important part of solving our energy and climate problem. The recent United States election has created much uncertainty about future climate policy, but CCS is one technology that could gain support from the new administration. In July 2016, a bipartisan group of senators introduced a bill to support CCS development. If passed, this bill would satisfy Republican goals to support the future of fossil fuel industries while helping the United States achieve its climate goals. Strong and stable supporting policies must be enacted by Congress – and governments around the world – to help CCS play its key role in the climate fight.

 

Kasparas Spokas is a Ph.D. candidate in the Civil & Environmental Engineering Department at Princeton University studying carbon storage and enhanced oil recovery environments. More broadly, he is interested in studying the challenges of developing low-carbon energy systems from a techno-economic perspective. Follow him on Twitter @KSpokas

 

Ryan Edwards is a 5th year PhD candidate in Princeton’s Department of Civil & Environmental Engineering. His research focuses on questions related to geological carbon storage, hydraulic fracturing, and shale gas. He is interested in finding technical and policy solutions to the energy-climate problem. Follow him on Twitter @ryanwjedwards.

An Apple a Day: Easier said than done

Written by Prof. Fernanda Márquez-Padilla

A few months ago, I pulled a muscle doing yoga and started going to physical therapy on a weekly basis soon after. I was supposed to do a 5-minute routine every day, and my discipline at doing so was mediocre at best. It wasn’t particularly hard, or painful, but still: it was so much easier to not do it.

At the same time, I was starting a research project on hypertensive patients’ behavior with respect to taking their medications as prescribed by their doctors (known in the medical literature as medication adherence), and had been reading about how people tend to be bad at doing so (with non-adherence considered “a worldwide problem of striking magnitude” by the WHO). “It doesn’t make much sense”, I remember thinking. Proper adherence to heart medication has been found to increase life expectancy, and significantly reduce the probability of negative health outcomes such as heart attacks, strokes, and other cardiovascular hospitalizations. And it’s “just” taking pills. Why don’t patients adhere? Then it hit me. I’m one of them: I’m terrible at adhering.

An important issue in health economics is how to modify patients’ behavior. How can we motivate patients to engage in healthy behaviors? Patient behavior has been found to be key for keeping individuals healthy. Improving patients’ medication adherence has great potential to reduce the costs of healthcare—especially for chronic patients who must often take specific medications for extended periods in order to manage their condition. However, modifying individuals’ behavior has proven to be a challenging task, despite its positive implications for health outcomes and cost reductions.

A recent policy in Mexico undertaken by its largest public health provider, the Mexican Institute for Social Security (IMSS), created an interesting setup that unintentionally incentivized patients to improve their health behaviors—in this case, their medication adherence. The Receta Resurtible policy decreased the frequency with which hypertensive patients (i.e., patients with high blood pressure) needed to see their physician and renew prescriptions, as long as their blood pressure remained stable and they were not late in renewing their prescriptions. In the new regime, patients could see their doctor every 90 days (as opposed to every 30). The policy’s main goal was to increase efficiency by eliminating arguably unnecessary check-ups for relatively stable chronic patients in order to free up clinic space and physicians’ time.

Waiting room at an IMSS Hospital. Source: paginabierta.mx

Now, why would this be an incentive for people to improve their health behavior? The key insight is that while consuming healthcare is a benefit for patients, it can also be time consuming and costly. Therefore, allowing chronic patients—who must be checked-up constantly—to go less often to see their doctor could actually be a type of “reward” that may be used to improve patient behavior. We may think of this as children being incentivized to study harder in order to avoid summer school.

In my research, I find that patients on the 90-day regime improved their medication taking behavior considerably. The number of days that they are out of medication between prescription fillings fell by 2.6 days in response to the policy (from a baseline of around 7.5 days). This is an improvement of 35%, comparable to the effects of other interventions for improving medication adherence, such as educational interventions or sending reminders to patients. My estimates suggest that patients improve their adherence as the total cost of getting their medication, which includes the non-monetary cost of actually renewing a prescription, falls. More interestingly, they further improve their behavior to be allowed to remain on the 90-day regime since they value its convenience. I was able to empirically test this thanks to great data from IMSS administrative records and a unique policy design.
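For readers who want to see where the 35% figure comes from, it follows directly from the two numbers reported above:

```latex
\frac{2.6\ \text{days}}{7.5\ \text{days}} \approx 0.35 = 35\%
```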

Additionally, I find that patients’ health remained stable in spite of meeting with their physician less frequently. This point is particularly interesting for health policy, where the allocation of scarce medical resources should be done as efficiently as possible. Much debate has revolved around some prominent policies that seek to reallocate inputs for the production of health, such as reducing the frequency of certain procedures (i.e., consider the ongoing debate about the recommended frequency of mammograms) or allowing nurse practitioners to prescribe controlled medications. The value of these policies lies in the extent to which they can reduce the costs of providing healthcare, while not generating additional costs in terms of patients’ health or general wellbeing. In this sense, the Receta Resurtible policy appears to have increased efficiency by reducing how often patients should attend doctor’s appointments without harming their health.

I draw several general lessons on how to affect patients’ behavior from studying IMSS’s change in the frequency of prescription renewals. First, it is important to acknowledge that patients have a hard time adhering, and that sticking to a treatment is generally costly. Second, that in order to design the correct interventions to improve medication adherence, it is important to understand all the costs and benefits that patients face for engaging in any type of health behavior, and that these costs and benefits can be both monetary and non-monetary (such as the time and effort required to renew a prescription). Third, that incentives can come in the form of “getting out of something”—in this case, getting out of 8 check-ups per year. In a way, the policy created an additional benefit for improving medication adherence: the possibility of staying on the 90-day regime. This type of policy instrument may be useful to modify individuals’ behavior in other settings, and its design is particularly interesting as this type of incentive can be cost efficient and welfare improving: in this case, providing less healthcare is not only more efficient but it makes patients behave better as well, while keeping their health stable.

Perhaps next time I’ll be better at following my doctor’s suggested treatment!

 

Fernanda Márquez-Padilla holds a Ph.D. in Economics from Princeton University and is Assistant Professor at CIDE in Mexico City. Her research interests lie at the intersection of health and development economics, and she is particularly interested in understanding patient behavior. She has worked as a consultant for the World Bank and RAND Corporation, worked for the Mexican Ministry of Finance, and has conducted research at Banco de México.

 

How Do Scientists Know Human Activities Impact Climate? A brief look into the assessment process

Written by Levi Golston

On the subject of climate change, one of the most widely cited numbers is that humans have increased the net radiation balance of the Earth’s lower atmosphere by approximately 2.3 W/m² (watts per square meter) since pre-industrial times, as determined by the Intergovernmental Panel on Climate Change (IPCC) in their most recent Fifth Assessment Report (AR5). This change is termed radiative forcing and represents a basic physical driver of higher average surface temperatures resulting from human activities. In short, it elegantly captures the intensity of climate change in a single number – the higher the radiative forcing, the larger the human influence on climate and the higher the rate of increasing surface temperatures. Radiative forcing is also significant because it forms the basis of equivalence metrics used in international environmental treaties, defines the endpoint of the future scenarios commonly used for climate change simulations, and is physically simple enough that it should be possible to calculate without relying on global climate models.

Given its widespread use, it is important to understand where estimates of radiative forcing come from. Answering this question is not straightforward because AR5 is a lengthy report published in three separate volumes. Chapter 8 of Volume 1, more than any other, quantitatively describes why climate change is occurring due to natural and anthropogenic causes and is, therefore, the primary source for how radiative forcing is assessed by the IPCC. One of the key figures is reproduced below, illustrating that the basic drivers of climate change are human-driven changes to aerosols (particles suspended in the air) and greenhouse gases, along with their relative strengths and uncertainties:

Fig. 1: Assessments of aerosol, greenhouse gas, and total anthropogenic forcing evaluated between 1750 and 2011. Lines at the top show the 5-95% confidence range, with a slight change in definition from AR4 to AR5 [Source: Figure 8.16 in IPCC AR5].
This post seeks to answer two questions: how is the 2.3 W m-2 best estimate determined in AR5? And further, why is total anthropogenic forcing not known more precisely than shown in Figure 1 given the numerous observations currently available?

1. Variations on the meaning of radiative forcing

Fundamental laws of physics say that if the Earth is in equilibrium, then the average temperature of the Earth is such that there is a balance between the energy the Earth receives and the energy it radiates. When this balance is disturbed, the climate will respond to the additional energy in the system and will continue to change until the forcing has fully propagated through the climate system, at which point a new equilibrium (average temperature) is reached. This response is controlled by processes with a range of timescales (e.g. the surface ocean over several years and glaciers over many hundreds of years), so radiative forcing depends on when exactly it is calculated. This leads to several subtly differing definitions. While the IPCC distinguishes between radiative forcing and effective radiative forcing, I do not attempt to distinguish between the two definitions here and refer to both as radiative forcing.

Figure 2 shows the general framework used by the IPCC for assessing human-driven change, which is divided into four major components. Firstly, the direct impact of human activities through the release (emission) of greenhouse gases and particulates into the atmosphere is estimated, along with changes to the land surface through construction and agriculture. These changes cause the accumulation of long-lived gases in the atmosphere, including carbon dioxide, the indirect formation of gases through chemical reactions, and an increase in the number of aerosols in the atmosphere (abundance). Each of these agents influences the radiation balance of the Earth (forcing) and over time causes warming near the surface (climate response).

Fig. 2: Linear [uncoupled] framework for modeling climate change shown with solid arrows. Dashed arrows indicate climate feedback mechanisms driven by future changes in temperature. Credit: Levi Golston
2. Individual drivers of change

The two major agents (aerosols and greenhouse gases) are further sub-divided by the IPCC as shown below. Each of the components are assessed independently, then summed together using various statistical techniques to produce the best estimate and range shown in Figure 1.

Fig. 3: Estimates of radiative forcing (dotted lines) and effective radiative forcing (solid lines) for each anthropogenic and natural agent considered in AR5. [Source: Figure 8.15 in IPCC AR5].
Since the report itself is an assessment, each of the estimates in Figure 3 was derived directly from the peer-reviewed literature and is not the result of new model runs or observations. I have recently catalogued the specific sources behind this figure elsewhere, for readers who want to know exactly how any of the individual bars were calculated. More generally, it can be seen that the level of confidence varies for each agent, with the most uncertainty for aerosol-radiation and aerosol-cloud interactions. Positive warming is driven most strongly by carbon dioxide, followed by the other greenhouse gases and ozone. It can also be seen that changes in solar intensity are accounted for by the IPCC but are believed to be small compared to changes from human-driven processes.
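For the largest single driver, CO2, the mapping from abundance to forcing is often approximated with a widely used logarithmic expression (Myhre et al., 1998). As a rough, illustrative check (the concentrations below are approximate values supplied here, not numbers quoted from AR5), an atmospheric CO2 concentration of about 391 ppm in 2011 against a pre-industrial value of about 278 ppm gives a CO2-only forcing close to the roughly 1.8 W/m² attributed to CO2 in Figure 3:

```latex
% Simplified CO2 radiative forcing (Myhre et al., 1998); concentrations are approximate
\Delta F_{\mathrm{CO_2}} \approx 5.35\,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}
\approx 5.35\,\ln\!\left(\frac{391\ \mathrm{ppm}}{278\ \mathrm{ppm}}\right)
\approx 1.8\ \mathrm{W\,m^{-2}}
```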

3. Can net radiative forcing be more directly calculated?

Besides adding together individual processes, is it also possible to independently assess the total forcing itself, at least over recent decades when satellite and widespread ground-based observations are available? In principle, changes in the Earth’s energy balance, primarily seen as reduced thermal radiation escaping to space and as heat uptake by the oceans, should relate back to the net forcing causing these changes, providing an alternate means of calculating humans’ influence on the climate. To use this approach, one would need a good idea of how sensitive the Earth’s response will be to a given level of forcing. However, this sensitivity is equally or more uncertain than the forcing itself, making it difficult to improve on the process-by-process result. The ability to account for the Earth’s overall energy balance and then quantify radiative imbalances over time also remains a challenge. Longer data records and improved knowledge of climate sensitivity may eventually advance the ability to directly determine total radiative forcing going forward.
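In symbols, the idea is that the net planetary energy imbalance N (seen mostly as ocean heat uptake) equals the forcing F minus the temperature response scaled by a feedback parameter α, so F could in principle be recovered from observed warming and heat uptake. The magnitudes below are round numbers assumed for illustration; the point is that the large uncertainty in α propagates directly into the inferred forcing:

```latex
% Global energy balance: N = F - \alpha \Delta T   =>   F = N + \alpha \Delta T
% Illustrative magnitudes (assumed): N ~ 0.6 W/m^2, \Delta T ~ 0.85 K,
% \alpha ~ 1 - 2.5 W/m^2/K (poorly constrained feedback parameter)
F = N + \alpha\,\Delta T
\approx 0.6\ \mathrm{W\,m^{-2}} + \left(1\text{--}2.5\ \mathrm{W\,m^{-2}\,K^{-1}}\right)\times 0.85\ \mathrm{K}
\approx 1.5\text{--}2.7\ \mathrm{W\,m^{-2}}
```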

Fig. 4: Simulation of CO2 concentrations over North America on Feb 12th, 2006 by an ultra-high-resolution computer model developed by NASA. Photo Credit: NASA
4. Summary

The most widely cited number is based on an abundance-based perspective with step changes for each forcing agent from 1750 to 2011, resulting in an estimated total forcing of 2.3 W/m². This number does not come from an average of global climate models, as might be imagined, but instead is the sum of eight independent components (seven human-driven, one natural), each derived and assessed from selected recent sources in the peer-reviewed literature.

Radiative forcing is complex and requires models to translate how abundances of greenhouse gases and aerosols actually affect global climate. For gases like carbon dioxide, documented records are available going back to pre-industrial times and earlier, but in other cases additional modelling is needed to help determine the natural state of the land surface and atmosphere. The total human-driven radiative forcing (Figure 1) is still surprisingly poorly constrained in AR5 (1.1 to 3.3 W/m² with 90% confidence), which is a reminder that while we are certain human activities are causing more energy to be retained by the atmosphere, continued work is needed on the physical science of climate change to determine by exactly how much.


Levi Golston is a PhD candidate in Princeton’s Environmental Engineering and Water Resources program. His research develops laser-based sensors coupled with new atmospheric measurement techniques for measuring localized sources, particularly for methane and reactive nitrogen from livestock, along with methane and ethane from natural gas systems. He blogs at lgsci.wordpress.com/

Losing the Climate War to Methane? The role of methane emissions in the global warming puzzle

Written by Dr. Arvind Ravikumar

There is much to cheer about the recent climate agreement signed last December at the 21st Conference of Parties (COP 21) in Paris, France to reduce greenhouse gas emissions and limit global temperature rise to below 2°C. Whether countries will implement effective policies to achieve this agreement is a different question. Leading up to the Conference in Paris, countries proposed their intended nationally determined contributions (INDCs). These refer to the targets and proposed policies pledged by each country that signed the United Nations Framework Convention on Climate Change, describing its intended contribution to reducing global warming. The United States, among other things, is banking on the recently finalized Clean Power Plan by the Environmental Protection Agency (EPA) – this policy aims to reduce US greenhouse gas (GHG) emissions from the power sector by 26 to 28% in 2030, partly by replacing high-emitting coal-fired power plants with low-emitting natural gas-fired plants and increasing renewable generation (primarily wind and solar). Electricity production by natural gas-fired plants is therefore expected to increase over the next few decades, acting as a ‘bridge fuel’ to a carbon-free economy. Even though the US Supreme Court recently halted the implementation of the Clean Power Plan, the EPA anticipates that it will eventually be upheld.

The primary component of natural gas is methane. This is a highly potent greenhouse gas whose global warming potential (i.e. ability to increase the Earth’s surface temperature through the greenhouse effect) is 36 times that of carbon dioxide over the long term (100-year impact) and over 80 over the near term (20-year impact). Although carbon dioxide is a major component of US greenhouse gas emissions (see Fig. 1), it is estimated that methane contributes around 10% of the total emissions. Thus, given its significantly higher global warming potential, methane emissions and leakage can potentially erode the climate benefits of declining coal production.
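To see what those global warming potentials imply in practice, a CO2-equivalent figure is obtained simply by multiplying a methane quantity by the GWP for the chosen time horizon. Taking the 100-year value of 36 quoted above, and 86 as a representative 20-year value consistent with “over 80” (the exact number depends on the metric vintage), one tonne of leaked methane corresponds to:

```latex
% CO2-equivalent of 1 tonne of methane at the two common time horizons
1\ \mathrm{t\,CH_4} \times 36 = 36\ \mathrm{t\,CO_2e}\ \text{(100-year)}, \qquad
1\ \mathrm{t\,CH_4} \times 86 \approx 86\ \mathrm{t\,CO_2e}\ \text{(20-year)}
```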

Figure 1. US greenhouse gas inventory (2013). Source: EPA

Methane emissions are fairly diversified across natural and man-made sources. Figure 2 shows the sources of methane emissions in the US (2013) as estimated by the EPA through its GHG monitoring program. While 50% of emissions can be attributed to agriculture and waste-disposal activities, we can see that about 30% of methane emissions come from the oil and gas industry. Much of this can be attributed to the recent boom in non-conventional or shale gas production through fracking technology. The combination of low natural gas prices and higher demand from the power sector makes it imperative to reduce methane emissions as much as technologically feasible.

Figure 2. US methane emissions by source (2013). Source: EPA.

Currently, methane leaks occur at all stages of the natural gas infrastructure – from production and processing to transmission and distribution lines in major cities. While the global warming effects of higher methane concentrations are fairly well understood, there is currently little consensus on the magnitude of emissions from the natural gas infrastructure. For example, a recent study found that the average methane loss in all distribution pipelines around Boston was about 2.7%, significantly higher than the 1.1% reported in inventory estimates to the EPA. Another study, published in the academic journal Science, showed that independent measurements of the methane leakage rate across the US infrastructure varied from about 1% to over 6%. The climate benefits of switching from coal to natural gas fired power plants critically depend on this leakage rate.
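A back-of-the-envelope calculation illustrates why that leakage rate matters so much. The sketch below compares the CO2-equivalent emissions per MWh of a gas-fired combined cycle plant (combustion plus upstream methane leakage) against a typical coal plant. The efficiency, heating value, emission factor, and GWP inputs are round-number assumptions chosen for illustration, not values taken from the studies cited here.

```python
# Back-of-the-envelope: how methane leakage erodes the coal-to-gas climate benefit.
# All inputs are round-number assumptions for illustration only.
GWP20 = 86                 # 20-year global warming potential of methane (assumed)
GAS_LHV = 50.0             # MJ per kg of methane (approximate lower heating value)
CCGT_EFFICIENCY = 0.50     # combined-cycle plant efficiency (assumed)
COAL_CO2_PER_MWH = 1000.0  # kg CO2 per MWh for a typical coal plant (assumed)

def gas_co2e_per_mwh(leak_rate, gwp=GWP20):
    """CO2-equivalent emissions (kg/MWh) of a gas plant: combustion CO2 plus
    leaked methane, for a given fraction of produced gas that escapes."""
    fuel_kg = 3600.0 / (CCGT_EFFICIENCY * GAS_LHV)  # kg CH4 burned per MWh (1 MWh = 3600 MJ)
    combustion_co2 = fuel_kg * 44.0 / 16.0          # CO2 mass from burning CH4
    leaked_ch4 = fuel_kg * leak_rate / (1.0 - leak_rate)
    return combustion_co2 + leaked_ch4 * gwp

for leak in (0.01, 0.03, 0.06):
    print(f"leak rate {leak:.0%}: gas plant ~{gas_co2e_per_mwh(leak):.0f} kg CO2e/MWh "
          f"(coal ~{COAL_CO2_PER_MWH:.0f} kg CO2/MWh)")
```

With these assumed inputs, the 20-year advantage of gas over coal disappears somewhere between a 3% and 6% leak rate, which is exactly the range spanned by the measurements discussed above.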


To better estimate methane leakage, the Environmental Defense Fund (EDF), a non-profit organization based in Washington, DC, organized and recently concluded a series of 16 studies to find and measure leaks in the US natural gas supply chain. While some of the results are still being analyzed, much of the data show that conventional inventory estimates maintained by the EPA have consistently underestimated the leakage from various sources. It was shown that the Barnett shale region in Texas, which produces about 7% of the nation’s natural gas, emitted 90% more methane than EPA estimates indicate. To complicate matters further, until recently, estimates from atmospheric top-down data measured using satellites and aircraft significantly exceeded land-based bottom-up measurements using methane sensors. On a similar note, detailed measurements from the Barnett shale region in Texas showed that just 2% of the facilities in the region account for 50% of all the methane emissions. Such a small fraction of large emission sources further complicates direct measurements, where typically only a small fraction of the facilities in a region is measured. While the EDF and other studies have been instrumental in our current understanding of methane leaks in the US and their contribution to greenhouse gas emissions, much work is required to understand sources and, most importantly, ways to cost-effectively monitor, detect, and repair such leaks.

Aerial footage of the recent natural gas leak from a storage well in Aliso Canyon near LA. The leak is estimated to have released 96,000 metric tons of methane, equivalent to about 900 million gallons of gasoline burnt and $15 million worth of natural gas. Source: Environmental Defense Fund, 2015.

Methane leakage in the context of global warming has only recently caught public attention – see here, here and here. In addition to greater awareness in business and policy circles, significant efforts are required to identify economically viable leak detection and repair programs. Currently, the industry standards for detecting methane leaks include high-sensitivity but high-cost sensors, or low-cost but low-sensitivity infrared cameras. There is an immediate need to develop techniques that can be used to cost-effectively detect leaks over large areas (e.g. thousands of square miles). From a regulatory perspective, the EPA has released proposed regulations to limit methane leaks from the oil and gas industry. This comes on the heels of the goals set by the Obama administration’s Climate Action Plan to reduce methane emissions from the oil and gas sector by 40 to 45% from 2012 levels by 2025. These regulations require oil and gas companies involved in the entire natural gas life cycle to periodically undertake leak detection and repair procedures, depending on the overall leakage levels. The final rule is expected to be out sometime in 2016.

The success of the Clean Power Plan in reducing greenhouse gas emissions will significantly depend on the strength of the proposed regulations to curb methane leaks. We now have a better estimate of fugitive emissions (leaks) of methane from the US natural gas infrastructure. Concurrently, there should be a greater focus on developing cost-effective programs to detect and repair such leaks. It was recently reported that replacing old pipelines with newer ones in a city’s gas distribution network is effective in reducing leaks and improving public safety. With a considerably higher global warming potential than carbon dioxide, methane has the potential to erode the climate benefits earned by switching from high-emitting coal plants to low-emitting natural gas power plants. Ensuring that this does not happen will take a coordinated effort and commitment from both industry and government agencies.


Arvind graduated with a PhD in Electrical Engineering from Princeton University in 2015 and is currently a postdoctoral researcher in Energy Resources Engineering at Stanford University. Somewhere later in grad school, he became interested in the topics of energy, climate change and policy. Arvind is an Associate Editor at Highwire Earth. You can read more about his work at his personal website.

Human Impacts on Droughts: How these hazards stopped being purely natural phenomena

Written by Dr. Niko Wanders

We often hear about droughts around the world, including recent ones in the U.S. and Brazil, the latter of which threatened water safety for this year’s Olympic Games. Despite their natural occurrence, there is still a lot that we do not fully understand about the processes that cause them and about how they impact our society and natural ecosystems. These topics are of great interest to scientists and engineers, and of great importance to policy makers and stakeholders.

The elusive definition of a drought

A drought can be broadly defined as a decrease in water availability below levels that are considered normal within a region. This means that droughts do not only occur in warm, sunny, dry countries but can take place essentially anywhere. What makes it hard to come up with a single, precise definition of a drought is that this below-normal water availability can be found at the different stages of the water cycle: precipitation, soil moisture (i.e. how much water there is in the soil), snow accumulation, groundwater, reservoirs and streamflow. Therefore, more useful definitions of drought conditions have to be tailored for specific sectors (e.g. agriculture or power generation) by focusing on the stage of the water cycle that is relevant for them (e.g. soil moisture for farmers, and streamflow for controllers of hydroelectric and thermoelectric plants).
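One simple way to make “below levels that are considered normal” operational is sketched below: flag a drought whenever a water-cycle variable (here, a hypothetical monthly streamflow record) falls below a low percentile of its own historical values for that calendar month. This percentile-threshold approach is only one common convention, and the data are invented for illustration.

```python
# Minimal percentile-threshold drought flag; the streamflow record is invented.
import random

random.seed(0)
# Hypothetical 30-year record of monthly streamflow (m^3/s), with higher spring flows.
record = [[max(5.0, random.gauss(50 + (30 if m in (3, 4, 5) else 0), 15))
           for m in range(12)]
          for _ in range(30)]

def monthly_threshold(record, month, percentile=0.2):
    """Flow below which the lowest `percentile` of that month's historical values fall."""
    values = sorted(year[month] for year in record)
    return values[int(percentile * len(values))]

def in_drought(flow, month, record):
    """Flag a drought when flow drops below the month-specific low-flow threshold."""
    return flow < monthly_threshold(record, month)

# Example: is a June (month index 5) flow of 40 m^3/s unusually low for June?
print(in_drought(40.0, month=5, record=record))
```

The same logic applies to precipitation, soil moisture, or groundwater levels; the choice of variable is what tailors the definition to a particular sector.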

Droughts can cover areas that range from a few thousand square miles to large portions of a continent and can last anywhere from weeks to multiple years. Normally they start after a prolonged period of below-normal precipitation, sometimes in combination with increased evaporation due to high temperatures. This then causes a reduction in water availability in the soil, which can lead to lower groundwater and river levels as a result of decreased water recharge from groundwater aquifers into rivers. Snowfall is another important factor because it provides a steady release of water into streams throughout the spring. When most of the precipitation falls as rain instead, it washes out quickly, leaving dry conditions again by spring. The evolution of a drought through the water cycle is called drought propagation and normally takes multiple weeks to several months.

So far this season, El Niño has been bringing some relief to the California drought. The current snow accumulation is above normal, which is good news for this drought-stricken region. The forecasts for the upcoming months look hopeful, and it is likely that California will see some relief from the drought in the coming months. Nevertheless, it will take multiple years before groundwater and reservoir levels return to their normal conditions, so the drought and its impacts will remain for at least the coming years.

Figure 1. U.S. Seasonal Drought Outlook provided by NOAA.
Droughts’ impacts on society

Extensive and long-lasting droughts can accumulate huge costs for the regions affected over time. For example, the ongoing California drought caused $2.2 billion in damage for the year 2014 alone. This is only an estimate of the damage to society in monetary terms, while the severe impacts on the region’s ecosystems are difficult to measure and quantify. As a result of the drought conditions, reservoir storages in most of California are at record low levels and strict water conservation policies have been implemented.

The severity of a drought’s impacts, however, depends greatly on the wealth, vulnerability, and resiliency of the region affected, including the degree to which the local economy and services rely on water. Despite the huge costs of the California drought, the U.S. is more capable of mitigating its effects and eventually recovering from it given the country’s general financial strength compared to many developing nations. According to reports by the United Nations and the Inter-Agency Standing Committee, an estimated 50,000 to 260,000 people lost their lives in the severe 2011 drought in the Horn of Africa, because the financial means to provide food aid were not available and outside help started too late.

To have better tools to deal with these extreme events, several government agencies and institutes around the world have created drought monitors to track current drought conditions and to forecast their evolution. Examples are the Princeton Flood and Drought Monitors for Latin America and Africa, the U.S. Drought Monitor and the European Drought Observatory. These websites provide information on current drought conditions, which can be used to take preventive measures by governments and other stakeholders. Additionally, they can be used to inform the general public on current conditions and the need for preventive measures, such as conservation.

Figure 2. Latin American and African Flood and Drought Monitors developed at Princeton University. Credit: Terrestrial Hydrology Research Group at Princeton University.
The power to affect a drought

Traditionally, droughts have been thought of simply as natural phenomena that we have to endure from time to time. However, a recent commentary in Nature Geoscience that included two Princeton contributors argued that we can no longer ignore how humans affect drought occurrences. For example, when conditions get drier from lack of rainfall, people are more likely to draw water from the ground, rivers, and channels for irrigation. These actions can impact the water cycle over large areas, affecting the water resources of communities downstream and, before long, of the local communities themselves. In the case of California, the drop in groundwater levels has accelerated over the last three years due to a combination of the extreme drought conditions and the resulting heavy pumping for irrigating crops. The extra water made available by pumping groundwater is only a temporary and unsustainable fix that alleviates drought conditions in the soil locally and for a short period of time. Most of the irrigated water evaporates and only a small portion returns to the groundwater. In the long run, these depleted groundwater resources need to be replenished in order to recharge rivers and reservoirs, a process that can take years to decades. Furthermore, extracting groundwater in large amounts can lead to subsidence, a lowering of the ground level, which can sometimes be irreversible and have permanent effects on future water availability in the region. Thus, through our actions we have the power to affect how a drought develops, making it necessary to rethink the concept of a drought to include our role in enhancing and mitigating it.

Figure 3. On the left: Measurement of subsidence (i.e. lowering of the ground levels) in the San Joaquin Valley during the past three decades, Photo Credit: USGS. On the right: Measured subsidence in the San Joaquin Valley between May 3, 2014 and January 22, 2015 by satellite, Photo Credit: NASA.

But it’s not all bad news. Last year I carried out a study with my collaborator, Dr. Yoshihide Wada, that found that human interventions can sometimes lessen the impact of natural drought conditions. This is most clear when we look at the reservoirs that have been built in many river systems around the world. By building these structures, river discharge is spread more evenly throughout the year: high flows or floods can be dampened by storing some of the water in the reservoirs, and that stored water can then be released during the dry season or during a drought event to reduce the impact of low flows. This opens up opportunities for regional water management that can reduce a region’s vulnerability to droughts. Reservoirs do have three limitations: their large surface areas increase evaporation, their benefits are limited in prolonged droughts simply because their storage is not infinite, and they have a large impact on plants and animals in downstream ecosystems (e.g. migrating fish species that need to swim upstream).
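To illustrate that smoothing effect, the toy Python simulation below routes a strongly seasonal inflow through a single reservoir. It is a deliberately simplified sketch with arbitrary numbers (a monthly mass balance with a fixed target release), not the model used in the study.

import numpy as np

def route_through_reservoir(inflow, capacity, target_release):
    """Toy single-reservoir mass balance on a monthly time step.

    Stores inflow above the target release (up to capacity) and draws
    storage down to support releases when inflow falls short.
    Returns the outflow series.
    """
    storage = 0.0
    outflow = np.empty_like(inflow, dtype=float)
    for t, q_in in enumerate(inflow):
        storage += q_in
        release = min(target_release, storage)   # release what is available
        storage -= release
        spill = max(0.0, storage - capacity)     # spill anything above capacity
        storage -= spill
        outflow[t] = release + spill
    return outflow

# Strongly seasonal inflow: wet winters, dry summers (arbitrary units).
months = np.arange(36)
inflow = 100 + 80 * np.sin(2 * np.pi * months / 12)
outflow = route_through_reservoir(inflow, capacity=300, target_release=100)
print("inflow range: ", inflow.min(), "-", inflow.max())
print("outflow range:", round(outflow.min(), 1), "-", round(outflow.max(), 1))

In this example the inflow swings between 20 and 180 units per month, while the regulated outflow stays close to 100, which is the smoothing described above; a long enough run of dry months would eventually empty the reservoir, which is exactly the "storage is not infinite" limitation.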

Figure 4. Impact of human intervention on future hydrological drought, as a result of irrigation, reservoir operations and groundwater pumping. Darker colors indicate higher levels of confidence (Figure adapted from Wanders and Wada, 2015).
Drought in the future

Scientists have carried out many studies to explore how the characteristics and impacts of droughts will change in the future. Multiple research publications show that droughts will most likely become more severe than today in many of the world’s regions, and projected increases in human water demand paint an even more stressful picture. This requires an adjustment in the way we deal with drought conditions, how we monitor and forecast these extremes, and how we consume water in general.

A short-term solution is to improve our monitoring and forecasting of these events so that we are better prepared. For example, improvements in meteorological and hydrological forecasts for conditions 3-6 months ahead would help operators manage their reservoirs in ways that reduce the impact of upcoming drought events. These improvements require scientists to become more aware of the impact that humans have on the water cycle, a growing area of interest in recent years but definitely not yet standard practice.

Apart from improving our ability to forecast upcoming drought events, we could also change our response to ongoing drought conditions by being more efficient with the remaining available water. This could be achieved by using more efficient irrigation systems, building separate sewage systems for rainwater (which could be used for drinking water) and for domestic and industrial wastewater (which is only reusable after extensive treatment), and not cultivating crops with a high water demand in areas with naturally low water availability. All of these measures require long-term planning and government agencies and societies willing to push for and achieve these goals. Often a severe, damaging event is needed to create the awareness that such measures are a necessity, as in California, where the drought has resulted in new water laws, and in Australia a few years ago.

Humans and the natural water system are strongly intertwined, especially under extreme hydrological conditions. Our impact on the water cycle is significant and cannot be neglected, in normal conditions and in extreme ones alike. It will be important in the coming decades for us to learn how to responsibly manage our valuable water resources within a changing environment.

 


Dr. Niko Wanders is a Postdoctoral Research Fellow in the Civil and Environmental Engineering Department at Princeton, working with Prof. Eric Wood. His research interests include the physical processes behind droughts as well as the factors that influence their magnitude and impact on society. Niko received an NWO-Rubicon Fellowship to work on the development of a global sub-seasonal drought forecasting system. The aim of the project is to develop a system that can not only forecast upcoming drought events but also make reliable forecasts of drought impacts on agricultural production, water demand, and water availability for human activities.

Rethinking Our Approach to Protected Areas for Conservation

Written by Justine Atkins

Over the last fifty years, there has been progressively more widespread recognition that biodiversity is rapidly declining. This is a huge problem, and not only ethically: biodiversity also provides crucial economic returns, such as ecotourism, and promotes ecosystem resilience to climate change and invasive species. It is now well-established that the overwhelming responsibility for this decline rests firmly on our shoulders. Therefore, humans must change the way in which we interact with the environment.

One of the key ways in which we have responded to this ecological crisis is through the establishment of protected areas. These areas of land or ocean are sectioned off and restricted from human use, (theoretically) protecting the ecosystems within them from negative anthropogenic impacts such as deforestation and hunting. At least four international treaties have been established with the aim of protecting a representative example of all ecosystems and species types that exist in the world today [1]. Most recently, the Convention on Biological Diversity (CBD) set targets of protecting 17% of terrestrial area and 10% of the world’s oceans by 2020, to be specifically achieved through the establishment and expansion of strategically designed and managed protected area (PA) networks.

Perhaps surprisingly, global PA coverage is actually moving steadily towards these targets. Unfortunately, equally surprising is that biodiversity continues to decline despite this increased investment in conservation. The disconnect between the potential and realized impact of these reserves has led scientists to begin questioning the efficacy of PAs as a strategy for conserving biodiversity. This shift in perspective has, in turn, forced researchers to look more closely at how the effectiveness of PAs is assessed. Past evaluations have proven inconclusive, demonstrating both the huge benefits and significant shortcomings of protection. For example, many populations of large mammals within Africa’s reserves are still showing declines, while, on the other hand, raptor species in Botswana are much more abundant within PAs than outside these areas.

Why are there such contrasting outcomes? The answer involves several complex and interacting issues. Firstly, there is a wide variety of ways in which PAs can intervene in the environment, each with its own set of costs and benefits. Marine PAs, for example, can save declining fisheries stocks but are likely to negatively affect species living outside these areas as fishermen move their activities to neighboring waters. Secondly, establishing a PA has complicated socioeconomic impacts; these can range from helpful, such as providing jobs, to harmful, such as forcing indigenous people off their land. Thirdly, because we lack a unified framework, past assessments of the consequences of any particular protection method have had to rely on “before-and-after” style analysis. This method is problematic because it fails to account for what would have happened to, for example, the biodiversity in a PA if that area had not been put under protection. Making direct comparisons to a baseline of conditions is crucial in science and is also referred to as the use of “control groups” (Box 1).

Box 1. Impact evaluation
Impact evaluation (IE) is a method of assessing the potential or realized consequences of a conservation policy (e.g. protected areas). IE involves the use of scientifically rigorous paired comparisons to assess the effects of an intervention. Unlike performance measurement, which monitors changes in ecological and socioeconomic indicators (such as the number of species within a given area) over time, evaluating the impact of a protected area also accounts for changes that might have occurred in that area even in the absence of protection. In this way, we can distinguish the transformations within a protected area that are i) the direct result of the establishment of protection from those that are ii) simply due to natural fluctuations in ecosystem characteristics over time and space.

Several closely related experimental techniques are key to the IE method:

- Control groups are groups of test subjects or sites which very closely resemble the subjects or sites receiving an experimental treatment (for example, a clinical trial of a new prescription drug) but are not themselves subject to the treatment. These groups ‘control’ for factors besides the treatment that could influence the outcome.

- Matched pair experimental design directly compares each ‘treated’ site or subject with a matching ‘control’ site or subject.

- A counterfactual is a quantified assessment of what would have happened if there had been no intervention in an area, or, to continue with the drug trial example, if no medication had been given to a sick patient.

A great ecological example of these techniques is found at the La Selva research station in Costa Rica, where many comparative experiments are carried out using plots of intact primary forest paired with plots that are very similar in all ecological aspects (e.g. elevation, gradient, size) but were cut down or burnt a known number of years ago (e.g. 10, 20) for a variety of purposes.
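To see how a matched-pair comparison differs from a simple before-and-after assessment, the short Python sketch below uses made-up numbers (not data from any of the studies discussed): each protected site is paired with a closely matched unprotected control, and the estimated impact is the average gap between the change observed under protection and the change in the matched control, which stands in for the counterfactual.

# Minimal matched-pair impact evaluation with synthetic numbers.
# Each pair: forest cover change (%) in a protected site and in a
# closely matched unprotected control site over the same period.
pairs = [
    {"site": "PA-1", "protected_change": -2.0, "control_change": -8.0},
    {"site": "PA-2", "protected_change": -4.5, "control_change": -6.0},
    {"site": "PA-3", "protected_change": +1.0, "control_change": -3.5},
]

# Naive before-and-after view: average change inside protected areas only.
naive = sum(p["protected_change"] for p in pairs) / len(pairs)

# Impact evaluation: difference between each protected site and its
# matched control (the counterfactual), averaged across pairs.
impacts = [p["protected_change"] - p["control_change"] for p in pairs]
average_impact = sum(impacts) / len(impacts)

print(f"Average change inside PAs (before-and-after view): {naive:+.1f}%")
print(f"Average impact relative to matched controls:       {average_impact:+.1f}%")

With these illustrative numbers the naive view suggests forest cover still declined inside the PAs (about -1.8%), while the matched comparison shows protection avoided roughly 4 percentage points of loss; the two framings can even point in opposite directions, which is the core argument for IE.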

Recognizing the urgency of this problem for biodiversity conservation, the prominent journal Philosophical Transactions of the Royal Society B (Phil Trans for short) emphasized the need for a revised methodology of PA assessment in a recent special issue. As a way forward, this collection of articles proposes and presents several applications of a new control group-oriented technique called impact evaluation (Box 1). Impact evaluation (IE) is a growing field in conservation science. Like previous assessment strategies, IE measures the effects of an intervention (such as building a new PA). Unlike before, however, IE also explicitly considers what would have happened without any intervention, described by researchers as the “counterfactual” (Box 1).

The Phil Trans articles convincingly argue that considering the counterfactual is the only way to truly quantify how protected areas affect biodiversity, ecosystem conservation, and human welfare. Collectively, the authors show that this new method is crucial given the limited budgets in conservation. If implemented on a broad scale, IE could allow for much greater payoff in protected area development than is currently being observed.

While this goal of minimizing biodiversity loss and maximizing socioeconomic benefits may seem ambitious, there is already empirical evidence that suggests such a goal is within reach. So far, IE research has concentrated on measuring PA effectiveness in relation to changes in deforestation rates and species loss. With a baseline of unprotected areas for comparison, researchers have specifically quantified how the impact of a PA changes according to variation in environmental and socioeconomic characteristics. The Phil Trans issue documents how this approach is applied effectively in areas as varied as the Brazilian Amazon and freshwater systems in Northern Australia. Results of IE show, for example, that PA management has led to a greater reduction in the spread of an invasive mimosa plant in Kakadu National Park than would have been observed in the absence of a PA.

Great Barrier Reef
The Great Barrier Reef on the eastern coast of Australia is one of the largest marine protected areas in the world. While protection has allowed this ecosystem to remain a highly complex and biodiverse environment, this status and the future of the reef’s marine life remain under increasing threat. In large part, this is because it was initially judged ‘effective enough’ to have only a small percent of the area as specific ‘no-take’ zones, with commercial fishing and oil and gas exploration still allowed in many regions. This approach, now subject to ongoing review and revision, calls into question current methods for measuring the effectiveness of protected areas. Photo credit: Jurgen Freund / NaturePL

Socioeconomic effects are also more readily identified within the IE framework. The debate over PAs and poverty is long-running and controversial, in large part because the quantitative evidence of PAs’ impact on people is weak and inconclusive, leaving the discussion to rely on highly subjective assumptions and inconsistent anecdotal findings. Guided by IE, however, conservation scientists can recommend reserve strategies that more accurately account for the direct benefits and costs of PAs for human welfare. For example, in Bolivia, an impact-based assessment of PAs provided empirical support for earlier qualitative findings that PAs are in fact not linked with poverty traps. Using pre-, mid-, and post-implementation IE, researchers found similarly counterintuitive results in relation to the socioeconomic impacts of marine PAs in Indonesia.

It is understandably difficult to see why such an approach has not been the established practice for many years. Unfortunately, those in a position to transition to using IE in the context of PAs have little incentive to do so. In an increasingly publication-centered field, academics are reluctant to commit to projects evaluating the impact of conservation initiatives because i) prestige journals generally prefer to focus on core science and are less likely to publish such work and ii) funding for this line of research is limited. A recent opinion piece in Conservation Magazine from the directors at the Wildlife Conservation Society highlights this paradox.

On a more hopeful note, international funding agencies could be a lifeline for the facilitation of IE. These organizations not only have the monetary means, they are also key stakeholders: they control a vast proportion of the funding for PAs and would benefit greatly from the decreased cost-to-benefit ratio that impact evaluation could deliver.

In light of the continuing rate of species’ extinctions on our planet, it is crucial that we invest in preserving biodiversity. Protected areas have the potential to be highly valuable conservation tools, but to achieve this potential, we need a good way of objectively assessing the effectiveness of PAs. The Philosophical Transactions issue clearly demonstrates that widespread uptake of IE could fulfill this need and re-establish protection-based strategies as a cornerstone of conservation science.

References

[1] The Stockholm Declaration (1972), World Charter for Nature (1982), the Rio Declaration at the Earth Summit (1992), and the Johannesburg Declaration (2002).


Justine is a first-year PhD student in the Ecology and Evolutionary Biology department at Princeton University. She is interested in the interaction between animal movement behavior and environmental heterogeneity, particularly in relation to individual and collective decision-making processes, as well as conservation applications.

Energy Efficient Buildings: The forgotten children of the clean energy revolution

Written by Victor Charpentier

The world’s population will become increasingly urbanized. In the 2014 revision of the World Urbanization Prospects, the United Nations (UN) estimates that the urban share of the global population will rise from 54% today to 66% by 2050. It is therefore no surprise that cities and buildings are at the heart of the UN’s 11th Sustainable Development Goal: “Make cities and human settlements inclusive, safe, resilient and sustainable”. With an ambitious target date of 2030, the goal is to improve the sustainability of cities and the efficient use of their resources.

The impact of buildings on energy consumption

Energy consumption is often described in terms of primary energy, that is, untransformed raw forms of energy such as coal, wind energy, or biomass. Buildings represent a striking 40% of total primary energy consumption in most Western countries, according to the International Energy Agency (IEA). A growing awareness of energy issues in the United States led the Department of Energy (DOE) to create building energy codes and standards for new construction and renovation projects that set requirements for reducing energy consumption (e.g. the revised ASHRAE Standard 90.1 of 2010 and 2013). The LEED certification, created in 1994 by the non-profit US Green Building Council for the American building industry, has proven that there is private-sector interest in recognizing the quality of new buildings. The DOE’s building energy codes mainly focus on space heating and cooling, lighting, and ventilation, since these are the main energy consumers in buildings. Great energy savings can thus be reaped by improving the performance of new buildings and renovating existing ones in these categories. Refrigeration, cooking, electronic devices (grouped under “others” in Figure 1), and water heating, which depend on the occupants’ activity, are comparatively minor.

Figure 1. Operational energy consumption by end use in residential (left) and commercial (right) buildings. Source: DOE building energy data book.
Energy end-uses play a significant role in driving energy transitions

Despite the regulatory efforts of the past decade, significant improvements remain necessary to reach the ambitious goals set by the UN. To help achieve them in the US, the DOE listed strategic energy objectives in its 2015 Quadrennial Technology Review, one of which reads: “Increasing Efficiency of Building Systems and Technologies”. The report notes that for lighting technologies, for instance, 95% of the potential savings from advanced solid-state lighting remains unrealized due to lack of technology diffusion. This underlines the need for implementation incentives in addition to research and development in the field of building technologies.

In contrast with this dire need for investment in end-use innovation, scientists showed in a 2012 study that current investment in energy-related innovation is dominated by energy-supply technologies. Energy-supply technologies are those that extract, process, or transport energy resources, while end-use technologies are those that improve energy efficiency and, where feasible, replace polluting energy sources with clean ones (e.g. electric buses in cities). The discrepancy between supply and end-use investments is large: end-use technologies represent only about 2% of total investment in energy innovation, as shown in Figure 2 below.

The consequence is that building technologies receive less R&D investment than they should. Moreover, the study suggests that end-use investments provide greater returns than energy-supply investments. The reason for this misalignment is mainly political, as public and financial institutions and policy makers tend to privilege the latter. The authors of the study suggest that this may be linked to a lack of coherent influence or lobbying for the end-use sector, in great contrast with large energy-supply companies such as oil or nuclear companies. Thus, to make longer strides in reducing the energy sector’s carbon footprint, this needs to change.

Figure 2. Investments (or mobilization of resources) in energy technologies: energy-efficiency improvements in end-use technologies (green) and energy resource extraction and conversion, separated into fossil-fuel (brown), renewable (blue), and nuclear, network and storage (grey) technologies. Source: Nature Climate Change, 2(11), 780-788.
Building energy efficiency: application to the design of better building skins

One way of improving energy efficiency in buildings is to focus on the design of their skins, or envelopes, which shelter the inside from the conditions outside. As interfaces between the controlled interior environment of a building and the weather variations outside, building skins regulate the energy flow between these two environments. High insolation through windows, for instance, can result in large energy consumption for cooling the building. The most extreme case would be a skyscraper with an all-glass façade in a moderate or warm climate. In fact, offsetting the heat coming from the sun (mainly through the windows) represents on average almost 50% of the cooling load in non-residential buildings and more than 50% in residential buildings. The warming that climate change will bring to many regions around the world will only make this worse.

Conventional shading devices such as fixed external louvers and Venetian blinds (see examples in Figure 3) can strongly reduce cooling loads. If they are controlled correctly and adjusted regularly, the building’s annual cooling load can be decreased by as much as 20%.
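As a back-of-the-envelope illustration of why shading matters, the short Python sketch below uses the standard fenestration relation Q = A × SHGC × I with purely hypothetical values: the glazing area, solar heat gain coefficient, irradiance, and shading factor are assumptions for illustration, not data from any particular building.

def solar_gain_kw(area_m2, shgc, irradiance_w_m2, shading_factor=1.0):
    """Instantaneous solar heat gain through glazing, in kW.

    Q = A * SHGC * I, scaled by a shading factor in [0, 1]
    (1.0 = fully exposed, 0.0 = fully shaded).
    """
    return area_m2 * shgc * irradiance_w_m2 * shading_factor / 1000.0

# Hypothetical glazed facade: 200 m2 of glazing with an SHGC of 0.4,
# under 600 W/m2 of sun on a clear afternoon.
unshaded = solar_gain_kw(200, 0.4, 600)
# External louvers assumed here to block 70% of the incident sun when well adjusted.
shaded = solar_gain_kw(200, 0.4, 600, shading_factor=0.3)

print(f"Unshaded solar gain: {unshaded:.0f} kW of cooling load")
print(f"Shaded solar gain:   {shaded:.0f} kW of cooling load")

Under these assumed numbers the solar gain drops from about 48 kW to about 14 kW, which is why shading devices, and especially ones that adjust to the sun’s position, can cut a meaningful share of the annual cooling load.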

 

Figure 3. Current shading systems often combine external fixed louvers (left) and interior Venetian blinds (right). Source: Unicel Architecture – Blindtex.

An additional level of performance can be achieved by making these skins adaptive, so that they change their geometry to provide benefits under varying conditions (weather, urban context, occupancy). Implementations of such adaptive building skins have demonstrated reductions in energy demand of as much as 51% and high efficiency in moderate to hot climates. The Al Bahr twin towers in Abu Dhabi, seen in Figure 4, are a good example of a modern adaptive building skin.

 


Figure 4. Dynamic façade of Al Bahr twin towers, Abu Dhabi, United Arab Emirates. GIF Source: CNN cited by https://www.papodearquiteto.com.br. Pictures’ Source: http://compositesandarchitecture.com/.

As these systems demonstrate, there is great potential for advanced shading systems and for building innovation in general, but their development is still slowed by the lack of innovative policy and the limited appetite for investing in energy-efficient building technologies.

Let’s get buildings on board with the energy revolution

Buildings do not get as much attention as automobiles or new technologies, but they may be equally important to our long-term future. This is because the energy consumed for heating and cooling spaces, lighting, ventilation, and other uses represents a very large share of our total energy consumption. There are, however, solutions to this situation. Buildings improved greatly over the 20th century, but we need to take them a step further to prepare better, more efficient homes and offices that will meet our standards of living in a warming world. The facts call for stronger investment and political commitment. Let’s get buildings on board with the energy revolution!

 


Victor is a second-year PhD student in the Department of Civil and Environmental Engineering advised by Professor S. Adriaenssens. His research interests lie in reducing the energy consumption of buildings and in the elastic deformation of shell structures.

A Precarious Puzzle of Expanding Deserts: How arid Asia has varied over time and the confusion over recent desertification

Written by Jane Baldwin

Inner Mongolia (Nei MengGu in Mandarin Chinese) lies right at the border with the nation of Mongolia, within mainland China (see Figure 1). Pictures of yurts, traditional pony races, Mongolian wrestlers, and above all rolling grasslands attract many Chinese tourists to this region each year (see Figure 2). In summer 2009, while I was an undergraduate studying Mandarin Chinese in Beijing, I too was drawn to the region. Tasked by my program with using my newly polished Mandarin to conduct a “social study” in an area outside Beijing, I found Inner Mongolia a foreign and fascinating locale to investigate.

Figure 1. Inner Mongolia, a Chinese province, lies just south of the nation of Mongolia. It is part of the arid lands that stretch across interior Asia. Source: adapted from Nasurt, 2011.

A group of my classmates and I took an overnight train from Beijing to Hohhot, and then a bus far into the countryside to our first yurt encampment. As expected, the great expanse of the scenery was stunning: the landscape stretched out before us, punctuated only by occasional small hills, yurts, and sheep. However, we were shocked to discover that the lush grasses in the pictures were reduced to dry scrub only an inch or two high (see Figure 3).

Figure 2. The Inner Mongolian pastoral ideal branded by Chinese tourist agencies. Source: http://www.chinadaily.com.cn/m/innermongolia/2015-04/10/content_20401697.htm
Figure 3. The state that many grasslands in Inner Mongolia are currently in or approaching following recent desertification. Source: http://www.theguardian.com/world/2015/apr/10/inner-mongolia-pollution-grasslands-herders

I was concerned by this difference and decided to focus my interviews with the local people on these environmental changes. The local nomadic herders informed me that desertification (shamohua in Mandarin, literally “change into desert”) had become a serious issue in this region over the past 20 years or so. One herder I interviewed recalled that as a teenager the grasses had reached as high as his horse’s flank, while now they extended no higher than its hoof. These observations raised many questions that my interviews could not firmly answer: What was the cause of these dramatic changes? Were the local people responsible for the degradation? Or was it caused by larger-scale climate variations outside of their control? And what would be the appropriate policy response to deal with the degradation while still respecting the people who had lived there for generations?

__________________________________________________________________________________

Since that summer, this suite of questions around deserts and desertification has inspired much of my study and research, both as an undergraduate and now as a PhD candidate in Atmospheric and Oceanic Sciences. My PhD research focuses broadly on understanding the climate of the arid and semi-arid regions across Asia that define the margins of these grasslands (see Figure 1). Home to the largest deserts outside the tropics, this region presents a number of interesting climate dynamics questions. However, through research, coursework, and personal reading I have also sought to understand the region from angles beyond the climatological, in particular geological, historical, and political. While spiraling in towards the desertification question, I have developed a mental narrative of this region and of how its climate has changed, and been controlled, over different periods of time.

Observing sediments and fossils, geologists have pieced together a record showing that arid Asia has varied greatly over geological timescales of millions of years. Fifty million years ago (50 Ma), what is now Central and northern East Asia was covered in warm, damp forest populated by ancient horses and rhino ancestors larger than modern elephants. A few theories exist for what spurred the relatively rapid (few-million-year-long) transition to the cool, dry climate we know today. Around this time, India’s collision with Eurasia was creating the colossal Tibetan Plateau and Himalayas. The longest-running theory for the formation of these deserts is that this newly risen topography blocked moisture from reaching Central and northern East Asia, drying the region. Climate modeling studies have indeed indicated that the Tibetan Plateau creates significant aridity outside the tropics in Asia. However, new research presents an alternative theory. A large inland sea on the western margin of Central Asia, called the Paratethys, was recently found to have retreated just prior to the region’s transition to an arid environment; the retreat of this moisture source may have played a dominant role in drying Central Asia. Which of these mechanisms (Tibetan Plateau uplift or the retreating Paratethys) was most important for the drying of Asia, and whether they might be linked, are still open and actively researched questions.

More recent environmental history (i.e. the past few thousand years) is recorded in tree rings. When trees are water-stressed, how much their trunks grow radially depends in large part on how much rainfall there is, so the widths of tree rings provide a proxy for historical drought and wet periods. The dry climate of this region over the past few thousand years, and the variations in precipitation recorded in these tree rings, are hypothesized to have played key roles in human history, the most dramatic example being the expansion of the Mongol Empire. Genghis Khan and the nomadic steppe tribes allied with him relied on horses for travel, sustenance, and warfare. Tree rings suggest that during the 13th century, when the Mongol Empire expanded to cover China, Central Asia, and parts of the Middle East and Europe, the region was warm and persistently wet; these climatic conditions favored high grassland productivity, supporting Mongol political and military power during this critical period. This is but one example of how climatic and historical changes are tightly linked in this water-stressed region.

Over the past hundred years, the clearest climatic trend on the global scale has been warming caused by anthropogenic carbon emissions, primarily CO2 released from burning fossil fuels. How this global signal will translate to the regional scale is still a topic of active research in the climate science community. The most recent UN Intergovernmental Panel on Climate Change (IPCC) report shows that warming is clearly predicted over Asia as carbon emissions continue to increase. However, there is little consensus among climate modeling studies regarding how precipitation will change over arid Asia. This uncertainty is concerning for an environment that is already exhibiting symptoms of increasing water-stress. Desertification or land degradation has occurred across the margins of arid Asia over the past few decades, including places as diverse as the former Soviet countries that exist in the Aral Sea drainage basin, Qinghai Province on the Tibetan Plateau, and of course Inner Mongolia. While the UN Convention to Combat Desertification has motivated countries to submit plans to fight this degradation, on-the-ground action has been slow and limited. Facing the double threat of ill-planned development and global warming, these delicate regions on the border of Asia’s great deserts are currently in a precarious position.

__________________________________________________________________________________

While my understanding of arid environments, and particularly their variability, has increased significantly since I first visited Inner Mongolia in the summer of 2009, the recent desertification of this region is still a puzzle for me and for the scientific community at large. Over the past decade, the Chinese government has tried a number of strategies to deal with desertification in Inner Mongolia. Citing overgrazing as the cause of the increased aridity, the government has resettled pastoralist nomads, who have grazed the steppe for thousands of years, into cities; since 2003, some 450,000 people have been resettled into urban areas in Inner Mongolia. Meanwhile, in the tradition of the great engineering emperors of yore, the Chinese government is supporting a “Great Green Wall” of trees planted to halt the expanding desert and decrease dust transport. By the project’s planned end in 2050, it is intended to stretch 4,500 km (2,800 miles) along the edge of China’s northern deserts, covering 405 million hectares, a truly massive endeavor.

Unfortunately, without knowing the root cause of the desertification or how this region will respond to ongoing global warming, it is difficult to predict whether these policies are appropriate. While the Chinese government points its finger at overgrazing, some experts believe that the government’s prior actions in this region (fencing land and supporting agriculture over pastoralism) and ongoing mining pollution are what have pushed the region away from a sustainable equilibrium and towards desertification. Adding fuel to the fire, ecologists and hydrologists wonder whether the Great Green Wall’s trees will grow successfully or just deplete the water supply further. Meanwhile, recent climate studies provide an alternative to these land-use-centric arguments, suggesting that non-local climatic causes such as global warming and a weakening East Asian monsoon may explain the increasing aridity.

In this quagmire of rapid environmental change and scientific uncertainty one thing is clear: it is critical for there to be a robust dialogue between scientists and policy makers for Inner Mongolia, and the dry climates in Asia at large, to have a chance at developing sustainably.

 


Jane is a PhD candidate in Princeton’s Atmospheric and Oceanic Sciences program, run jointly with NOAA’s Geophysical Fluid Dynamics Laboratory, where she is advised by Dr. Gabriel Vecchi. Her research employs a combination of dynamical climate models and earth observations to elucidate the ties between global and regional climate and to move towards useful predictions of climate change at regional levels.