Assessing the Utility of Food Certifications in Advancing Environmental Justice

Written by Shashank Anand, Hezekiah Grayer II, Anna Jacobson, and Harrison Watson

Sustainability is the notion that we should consume with caution, as the Earth is a delicately balanced ecosystem with limited natural resources. Social justice generally aims to eliminate disparities and inequities between discrete demographics. These include inequalities between persons of different socioeconomic status, race, gender, and sexual orientation. Environmental justice (EJ) intersects both of these movements: EJ is the notion that the ecological burdens of society should be shared equitably across communities. Historical trends suggest that as we expand, consume, pollute, and produce, the benefits and costs of industrialization are inequitably distributed. This inequality comes at the cost of poor health for those living in highly polluted areas. Inequitable distribution of pollutants has recently brought EJ to the center of political discourse due to its correlation with increased Covid-19 mortality and racially skewed disease outcomes.

Unfair treatment of workers at farms and manufacturing plants is a prime example of an injustice that ethical spending can aim to rectify. The misuse of pesticides, low worker wages, poor living conditions for farmers, and child labor are all sources of social and environmental injustices in food production. Socially conscious purchasing could be key in fighting these injustices. Academic institutions, which often purchase food en masse to serve thousands of individuals, have a sizable impact on humanity’s social and environmental footprint. Institutions like Princeton thus have a practical interest in reducing their footprint and a deontological obligation to mitigate their negative societal impact. 

Potential local food purchasing power of fourteen Michigan colleges and universities (Source: Michigan Good Food Work Group Report Series)

In general, it is difficult to assess the relative social and EJ impact of discrete products due to the inherently unquantifiable nature of justice. Certifications like Fair Trade and the Rainforest Alliance attempt to assuage buyers’ concerns by identifying and establishing environmentally just organizations. Certifications like USDA Organic and the Non-GMO Project endorse products and operations from an environmental sustainability standpoint. 

CASE STUDIES

Rainforest Alliance (RA) is an international NGO that provides certifications in sustainable agriculture, forestry, and tourism. RA seeks to “protect forests… and forest communities.” For farmers, the certification process involves site audits that check for compliance with the Rainforest Alliance Standards for Sustainable Agriculture. Standards include child labor protections and worker protection against the use of harmful pesticides listed in the Sustainable Agriculture Network Prohibited Pesticide List. RA Standards address economic and gender disparities on farms through the use of an “assess-and-address” approach. Farms are responsible for setting the goals that will mitigate the effects of “child labor, forced labor, discrimination, and workplace harassment and violence”. RA Standards also enforce implementation of a “salary matrix tool” for the collection of comprehensive wage data and identification of wage gaps. 

Support from RA has historically proven impactful, most notably on certified cocoa farms in Côte d’Ivoire, the world’s largest cocoa producer. A 2011 survey conducted by the Committee on Sustainability Assessment analyzed the impact of RA on the economic, environmental, and social dynamics of these cocoa farms. RA certification was shown to increase school attendance (noted as the percentage of children who have completed the appropriate number of grades for their age) by 392%, thereby reducing child labor; increase crop yields by 172%; and improve farm income by 356% compared to uncertified farms (see figures 4, 6, and 10 at this link). Despite these documented successes, there has been a history of exploitation of previous Standards on certified farms. In 2019, for example, pineapple farms in Costa Rica were cited for employing undocumented workers and using illegal agrochemicals despite RA restrictions. 

Fair Trade USA (FTU) is a certification that, much like RA, focuses on social justice and EJ. FTU espouses ideals of democratic and fair working conditions for workers. FTU employs an Impact Management System (IMS) towards these ends; the IMS is used to assess the social and economic impact of growers’ practices. FTU is distinct from its well-known former parent organization, Fairtrade International: the two split in 2012 over a dispute about certified growers’ company size. 

FTU implements a price floor, ensuring that if the market value of a product falls, certified producers are still guaranteed a minimum price, thus ensuring workers earn some baseline income. FTU also requires a small additional fee, the “Fair Trade Premium”, on top of the purchase price of the product. The premium is used to improve local infrastructure for the producers; how it is spent is decided democratically by workers at the farm. In a poor economy, Fair Trade products are likely to be pricier than their uncertified counterparts. In a thriving economy with high demand, this difference will be negligible (see figure 1 at this link). A 2009 case study of coffee production in Nicaragua found that many Fair Trade coffee producers still had trouble finding places to sell their coffee. In times of high coffee prices, producers found that they reaped little financial benefit from the Fair Trade label. 

The Non-GMO Project (NGP) certifies distributors and farms whose procedures align with “standards consumers expect.” Certification is obtained after evaluation of the presence of genetically modified organisms (GMOs) in produced foods. GMO crops are often bred to be more resistant to drought or pests, which may lead them to outcompete local crops and flora. Combined with the potentially unknown behavior of these nonnative crop variants and the risk of gene flow, e.g. through cross-pollination, many communities want to keep excessive GMO cultivation out of their neighborhoods. NGP upholds the long-standing Non-GMO Standard, which outlines requirements for companies looking to sport the butterfly label. These standards require closer coordination of cleaning and the transfer of products between storage facilities (termed “elevators”), as well as increased investment in process monitoring to account for the potential introduction of GMOs along the production process. NGP partners with third-party certification bodies (also known as technical administrators) that audit businesses and farms for compliance with all Non-GMO Standards. Application fees, as well as Non-GMO product premiums, contribute to the conservation of environmental health through the protection of genetic diversity in organic agriculture. 

USDA Organic was created by the Organic Foods Production Act (OFPA) of 1990, which mandated that the USDA develop federal-level regulations for organic food in the US. The certification was implemented in 2002, after 10 years of public debate, as a compulsory certification requiring producers and handlers with annual organic sales greater than $5,000 to discontinue the use of prohibited substances. To insulate the resulting policies from special interest groups, OFPA also instituted the National Organic Standards Board (NOSB), which includes 15 volunteers representing consumers, organic farmers/handlers, retailers, scientists, and environmental conservationists. A two-thirds majority of the NOSB is required to add a material to the National List of Allowed and Prohibited Substances (NLAPS). Third-party certifying agents certify a product as organic after confirming that the producer or handler has not used prohibited substances for three years. 

USDA Organic and a growing market for organic produce have resulted in high product premiums. Unfortunately, a booming market does not guarantee good wages, living standards, or fair treatment for farm labor. There are cases recorded where working conditions have worsened due to the heavy work and time demands of organic farming. Some new programs build on USDA Organic’s structure with additional focus on standards for animal welfare and worker fairness. Regenerative Organic Certification (ROC) is an example of such a program. It is too early to determine whether these certification programs will be successful or will earn the trust of the market.

DISCUSSION 

Consumer activism flourishes with effective metrics on desired qualities (e.g., EJ) to inform conscientious purchasing. Certification efficacy for social and EJ depends on two main questions: on a policy level, how relevant are the certifications’ guidelines to the social and EJ movement? In practice, how successfully are rules enforced; are audits thorough, unbiased, and based on clear criteria? These questions help us establish whether certifications actually impact procedure at the farm-level. Certifications lacking in the first quality risk being irrelevant to social and EJ, while certifications lacking in the second risk being inconsequential. 

The missions of certifications like RA and FTU to enable sustainable livelihoods for farmworkers and promote environmental stewardship are in line with core tenets of social and EJ. However, the auditing processes of these certifications have demonstrated weaknesses, as noted by the recent case of RA-certified pineapple farms in Costa Rica. Furthermore, the guidelines for these certifications may be poorly communicated to farm workers, as shown by a study from Valkila and Nygren on Nicaraguan Fair Trade-certified coffee farms. 

USDA Organic and NGP are more closely aligned with environmental sustainability than social or EJ, yet they have more streamlined auditing processes because sustainability can be more directly quantified (e.g., unit volume of water usage). USDA Organic, for example, strictly regulates pesticides and herbicides, thus protecting farm workers’ health. Prohibited chemicals in NLAPS include methyl bromide, sulfuryl fluoride, and phosphine (aluminum phosphide or magnesium phosphide), exposure to which can affect fetal development and can lead to irreversible damage. NGP, on the other hand, does not regulate chemical substances; on the contrary, the products it promotes forgo the health benefits associated with reduced pesticide use in farming GM crops. In general, many larger social justice themes (minimum wage, underage labor, unfair working conditions) are not addressed by these sustainability certifications. 

The cost of buy-in is one major obstacle for smaller distributors. For example, the harvest process for GMOs and Non-GMOs must be separated to prevent contamination, leading to more labor for farmworkers. Investigations check for use of USDA Organic’s prohibited substances for three years leading up to product harvest, a waiting period that may prove prohibitive to some smaller farms. These smaller farms may not be able to afford the fees of the certification process, or the costs of regulations/liability insurance as required by schools’ procurement offices. Interviews with local players in food distribution, however, alleviated these concerns: Ms. Linda Recine of Princeton Dining Services confirmed that many small farms have difficulties affording the certification label, but asserted that a network of farmers, larger distributors, and university support systems helps small businesses obtain necessary certifications and build a sustainable customer base. She cited a pilot conference hosted by the Princeton University Department of Finance and Treasury and Princeton University Central Procurement. This conference focused on woman-, veteran-, and minority-owned businesses; through the conference, Princeton offered to subsidize the first year of various certifications at no cost to the vendor. Outside help also proves paramount for obtaining expensive liability insurance: Ms. Recine says that many small farms may be able to get their goods onto campus by partnering with larger distributors. Jim Kinsel of Honeybrook Organic Farm stated that open communication with customers about the certification waiting period usually assuages their concerns about uncertified crops. 

Image of tomatoes being grown on a farm (Source: Canva Images)

The cost of buy-in means that many otherwise certifiable farms may lack a formal label. Additionally, if farms pursuing certification already employ environmentally just practices before they apply for the label, we may see biases that interfere with our ability to assess certification efficacy objectively. A recent meta-study confirmed that many reports investigating the efficacy of certifications did not control for possible selection bias. 

With certifications alone, we are left with an incomplete picture of ethical consumption. If EJ certifications rely on vague self-improvement, sustainability certifications are not as justice-relevant, and all certifications are audited by third parties whose reliability is hard to ascertain, is a certification stamp on a unit of packaging truly enough to assert that a product was ethically produced? The ethical consumer is caught between a rock and a hard place; incomplete information makes it impossible to gauge EJ using certification labels alone. We will need additional information from producers to rely more comfortably on the value of consumer certifications. 

The solution to these concerns may lie in local purchasing. Sarah Bavuso and Linda Recine of Princeton Dining Services emphasized the importance of forming relationships with producers, citing the value of allowing farmers to see the campus and of university officials taking trips to farms and production sites. This relationship allows Princeton to be more hands-on with its food and to intervene when questions of ethics arise. Indeed, a 2007 study suggests that forming relationships with local farms decreases the distance that products travel, allows for cooperative relationships with individual farmers, and introduces flexibility in verification processes. 

Decreasing food-miles through local purchasing may be a critical component of both sustainability and EJ: as food travels and the supply chain lengthens, more middlemen get involved, and there are more opportunities for injustices and unsustainable practices. Each border that food passes through serves as another regulatory vulnerability for the introduction of harmful pesticides and food contamination. At each stop on the road, food loses freshness, and the fossil fuels burned in transit emit greenhouse gases (GHGs). Additionally, laws and regulations are more easily ascertained locally: consumers are more likely to know the minimum wage and regulations on working conditions for farms near their own homes. 

Local farms may also be smaller and more sustainable than larger national chains. Mr. Kinsel claims that larger farms are more likely to cut corners in the name of profit. While Ms. Recine confirms that larger producers may be less inclined to act ethically, she states that these farms have “come a long way” towards humane and ethical behavior, largely thanks to students and universities vocally lobbying for causes that were important to them. Purchasing certified food that is also locally grown may address many of the concerns introduced by the information gap mentioned above. 

Image of vegetables displayed at an outdoor stand (Source: Canva Images)

CONCLUSION 

Rather than relying entirely on certifications like USDA Organic, a supply chain can be created where the university shares the risk of crop production under unpredictable hydroclimatic conditions with the local farming community. One realization of a more local supply chain is Community Supported Agriculture (CSA), where schools purchase a membership for a season and receive fixed volumes of freshly harvested produce from local farms. Students receive fresh and nutritious food from farms that abide by local regulations. Farmers get money from subscriptions upfront, allowing them to expand and invest early. Schools build working relationships with constituent farms and their management, creating a point-person on the farm grounds who can verify safe conditions for farmers. Many local farms in the Princeton area (like the Snapping Turtle Farm and the Cherry Grove Organic Farm) already have some of the same certifications as larger factory farms. 

A CSA supply chain would fit neatly into many residential colleges for small portions of salads or boiled eggs and meats. Non-perishable products like crackers and cereals could still be purchased from larger certified producers. In this supply chain, certifications are relied upon for goods that are difficult to buy from local producers. The local economy around the university is enhanced by the CSA program employed for fruits, vegetables, and meats. There are, of course, logistical questions to be resolved: a supply chain where crop proportions are not predetermined is quite different from the institutional status quo. The feasibility of such a supply chain will likely need to be vetted through a pilot program or a case study of other institutions implementing a similar program. CSAs have been implemented on some scale at schools like the University of Kentucky, Rutgers University, and the New Jersey Institute of Technology. We suggest schools start small: by implementing a CSA supply chain in an on-campus cafe or residential college. The program can be scaled up over time, after feasibility studies and conversations with local farmers. 

The feasibility of establishing a local supply chain will depend on how universities currently source their food. Ms. Bavuso indicated that many schools fall into one of two classes: self-operated schools, whose food procurement departments are university-run and in-house, and non-self-operated schools, whose food procurement is outsourced via contracts. Many schools employ some combination of these operations, with state schools being particularly strictly regulated via contracts (Aramark, University of Delaware; Sodexo, The College of New Jersey). Self-operated schools like Princeton will likely have more flexibility in vetting and choosing vendors. Non-self-operated schools aiming for social change will likely have to do so by lobbying distributors through the schools’ purchasing power or threatening to withdraw their business if practices are not improved. Not all schools will have the means to investigate each food product on their shelves: it will likely be useful to leverage an inter-school consortium for food procurement research (see the National Association of College & University Food Services), allowing procurement departments at different institutions to swap findings and relevant research. 

The authors of this article do not wish to claim that certifications are entirely ineffective in gauging the social and EJ of food procurement. But certifications are not a panacea for ethical supply chains. Universities relying solely on these certifications for assessing food safety and social and EJ are not doing due diligence when it comes to ethical spending. It may take additional effort to switch to a CSA-style supply chain like the one suggested above; but if institutions are serious about the values that they promote in their dining services brochures, this added effort will be well worth the improvement seen in the quality and justice of the campus food. 

Princeton’s president Christopher Eisgruber wrote in June of 2020: “As a University, we must examine all aspects of this institution — from our scholarly work to our daily operations—with a critical eye and a bias toward action. This will be an ongoing process, one that depends on concrete and reasoned steps[.]” The authors of this article believe that a CSA pilot program would be one such concrete step towards action, a step that would be directly in line with the larger themes of environmental and social justice that have become more pronounced in the societal collective consciousness during recent years. At the very least, it is the duty of university procurement departments to state the steps they intend to take to address inequity. Princeton’s recent Supplier Diversity Plan is one example of such an effort in that it aims to support more diverse-owned businesses. As entities with large economic impacts, universities do have the power to effect real societal change. 


Shashank Anand: I am a Ph.D. Candidate in the Department of Civil and Environmental Engineering, working with Prof. Amilcare Porporato. My research focuses on understanding the role of ecohydrological and geomorphological processes in the evolving landscape topography by analyzing process-based models and learning from the available observations.

Hezekiah Grayer II: I am a 2nd year PhD candidate in the Program in Applied and Computational Mathematics, where I am fortunate to be advised by Prof. Peter Constantin. My academic goals intersect fluid mechanics, plasma physics, and partial differential equations.

Anna Jacobson: I am a 3rd year PhD candidate in the department of Quantitative and Computational Biology. I am affiliated with the Andlinger Center for Energy and the Environment and the High Meadows Environmental Institute. For my thesis work, I study energy systems and environmental policy.

Harrison Watson: I am a Ph.D Candidate in the Department of Ecology and Evolutionary Biology working with Professors Lars Hedin, Rob Pringle, and Corina Tarnita. My work currently focuses on clarifying the forces that influence land carbon cycles using eastern and southern African savannas as a study system.

It’s Past Time for Princeton to Divest from Fossil Fuels

Written by Ryan Warsing of Divest Princeton

If you’re reading this, you probably don’t need to be persuaded that the planet is on fire, and we need to do something to put it out fast.  We see evidence all around us:  California is again in the throes of a record wildfire season, glaciers the size of Manhattan are sliding into the sea, and in some of the most densely populated parts of the world, massive cities are being swallowed by the tide.  There is little dispute that these disasters stem from our burning of fossil fuels, and that by most any measure, we are failing to prevent the worst.

(Sources: Nik Gaffney / Flickr; Pixabay; Don Becker, USGS / Flickr; CraneStation / Flickr – Creative Commons)

Meanwhile, in balmy Princeton, New Jersey, the university’s Carbon Mitigation Initiative (CMI) and Andlinger Center for Energy and the Environment have signed splashy agreements with BP and Exxon (respectively) to fund research into renewable fuels, carbon capture and storage, and other climate innovations.  Since 2000, these companies have pumped over $30 million into CMI and the Andlinger Center, with the latter recently extending its Exxon contract for another five years.  

To put it politely, we of Divest Princeton say these partnerships do more harm than good.  True, they may create new and valuable knowledge, but that isn’t really why they exist.  In one leaked exchange from 1998, Exxon representatives strategized about the need to “identify and establish cooperative relationships with all major scientists whose research in the field supports our position,” and to “monitor and serve as an early warning system for scientific development with the potential to impact on the climate science debate, pro and con.”

Taking this statement literally — and why shouldn’t we? — BP and Exxon’s support for Princeton is more than simple altruism.  It’s more than good PR.  Rather, it’s part of a years-long effort not to aid, but to manage climate research toward ends not in conflict with their extractive business model.  Tellingly, these do-gooder oil companies plan to increase production 35% by 2030.  This would be cataclysmic.

Their schemes are made possible by funding and power gifted by Princeton.  We cannot tolerate, let alone enable these activities any longer.  Not when they pose such obvious conflicts with our university’s core values and threaten our fellow students and faculty working around the world.  Princeton must stand up for itself.  How better than by divesting from fossil fuels?

The divestment movement has grown rapidly in recent years, with institutions like Georgetown University, Brown, Cornell, and Oxford recently joining its ranks.  Collective actions have taken a toll — Goldman Sachs says that divestment is partly to blame for widespread credit de-ratings in the coal industry, and Shell is on-record saying divestment will present “a material adverse effect on the price of our securities and our ability to access equity capital markets.”  Essentially, divestment works.

We argue that the moral imperative of divestment should be compelling enough on its own; if Princeton moved to divest and the markets didn’t budge an inch, at least then our conscience would be clean.  At least then we could call ourselves “sustainable” with a straight face and live honestly by our motto: “in the nation’s service, and the service of humanity.”  

Detractors maintain that any “demands” on Princeton’s endowment would constrain its ability to earn huge returns, depriving students of the financial support they need to prosper.  This is absurd.  Billion-dollar endowments like the Rockefeller Brothers Fund have demonstrated that divestment can be a net positive.  Fossil fuel stocks have also been declining for years.  It looks increasingly clear that an investor gains little “diversifying” in fossil fuel, and that the risks of divestment have been well overblown.  Shareholders — especially shareholders with a fiduciary responsibility like Princeton’s — should be looking for the exit.

In order to remain within 1.5°C of global warming by mid-century — the threshold at which the IPCC and Princeton’s own Sustainability Action Plan say “catastrophic consequences” will be unavoidable — the fossil fuel industry’s ambitious exploration and development will need to be mothballed.  Undrilled oil fields and unmined coal will become stranded assets, or dead weight on their companies’ books.  To have faith in these investments, Princeton must believe these stranded assets will actually go to use; in that case, Princeton ignores its own scientists and legitimizes the activities central to our climate crisis.

Video showing a progression of changing global surface temperature anomalies from 1951-2019. The average temperatures over 1951-1980 are the baseline shown in white. Higher than normal temperatures are shown in red and lower than normal temperatures are shown in blue. The final frame represents the 5 year global temperature anomalies from 2015-2019. Scale in degrees Celsius. (Source: Lori Perkins / NASA and NOAA)

Others have argued that regardless of donors’ ulterior motives, divesting would only leave good money and research on the table.  To these people, the “greenwashing” corporations seek from partnering with elite institutions is both inevitable and of little consequence compared to the novel scholarship their funding provides.  The catch here is that quality research and a morally invested endowment are not mutually exclusive.  There isn’t a rule saying our research must be funded by BP or Exxon — if Princeton truly valued this knowledge, it would channel its creative energies toward finding funding elsewhere.

“Elsewhere” could very easily be the university’s own wallet.  Princeton is quick to remind us it holds the biggest per-student endowment in the country.  The endowment today is a bit larger than $26 billion, roughly the size of Iceland’s GDP and larger than the GDPs of half the world’s countries.  In the last ten years alone, Princeton’s endowment has more than doubled.  In this light, the money needed to sustain current research is practically a rounding error.  If just a few Trustees put their donations together, they could recoup Exxon’s latest $5 million donation in under five seconds!

We tried to anticipate these doubts in our divestment proposal, which was given to Princeton’s administration last February.  Since then, we have met with Princeton’s Resources Committee and invited experts — former Committee Member Shannon Osaka, President of the Rockefeller Brothers Fund Stephen Heintz, and Stanford researcher Dr. Ben Franta — to help present our case.  Discussions will continue through the end of 2020, culminating in a forum with 350.org’s Bill McKibben in November.

As a reward for our persistence, the Resources Committee has indicated it might decide on our proposal by Christmas.  If it approves, the proposal goes to the Board of Trustees, and the clock starts over.  This, dear readers, is the “fast track.”

It has been demoralizing to watch Princeton, one of the world’s great centers of higher learning and a temple to empirical evidence, run interference for companies that have scorned the truth, knowingly endangered billions, and literally confessed to their ill intent.  From its byzantine system for proposing divestments to its arbitrary requirement saying divestment must take the form of complete dissociation (a prohibitively high bar), Princeton’s strategy is to frustrate and outlast causes like ours.  Most of the time, it succeeds.

But our cause is different from the others.  With climate change, waiting is simply not an option.  The immovable object will meet an unstoppable force, and the unstoppable force will win.

The longer we delay, the longer we allow fossil fuel companies to weaponize Princeton’s gravitas, spreading disinformation and quack science while purporting to be part of “the solution.”  Until Princeton inevitably divests from these bad actors, we will continue to withhold our donations, continue to protest, and continue to organize, fighting fire with fire.


Divest Princeton is a volunteer movement of Princeton students, alumni, parents, faculty, and staff.  Sign their “No Donations Until Divestment” petition and learn more here.

Integrating Renewable Energy Part 2: Electricity Market & Policy Challenges

Written by Kasparas Spokas

The rising popularity and falling capital costs of renewable energy make its integration into the electricity system appear inevitable. However, major challenges remain. In part one of our ‘integrating renewable energy’ series, we introduced key concepts of the physical electricity system and some of the physical challenges of integrating variable renewable energy. In this second installment, we introduce how electricity markets function and the policies relevant to renewable energy development.

Modern electricity markets were first mandated by the Federal Energy Regulatory Commission (FERC) in the United States at the turn of the millennium to allow market forces to drive down the price of electricity. Until then, most electricity systems were managed by regulated, vertically-integrated utilities. Today, these markets serve two-thirds of the country’s electricity demand (Figure 1), and the price of wholesale electricity in these regions is historically low due to cheap natural gas prices and subsidized renewable energy deployment.

The primary objective of electricity markets is to provide reliable electricity at least cost to consumers. This objective can be further broken down into several sub-objectives. The first is short-run efficiency: making the best of the existing electricity infrastructure. The second is long-run efficiency: ensuring that the market provides the proper incentives for investment in electricity system infrastructure to guarantee that future electricity demand can be satisfied. Other objectives are fairness, transparency, and simplicity. This is no easy task; there is uncertainty in both the supply and demand of electricity, and many physical constraints need to be considered.

While the specific structure of electricity markets varies slightly by region, they all provide a competitive market structure where electricity generators can compete to sell their electricity. The governance of these markets can be broken down into several actors: the regulator, the board, participant committees, an independent market monitor, and a system operator. FERC is the regulator for all interstate wholesale electricity markets (all except ERCOT in Texas). In addition, reliability standards and regulations are set by the North American Electric Reliability Council (NERC), to which FERC granted authority in 2006. Lastly, markets are operated by independent system operators (ISOs) or Regional Transmission Organizations (RTOs) (Figure 1). In tandem, regulations set by FERC, NERC, and system operators drive the design of wholesale markets.

Wholesale energy market ISO/RTO locations (colored areas) and vertically-integrated utilities (tanned area). Source: https://isorto.org/

Before we get ahead of ourselves, let’s first learn about how electricity markets work. A basic electricity market functions as such: electricity generators (i.e. power plants) bid to generate an amount of electricity into a centralized market. In a perfectly competitive market, the price of these bids is based on the costs of an individual power plant to generate electricity. Generally, costs are grouped by technology and organized along a “supply stack” (Figure 2). Once all bids are placed, the ISO/RTO accepts the cheapest assortment of generation bids that satisfies electricity demand while also meeting physical system and reliability constraints (Figure 2a). The price of the most expensive accepted bid becomes the market-clearing price and sets the price of electricity that all accepted generators receive as compensation (Figure 2a). In reality it is a bit more complicated: the ISO/RTOs operate day-ahead, real-time, and ancillary services markets and facilitate forward contract trading to better orchestrate the system and lower physical and financial risks.

Figure 2. Schematics of electricity supply stacks (a) before low natural gas prices, (b) after natural gas prices declined, (c) after renewable deployment.
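To make the clearing rule concrete, here is a minimal sketch in Python of uniform-price, merit-order clearing with made-up bids (the technology names and numbers are purely illustrative, not from any actual market): bids are sorted by price, accepted until demand is met, and the most expensive accepted bid sets the price paid to every accepted generator. Adding a near-zero-cost renewable bid to the same stack shows how settlement prices can fall, as in Figure 2c.

```python
# Minimal uniform-price, merit-order clearing sketch (all numbers are illustrative).
def clear_market(bids, demand_mw):
    """bids: list of (name, quantity_mw, price_usd_per_mwh). Returns (accepted_bids, clearing_price)."""
    accepted, remaining, clearing_price = [], demand_mw, 0.0
    for name, qty, price in sorted(bids, key=lambda b: b[2]):  # cheapest bids first
        if remaining <= 0:
            break
        take = min(qty, remaining)          # accept only what is still needed
        accepted.append((name, take, price))
        clearing_price = price              # most expensive accepted bid sets the uniform price
        remaining -= take
    return accepted, clearing_price

# Hypothetical supply stack: (technology, MW offered, $/MWh bid)
stack = [("nuclear", 300, 10), ("gas_combined_cycle", 400, 25),
         ("coal", 400, 30), ("gas_peaker", 200, 80)]

_, price_without_wind = clear_market(stack, demand_mw=1200)
_, price_with_wind = clear_market(stack + [("wind", 300, 1)], demand_mw=1200)
print(price_without_wind, price_with_wind)  # 80 vs 30: cheap wind pushes the stack right and lowers the price
```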

Because real electricity markets are not completely efficient and competitive (due to a number of reasons), some regions have challenges providing enough incentives for the long-run investment objective. As a result, several ISO/RTOs have designed an additional “capacity market.” In capacity markets, power plants bid for the ability to generate electricity in the future (1-3 years ahead). If the generator clears this market, it will receive extra compensation for the ability to generate electricity in the future (regardless of whether it is called upon to generate electricity) or will face financial penalties if it cannot. While experts continue to debate the merits of these secondary capacity markets, some ISO/RTOs argue capacity markets provide the necessary additional financial incentives to ensure a reliable electricity system in the future.

Sound complicated? It is! Luckily, ISO/RTOs have sophisticated tools to continuously model the electricity system and orchestrate the purchasing and transmission of wholesale electricity. Two key features of electricity markets are time and location. First, market clearing prices are time dependent because of continuously changing demand and supply. During periods of high electricity demand, prices can rise because more expensive electricity generators are needed to meet demand, which increases the settlement price (Figure 2a). In extreme cases, these are referred to as price spikes. Second, market-clearing prices are regional because of electricity transmission constraints. In regions where supply is low and the transmission capacity to import electricity from elsewhere is limited, electricity prices can increase even more.

Several recent developments have complicated the economics of generating electricity in wholesale markets. First, low natural gas prices and the greater efficiency of combined cycle power plants have resulted in low electricity bids, restructuring the supply stack and lowering market settlement prices (Figure 2b). Second, renewable power plants, which have almost-zero operating costs, introduce almost-zero electricity market bids. As such, renewables fall at the beginning of the supply stack and push other technologies towards the right (higher-demand periods that are less utilized), further depressing settlement prices (Figure 2c). A recent study by the National Renewable Energy Laboratory expects these trends to continue with increasing renewable deployment.

In combination, these developments have reduced revenues and challenged the operation of less competitive generation technologies, such as coal and nuclear energy, and elicited calls for government intervention to save financial investments. While the shutdown of coal plants is welcome news for climate advocates, nuclear power provided 60% of the U.S. carbon-free electricity in 2016. Several states have already instated credits or subsidies to prevent these low-emission power plants from going bankrupt. However, some experts argue that the retirement of uneconomic resources is a welcome indication that markets are working properly.

As traditional fossil-fuel power plants struggle to remain in operation, the development of new renewable energy continues to thrive. This development has been aided by both capital cost reductions and federal- and state-level policies that provide out-of-market economic benefits. To better achieve climate goals, some have argued that states need to write policies that align with wholesale market structures. Proposed mechanisms include in-market carbon pricing, such as a carbon tax or stronger cap-and-trade programs, and additional clean-energy markets. Until now, however, political economy constraints have limited policies to weak cap-and-trade programs, investment and production tax credits, and renewable portfolio standards.

While renewable energy advocates support such policies, system operators and private investors argue these out-of-market policies could potentially distort wholesale electricity markets by suppressing prices and imposing regulatory risks on investors. Importantly, they argue that this leads to inefficient resource investment decisions and reduced competition that ultimately increases costs for consumers. As a result, several ISO/RTOs are attempting to reform electricity capacity market rules to satisfy these complaints but are having difficulty finding a solution that satisfies all stakeholders. How future policies will be dealt with by FERC, operators and stakeholders remains to be resolved.

As states continue to enact new renewable energy mandates, and as technologies that are not yet well integrated with wholesale markets, such as battery storage, continue to evolve and show promise, wholesale market structures and policies will need to adapt. In the end, the evolution of electricity market rules and policies will depend on a complex interplay between technological innovation, stakeholder engagement, regulation, and politics. Exciting!

 

Kasparas Spokas is a Ph.D. candidate in the Civil & Environmental Engineering Department and a policy-fellow in the Woodrow Wilson School of Public & International Affairs at Princeton University. Broadly, he is interested in the challenge of developing low-emissions energy systems from a techno-economic perspective. Follow him on Twitter @KSpokas.

Integrating Renewable Energy Part 1: Physical Challenges

Written by Kasparas Spokas

Meeting climate change mitigation targets will require rapidly reducing greenhouse gas emissions from electricity generation, which is responsible for a quarter of all U.S. greenhouse gas emissions. The prospect of electrifying other sectors, such as transportation, further underscores the necessity to reduce electricity emissions to meet climate goals. To address this, much attention and political capital have been spent on developing renewable energy technologies, such as wind or solar power. This is partly because recent reductions of the capital costs of these technologies and government incentives have made this strategy cost-effective. Another reason is simply that renewable energy technologies are popular. Today, news articles about falling renewable energy costs and increasing renewable mandates are not uncommon.

While capital cost reductions and popularity are key to driving widespread deployment of renewables, there remain significant challenges for integrating renewables into our electricity system. This two-part series introduces key concepts of electricity systems and identifies the challenges and opportunities of integrating renewables.

Figure 1. Schematic of the physical elements of electricity systems. Source: https://www.eia.gov/energyexplained/index.php?page=electricity_delivery

What are electricity systems? Physically, they are composed of four main interacting elements: electricity generation, transmission grids, distribution grids, and end users (Figure 1). In addition to the physical elements, regulatory and governance structures guide the operation and evolution of electricity systems (these are the focus of part two in this series). These include the U.S. Federal Energy Regulatory Commission (FERC), the North American Electric Reliability Council (NERC), and numerous state-level policies and laws. The interplay between the physical and regulatory elements has guided electricity systems to where they are today.

In North America, the electricity system is segmented into three interconnected regions (Figure 2). These regions are linked by only a few low-capacity transmission wires and often operate independently. These regions are then further segmented into areas where independent organizations operate wholesale electricity markets and areas where federally-regulated vertically-integrated utilities manage all the physical elements (Figure 2). Roughly two-thirds of the U.S. electricity demand is now located in wholesale electricity markets. Lastly, some of these broad areas are further subdivided into smaller balancing authorities that are responsible for supplying electricity to meet demand under regulations set by FERC and NERC.

Figure 2. Left: North American Electric Reliability Corporation Interconnections. Right: Wholesale market areas (colored area) and vertically-integrated utilities areas (tanned area). Source: https://www.energy.gov/sites/prod/files/oeprod/DocumentsandMedia/NERC_Interconnection_1A.pdf & https://isorto.org/

Electricity systems’ main objective is to orchestrate electricity generation, transmission and distribution to maintain instantaneous balance of supply and continuously changing demand. To maintain this balance, the coordination of electricity system operations is vital. Electricity systems need to provide electricity where and when it is needed.

Historically, electricity systems have been built to suit conventional electricity generation technologies, such as coal, oil, natural gas, nuclear, and hydropower. These technologies rely on fuel that can be transported to power plants, allowing them to be sited in locations where electricity demand is present. The one exception is hydropower, which requires that plants are sited along rivers. In addition, the timing of electricity generation at these power plants can be controlled. The ability to control where and when electricity is generated simplifies the process by which an electricity system is orchestrated.

Enter solar and wind power. These technologies lack the two features of conventional electricity generation technologies, the ability to control where and when to generate electricity, and make the objective of instantaneously balancing supply and demand even more challenging. For starters, solar and wind technologies are dependent on natural resources, which can limit where they are situated. The areas that are best for sun and wind do not always coincide with where electricity demand is highest. As an example, the most productive region for on-shore wind stretches along a “wind-belt” through the middle of the U.S. (Figure 3). For solar, the sparsely populated southwest region presents the most attractive sunny skies (Figure 3). As of now, long-distance transmission infrastructure to transport electricity from renewable resource-rich regions to high electricity demand regions is limited.

Figure 3. Maps of wind speed (left) and solar energy potential (right) in the U.S. Source: https://www.nrel.gov/

In addition, the timing of electricity generation from wind and solar cannot be controlled: solar panels only produce electricity when the sun is shining and wind turbines only function when the wind is blowing. Therefore, the scaling up of renewables alone would result in instances where supply of renewables does not equal customer demand (Figure 4). When renewable energy production suddenly drops (due to cloud cover or a lull in wind), the electricity system is required to coordinate other generators to quickly make up the difference. In the inverse situation where renewable energy generation suddenly increases, electricity generators often curtail the electricity to avoid dealing with the variability. The challenge of forecasting how much sun and wind there will be in the future adds more uncertainty to the enterprise.

Figure 4. Electricity demand and wind generation in Texas. The wind generation is scaled up to 100% of demand to emphasize possible supply-demand mismatches. Source: http://www.ercot.com/gridinfo/generation

A well-known challenge in solar-rich regions is the “duck-curve” (Figure 5). The typical duck-curve (named after the fact that the curve resembles a duck) depicts the electricity demand after subtracting the amount of solar generation at each hour of the day. In other words, the graph depicts the electricity demand that needs to be met with power plants other than solar, called “net-load.” During the day, the sun shines and solar panels generate electricity, resulting in low net-loads. However, as the sun sets and people turn on electric appliances after returning home from work, the net load increases quickly. Electricity systems often respond by calling upon natural gas power plants to quickly ramp up their generation. Unfortunately, natural gas power plants that can quickly increase their output are less efficient and have higher emission rates than slower natural gas power plants.

 

Figure 5. The original duck-curve presented by the California Independent System Operator. Source: http://www.caiso.com/
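As a concrete illustration of the net-load arithmetic behind the duck curve, here is a small Python sketch with made-up hourly demand and solar profiles (the numbers are purely illustrative, not CAISO data): net load is simply demand minus solar output at each hour, and the steep evening ramp appears as solar falls off while demand rises.

```python
# Net load = demand minus solar generation, hour by hour (illustrative numbers only).
demand_mw = [580, 560, 550, 545, 555, 590, 650, 700, 720, 730, 735, 740,
             745, 740, 735, 740, 780, 850, 900, 880, 820, 750, 680, 620]
solar_mw  = [  0,   0,   0,   0,   0,  10,  60, 150, 250, 330, 380, 400,
             390, 360, 300, 220, 120,  40,   5,   0,   0,   0,   0,   0]

net_load_mw = [d - s for d, s in zip(demand_mw, solar_mw)]

# Midday "belly" of the duck and the evening ramp that dispatchable plants must cover:
print("Minimum net load (midday):", min(net_load_mw), "MW")
print("Steepest hourly net-load increase:",
      max(b - a for a, b in zip(net_load_mw, net_load_mw[1:])), "MW/h")
```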

These challenges result in economic costs. A study about California concluded that increasing renewable deployment could result in only modest emission reductions at very high abatement costs ($300-400/ton of CO2). This is because the added variability and uncertainty of more renewables will require higher-emitting and quickly-ramping natural gas power plants to balance sudden electricity demand and supply imbalances. In addition, more renewable power will be curtailed in order to maintain stability (Figure 6), reducing the return on investment and increasing costs.

Figure 6. Renewable curtailment (MWh) and cumulative solar photovoltaic (PV) and wind power capacity in California from 2014 to 2018. Source: CAISO

Although solar and wind power do pose these physical challenges, technological advances and electricity system design enhancements can facilitate their integration. Several key strategies for integrating renewables will be: the development of economic energy storage that can store energy for later use, demand response technologies that can help consumers reduce electricity demand during periods of high net-load, and expansion of long-distance electricity transmission to transport electricity from natural resource (sun and wind) rich areas to electricity demand areas (cities). Which solutions succeed will depend on the interplay of future innovation, state and federal incentives, and electricity market design and regulation improvements. As an example, regulations that facilitate long-distance electricity transmission could significantly reduce technical challenges of integrating renewables using current-day technologies. To ensure efficient integration of renewable energy, regulatory and energy market reform will likely be necessary. For more about this topic, check out part two of our series here!

 

Kasparas Spokas is a Ph.D. candidate in the Civil & Environmental Engineering Department and a policy-fellow in the Woodrow Wilson School of Public & International Affairs at Princeton University. Broadly, he is interested in the challenge of developing low-emissions energy systems from a techno-economic perspective. Follow him on Twitter @KSpokas.

Carbon Capture and Sequestration: A key player in the climate fight

Written by Kasparas Spokas and Ryan Edwards

The world faces an urgent need to drastically reduce climate-warming CO2 emissions. At the same time, however, reliance on the fossil fuels that produce CO2 emissions appears inevitable for the foreseeable future. One existing technology enables fossil fuel use without emissions: Carbon Capture and Sequestration (CCS). Instead of allowing CO2 emissions to freely enter the atmosphere, CCS captures emissions at the source and disposes of them at a long-term storage site. CCS is what makes “clean coal” – the only low-carbon technology promoted in President Donald Trump’s new Energy Plan – possible. The debate around the role of CCS in our energy future often includes questions such as: why do we need CCS? Can’t we simply replace fossil fuels with renewables? Where can we store CO2? Is storage safe? Is the technology affordable and available?

Source: https://saferenvironment.wordpress.com/2008/09/05/coal-fired-power-plants-and-pollution/

The global climate-energy problem

The Paris Agreement called the globe to action: limit global warming to 2°C above pre-industrial temperatures. To reach this goal, CO2 and other greenhouse gas emissions need to be reduced by at least 50% in the next 40 years and reach zero later this century (see Figure 1). This is a challenging task, especially since global emissions continue to increase, and existing operating fossil fuel wells and mines contain more than enough carbon to exceed the emissions budget set by the 2°C target.

Fossil fuels are abundant, cheap, and flexible. They currently fuel around 80% of the global energy supply and create 65% of greenhouse gas emissions. While renewable energy production from wind and solar has grown rapidly in recent years, these sources still account for less than 2.1% of global energy supply. Wind and solar also face challenges in replacing fossil fuels, such as cost and intermittency, and cannot replace all fossil fuel-dependent processes. The other major low-carbon energy sources, nuclear and hydropower, face physical, economic, and political constraints that make major expansion unlikely. Thus, we find ourselves in a dilemma: fossil fuels will likely remain integral to our energy supply for the foreseeable future.

Figure 1: Global CO2 emissions (billion tonnes of CO2 per year): historical emissions, the emission pathway implied by the current Paris Agreement pledges, and a 2°C emissions pathway (RCP2.6) (Sources: IIASA & CDIAC; MIT & UNFCCC; IIASA)

CO2 storage and its role in the energy transition

CCS captures CO2 emissions from industrial sources (e.g. electric power plants) and transports them, usually by pipeline, to long-term storage sites. The ideal places for CO2 sequestration are porous rock formations more than half a mile below the surface. (Target rocks are filled with water, but don’t worry, it’s saltwater, not freshwater!) Chosen formations are overlain, or “capped,” by impermeable caprocks that do not allow fluid to flow through them. The caprocks effectively trap buoyant CO2 in the target rocks (see Figure 2).

Figure 2: Diagram of a typical geological CO2 storage site (Source: Global CCS Institute)

Scientists estimate that suitable rock formations have the potential to store more than 1,600 billion tonnes of CO2. This amounts to 70 years of storage for current global emissions from capturable sources (which are 50% of all emissions). Large-scale CCS could serve as a “bridge,” buying time for carbon-free energy technologies to develop to the stage where they are economically and technically ready to replace fossil fuels. CCS could even help us increase the amount of intermittent renewable energy by providing a flexible and secure “back-up” with low emissions. Bioenergy combined with CCS (BECCS) can also deliver “negative emissions” that may be needed to stabilize the climate. Furthermore, industrial processes such as steel, cement, and fertilizer production have significant CO2 emissions and few options besides CCS to reduce them.
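As a back-of-envelope check on that 70-year figure (a sketch only; the annual-emissions value below is our own assumption, not a number given above), dividing the estimated storage capacity by the capturable share of annual emissions gives roughly the same result:

```python
# Rough consistency check of the "~70 years of storage" estimate (illustrative assumptions).
storage_capacity_gt = 1600     # >1,600 billion tonnes of CO2 storage potential (cited above)
capturable_fraction = 0.5      # capturable sources are ~50% of all emissions (cited above)
annual_emissions_gt = 45       # ASSUMPTION: total global CO2 emissions, Gt per year (not stated above)

years_of_storage = storage_capacity_gt / (annual_emissions_gt * capturable_fraction)
print(round(years_of_storage), "years")   # ~71 years, in line with the ~70 years quoted
```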

In short, CCS is a crucial tool for mitigating the worst effects of global warming while minimizing disruption to our existing energy infrastructure and buying time for renewables to improve. Most proposed global pathways to achieve our targets include large-scale CCS, and the United States’ recently released 2050 decarbonization strategy includes CCS as a key component.

While our summary makes CCS seem like an obvious technology to implement, important questions about safety, affordability, and availability remain.

 

Is CCS Safe?

For CCS to contribute substantially to global emissions reduction, huge amounts of emissions must be stored underground for hundreds to thousands of years. That’s a long time, which means the storage must be very secure. Some worry that CO2 might leak upward through caprock formations and infiltrate aquifers or escape to the atmosphere.

But evidence shows that CO2 can be safely and securely stored underground. For example, the Sleipner project has injected almost 1 million tonnes of CO2 per year under the North Sea for the past 20 years. (For scale, that’s roughly a quarter of the emissions from a large coal power plant.) The oil industry injects even larger amounts of CO2, approximately 20 million tonnes per year, into various geological formations in the United States without issue in enhanced oil recovery operations to increase oil production. Indeed, the oil and gas deposits we currently exploit demonstrate how buoyant fluids (like CO2) can be securely stored in the subsurface for a very long time.

Still, there are risks and uncertainties. Trial CO2 injections operate at much lower rates than will be needed to meet our climate targets. Higher injection rates require pressure management to prevent the caprock from fracturing and, consequently, the CO2 from leaking. The CO2 injection wells and any nearby oil and gas wells also present possible leakage pathways from the subsurface to the atmosphere (although studies suggest this is likely to be negligible). Leading practices in design and maintenance can minimize well leakage risks.

Subsurface CO2 storage has risks, but experience suggests the risks can be mitigated. So, if CCS has such promise for addressing our climate-energy problem, why has it not been widely implemented?

 

The current state of CCS

CCS development has lagged, and deployment remains far from the scale required to meet our climate targets. Only a handful of projects have been built over the past decade. Why? High costs and a lack of economic incentives.

Adding CCS to coal and gas-fired electricity generation plants has large costs (approximately doubling the upfront cost of a new plant using current technology). Greenhouse gases are free (or cheap) to emit in most of the world, which means emitters have no reason to make large investments to capture and store their emissions. In order to incentivize industry to invest in CCS, we would need to implement a strong carbon price, which is politically unpopular in many countries. (There are exceptions – Norway’s carbon tax incentivized the Sleipner project.) In the United States, the main existing economic incentive for capturing CO2 is for enhanced oil recovery operations. However, the demand for CO2 from these operations is relatively small, geographically localized, and fluctuates with the oil price.

Inconsistent and insufficient government policies have thwarted significant development of CCS (the prime example being the UK government’s last-minute cancellation of CCS funding). Another challenge will be ownership and liability of injected CO2. Storage must be guaranteed for long timeframes. Government regulations clarifying liability, long-term responsibility for stored CO2, and monitoring and verification measures will be required to satisfy investors.

 

The future of CCS

The ambitious target of the Paris Agreement will require huge cuts in CO2 emissions in the coming decades. The targets are achievable, but probably not without CCS. Thus, incentives must increase, and costs must decrease, for CCS to be employed on a large scale.

As with most new technologies, CCS costs will decrease as more projects are built. For example, the Petra Nova coal plant retrofit near Houston, a commercial CCS project for enhanced oil recovery that was recently completed on time and on budget, is promising for future success. New technologies also have great potential: a pilot natural gas electricity generation technology promises to capture CO2 emissions at no additional cost. A technology that could capture CO2 from power plant emissions while also generating additional electricity is also in the works.

Despite its current troubles, CCS is an important part of solving our energy and climate problem. The recent United States election has created much uncertainty about future climate policy, but CCS is one technology that could gain support from the new administration. In July 2016, a bipartisan group of senators introduced a bill to support CCS development. If passed, this bill would satisfy Republican goals to support the future of fossil fuel industries while helping the United States achieve its climate goals. Strong and stable supporting policies must be enacted by Congress – and governments around the world – to help CCS play its key role in the climate fight.

 

Kasparas Spokas is a Ph.D. candidate in the Civil & Environmental Engineering Department at Princeton University studying carbon storage and enhanced oil recovery environments. More broadly, he is interested in studying the challenges of developing low-carbon energy systems from a techno-economic perspective. Follow him on Twitter @KSpokas

 

Ryan Edwards is a 5th year PhD candidate in Princeton’s Department of Civil & Environmental Engineering. His research focuses on questions related to geological carbon storage, hydraulic fracturing, and shale gas. He is interested in finding technical and policy solutions to the energy-climate problem. Follow him on Twitter @ryanwjedwards.

The Case for Historic Buildings: Lessons on balancing human development and sustainability

Written by Isabel Morris

We need quality buildings to safely house our schools, hospitals, offices, and our homes. We also live in a world with limited resources for constructing and operating new buildings, which means we need buildings that are sustainable and resilient in addition to being safe and functional.

Most cities facing this challenge are full of underutilized historic buildings and sites with cultural, social, economic, and technological value. These historic places are precisely the solution required in growing cities, and they have surprising economic and environmental benefits.

Newly opened 20 Washington Rd on Princeton University’s campus: adaptive reuse of existing buildings in the Social Sciences neighborhood. (Source: Author)

Since the 1987 Brundtland Commission’s Report, “Our Common Future,” sustainability has been defined as the ability to meet the world’s current needs without compromising our ability to meet them in the future. This sustainable development, in construction and civil engineering, manifests itself in the environments we build and inhabit: cities. Here, perhaps especially, it is important to balance a building’s quality of life improvements with its environmental and social consequences. From “mega tall” skyscrapers, to slums, to the infrastructure that connects them, cities can be catalysts for economic opportunity, industry, and innovative constructions. Historic buildings are a tangible recording of a city’s story and can teach us not only about our history and culture, but also about sustainability.

Some historic buildings: 7th St and Indiana Ave, NW in Washington DC, Rome’s Colosseum, the Roman Via Appia, and Romania’s Sarmizegetusa Regia. (Source: Author)

As catalyzing drivers of development, cities seem to be in direct opposition with historic structures. Cities need buildings that are safe, resilient, efficient, and accessible…but how? What happens to old buildings that stand in the way of new projects? How do we measure and balance the value of historic buildings against the value of progress and modern sustainable building practices? The momentum of development and emerging green technologies drives cities to build for the future. At first glance, run-down historic buildings lacking modern features (like adequate steel reinforcement or airtight window frames) seem to stand in the way of city and human development, where it is much easier to opt for cheaper, faster, and larger new buildings than to invest in an existing one.

Why consider historic structures? Historic buildings can be buildings of any style, construction method, period, or function; important historical sites in the US range from the sites of the 1969 Stonewall uprising to 12th century Acoma Pueblo. Most of the world’s historic buildings and sites are protected by legislation and active conservation organizations, which recognize the invaluable artistic, historical, social, and scientific importance of these places. In addition to these less tangible values, heritage structures have a proven record of longevity and resilience in the face of two millennia (or more) of natural and anthropogenic hazards. Historic buildings are fascinating because they function as both sociocultural bulwarks and priceless repositories of technological advancements. Many of the world’s historic sites are “good” buildings that can teach us important lessons about sustainability and building construction.

By “good” buildings, we can mean a variety of things. In the most basic sense, a good building is one that physically serves its purpose (i.e., to physically encompass and support a hospital). From different perspectives, “good” collects more qualifications: the building’s function must be fulfilled attractively, efficiently, reliably, safely, and/or inclusively. Good buildings become even better when they serve their purpose and carry additional features, like full ADA accessibility, cultural significance, or LEED green building credits. Ideally, sustainable buildings and good buildings are the same. In reality, though, issues like short-term (rather than long-term) economic thinking can deepen the divide between “good” functional buildings and holistically good (and sustainable) buildings.

I argue that sustainable development can embrace the lessons and presence of historic buildings, with positive environmental, social, and economic implications. In other words, the best development solution is not to destroy a historic building and replace it with a new, perhaps exemplary, green building.

The UN’s 11th Sustainable Development Goal deals directly with the challenges facing cities (see also SDG 11 and SD: Cities). Recognizing both the explosion of urban populations (according to the Population Reference Bureau, 70% of the world population will be urban by 2050) and the humanistic value of heritage buildings and sites, the goal reads:

Goal 11: Make cities inclusive, safe, resilient and sustainable, including “strengthen efforts to protect and safeguard the world’s cultural and natural heritage.”

These four hallmarks (inclusive, safe, resilient, and sustainable) can be used to understand the various arguments in support of conservation and reuse of historic buildings.

Inclusive

There is a large body of work establishing the connections between heritage sites and humanity’s collective memory, or shared identity (see, for instance, a search of “collective memory” in the ICOMOS publications, or on Google Scholar). By definition, collective memory is an inclusive phenomenon. Historical sites are physical witnesses to shared heritage in the history and places that bind us together as humans. Our own stories can be shared and understood through physical places and spaces. Less abstractly, the acts of preservation, from documentation to regular maintenance, necessarily employ and involve entire communities (as in proven asset-based community development initiatives). ICOMOS guidelines exist for a project’s community engagement: for example, the Getty Conservation Institute recently completed a project on the participatory conservation of the Kasbah of Taourirt that relied on developing and utilizing local capacity in repair, technology, and documentation. Since heritage sites are rarely privately owned, we are all stakeholders of these resources and involved in decision-making and use of these sites.

Safe

Vacant buildings are unsafe, and in many cities those vacant buildings are also historic. The correlation between increased crime and the number of vacant properties has been established in the US. In fact, by using buildings that already exist within cities and reducing rates of vacancy, historic buildings can both make cities safer and counteract urban sprawl (for example, see this excellent post on Sense and Sustainability). Safe cities, therefore, can be cities that embrace the potential and intrinsic value of their heritage buildings.

Resilient

In an age of urgent demand for resilient cities that can respond to increasing natural and man-made hazards (for example, rising earthquake, flooding, and fire risks in Seattle), we can learn invaluable lessons from heritage buildings that remain standing after 200, 300, 1500, 2000, or even 3200 years. The fact that these buildings have withstood assault on every front and remain stable speaks not only to the ingenuity of ancient builders but also to the resilience of these structures. Some ancient constructions intentionally dissipate earthquake loadings better than some modern buildings: compare the stacked drum columns of seismically active Greece to the monolithic columns of less-seismically active Rome. Because of their inherent resiliency, historic buildings do not necessarily require retrofitting and structural modification; like all buildings, historic buildings depend on regular maintenance for their longevity. Structurally safe and resilient historic buildings, with regular maintenance, can be more sustainable than new construction by eliminating the energy and waste involved in construction, use, and demolition of an entirely new building.

Sustainable

“Historic buildings are inherently sustainable.” So begins the Whole Building Design Guide, a knowledge portal for practitioners published by the National Institute of Building Sciences. The greatest advantage of historic buildings in balancing sustainability and human development is, in fact, their inherent sustainability. These buildings can be adapted to a variety of new uses, whether the project is commercial, residential, or for public use. Not only does adaptive reuse of an existing historic building eliminate construction of a new building, it also eliminates the accompanying construction and demolition waste. It is important to consider the holistic energy use of buildings: from extraction, manufacturing, transport, and assembly of the materials in a building; to energy used by the building over its lifetime; to the demolition and disposal of its rubble. Recent life-cycle analysis (LCA) studies by the Preservation Green Lab compare similarly sized and used historic buildings to new construction options, concluding that most historic buildings can be reused with fewer environmental impacts than new “green” construction. Because they were constructed before interior climate control technology was developed, historic buildings are often equipped with passive, efficient features instead. These include thick walls with optimal overhangs that trap winter heat during the day and release it at night, and whose thermal mass helps the interior stay cool during summer months. Adaptive reuse of these structures can result in creative solutions, like Queen’s Quay and other projects in Toronto, that improve the sustainability and overall experience of a city. Looking at the “total energy” of buildings, in many cases the greenest building is one that is already built; embracing and using heritage buildings can be one of the best ways to make our cities more sustainable.
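To make the “total energy” argument concrete, here is a minimal life-cycle sketch with invented placeholder numbers (they are not from the Preservation Green Lab studies): adaptive reuse keeps the embodied energy that has already been “paid for,” while new construction must repay a large embodied-energy debt through operating savings.

```python
# A minimal life-cycle energy comparison (all figures are hypothetical placeholders).
# Reuse: small retrofit embodied energy, somewhat higher annual operating energy.
# New build: large embodied energy up front, somewhat lower annual operating energy.

REUSE_EMBODIED, REUSE_ANNUAL = 2_000, 900    # GJ; retrofit of the existing building (assumed)
NEW_EMBODIED,   NEW_ANNUAL   = 30_000, 800   # GJ; demolish and build a new "green" building (assumed)
YEARS = 50                                   # evaluation horizon

def lifecycle_energy(embodied_gj: float, annual_gj: float, years: int) -> float:
    """Total life-cycle energy (GJ): embodied energy plus operating energy over the period."""
    return embodied_gj + annual_gj * years

print(f"Adaptive reuse:   {lifecycle_energy(REUSE_EMBODIED, REUSE_ANNUAL, YEARS):,.0f} GJ over {YEARS} years")
print(f"New construction: {lifecycle_energy(NEW_EMBODIED, NEW_ANNUAL, YEARS):,.0f} GJ over {YEARS} years")

payback = (NEW_EMBODIED - REUSE_EMBODIED) / (REUSE_ANNUAL - NEW_ANNUAL)
print(f"Years for the new building's efficiency gain to repay its extra embodied energy: {payback:.0f}")
```

With these illustrative figures, the new building would need centuries of modest efficiency gains to repay its embodied energy, which is the sense in which the greenest building is often the one already built.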

Sustainable development for urban people and places naturally includes and necessitates preservation of our heritage sites. Furthermore, environmental steps toward sustainability simultaneously preserve both human and environmental health. This has a positive effect on our built heritage, reducing degradation mechanisms and threats to these sites, while improving environmental and social factors affecting our health.

Human development and sustainability, especially in an urban context, are balanced in the conservation and reuse of heritage sites. Residential and commercial buildings are responsible for 40% of the energy and 60% of the electricity used globally. Measures to increase buildings’ sustainability are in the interest of both human development and the sustainable use of the world’s resources. In using a historic building, its lessons and embodied values can be preserved for future generations. The conservation of a city’s heritage sites is the conservation of humanity.


Isabel Morris is a 2nd year Civil Engineering PhD student in the Historic Structures Program. Her research is focused on using non-destructive methods, especially ground penetrating radar, to characterize materials for better conservation efforts around the world.

A World Without Hunger

Written by Matt Grobis

Safe, nutritious, and sufficient food, all year, for all people: the United Nations’ second Sustainable Development Goal aims to transform the world’s agriculture and distribution of food by 2030. With 800 million people suffering from hunger – more than 10% of the world’s population – food and agriculture are key to achieving the entire set of sustainable development goals.

Currently, there exists enough food to supply every person on the planet with a nutritious diet. Yet, large imbalances in access to this food also exist. This is often due to the cycle of poverty: people in poverty cannot afford nutritious food, which weakens them and then limits their ability to earn enough money to escape poverty. The results can be devastating. Poor nutrition is responsible for nearly 45% of deaths in children under 5, as well as causing a quarter of the world’s children to be stunted, or unable to develop normally.

Feeding future generations is similarly troubling. We have dedicated approximately 11% of the world’s land surface to agriculture (1.5 billion hectares), but to feed an expected 9 billion people in 2050, we will have to expand our global food production by 60%. Where will this land come from? We can work to improve crop yield from existing land, but the Food and Agriculture Organization (FAO) cautions that in many cases, local socioeconomic conditions “will not favor the promotion of the technological changes required to ensure the sustainable intensification of land use.” In other words, we can increase our food yield, but do we have the infrastructure in place to do it sustainably?
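A quick sketch of the arithmetic behind that trade-off (production = cropland area × yield). The 60% production target and the 1.5 billion hectares are from the text above; the candidate splits between land expansion and yield growth are purely illustrative.

```python
# Land-versus-yield trade-off implied by a 60% increase in food production.
# The 60% target and 1.5 billion hectares are from the text; the expansion
# scenarios below are illustrative only.

CURRENT_CROPLAND_BHA = 1.5          # billion hectares currently in agriculture
REQUIRED_PRODUCTION_GROWTH = 1.60   # 60% more food by 2050

for land_growth in [1.00, 1.05, 1.10, 1.20]:   # hypothetical cropland expansion factors
    required_yield_growth = REQUIRED_PRODUCTION_GROWTH / land_growth
    extra_land = CURRENT_CROPLAND_BHA * (land_growth - 1)
    print(f"cropland +{(land_growth - 1) * 100:>4.0f}% ({extra_land:.2f} Bha more) "
          f"-> yields must rise {(required_yield_growth - 1) * 100:.0f}%")
```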

These are formidable challenges that require fast, efficient, and long-lasting solutions. By no exaggeration, the wellbeing and lives of billions of people – both present and future – depend on the actions taken to address hunger. The UN has therefore made ending world hunger a priority. “We can no longer look at food, livelihoods and the management of natural resources separately,” the FAO wrote in their 2016 bulletin Food and Agriculture. “A focus on rural development and investment in agriculture – crops, livestock, forestry, fisheries and aquaculture – are powerful tools to end poverty and hunger, and bring about sustainable development.”

Mud stoves in Darfur, Sudan. Promoted by the Food and Agriculture Organization of the United Nations since the 1990s, these stoves decrease the need for fuelwood, a limited resource that can be dangerous to gather. Photo credit: plancanada.ca

How can we address problems as pervasive as hunger, when those problems are intimately linked with Earth’s other greatest challenges, such as poverty and climate change? For the FAO, the answer is to find solutions that address as many of these challenges as possible simultaneously. In Darfur, Sudan, for example, the FAO is working to introduce fuel-efficient stoves that reduce the need for fuelwood, the principal source of energy in the region and an increasingly limited natural resource. Women must travel far from home to collect fuelwood, which decreases the time they can invest in childcare, work, or education while also exposing them to the risk of physical or sexual violence. Mud stoves, on the other hand, require less fuelwood and produce no smoke. The local production of these stoves also generates income for women.

“Tackling hunger and malnutrition is not only about boosting food production, but also to do with increasing incomes, creating resilient food systems and strengthening markets so that people can access safe and nutritious food even if a crisis prevents them from growing enough themselves.”
– Food and Agriculture Organization of the United Nations, Food and Agriculture

For Darfur, fuel-efficient stoves not only improve food security, hence addressing the UN’s second sustainable development goal of eradicating hunger. They also help decrease poverty (SDG #1), and they promote health and wellbeing (#3), gender equality (#5), affordable and clean energy (#7), climate action (#13), and protecting life on land (#15). Addressing the world’s largest challenges will require such multifaceted approaches.

 


Matt Grobis is a 4th-year PhD candidate in Ecology and Evolutionary Biology and the Managing Editor of Highwire Earth. He researches the collective dynamics of fish schools in response to predation risk. Follow him on Twitter @mgrobis.

Conservation Crossroads in Ecuador: Tiputini Biodiversity Station and the Yasuní oil fields

Written by Justine Atkins

On an early morning boat, mist still rises off the water and the Amazonian air is thick with the characteristic dampness of tropical rainforests. We’re heading out in search of a nearby clay-lick where many parrot species congregate. In the partial slumber of any graduate student awake before 6 am, we sleepily scan the riverbank and tree line for any signs of life. It’s from this reluctantly awake state that our guide Froylan suddenly jolts us to the present and directs our gaze to a small clearing alongside the river. There, out in the open, a female jaguar sits in the grass near the river’s edge. By what seems like sheer luck, we have seen one of the most elusive Amazonian species, something our second guide José says he himself has only achieved five times in seven years.

This majestic female jaguar watches us closely from the safety of the grassy river bank, perhaps waiting for our boats to move on so she could continue on her route across the Tiputini river. Photo credit: Alex Becker

Of course, luck is only part of the story. The river we’re traveling down is the Tiputini River, which forms one edge of Ecuador’s Yasuní National Park — an area of some 3,800 square miles of pristine rainforest, historically left untouched by human development, that is practically overflowing with biodiversity. There are more species of plants, reptiles, insects, mammals and birds here than almost anywhere else in the Amazon and, by extension, the world.

Nestled in the dense array of kapok, ficus, Cecropia, and Socratea (“walking palm”) trees is Tiputini Biodiversity Station (TBS). Established in 1993 and chosen specifically for its isolated location, the research station at Tiputini is a collaborative venture between Universidad San Francisco de Quito and Boston University. TBS supports ecological research at all levels, hosting everyone from visiting undergraduate students to PhD candidates to senior academics.

Almost everything about TBS and its surroundings reinforces the feeling that this is truly one of the most pristine and isolated centers of biodiversity in the world. As visitors to TBS for our Tropical Ecology field course, the first-year graduate students in Princeton’s Department of Ecology and Evolutionary Biology travelled by multiple planes, boats, buses and trucks over five hours from the nearest city (Coca) just to reach the field station itself. Photo credit: Alex Becker

Yet, as unfortunately seems inevitable whenever anyone talks about these last remaining ‘untouched’ areas, the pristine nature of TBS and Yasuní National Park comes with its own caveat. On our journey to the station, we are, probably naïvely, surprised to have to go through a security checkpoint run by the national oil company Petroamazonas. The mere presence of Petroamazonas indicates that the as yet undisturbed area surrounding TBS is up against a rapidly ticking clock. And with only a cursory glance over the basic facts of this situation, the sound of that ticking clock becomes deafening.

*     *     *

There are hundreds of millions, possibly billions, of barrels of Amazon crude oil lying beneath Yasuní National Park. For any nation, but particularly Ecuador — a relatively poor, developing country — the temptation to drill is immense. (Ecuador’s per capita GDP in 2013 was $6,003, compared to a US per capita GDP of $53,042 in the same year.) For example, the government stood to make over $7 billion in net profit (at 2007 prices) from the extraction and sale of 850 million barrels of oil from these reserves.

Yasuní had the potential to be a model for innovative environmental policy. It possesses unparalleled species richness, is located in a nation dependent on the extraction of non-renewable resources, and is home to the indigenous Waorani and two uncontacted groups, the Tagaeri and Taromenane. In many ways, the variety of stakeholders and the conflicts of interests and aims among them represent one of the most daunting conservation and sustainable development challenges the world faces today. How do we balance the needs of biodiversity maintenance, socioeconomic parity, and protection of indigenous people when the goals of each seem to fundamentally misalign with one another?

The attempt to resolve this conflict was compellingly detailed in a National Geographic feature in January 2013. In 2007, President Correa proposed the so-called Yasuní-ITT Initiative (named after the three oil fields in the area it encompasses: Ishpingo, Tambococha, and Tiputini). The Yasuní-ITT sought $3.6 billion in compensation (to be contributed by international donors, both countries and corporations) in exchange for a complete ban on oil extraction and biodiversity protection for the ‘ITT block’ in the northeast corner of Yasuní.

With this initiative officially instated in 2010, Ecuador became one of the first nations to attempt sustainable development and action against climate change based on a model of truly worldwide cooperation. For this model to be successful, the government relied on other countries to recognize that an international desire to preserve the ecological value of Yasuní also meant an international responsibility to contribute to the opportunity cost of this preservation. There was a groundswell of support for this proposal within Ecuador, and initially it was also met with enthusiasm abroad. However, by mid-2012, the Ecuadorian government had received only $200 million in pledges, contributions stalled, and the Yasuní-ITT initiative was officially abandoned in August 2013.

Similar sustainability issues were at the forefront of the recent UN Climate Change Conference 2015 in Paris, also known as COP 21 (the 21st session of the Conference of Parties). Much of the prolonged negotiation and disagreement among the attending countries stemmed from the divergence of priorities between developed and developing nations. The former group was, by and large, pushing for uncompromising targets on emissions reduction and renewable energy use from the current highest emissions contributors, chief among which are developing nations like China and Brazil.

But developing nations felt strongly that they should not be excluded from the full benefits of industrialization, which developed nations have profited from in the past. One potential solution to this conflict, and one which led to part of the Paris Agreement, is for developed nations to support developing nations in the transition from fossil fuels to renewable, lower emission energy sources through financial compensation. Sound familiar? This was exactly the logic behind the Yasuní-ITT, so the failure of this initiative represents more than just a threat to Yasuní — it symbolically threatens action against climate change worldwide.

A closer look at the failure of the Yasuní-ITT reveals that there were in fact more complex considerations at play than simply a lack of pledged contributions. In an essay evaluating the decision to abandon the initiative, Ariana Keyman, an associate at the Busara Center for Behavioral Economics in Nairobi, assessed the particular political, economic and social factors that contributed to the Yasuní-ITT’s demise. Due to his dogged pursuit of a ‘New Latin American Left’, Ecuador’s President Correa was determined to increase spending on pro-poor socioeconomic development while also preserving the status of Ecuador’s environment and biodiversity. Unfortunately, as is often the case, something had to give and it was ultimately the environment that was compromised. This was only exacerbated by the historic dependency of this country on the oil industry and the ‘closed-door’ manner in which the Yasuní-ITT was both adopted and abandoned by the government. In this light, perhaps the case for international collaboration and economic cooperation on tackling the challenges of biodiversity conservation and climate change is not so hopeless, but it is still likely to be a bumpy road ahead.

*     *     *

Tiputini Biodiversity Station itself still seemed largely untouched during our trip in January 2016. Part of this was surely due to our unfamiliarity with the oil extraction process, but it’s clear that the continued tireless efforts of environmental groups are at least holding off the worst of the potential destruction for now. The founding director of TBS, Kelly Swing, wrote in a guest blog post in National Geographic in 2012 that the incursion of oil companies in this area has also in some ways helped scientists learn more about the incredible ecological communities in this region, thanks to increased funding and accessibility.

More than the literal isolation, the overwhelming presence of a brilliant array of mammals, birds, reptiles, amphibians and insects that seemed to be almost dripping from the trees was a constant reminder of how far from urbanization we were, and of the sheer uniqueness of TBS’s location. Every morning, we awoke to the reverberating booms of howler monkeys and the screeching calls of caracaras and macaws high above us. Walking to and from the dining area, we routinely spotted roosting bats and several species of anole lizards, and learned to recognize the squeaking communications and rustling branches of the local woolly monkey troop on their morning or evening commute. All of these wonderfully unique species (clockwise from top left: white-necked jacobin, motmot, woolly monkey, and tree frog) are threatened in some capacity by the oil industry. Photo credit: Alex Becker.

It appears, however, that the benefits are unlikely to outweigh the costs, particularly when the long-term consequences of the oil industry in Yasuní will remain unknown for years to come. Swing was quick to point out that there are already documented negative impacts — insects are being drawn to huge gas flares and eviscerated in large numbers, eliminating important food resources for frogs, birds and bats, and industrial noise pollution is disrupting the communication channels of calling birds and primates, potentially limiting their ability to find mates, locate food, and avoid predators.

In establishing the research station along the Tiputini River, Swing said that their goal was “to be able to study and teach about nature itself, not human impacts on nature.” From our experience there, this goal was definitely realized in the most fantastic way possible, but we cannot say with any certainty how many of the visitors who come after us will be able to say the same thing. As global citizens, this is a concern that we should all be dedicated to addressing.

Justine is a first-year PhD student in the Ecology and Evolutionary Biology department at Princeton University. She is interested in the interaction between animal movement behavior and environmental heterogeneity, particularly in relation to individual and collective decision-making processes, as well as conservation applications.

Losing the Climate War to Methane? The role of methane emissions in the global warming puzzle

Written by Dr. Arvind Ravikumar

There is much to cheer about the recent climate agreement signed last December at the 21st Conference of Parties (COP 21) in Paris, France, to reduce greenhouse gas emissions and limit global temperature rise to below 2 °C. Whether countries will implement effective policies to achieve this agreement is a different question. Leading up to the Conference in Paris, countries proposed their intended nationally determined contributions (INDCs). These are the targets and proposed policies pledged by each country that signed the United Nations Framework Convention on Climate Change, describing its intended contribution to reducing global warming. The United States, among other things, is banking on the recently finalized Clean Power Plan by the Environmental Protection Agency (EPA) – this policy aims to reduce US greenhouse gas (GHG) emissions from the power sector by 26 to 28% by 2030, partly by replacing high-emitting coal-fired power plants with low-emitting natural gas-fired plants and increasing renewable generation (primarily wind and solar). Electricity production by natural gas-fired plants is therefore expected to increase over the next few decades, acting as a ‘bridge fuel’ to a carbon-free economy. Even though the US Supreme Court recently halted the implementation of the Clean Power Plan, the EPA anticipates that it will eventually be upheld.

A major component of natural gas is methane, a highly potent greenhouse gas whose global warming potential (i.e. its ability to increase the Earth’s surface temperature through the greenhouse effect) is 36 times that of carbon dioxide over the long term (100-year impact) and over 80 times in the near term (20-year impact). Although carbon dioxide makes up the bulk of US greenhouse gas emissions (see Figure 1), methane is estimated to contribute around 10% of the total. Thus, given its significantly higher global warming potential, methane emissions and leakage can potentially erode the climate benefits of declining coal production.
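A minimal sketch of how methane is converted into CO2-equivalent terms using the global warming potentials quoted above. The 100-year value of 36 is from the text; the text says “over 80” for the 20-year horizon, so 84 is used below as a stand-in.

```python
# Converting a methane emission into CO2-equivalent for two time horizons.
# GWP_100 = 36 comes from the text; GWP_20 = 84 is an assumed stand-in for "over 80".

GWP_100 = 36
GWP_20 = 84

def co2_equivalent(methane_tonnes: float, gwp: float) -> float:
    """CO2-equivalent mass (tonnes) of a methane emission for a chosen time horizon."""
    return methane_tonnes * gwp

# Example: the Aliso Canyon leak discussed later in this article, ~96,000 tonnes of methane
leak_ch4 = 96_000
print(f"100-year horizon: {co2_equivalent(leak_ch4, GWP_100) / 1e6:.1f} Mt CO2e")
print(f" 20-year horizon: {co2_equivalent(leak_ch4, GWP_20) / 1e6:.1f} Mt CO2e")
```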

Figure 1. US greenhouse gas inventory (2013). Source: EPA

Methane emissions are fairly diversified across natural and man-made sources. Figure 2 shows the sources of methane emissions in the US (2013) as estimated by the EPA through its GHG monitoring program. While 50% of emissions can be attributed to agriculture and waste-disposal activities, we can see that about 30% of methane emissions come from the oil and gas industry. Much of this can be attributed to the recent boom in non-conventional or shale gas production through fracking technology. The combination of low natural gas prices and higher demand from the power sector makes it imperative to reduce methane emissions as much as technologically feasible.

Figure 2. US methane emissions by source (2013). Source: EPA.

Currently, methane leaks occur at all stages of the natural gas infrastructure – from production and processing to transmission and distribution lines in major cities. While the global warming effects of higher methane concentrations are fairly well understood, there is currently little consensus on the magnitude of emissions from the natural gas infrastructure. For example, a recent study found that the average methane loss in the distribution pipelines around Boston was about 2.7%, significantly higher than the 1.1% reported in inventory estimates to the EPA. Another study, published in the academic journal Science, showed that independent measurements of the methane leakage rate across the US infrastructure varied from about 1% to over 6%. The climate benefits of switching from coal to natural gas fired power plants depend critically on this leakage rate.
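To see why the leakage rate is decisive, here is a rough sketch comparing the CO2-equivalent footprint of gas-fired and coal-fired electricity. The leakage rates and methane GWPs are taken from the studies and figures quoted above; the per-kWh plant numbers are round assumptions chosen for illustration, not measured values.

```python
# Rough comparison of coal vs. gas electricity including upstream methane leakage.
# Plant-level numbers below are illustrative assumptions, not from this article.

GWP = {"20-year": 84, "100-year": 36}  # 100-yr value from the text; 84 stands in for "over 80"

COAL_CO2_PER_KWH = 1.00   # kg CO2 per kWh from a coal plant (assumed)
GAS_CO2_PER_KWH = 0.40    # kg CO2 per kWh from a combined-cycle gas plant (assumed)
GAS_FUEL_PER_KWH = 0.15   # kg of natural gas (as methane) burned per kWh (assumed)

def gas_co2e_per_kwh(leak_rate: float, gwp: float) -> float:
    """CO2-equivalent footprint of gas power including upstream methane leakage."""
    leaked_ch4 = leak_rate * GAS_FUEL_PER_KWH   # kg CH4 leaked per kWh delivered
    return GAS_CO2_PER_KWH + leaked_ch4 * gwp   # combustion CO2 plus leak in CO2e

for horizon, gwp in GWP.items():
    for leak in [0.01, 0.027, 0.06]:            # leakage rates reported in the studies above
        footprint = gas_co2e_per_kwh(leak, gwp)
        verdict = "better than coal" if footprint < COAL_CO2_PER_KWH else "worse than coal"
        print(f"{horizon}, {leak:.1%} leakage: {footprint:.2f} kg CO2e/kWh ({verdict})")
```

With these assumptions, gas beats coal comfortably at around 1% leakage, but on a 20-year horizon the advantage disappears as leakage approaches the high end of the measured range.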


To better estimate methane leakage, the Environmental Defense Fund (EDF), a non-profit organization based in Washington, DC, organized and recently concluded a series of 16 studies to find and measure leaks in the US natural gas supply chain. While some of the results are still being analyzed, much of the data show that the conventional inventory estimates maintained by the EPA have consistently underestimated leakage from various sources. The Barnett shale region in Texas, which produces about 7% of the nation’s natural gas, was shown to emit 90% more methane than EPA estimates suggest. To complicate matters further, until recently, estimates from atmospheric top-down data measured using satellites and aircraft significantly exceeded land-based bottom-up measurements using methane sensors. On a similar note, detailed measurements from the Barnett shale region showed that just 2% of the facilities in the region account for 50% of all the methane emissions. Such a small fraction of large emission sources will further complicate direct measurements, where typically only a small fraction of the facilities in a region are measured. While the EDF and other studies have been instrumental in building our current understanding of methane leaks in the US and their contribution to greenhouse gas emissions, much work is required to understand the sources and, most importantly, ways to cost-effectively monitor, detect and repair such leaks.

Aerial footage of the recent natural gas leak from a storage well in Aliso Canyon near Los Angeles. The leak is estimated to have released 96,000 metric tons of methane, equivalent to the emissions from about 900 million gallons of gasoline burnt, and $15 million worth of natural gas. Source: Environmental Defense Fund, 2015.

Methane leakage in the context of global warming has only recently caught public attention – see here, here and here. In addition to greater awareness in business and policy circles, significant efforts are required to identify economically viable leak detection and repair programs. Currently, the industry standards for detecting methane leaks include high-sensitivity but high-cost sensors and low-cost but low-sensitivity infrared cameras. There is an immediate need to develop techniques that can cost-effectively detect leaks over large areas (e.g. thousands of square miles). From a regulatory perspective, the EPA has released proposed regulations to limit methane leaks from the oil and gas industry. This comes on the heels of the goals set by the Obama administration’s Climate Action Plan to reduce methane emissions from the oil and gas sector by 40 to 45% from 2012 levels by 2025. These regulations require oil and gas companies involved in the entire natural gas life cycle to periodically undertake leak detection and repair procedures, depending on the overall leakage levels. The final rule is expected to be out sometime in 2016.

The success of the Clean Power Plan in reducing greenhouse gas emissions will depend significantly on the strength of the proposed regulations to curb methane leaks. We now have a better estimate of fugitive emissions (leaks) of methane from the US natural gas infrastructure. Concurrently, there should be a greater focus on developing cost-effective programs to detect and repair such leaks. It was recently reported that replacing old pipelines with newer ones in a city’s gas distribution network is effective in reducing leaks and improving public safety. With a considerably higher global warming potential than carbon dioxide, methane has the potential to erode the climate benefits earned by switching from high-emitting coal plants to low-emitting natural gas power plants. Ensuring that does not happen will take a coordinated effort and commitment from both industry and government agencies.


Arvind graduated with a PhD in Electrical Engineering from Princeton University in 2015 and is currently a postdoctoral researcher in Energy Resources Engineering at Stanford University. Somewhere later in grad school, he became interested in the topics of energy, climate change and policy. Arvind is an Associate Editor at Highwire Earth. You can read more about his work at his personal website.

Human Impacts on Droughts: How these hazards stopped being purely natural phenomena

Written by Dr. Niko Wanders

We often hear about droughts around the world, including recent ones in the U.S. and in Brazil, where drought has threatened the water supply for this year’s Olympic Games. Despite their natural occurrence, there is still a lot that we do not fully understand about the processes that cause droughts and about how they impact our society and natural ecosystems. These topics are of great interest to scientists and engineers, and of great importance to policy makers and stakeholders.

The elusive definition of a drought

A drought can be broadly defined as a decrease in water availability below levels that are considered normal within a region. This means that droughts do not only occur in warm, sunny, dry countries but can take place essentially anywhere. What makes it hard to come up with a single, precise definition of a drought is that this below-normal water availability can be found at the different stages of the water cycle: precipitation, soil moisture (i.e. how much water there is in the soil), snow accumulation, groundwater, reservoirs and streamflow. Therefore, more useful definitions of drought conditions have to be tailored for specific sectors (e.g. agriculture or power generation) by focusing on the stage of the water cycle that is relevant for them (e.g. soil moisture for farmers, and streamflow for controllers of hydroelectric and thermoelectric plants).
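As a concrete illustration of this sector-specific idea, here is a minimal percentile-style index in Python: pick the variable that matters for your sector, compare today’s value against its historical distribution, and flag drought when it falls in the driest tail. This is a simplified sketch with invented numbers, not the exact algorithm of any operational drought monitor.

```python
# A simplified percentile-based drought index (illustrative values only).
# Rank the current value of a chosen variable (precipitation, soil moisture,
# streamflow, ...) against its historical values for the same season.

def drought_percentile(current: float, climatology: list[float]) -> float:
    """Fraction of historical values below the current value (0 = record dry, 1 = record wet)."""
    below = sum(1 for value in climatology if value < current)
    return below / len(climatology)

# Hypothetical March soil-moisture values (mm) for the past 20 years, and this year's value
history = [85, 92, 78, 110, 95, 88, 102, 70, 99, 91, 83, 97, 105, 76, 89, 94, 81, 100, 87, 93]
this_year = 74

p = drought_percentile(this_year, history)
print(f"Percentile: {p:.2f}")   # 0.05 here: drier than 95% of past years
if p < 0.20:
    print("Below-normal conditions: flag as drought for this sector.")
```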

Droughts can cover areas that range from a few thousand square miles to large portions of a continent, and can last anywhere from weeks to multiple years. Normally they start after a prolonged period of below-normal precipitation, sometimes in combination with increased evaporation due to high temperatures. This then causes a reduction in water availability in the soil, which can lead to lower groundwater and river levels as a result of decreased water recharge from groundwater aquifers into rivers. Snowpack is another important factor because it provides a steady release of water into streams throughout the spring; when most of the precipitation falls as rain instead, it washes out quickly, leaving the spring dry once again. The evolution of a drought through the water cycle is called drought propagation and normally takes multiple weeks to several months.

So far this season, El Niño has been bringing some relief to the California drought. The current snow accumulation is above normal, which is good news for this drought-stricken region. The forecasts for the upcoming months look hopeful, and it is likely that California will see some relief from the drought in the coming months. Nevertheless, it will take multiple years before groundwater and reservoir levels are back to normal, so the drought and its impacts will remain for at least the next few years.

Figure 1. U.S. Seasonal Drought Outlook provided by NOAA.

Droughts’ impacts on society

Extensive and long-lasting droughts can accumulate huge costs for the regions affected over time. For example, the ongoing California drought caused $2.2 billion in damage for the year 2014 alone. This is only an estimate of the damage to society in monetary terms, while the severe impacts on the region’s ecosystems are difficult to measure and quantify. As a result of the drought conditions, reservoir storages in most of California are at record low levels and strict water conservation policies have been implemented.

The severity of a drought’s impacts, however, depends greatly on the wealth, vulnerability, and resiliency of the region affected, including the degree to which the local economy and services rely on water. Despite the huge costs of the California drought, the U.S. is more capable of mitigating its effects and eventually recovering from it given the country’s general financial strength compared to many developing nations. According to reports by the United Nations and the Inter-Agency Standing Committee, an estimated 50,000 to 260,000 people lost their lives in the severe 2011 drought in the Horn of Africa, due to the fact that the financial means to provide food aid were not present and outside help started too late.

To have better tools to deal with these extreme events, several government agencies and institutes around the world have created drought monitors to track current drought conditions and to forecast their evolution. Examples are the Princeton Flood and Drought Monitors for Latin America and Africa, the U.S. Drought Monitor and the European Drought Observatory. These websites provide information on current drought conditions, which can be used to take preventive measures by governments and other stakeholders. Additionally, they can be used to inform the general public on current conditions and the need for preventive measures, such as conservation.

Figure 2. Latin American and African Flood and Drought Monitors developed at Princeton University. Credit: Terrestrial Hydrology Research Group at Princeton University.

The power to affect a drought

Traditionally, droughts have been thought of as purely natural phenomena that we have to endure from time to time. However, a recent commentary in Nature Geoscience that included two Princeton contributors argued that we can no longer ignore how humans affect drought occurrences. For example, when conditions get drier from lack of rainfall, people are more likely to use water from the ground, rivers and channels for irrigation. These actions can impact the water cycle over large areas, affecting the water resources of downstream communities now and of the local communities in the near future. In the case of California, the severe drop in groundwater levels has accelerated in the last three years due to a combination of the extreme drought conditions and the resulting heavy pumping for irrigating crops. The extra water that becomes available from pumping groundwater is only a temporary and unsustainable solution that alleviates the drought conditions in the soil locally and for a short period of time. Most of the irrigated water will evaporate and only a small portion will return to the groundwater. In the long run, these depleted groundwater resources need to be replenished to recharge rivers and reservoirs – a process that can take multiple years to decades. Furthermore, extracting groundwater in large amounts can lead to subsidence – a lowering of the ground level – that can sometimes be irreversible and have permanent effects on future water availability in the region. Thus, through our actions we have the power to affect how a drought develops, making it necessary to rethink the concept of a drought to include our role in enhancing and mitigating it.

Figure 3. On the left: Measurement of subsidence (i.e. lowering of the ground levels) in the San Joaquin Valley during the past three decades, Photo Credit: USGS. On the right: Measured subsidence in the San Joaquin Valley between May 3, 2014 and January 22, 2015 by satellite, Photo Credit: NASA.

But it’s not all bad news. Last year I carried out a study with my collaborator, Dr. Yoshihide Wada, that found that human interventions can sometimes have a positive effect on the impact of natural drought conditions. This is clearest for the reservoirs that have been built in many river systems around the world. By building these structures, river discharge is spread more evenly throughout the year: high flows or floods can be dampened by storing some of the water in the reservoirs, and that water can then be released in the dry season or during a drought event to reduce the impact of low flows. This in itself opens up opportunities for regional water management that can help reduce a region’s vulnerability to droughts. Reservoirs have three main limitations: they increase the amount of evaporation through their large surface areas; their benefits are limited in prolonged drought conditions simply because their storage is not infinite; and they have a large impact on plants and animals in downstream ecosystems (e.g. migrating fish species that need to swim upstream).
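The smoothing effect, and the “storage is not infinite” caveat, can be seen in a toy monthly mass balance (all numbers are invented for illustration; real reservoir operating rules are far more sophisticated).

```python
# A toy reservoir mass balance: each month, storage gains the inflow and loses the
# release; the operator tries to deliver a constant target flow, spills when the
# reservoir is full, and rations when it runs dry. All figures are invented.

def operate(inflows, target_release, capacity, storage=0.0):
    """Release a constant target flow when possible; spill when full, ration when empty."""
    releases = []
    for inflow in inflows:
        storage += inflow
        release = min(target_release, storage)   # can't release water that isn't there
        storage -= release
        if storage > capacity:                   # reservoir full: spill the excess
            release += storage - capacity
            storage = capacity
        releases.append(release)
    return releases

# Hypothetical monthly inflows (million m^3): wet winter, very dry summer
inflows = [120, 150, 140, 90, 60, 20, 5, 5, 10, 40, 80, 110]
smoothed = operate(inflows, target_release=70, capacity=250)

print("inflow :", inflows)
print("release:", [round(r) for r in smoothed])
```

Even this toy operator delivers a near-constant release through most of the dry season, but once its finite storage runs out it has to ration, which is exactly the limitation that prolonged droughts expose.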

Figure 4. Impact of human intervention on future hydrological drought, as a result of irrigation, reservoir operations and groundwater pumping. Darker colors indicate higher levels of confidence (Figure adapted from Wanders and Wada, 2015).

Drought in the future

Scientists have carried out many studies to explore what will happen to the characteristics and impacts of droughts in the future. Multiple research publications show that, with projected increases in human water demand, droughts will most likely become more severe than they are today in many of the world’s regions, painting a stressful picture of the future. This requires an adjustment in the way we deal with drought conditions, how we monitor and forecast these extremes, and how we consume water in general.

A short-term solution is trying to improve our monitoring and forecasting of these events so that we are better prepared. For example, additional improvements in meteorological and hydrological forecasts for conditions 3-6 months in advance would help operators manage their reservoirs in a way that would reduce the impact of upcoming drought events. These improvements require scientists to become more aware of the impact that humans have on the water cycle, which is a growing area of interest in recent years, but is definitely not standard practice.

Apart from improving our ability to forecast upcoming drought events, we could also change our response to ongoing drought conditions by trying to be more efficient with the remaining available water. This could be achieved by using more efficient irrigation systems, building separate sewage systems for rainwater (which could be used for drinking water) and for domestic and industrial wastewater (which is only reusable after extensive treatment), and not cultivating crops with a high water demand in areas with naturally low water availability. All these measures require long-term planning and governments and societies willing to push for and achieve these goals. Often a severe event (with significant damage) is needed to create the awareness that these measures are a necessity, as was the case in California, which has resulted in new water laws, and in Australia a few years ago.

Humans and the natural water system are strongly intertwined, especially in hydrological extreme conditions. Our impact on the water cycle is significant and cannot be neglected, both in normal conditions and under extreme hydrological ones. It will be important in the coming decades for us to learn how to responsibly manage our valuable water resources within a changing environment.

 


Dr. Niko Wanders is a Postdoctoral Research Fellow in the Civil and Environmental Engineering Department at Princeton, working together with Prof. Eric Wood. His research interests include the study of the physical processes behind droughts, as well as the factors that influence their magnitude and impact on society. Niko received an NWO-Rubicon Fellowship to work on the development of a global sub-seasonal drought forecasting system. The aim of the project is to develop a system that can not only forecast upcoming drought events, but also make reliable forecasts of drought impacts on agricultural production, water demand, and water availability for human activities.