Cooperation and Carbon Taxes: confronting the challenges of climate policy with the En-ROADS Climate Solutions Simulator and Princeton’s Energy and Climate Scholars

Written by Abigale Wyatt

The science of climate change is clear, so shouldn’t passing new climate policy be simple? The folks at Climate Interactive have a way for you to decide for yourself: the En-ROADS (Energy Rapid Overview and Decision-Support) climate solutions simulator. According to their website, En-ROADS is a “global climate simulator that allows users to explore the impact that dozens of policies— such as electrifying transport, pricing carbon, and improving agricultural practices— have on hundreds of factors like energy prices, temperature, air quality, and sea level rise.” Practically speaking, it’s an easy and intuitive way to look at future climate scenarios based on the latest climate science. 

Princeton Energy and Climate Scholars (PECS) members exploring the En-ROADS climate solutions simulator. Background left to right: Max Elikan, Maya Chung, Carli Kovel, Marco Rupp, Front: Aaron Leininger, Moriah Kunes.

In October, I introduced En-ROADS to the Princeton Energy and Climate Scholars (PECS), a diverse group of graduate students interested in climate issues from departments all across the university (politics, chemistry, engineering, geosciences, atmospheric and oceanic sciences, ecology and evolutionary biology, and more). I have led these En-ROADS events for the past three years and was excited to share the simulator with a group of Princeton students who were deeply invested in climate mitigation. For this event, I led a role-playing game where players act as climate stakeholders in a simulated emergency climate summit organized by the United Nations (UN). Acting as the UN secretary-general, I called on the group to propose a climate action plan that would keep total global warming under 2 degrees Celsius by the year 2100. Over the course of three rounds of play, the PECS attendees— role-playing interest groups like “Agriculture, forestry and land use” or “Clean Tech”— suggested policy or mitigation strategies to reduce net greenhouse gas emissions that could be implemented in the En-ROADS simulator. 

We started the first round with the baseline scenario in En-ROADS, which results in 3.3°C of warming by the year 2100 (Figure 1). This baseline assumes that social and technological progress continues at current rates, without additional climate policies or action, similar to the “Current Policies Scenario” used in Integrated Assessment Models and the IPCC reports.

Figure 1: Dashboard of the En-ROADS simulator, displaying the baseline scenario resulting in 3.3°C of warming by the end of the twenty-first century.

In the first round, each stakeholder group unilaterally proposed one mitigation strategy that aligned with its own priorities, but the groups quickly found issues with this approach. One group proposed subsidizing renewable energy sources and another subsidizing nuclear energy, both with the hope of reducing fossil fuel emissions from the energy sector. Adding a $0.03/kWh subsidy to renewable energy expanded the energy market share of renewables, thus reducing the more fossil-fuel-intensive sources like coal and oil. This change reduced the projected 2100 warming by a modest tenth of a degree, to 3.2°C. However, adding a subsidy for nuclear not only reduced fossil-fuel-intensive energy sources as intended, but also ate into the market share of renewable energy. This is because nuclear and renewable energy compete with each other as much as with coal, oil, and gas (Figure 1). As a result, adding the nuclear subsidy on top of the renewable energy subsidy had a negligible effect, keeping projected 2100 warming at 3.2°C. This surprising result demonstrated a need for more synergistic mitigation strategies, fueling negotiations between stakeholder groups in the next round. 

Round two saw periods of intense debate between stakeholder groups, as individual actors formed coalitions and two blocs emerged. One coalition, led by “Conventional Energy” and including “Clean Tech”, “Industry and Commerce”, and the “Developed Nations”, proposed investment in technological carbon removal, or carbon capture, to offset emissions. This would prop up the “Industry and Commerce” and “Clean Tech” sectors by increasing funding for research and development without negatively impacting the status quo of “Conventional Energy” and the “Developed Nations”. The second coalition, led by the “Climate Justice Hawks” in coordination with “Agriculture, forestry and land use” and the “Developing Nations”, focused on reducing methane, nitrous oxide, and other greenhouse gas emissions that were more relevant to their sectors. This two-pronged approach resulted in a total 2100 temperature increase of 2.7°C, a significant improvement over the 3.3°C baseline, but short of the 2°C goal set out at the beginning of the game.

By the start of the third round, no one had proposed any mitigation strategies that would directly reduce CO2 emissions from fossil fuel burning. This was, in part, to accommodate the strong personalities of the “Conventional Energy” team, who had aggressively lobbied for anything else. However, in the third round, the PECS role-players were encouraged to open the En-ROADS simulator themselves and quickly discovered the power of a substantial carbon tax. 

“We just need the carbon tax to be $100 [per ton CO2],” said one member of the “Agriculture, forestry and land use” team.

“Who’s going to pay the carbon tax?” asked a member of “Conventional Energy”.

The conversation quickly escalated as the groups sparred over which mitigation strategies should be implemented in the final round. In short order, the two blocs became one, with every PECS member gathered around a single table, advocating for their stakeholder interests. Ultimately, with a compromise on the carbon tax, increased energy efficiency, a tax on oil, investment in carbon removal technologies, and decreased deforestation, the group argued and laughed their way to the targeted 2°C of warming by 2100.

The event was a great success, and the players walked away with a new appreciation for the various perspectives at play when discussing our climate future. “We need to invest more in research; we need to put money into new technologies,” said one student who thought that negative emissions were a powerful factor in their final scenario. Another student said, “This really validates the ‘all of the above’ energy policy of the Obama era,” noting that there are still ways to reach the 2°C threshold without slashing the fossil fuel industry. In the end, the PECS members all agreed that the solution was complex, requiring cooperation across sectors and a variety of mitigation strategies, with no one-size-fits-all answer. The En-ROADS Climate Action Simulation worked; it facilitated an informative, engaging, and fun discussion of climate change mitigation and policy.

The role-playing game is my favorite way to introduce the En-ROADS simulator, but Climate Interactive also offers other ways of engaging. If you’re interested in bringing En-ROADS to your classroom, group, or workshop, check out https://www.climateinteractive.org/en-roads/ or reach out to Abigale Wyatt in the Geosciences department for more information. 


Abigale Wyatt is a PhD Candidate in Geosciences working with Prof. Laure Resplandy to study how ocean physics and climate variability impact surface ocean biogeochemistry. After completing her degree, Abigale will join the team at [C]Worthy, an initiative investigating the efficacy of marine-based carbon dioxide removal, as a postdoctoral research scientist.

Project Plastic develops technology to combat marine microplastics

Project Plastic is the brainchild of two Princeton Architecture students, Nathaniel Banks and Yidian Liu, who sought an innovative solution to the problem of aquatic plastic pollution. Since their launch in 2021, they have accumulated various awards and commendations, including the People’s Choice Award at the Princeton Entrepreneurs’ Network Startup Pitch Competition, 1st place at the Princeton Startup Bootcamp, 1st place at the BASE Accelerator Program in Philadelphia, and a $50,000 grant from NSF I-Corps. Their work also took 1st place at the Shenzhen Biennale, which focused on urbanization in Shenzhen, China. Highwire Earth is excited to see Project Plastic continue to grow and make a positive impact on aquatic environmental pollution. 

More information about Project Plastic and their team can be found at: https://www.projectplastic.site/.

***

Written by Amy Amatya.

Originally posted on July 23, 2022 at www.projectplastic.site. This article was republished with permission from Project Plastic and the original author.

Some of humanity’s greatest threats — pandemics, climate change, water contamination — are invisible. They escalate because we don’t see them coming, or we ignore the data that help us see them.

Microplastics are an emerging example. Defined as any plastic smaller than five millimeters in diameter, microplastics pose a major problem for the environment and for ourselves. They are easily ingested, potentially toxic, and everywhere. In fact, microplastics are found in nearly every corner of the world, right down to the tissues of living organisms. 

Microplastics arise via two pathways: they are mass-produced at this size (‘primary’), or they come from the degradation of larger plastics (‘secondary’). Primary microplastics are difficult to target because their production is controlled by industries spanning textiles, cosmetics, and household items, and considerable persuasion of regulators and corporations is necessary to reduce it. Some progress has been made on this front: the Microbead-Free Waters Act of 2015 prohibits microplastic use in wash-off cosmetics. Secondary microplastics, however, are difficult to target because so much plastic already exists in the world. The sheer difference in scale between microplastics and the landscapes they inhabit prohibits remediation. Even if we ceased all plastic production today, roughly 200 million tons of plastic would still be circulating in our oceans.

Despite their huge threat, there are no consistent protocols available for the accurate and systematic recording of microplastic pollutant concentrations in water. There is also no existing technology available to sequester all microplastics from tributaries, effluent streams, reservoirs, and lakes. There are three approaches to reducing microplastic pollution. We can:

1) Produce less plastic, 

2) Prevent existing plastic from entering the environment, and 

3) Remove microplastics directly from the environment.

Motivated by the third approach, Project Plastic set out to develop the world’s first portable, affordable, and environmentally friendly microplastic measuring and sequestration device. Project Plastic is a team of chemists and architectural designers, but we aren’t just a filtration technology company. Driven foremost by the microplastic problem, we follow microplastics to the end of their aquatic lifetime. We strive to collaborate with riverkeepers, water treatment companies, and private bottled water companies to monitor, collect, and upcycle microplastic pollution from waterways.

Our device utilizes a patented ‘artificial root’ technology that acts as a filter to remove small debris (including microplastics) from the upper water column, where most plastic pollutants accumulate. The root technology is modeled after the roots of aquatic plants: long fibrous filaments are suspended in the water, and sediments physically adhere to the dense fibers on each root. Naturally occurring biofilms accumulate on the ‘artificial root’ network over time, further trapping small particles. By applying an array of ‘artificial roots’ to the underside of a flotation frame, our device can entrap large quantities of microplastics while allowing aquatic wildlife to swim below or around the filter. Each biofilter is attached to a removable pad, making it easy to swap biofilters once they become saturated. Each pad is housed within a hydrodynamic flotation frame for application in rivers, streams, and reservoirs.

Our device, the Plastic Hunter, has a key advantage over conventional filtration technologies: it has no mechanical components, meaning it can operate passively with no electricity and minimal maintenance. This makes the device far cheaper to produce, deploy, and maintain than any existing microplastic filtration system.

Lastly, our team is currently working on establishing protocols for separating and purifying contaminated sediments from our filter media. In doing so, we hope to extract relatively pure microplastic sediments from our devices to forward to our research collaborators at Washington University in St. Louis. The aim is to develop a method of converting microplastics into chemical compounds like carotene. If successful, our team may be able to upcycle microplastics into useful chemicals for industries like pharmaceuticals, turning harmful waste into sustainable resources. This, we hope, will avoid environmentally harmful storage or processing of contaminated sediment through incineration, and instead propagate a circular economy for microplastic waste. 

Inside a Solar Energy Company

Written by Molly Chaney

Finding an internship as a Ph.D. student is hard. Finding one at a company you have legitimate interest in is even harder. In search of a more refined answer to the dreaded question, “So what do you want to do after you get your Ph.D.?”, I started looking for opportunities in what is very broadly and vaguely referred to as “industry.” I stepped into Dillon Gym on a muggy August day in the only pair of dress pants I own and looked around. Finance, biotech, management consulting, and oil & gas companies filled the room with tables and recruiters.

After a string of conversations that turned out to be dead ends, I decided to check out one last table before leaving. A far cry from the multi-table, multi-recruiter teams with tons of free swag to give away, like Exxon and Shell, Momentum Solar had a table with some flyers, business cards, and one recruiter. I didn’t wait in line or crowd around like at the others, and immediately got to talking with Peter Clark. What I remember most was his message that they were simply looking for “intellectual horsepower,” something the CFO would repeat to a group of students who went to their South Plainfield HQ for an information session later that school year. I came away from my conversation not exactly sure what I would be doing if I worked there, but excited about joining a small, quickly growing company founded on sustainability.

At that info session some months later, I was impressed that the CFO, Sung Lee, took the time out of his schedule to speak directly with the group of prospective interns, and gave us all some background about where Momentum has been, and where it’s going:

Momentum Solar is a residential solar power installation company founded in New Jersey in 2009 by Cameron Christensen and Arthur Souritzidis. In 2011, they had just four employees; in 2013, six. They were ranked on the Inc. 5000 list of most successful companies in 2016 (with 250 employees), the Inc. 500 list of fastest-growing companies in 2017 (700 employees), and the Inc. 5000 again in 2018 (950 employees). They doubled their revenue from 2017 to 2018, and doubled it again from 2018 to 2019. Currently, Momentum has operations in seven states, from California to Connecticut, and shows no signs of slowing down. The solar industry as a whole also shows promising trends: since 2008, solar installations in the US have grown 35-fold, and since 2014, the cost of solar panels has dropped by nearly 50%.

After hearing this pitch, we toured the office, which, while full of diligent employees in front of huge screens, also boasts two ping pong tables and a darts board. The energy in the space was palpable, and Sung’s enthusiasm was contagious: I was sold.

Fast forward a couple of months, and I was about to have my first day there. I *still* didn’t know exactly what I would be doing. On day one, my supervisor presented me with a few different projects to choose from. While I wasn’t using the specific skills related to my research area at Princeton, I was using crucial skills I developed along the way during my Ph.D. research: programming and exploratory data analysis. I jumped right into their fast-paced, quick-turnaround style of work, and had check-ins with Sung nearly every day. He made a concerted effort to include me and the other interns on calls and in meetings, even if it was just to observe. The main project I worked on was writing a program to optimize appointment scheduling and driving routes, with the goal of improving efficiency from both a time and a fossil fuel standpoint: a great example of a sustainability practice helping a company’s bottom line.

People had told me before starting my Ph.D. that, unless I was planning on taking the academic route, the most valuable things I would learn would not be in my dissertation, but in the skills developed along the way. This rang true during my first professional experience in industry. Problem solving and independence are probably the two most valuable qualities a graduate student can bring to an internship. Somewhat unexpectedly, teaching skills proved useful as well: it wasn’t enough to prove a point through a certain statistical test; it was crucial that a room full of people with diverse backgrounds understood what a certain figure or result meant.

Momentum continues to grow, regularly setting and breaking records. To date, Momentum has installed 174 MW of residential solar energy, enough capacity to power the equivalent of more than 33,000 average American homes. I know my experience was unique: I was treated as an equal, was mentored thoughtfully and intentionally, and had regular interaction with corporate-level executives. Working there was rewarding, and Momentum’s success is a glimmer of hope during an ever-worsening climate crisis. 

Graduate and undergraduate students who are interested in internship opportunities with Momentum Solar should contact Peter Clark, Director of Talent Acquisition, at pclark@momentumsolar.com.

Sources: energy.gov

Molly Chaney is a fifth year Ph.D. candidate in Civil & Environmental Engineering. Advised by Jim Smith, her research focuses on the use of polarimetric radar to study tropical cyclones and other extreme weather events. Originally from Chicago, she is a die-hard Cubs (and deep dish pizza) fan. In her spare time she enjoys cuddling her dog, playing videogames, and indulging in good food and wine with her friends and family. If you have more questions about her experience at Momentum Solar you can contact her at mchaney@princeton.edu.

Integrating Renewable Energy Part 2: Electricity Market & Policy Challenges

Written by Kasparas Spokas

The rising popularity and falling capital costs of renewable energy make its integration into the electricity system appear inevitable. However, major challenges remain. In part one of our ‘integrating renewable energy’ series, we introduced key concepts of the physical electricity system and some of the physical challenges of integrating variable renewable energy. In this second installment, we introduce how electricity markets function and the policies relevant to renewable energy development.

Modern electricity markets were first mandated by the Federal Energy Regulatory Commission (FERC) in the United States at the turn of the millennium to allow market forces to drive down the price of electricity. Until then, most electricity systems were managed by regulated, vertically-integrated utilities. Today, these markets serve two-thirds of the country’s electricity demand (Figure 1), and the price of wholesale electricity in these regions is historically low due to cheap natural gas and subsidized renewable energy deployment.

The primary objective of electricity markets is to provide reliable electricity at least cost to consumers. This objective can be broken down into several sub-objectives. The first is short-run efficiency: making the best use of existing electricity infrastructure. The second is long-run efficiency: ensuring that the market provides the proper incentives for investment in electricity system infrastructure so that future electricity demand can be satisfied. Other objectives are fairness, transparency, and simplicity. This is no easy task; there is uncertainty in both the supply and demand of electricity, and many physical constraints need to be considered.

While the specific structure of electricity markets varies slightly by region, they all provide a competitive market structure in which electricity generators compete to sell their electricity. The governance of these markets involves several actors: the regulator, the board, participant committees, an independent market monitor, and a system operator. FERC is the regulator for all interstate wholesale electricity markets (all except ERCOT in Texas). In addition, reliability standards and regulations are set by the North American Electric Reliability Council (NERC), to which FERC granted authority in 2006. Lastly, markets are operated by independent system operators (ISOs) or regional transmission organizations (RTOs) (Figure 1). In tandem, regulations set by FERC, NERC, and the system operators drive the design of wholesale markets.

Figure 1. Wholesale energy market ISO/RTO locations (colored areas) and vertically-integrated utility areas (tan area). Source: https://isorto.org/

Before we get ahead of ourselves, let’s first learn how electricity markets work. A basic electricity market functions as follows: electricity generators (i.e., power plants) bid to supply an amount of electricity into a centralized market. In a perfectly competitive market, the price of each bid is based on the cost for an individual power plant to generate electricity. Generally, costs are grouped by technology and organized along a “supply stack” (Figure 2). Once all bids are placed, the ISO/RTO accepts the cheapest assortment of generation bids that satisfies electricity demand while also meeting physical system and reliability constraints (Figure 2a). The price of the most expensive accepted bid becomes the market-clearing price and sets the price of electricity that all accepted generators receive as compensation (Figure 2a). In reality it is a bit more complicated: ISO/RTOs operate day-ahead, real-time, and ancillary services markets and facilitate forward contract trading to better orchestrate the system and lower physical and financial risks.
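To make the merit-order mechanics concrete, here is a minimal sketch of market clearing in Python. This is a toy model with made-up bid prices and capacities, not actual ISO/RTO dispatch software, and it ignores transmission and reliability constraints.

```python
# Toy merit-order market clearing. Illustrative only: real dispatch software
# also handles transmission limits, reliability constraints, and multiple markets.
def clear_market(bids, demand_mw):
    """bids: list of (price_per_mwh, capacity_mw) offers.
    Returns (clearing_price, dispatch), where dispatch lists accepted (price, mw)."""
    stack = sorted(bids)               # supply stack: cheapest offers first
    dispatch, remaining = [], demand_mw
    clearing_price = None
    for price, capacity in stack:
        if remaining <= 0:
            break
        taken = min(capacity, remaining)
        dispatch.append((price, taken))
        clearing_price = price         # most expensive accepted bid sets the price
        remaining -= taken
    if remaining > 0:
        raise ValueError("demand exceeds total offered capacity")
    return clearing_price, dispatch

# Invented supply stack: (price $/MWh, capacity MW), e.g. nuclear, gas, peaker.
bids = [(20, 300), (35, 200), (60, 150)]
price, dispatch = clear_market(bids, demand_mw=450)
# All accepted generators are paid the clearing price, not their own bid.

# A zero-operating-cost renewable bid enters at the front of the stack,
# pushing other technologies to the right and lowering the clearing price.
price_with_wind, _ = clear_market(bids + [(0, 200)], demand_mw=450)
```

With these invented numbers, the zero-cost wind offer lowers the clearing price from $35 to $20 per MWh, the stack-shifting effect sketched in Figure 2c.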

Figure 2. Schematics of electricity supply stacks (a) before low natural gas prices, (b) after natural gas prices declined, (c) after renewable deployment.

Because real electricity markets are not perfectly efficient and competitive (for a number of reasons), some regions have trouble providing enough incentives to meet the long-run investment objective. As a result, several ISO/RTOs have designed an additional “capacity market.” In capacity markets, power plants bid for the ability to generate electricity in the future (1-3 years ahead). If a generator clears this market, it receives extra compensation for being available to generate electricity in the future (regardless of whether it is called upon to do so) or faces financial penalties if it cannot. While experts continue to debate the merits of these secondary capacity markets, some ISO/RTOs argue that capacity markets provide the additional financial incentives necessary to ensure a reliable electricity system in the future.

Sound complicated? It is! Luckily, ISO/RTOs have sophisticated tools to continuously model the electricity system and orchestrate the purchasing and transmission of wholesale electricity. Two key features of electricity markets are time and location. First, market clearing prices are time dependent because of continuously changing demand and supply. During periods of high electricity demand, prices can rise because more expensive electricity generators are needed to meet demand, which increases the settlement price (Figure 2a). In extreme cases, these are referred to as price spikes. Second, market-clearing prices are regional because of electricity transmission constraints. In regions where supply is low and the transmission capacity to import electricity from elsewhere is limited, electricity prices can increase even more.

Several recent developments have complicated the economics of generating electricity in wholesale markets. First, low natural gas prices and the greater efficiency of combined-cycle power plants have resulted in low electricity bids, restructuring the supply stack and lowering market settlement prices (Figure 2b). Second, renewable power plants, which have almost-zero operating costs, submit almost-zero bids. As such, renewables fall at the beginning of the supply stack and push other technologies toward the right (higher-demand periods that are less utilized), further depressing settlement prices (Figure 2c). A recent study by the National Renewable Energy Laboratory expects these trends to continue as renewable deployment increases.

In combination, these developments have reduced revenues, challenged the operation of less competitive generation technologies such as coal and nuclear energy, and elicited calls for government intervention to protect financial investments. While the shutdown of coal plants is welcome news for climate advocates, nuclear power provided 60% of U.S. carbon-free electricity in 2016. Several states have already enacted credits or subsidies to prevent these low-emission power plants from going bankrupt. However, some experts argue that the retirement of uneconomic resources is a welcome indication that markets are working properly.

As traditional fossil-fuel power plants struggle to remain in operation, the development of new renewable energy continues to thrive, aided by both capital cost reductions and federal- and state-level policies that provide out-of-market economic benefits. To better achieve climate goals, some have argued that states need to write policies that align with wholesale market structures. Proposed mechanisms include in-market carbon pricing, such as a carbon tax or stronger cap-and-trade programs, and additional clean-energy markets. So far, however, political economy constraints have limited policies to weak cap-and-trade programs, investment and production tax credits, and renewable portfolio standards.

While renewable energy advocates support such policies, system operators and private investors argue these out-of-market policies could potentially distort wholesale electricity markets by suppressing prices and imposing regulatory risks on investors. Importantly, they argue that this leads to inefficient resource investment decisions and reduced competition that ultimately increases costs for consumers. As a result, several ISO/RTOs are attempting to reform electricity capacity market rules to satisfy these complaints but are having difficulty finding a solution that satisfies all stakeholders. How future policies will be dealt with by FERC, operators and stakeholders remains to be resolved.

As states continue to enact new renewable energy mandates, and as technologies not yet well integrated with wholesale markets, such as battery storage, continue to evolve and show promise, wholesale market structures and policies will need to adapt. In the end, the evolution of electricity market rules and policies will depend on a complex interplay between technological innovation, stakeholder engagement, regulation, and politics. Exciting!

 

Kasparas Spokas is a Ph.D. candidate in the Civil & Environmental Engineering Department and a policy-fellow in the Woodrow Wilson School of Public & International Affairs at Princeton University. Broadly, he is interested in the challenge of developing low-emissions energy systems from a techno-economic perspective. Follow him on Twitter @KSpokas.

Integrating Renewable Energy Part 1: Physical Challenges

Written by Kasparas Spokas

Meeting climate change mitigation targets will require rapidly reducing greenhouse gas emissions from electricity generation, which is responsible for a quarter of all U.S. greenhouse gas emissions. The prospect of electrifying other sectors, such as transportation, further underscores the necessity to reduce electricity emissions to meet climate goals. To address this, much attention and political capital have been spent on developing renewable energy technologies, such as wind or solar power. This is partly because recent reductions of the capital costs of these technologies and government incentives have made this strategy cost-effective. Another reason is simply that renewable energy technologies are popular. Today, news articles about falling renewable energy costs and increasing renewable mandates are not uncommon.

While capital cost reductions and popularity are key to driving widespread deployment of renewables, there remain significant challenges for integrating renewables into our electricity system. This two-part series introduces key concepts of electricity systems and identifies the challenges and opportunities of integrating renewables.

Figure 1. Schematic of the physical elements of electricity systems. Source: https://www.eia.gov/energyexplained/index.php?page=electricity_delivery

What are electricity systems? Physically, they are composed of four main interacting elements: electricity generation, transmission grids, distribution grids, and end users (Figure 1). In addition to the physical elements, regulatory and governance structures guide the operation and evolution of electricity systems (these are the focus of part two in this series). These include the Federal Energy Regulatory Commission (FERC), the North American Electric Reliability Council (NERC), and numerous state-level policies and laws. The interplay between the physical and regulatory elements has guided electricity systems to where they are today.

In North America, the electricity system is segmented into three interconnected regions (Figure 2). These regions are linked by only a few low-capacity transmission wires and often operate independently. They are further segmented into areas where independent organizations operate wholesale electricity markets and areas where regulated, vertically-integrated utilities manage all the physical elements (Figure 2). Roughly two-thirds of U.S. electricity demand is now located in wholesale electricity market areas. Lastly, some of these broad areas are further subdivided into smaller balancing authorities responsible for supplying electricity to meet demand under regulations set by FERC and NERC.

Figure 2. Left: North American Electric Reliability Corporation interconnections. Right: Wholesale market areas (colored areas) and vertically-integrated utility areas (tan area). Source: https://www.energy.gov/sites/prod/files/oeprod/DocumentsandMedia/NERC_Interconnection_1A.pdf & https://isorto.org/

Electricity systems’ main objective is to orchestrate electricity generation, transmission, and distribution to maintain an instantaneous balance between supply and continuously changing demand. To maintain this balance, coordination of electricity system operations is vital: electricity systems need to provide electricity where and when it is needed.

Historically, electricity systems have been built to suit conventional electricity generation technologies, such as coal, oil, natural gas, nuclear, and hydropower. These technologies rely on fuel that can be transported to power plants, allowing them to be sited in locations where electricity demand is present. The one exception is hydropower, which requires that plants are sited along rivers. In addition, the timing of electricity generation at these power plants can be controlled. The ability to control where and when electricity is generated simplifies the process by which an electricity system is orchestrated.

Enter solar and wind power. These technologies lack the two defining features of conventional generation technologies (the ability to control where and when electricity is generated), making the objective of instantaneously balancing supply and demand even more challenging. For starters, solar and wind technologies depend on natural resources, which limits where they can be situated. The areas best for sun and wind do not always coincide with the areas where electricity demand is highest. For example, the most productive region for onshore wind stretches along a "wind belt" through the middle of the U.S. (Figure 3). For solar, the sparsely populated Southwest offers the most attractive sunny skies (Figure 3). As of now, long-distance transmission infrastructure to transport electricity from renewable-resource-rich regions to high-demand regions is limited.

Figure 3. Maps of wind speed (left) and solar energy potential (right) in the U.S. Source: https://www.nrel.gov/

In addition, the timing of electricity generation from wind and solar cannot be controlled: solar panels only produce electricity when the sun is shining, and wind turbines only function when the wind is blowing. Therefore, scaling up renewables alone would result in instances where renewable supply does not match customer demand (Figure 4). When renewable energy production suddenly drops (due to cloud cover or a lull in wind), the electricity system must coordinate other generators to quickly make up the difference. In the inverse situation, where renewable generation suddenly increases, operators often curtail the excess electricity to avoid dealing with the variability. The difficulty of forecasting how much sun and wind will be available adds further uncertainty to the enterprise.
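The kind of mismatch shown in Figure 4 can be sketched with a toy calculation. The hourly profiles below are invented for illustration (they are not ERCOT data); the point is simply that scaled-up wind output and demand rarely line up hour by hour:

```python
# Toy hourly profiles (MW) for one day; values are illustrative only.
demand = [40, 38, 37, 36, 37, 40, 46, 52, 55, 56, 57, 58,
          59, 60, 61, 62, 63, 65, 64, 60, 55, 50, 46, 42]
wind   = [70, 68, 66, 60, 55, 48, 40, 30, 22, 18, 15, 14,
          13, 14, 16, 20, 28, 36, 45, 52, 58, 62, 66, 69]

# Hour-by-hour imbalance: positive = surplus (curtailment risk),
# negative = shortfall (other generators must ramp up).
imbalance = [w - d for w, d in zip(wind, demand)]

surplus_hours = sum(1 for x in imbalance if x > 0)
shortfall_hours = sum(1 for x in imbalance if x < 0)
worst_shortfall = min(imbalance)

print(f"Surplus hours: {surplus_hours}, shortfall hours: {shortfall_hours}")
print(f"Largest shortfall other plants must cover: {-worst_shortfall} MW")
```

Even in this simple sketch, wind over-supplies at night and under-supplies during the day, so other generators (or storage, or curtailment) must fill the gap every single hour.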

Figure 4. Electricity demand and wind generation in Texas. The wind generation is scaled up to 100% of demand to emphasize possible supply-demand mismatches. Source: http://www.ercot.com/gridinfo/generation

A well-known challenge in solar-rich regions is the "duck curve" (Figure 5), so named because the curve resembles a duck. It depicts electricity demand after subtracting solar generation at each hour of the day. In other words, the graph shows the demand that must be met by power plants other than solar, called the "net load." During the day, the sun shines and solar panels generate electricity, resulting in low net loads. However, as the sun sets and people turn on electric appliances after returning home from work, the net load increases quickly. Electricity systems often respond by calling on natural gas power plants to quickly ramp up their generation. Unfortunately, natural gas power plants that can quickly increase their output are less efficient and have higher emission rates than slower-ramping plants.
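The net load and the evening "neck" ramp that define the duck curve fall out of a single subtraction. The hourly numbers below are invented for illustration (they are not CAISO data):

```python
# Illustrative hourly demand and solar generation (GW); not CAISO data.
demand = [28, 27, 26, 26, 27, 29, 33, 36, 37, 37, 36, 36,
          35, 35, 35, 36, 38, 42, 46, 48, 47, 43, 37, 31]
solar  = [0, 0, 0, 0, 0, 1, 4, 9, 14, 18, 21, 22,
          22, 21, 18, 14, 8, 3, 0, 0, 0, 0, 0, 0]

# Net load: the demand left over for non-solar plants ("the duck").
net_load = [d - s for d, s in zip(demand, solar)]

# Largest single-hour increase in net load: the evening ramp
# that fast-starting natural gas plants are asked to cover.
ramps = [b - a for a, b in zip(net_load, net_load[1:])]
max_ramp = max(ramps)

print(f"Minimum net load (belly of the duck): {min(net_load)} GW")
print(f"Steepest hourly ramp (neck of the duck): {max_ramp} GW/hour")
```

The steep evening ramp, not the midday dip itself, is what forces operators to lean on fast but less efficient gas plants.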

 

Figure 5. The original duck-curve presented by the California Independent System Operator. Source: http://www.caiso.com/

These challenges result in economic costs. A study of California concluded that increasing renewable deployment could deliver only modest emission reductions at very high abatement costs ($300-400/ton of CO2). This is because the added variability and uncertainty of more renewables requires higher-emitting, quickly-ramping natural gas power plants to balance sudden imbalances between electricity supply and demand. In addition, more renewable power will be curtailed in order to maintain stability (Figure 6), reducing the return on investment and increasing costs.
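Abatement cost, the metric quoted above, is simply the extra system cost divided by the tonnes of CO2 avoided. A minimal sketch with invented numbers (only the $300-400/ton range comes from the study cited in the text):

```python
# Abatement cost = extra system cost / tonnes of CO2 avoided.
# The dollar and tonne figures below are invented for illustration.
extra_cost_dollars = 700_000_000   # added system cost of a policy
co2_avoided_tons = 2_000_000       # emissions reduction achieved

abatement_cost = extra_cost_dollars / co2_avoided_tons
print(f"Abatement cost: ${abatement_cost:.0f}/ton CO2")
```

A policy that spends $700 million to avoid 2 million tonnes lands at $350/ton, squarely in the range the study reports, and far above carbon prices seen in most markets.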

Figure 6. Renewable curtailment (MWh) and cumulative solar photovoltaic (PV) and wind power capacity in California from 2014 to 2018. Source: CAISO

Although solar and wind power pose these physical challenges, technological advances and electricity system design enhancements can facilitate their integration. Key strategies include: economical energy storage that can hold energy for later use; demand response technologies that help consumers reduce electricity demand during periods of high net load; and expanded long-distance transmission to carry electricity from resource-rich areas (sunny and windy regions) to demand centers (cities). Which solutions succeed will depend on the interplay of future innovation, state and federal incentives, and improvements in electricity market design and regulation. For example, regulations that facilitate long-distance transmission could significantly reduce the technical challenges of integrating renewables with current-day technologies. To ensure efficient integration of renewable energy, regulatory and energy market reform will likely be necessary. For more about this topic, check out part two of our series here!

 

Kasparas Spokas is a Ph.D. candidate in the Civil & Environmental Engineering Department and a policy-fellow in the Woodrow Wilson School of Public & International Affairs at Princeton University. Broadly, he is interested in the challenge of developing low-emissions energy systems from a techno-economic perspective. Follow him on Twitter @KSpokas.

Carbon Capture and Sequestration: A key player in the climate fight

Written by Kasparas Spokas and Ryan Edwards

The world faces an urgent need to drastically reduce climate-warming CO2 emissions. At the same time, however, reliance on the fossil fuels that produce CO2 emissions appears inevitable for the foreseeable future. One existing technology enables fossil fuel use without emissions: Carbon Capture and Sequestration (CCS). Instead of allowing CO2 emissions to freely enter the atmosphere, CCS captures emissions at the source and disposes of them at a long-term storage site. CCS is what makes “clean coal” – the only low-carbon technology promoted in President Donald Trump’s new Energy Plan – possible. The debate around the role of CCS in our energy future often includes questions such as: why do we need CCS? Can’t we simply replace fossil fuels with renewables? Where can we store CO2? Is storage safe? Is the technology affordable and available?

Source: https://saferenvironment.wordpress.com/2008/09/05/coal-fired-power-plants-and-pollution/

The global climate-energy problem

The Paris Agreement called the globe to action: limit global warming to 2°C above pre-industrial temperatures. To reach this goal, CO2 and other greenhouse gas emissions need to be reduced by at least 50% in the next 40 years and reach zero later this century (see Figure 1). This is a challenging task, especially since global emissions continue to increase, and existing operating fossil fuel wells and mines contain more than enough carbon to exceed the emissions budget set by the 2°C target.

Fossil fuels are abundant, cheap, and flexible. They currently fuel around 80% of the global energy supply and create 65% of greenhouse gas emissions. While renewable energy production from wind and solar has grown rapidly in recent years, these sources still account for less than 2.1% of global energy supply. Wind and solar also face challenges in replacing fossil fuels, such as cost and intermittency, and cannot replace all fossil fuel-dependent processes. The other major low-carbon energy sources, nuclear and hydropower, face physical, economic, and political constraints that make major expansion unlikely. Thus, we find ourselves in a dilemma: fossil fuels will likely remain integral to our energy supply for the foreseeable future.

Figure 1: Global CO2 emissions (billion tonnes of CO2 per year): historical emissions, the emission pathway implied by the current Paris Agreement pledges, and a 2°C emissions pathway (RCP2.6) (Sources: IIASA & CDIAC; MIT & UNFCCC; IIASA)

CO2 storage and its role in the energy transition

CCS captures CO2 emissions from industrial sources (e.g. electric power plants) and transports them, usually by pipeline, to long-term storage sites. The ideal places for CO2 sequestration are porous rock formations more than half a mile below the surface. (Target rocks are filled with water, but don’t worry, it’s saltwater, not freshwater!) Chosen formations are overlain, or “capped,” by impermeable caprocks that do not allow fluid to flow through them. The caprocks effectively trap buoyant CO2 in the target rocks (see Figure 2).

Figure 2: Diagram of a typical geological CO2 storage site (Source: Global CCS Institute)

Scientists estimate that suitable rock formations have the potential to store more than 1,600 billion tonnes of CO2. This amounts to 70 years of storage for current global emissions from capturable sources (which are 50% of all emissions). Large-scale CCS could serve as a “bridge,” buying time for carbon-free energy technologies to develop to the stage where they are economically and technically ready to replace fossil fuels. CCS could even help us increase the amount of intermittent renewable energy by providing a flexible and secure “back-up” with low emissions. Bioenergy combined with CCS (BECCS) can also deliver “negative emissions” that may be needed to stabilize the climate. Furthermore, industrial processes such as steel, cement, and fertilizer production have significant CO2 emissions and few options besides CCS to reduce them.
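The 70-year figure follows from simple arithmetic. The sketch below assumes annual global emissions of roughly 45 billion tonnes of CO2, a round number chosen only to be consistent with the capacity and capturable-share figures in the text:

```python
# Back-of-the-envelope check of the "70 years of storage" figure.
storage_capacity_gt = 1600   # billion tonnes of CO2 storage potential
annual_emissions_gt = 45     # assumed annual global emissions (illustrative)
capturable_fraction = 0.5    # share of emissions from capturable sources

years = storage_capacity_gt / (annual_emissions_gt * capturable_fraction)
print(f"Roughly {years:.0f} years of storage")
```

With these assumptions the storage resource covers about seven decades of capturable emissions, consistent with the estimate above.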

In short, CCS is a crucial tool for mitigating the worst effects of global warming while minimizing disruption to our existing energy infrastructure and buying time for renewables to improve. Most proposed global pathways to achieve our targets include large-scale CCS, and the United States’ recently released 2050 decarbonization strategy includes CCS as a key component.

While our summary makes CCS seem like an obvious technology to implement, important questions about safety, affordability, and availability remain.

 

Is CCS Safe?

For CCS to contribute substantially to global emissions reduction, huge amounts of emissions must be stored underground for hundreds to thousands of years. That’s a long time, which means the storage must be very secure. Some worry that CO2 might leak upward through caprock formations and infiltrate aquifers or escape to the atmosphere.

But evidence shows that CO2 can be safely and securely stored underground. For example, the Sleipner project has injected almost 1 million tonnes of CO2 per year under the North Sea for the past 20 years. (For scale, that’s roughly a quarter of the emissions from a large coal power plant.) The oil industry injects even larger amounts of CO2 – approximately 20 million tonnes per year – into various geological formations in the United States without issue in enhanced oil recovery operations to increase oil production. Indeed, the oil and gas deposits we currently exploit demonstrate how buoyant fluids (like CO2) can be securely stored in the subsurface for a very long time.

Still, there are risks and uncertainties. Trial CO2 injections operate at much lower rates than will be needed to meet our climate targets. Higher injection rates require pressure management to prevent the caprock from fracturing and, consequently, the CO2 from leaking. The CO2 injection wells and any nearby oil and gas wells also present possible leakage pathways from the subsurface to the atmosphere (although studies suggest this is likely to be negligible). Leading practices in design and maintenance can minimize well leakage risks.

Subsurface CO2 storage has risks, but experience suggests the risks can be mitigated. So, if CCS has such promise for addressing our climate-energy problem, why has it not been widely implemented?

 

The current state of CCS

CCS development has lagged, and deployment remains far from the scale required to meet our climate targets. Only a handful of projects have been built over the past decade. Why? High costs and a lack of economic incentives.

Adding CCS to coal- and gas-fired electricity generation plants has large costs (approximately doubling the upfront cost of a new plant using current technology). Greenhouse gases are free (or cheap) to emit in most of the world, which means emitters have no reason to make large investments to capture and store their emissions. To incentivize industry to invest in CCS, we would need to implement a strong carbon price, which is politically unpopular in many countries. (There are exceptions – Norway’s carbon tax incentivized the Sleipner project.) In the United States, the main existing economic incentive for capturing CO2 is for enhanced oil recovery operations. However, the demand for CO2 from these operations is relatively small, geographically localized, and fluctuates with the oil price.
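The investment logic described here reduces to a one-line comparison: an emitter will pay for capture only when emitting is more expensive than capturing. The dollar figures below are illustrative assumptions, not study results:

```python
# Sketch of the CCS investment decision: capture pays off only when
# the carbon price exceeds the per-tonne cost of capture and storage.
# All numbers are illustrative assumptions.
capture_cost_per_ton = 60.0   # $/ton to capture, transport, and store CO2
carbon_price = 40.0           # $/ton charged for emitting CO2

def invests_in_ccs(price, cost):
    """An emitter captures only when emitting costs more than capturing."""
    return price > cost

print(invests_in_ccs(carbon_price, capture_cost_per_ton))
print(invests_in_ccs(75.0, capture_cost_per_ton))
```

At a $40/ton carbon price and a $60/ton capture cost, the rational emitter keeps emitting; only once the price clears the capture cost does CCS become a business decision rather than a subsidy case.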

Inconsistent and insufficient government policies have thwarted significant development of CCS (the prime example being the UK government’s last-minute cancellation of CCS funding). Another challenge will be ownership and liability of injected CO2. Storage must be guaranteed for long timeframes. Government regulations clarifying liability, long-term responsibility for stored CO2, and monitoring and verification measures will be required to satisfy investors.

 

The future of CCS

The ambitious targets of the Paris Agreement will require huge cuts in CO2 emissions in the coming decades. They are achievable, but probably not without CCS. Thus, incentives must increase, and costs must decrease, for CCS to be employed on a large scale.

As with most new technologies, CCS costs will decrease as more projects are built. For example, the Petra Nova coal plant retrofit near Houston, a commercial CCS project for enhanced oil recovery, was recently completed on time and on budget – a promising sign for future projects. New technologies also have great potential: a pilot natural gas electricity generation technology promises to capture CO2 emissions at no additional cost. A technology that could capture CO2 from power plant emissions while also generating additional electricity is also in the works.

Despite its current troubles, CCS is an important part of solving our energy and climate problem. The recent United States election has created much uncertainty about future climate policy, but CCS is one technology that could gain support from the new administration. In July 2016, a bipartisan group of senators introduced a bill to support CCS development. If passed, this bill would satisfy Republican goals to support the future of fossil fuel industries while helping the United States achieve its climate goals. Strong and stable supporting policies must be enacted by Congress – and governments around the world – to help CCS play its key role in the climate fight.

 

Kasparas Spokas is a Ph.D. candidate in the Civil & Environmental Engineering Department at Princeton University studying carbon storage and enhanced oil recovery environments. More broadly, he is interested in studying the challenges of developing low-carbon energy systems from a techno-economic perspective. Follow him on Twitter @KSpokas

 

Ryan Edwards is a 5th year PhD candidate in Princeton’s Department of Civil & Environmental Engineering. His research focuses on questions related to geological carbon storage, hydraulic fracturing, and shale gas. He is interested in finding technical and policy solutions to the energy-climate problem. Follow him on Twitter @ryanwjedwards.

Energy Efficient Buildings: The forgotten children of the clean energy revolution

Written by Victor Charpentier

The world’s population will become increasingly urbanized. In the 2014 revision of the World Urbanization Prospects, the United Nations (UN) estimates that the urban share of the global population will rise from 54% today to 66% by 2050. Therefore, it is no surprise that cities and buildings are at the heart of the UN’s 11th Sustainable Development Goal: “Make cities and human settlements inclusive, safe, resilient and sustainable”. With an ambitious target date of 2030, the goal aims to improve the sustainability of cities and the efficient use of their resources.

The impact of buildings on the energy consumption

Energy consumption is often described in terms of primary energy – that is, untransformed raw forms of energy such as coal, wind energy, or biomass. Buildings represent an incredible 40% of total primary energy consumption in most Western countries, according to the International Energy Agency (IEA). A growing awareness of energy issues in the United States led the Department of Energy (DOE) to create building energy codes and standards for new construction and renovation projects, setting requirements for reducing energy consumption (e.g. the revised ASHRAE Standard 90.1, 2010 & 2013). The LEED certification, created in 1994 by the non-profit US Green Building Council, has proven that the American building industry has a private-sector interest in recognizing the quality of new buildings. The DOE’s building energy codes focus mainly on space heating and cooling, lighting, and ventilation, since these are the main energy consumers in buildings. Great energy savings can thus be reaped from improving the performance of new buildings and renovating existing ones in these categories. Refrigeration, cooking, electronic devices (featured in the category “others” in Figure 1), and water heating, which depend on occupants’ activity, are comparatively minor.

Figure 1. Operational energy consumption by end use in residential (left) and commercial (right) buildings. Source: DOE building energy data book.

Energy end-uses play a significant role in driving energy transitions

Despite the regulatory efforts implemented over the past decade, significant improvements remain necessary to reach the ambitious goals set by the UN. To help achieve them in the US, the DOE listed strategic energy objectives in its 2015 Quadrennial Technology Review. One of them reads: “Increasing Efficiency of Building Systems and Technologies”. The report notes that in the case of lighting technologies, for instance, 95% of the potential savings from advanced solid-state lighting remains unrealized due to lack of technology diffusion. This underlines the need for implementation incentives in addition to research and development in the field of building technologies.

In contrast with this dire need for investment in end-use innovation, a 2012 study showed that current investment in energy-related innovation is dominated by energy-supply technologies. Energy-supply technologies are those that extract, process, or transport energy resources, while end-use technologies improve energy efficiency and, where feasible, replace polluting energy sources with clean ones (e.g. electric buses in cities). The discrepancy between supply and end-use investment is stark: end-use technologies receive only about 2% of total investment in energy innovation, as shown in Figure 2 below.

The consequence is that building technologies receive less R&D investment than they should. Moreover, the study suggests that end-use investments provide greater returns than energy-supply investments. The reason for this misalignment is mainly political: public and financial institutions and policy makers tend to privilege energy-supply technologies. The study’s authors suggest this may be linked to the end-use sector’s lack of coherent influence or lobbying, in great contrast with large energy-supply companies such as oil or nuclear companies. To make longer strides in reducing the energy sector’s carbon footprint, this needs to change.

Figure 2. Investments (or mobilization of resources) for energy technologies, energy efficiency improvements in end-use technologies (green), energy resource extraction and conversion separated into fossil-fuel (brown), renewable (blue), and nuclear, network and storage (grey) technologies. Source: Nature Climate Change, 2(11), 780-788.

Building energy efficiencies: application to the design of better building skins

One way of improving energy efficiency in buildings is to focus on the design of their skins, or envelopes, which shelter the inside from the conditions outside. As interfaces between the controlled interior environment and the weather outside, building skins regulate the energy flow between these two environments. High insolation through windows, for instance, can result in large energy consumption for cooling the building. The extreme case would be a skyscraper with an all-glass façade in a moderate or warm climate. In fact, offsetting the heat coming from the sun (mainly through windows) represents on average almost 50% of the cooling load in non-residential buildings and more than 50% in residential buildings. The warming that climate change will bring to many regions of the world will make this worse.
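The cooling burden from sun through glass can be estimated with the standard solar heat gain relation Q = SHGC × A × G, where SHGC is the glazing’s solar heat gain coefficient, A its area, and G the irradiance on the façade. The values below are illustrative, not measurements of any real building:

```python
# Solar heat gain through glazing, Q = SHGC * A * G:
#   shgc             - solar heat gain coefficient (dimensionless)
#   area_m2          - window area (m^2)
#   irradiance_w_m2  - solar irradiance on the facade (W/m^2)
def solar_heat_gain_w(shgc, area_m2, irradiance_w_m2):
    return shgc * area_m2 * irradiance_w_m2

# A glazed facade bay vs. the same bay with effective shading
# (shading modeled crudely here as a lower effective SHGC).
unshaded = solar_heat_gain_w(0.6, 20.0, 500.0)
shaded = solar_heat_gain_w(0.25, 20.0, 500.0)

print(f"Unshaded gain: {unshaded:.0f} W, shaded gain: {shaded:.0f} W")
print(f"Cooling load avoided by shading: {unshaded - shaded:.0f} W")
```

Even this crude model shows why shading matters: cutting the effective SHGC by more than half removes thousands of watts of heat that the cooling system would otherwise have to reject.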

Conventional shading devices such as fixed external louvers and Venetian blinds (see examples in Figure 3) can strongly reduce cooling loads. If they are controlled correctly and adjusted regularly, a building’s annual cooling load can be decreased by as much as 20%.

 

Figure 3. Current shading systems often combine external fixed louvers (left) and interior Venetian blinds (right). Source: Unicel Architecture – Blindtex.

One can add a further level of performance by making these skins adaptive, so that they provide benefits under varying conditions (weather, urban context, occupancy) through physical changes in their geometry. Implementations of such adaptive building skins have demonstrated energy demand reductions of as much as 51% and high efficiency in moderate to hot climates. The Al Bahr twin towers in Abu Dhabi, seen in Figure 4, are a good example of a modern building skin implementation.

 

Figure 4. Dynamic façade of Al Bahr twin towers, Abu Dhabi, United Arab Emirates. GIF Source: CNN cited by https://www.papodearquiteto.com.br. Pictures’ Source: http://compositesandarchitecture.com/.

As these examples demonstrate, there is great potential in advanced shading systems and in building innovation in general, but their development is still slowed by a lack of innovative policy and of willingness to invest in energy-efficient building technologies.

Let’s get buildings on board with the energy revolution

Buildings do not get as much attention as automobiles or new technologies, but they may be equally important to our long-term future. The energy consumed for heating and cooling spaces, lighting, ventilation, and other uses represents a very large share of our total energy consumption. There are, however, solutions to this situation. Buildings improved greatly over the 20th century, but we need to take them a step further to create better, more efficient homes and offices that will meet our standards of living in a warming world. The facts call for stronger investment and political commitment. Let’s get buildings on board with the energy revolution!

 


Victor is a second-year PhD student in the Department of Civil and Environmental Engineering advised by Professor S. Adriaenssens. His research interests lie in reducing energy consumption of buildings and elastic deformation of shell structures.