What can climate adaptation learn from what’s in Grandpa’s garage? A historical tale of two flood protection megastructures

When the U.S. Army Corps of Engineers started building coastal flood protection over 60 years ago, they weren’t thinking about climate change, but a PhD student at Princeton University shows that old Army Corps projects may hold valuable insights for future climate adaptation efforts.

By D.J. Rasmussen (STEP PhD student)

Grandpa’s flooded garage

On the third day of November 2012, Joseph D’Amelio sat on his front porch looking down 14th Street. For over five decades, he and his wife had shared coffee on countless Saturday mornings in that same spot. It would normally have been unusual for D’Amelio to be out on the porch this late in the year, but this was no ordinary morning. A cacophony of construction noises rang throughout the Broad Channel neighborhood of Queens. Diesel engines idled loudly. The back-up alarms on a half dozen dump trucks rang out simultaneously. Workers were attending to two homes in the neighborhood that had collapsed into Jamaica Bay just days earlier. Joseph D’Amelio sat and thought about the families who lived in those two homes, the once-in-a-lifetime storm that caused their destruction, and something he had discovered in his garage just a few hours earlier.

Those who have never visited Broad Channel might be surprised by what they see. This corner of New York City resembles little of what comes to mind when people imagine the Big Apple. Instead of skyscrapers, taxis, and bright lights, visitors to Broad Channel are greeted by something resembling a small, quiet New England fishing community.

Broad Channel is situated in the middle of Jamaica Bay, an 18,000-acre wetland estuary filled with islands, meadowlands, and a labyrinth of waterways. Jamaica Bay is also home to several Queens neighborhoods and John F. Kennedy International Airport, one of the busiest airports on the East Coast. D’Amelio often took his fishing skiff out to troll for striped bass on the edge of the bay’s marsh banks, sometimes with his three grandchildren.

The aftermath of Hurricane Sandy in the Broad Channel neighborhood of Queens (New York City). Photo by Richard York/The Forum Newsgroup

Broad Channel is also one of New York City’s lowest-lying neighborhoods. To prevent severe flood damage, the first floor of many of the residences in Broad Channel sits atop either concrete pilings or a single-car garage. Joseph D’Amelio is no stranger to floods. Over the past half century, he had personally witnessed many of the strongest nor’easters and hurricanes, including Hurricane Gloria in 1985, the December 1992 nor’easter, and Hurricane Irene in 2011. Each time his property had survived with little damage. But all that had changed a few days earlier, when Hurricane Sandy flooded the D’Amelios’ garage.

The D’Amelios’ home sat atop a garage just big enough to squeeze in a 1950s Chevrolet. But instead of accommodating a car, the garage now served as a storage unit filled with fishing gear, dusty equipment from Joseph’s firefighting career, and boxes of old, forgotten junk.

Joseph D’Amelio had spent the past few days cleaning up the aftermath of the flood in his garage. In the process, something peculiar caught his eye: a small booklet, nearly fifty years old, its paper browned with age, giving it a distinctly vintage look. It described a proposal for a massive barrier across the Rockaway Inlet, the western entrance to Jamaica Bay. In 1960, Hurricane Donna had reminded New Yorkers how exposed the Jamaica Bay region was to flood waters produced by coastal storms.

The report D’Amelio had found was written by the U.S. Army Corps of Engineers, the primary government agency tasked with managing flood risks. The intention of the Army Corps proposal was to keep storm water out of the Bay. The plan had been distributed in the mid 1960s, just after the D’Amelios had first moved into their 14th Street home. At the time, Joseph and his wife hadn’t paid much attention to the plan; they were too busy taking care of their three young children. After years of planning and deliberation, the barrier was never built. The exact reason why remains a mystery.

Clasping his coffee close to his chest, Joseph D’Amelio sat quietly on his front porch, staring at nothing in particular, and wondered how the past few days would have been different had the Jamaica Bay flood barrier been built.

A pamphlet from the U.S. Army Corps of Engineers describing a proposal for a flood barrier across the Rockaway Inlet, the western entrance to Jamaica Bay, the same pamphlet that Joseph D’Amelio found in his garage. Photo by the author.

The status of coastal flood adaptation today: multiple plans, but little action

Just like Joseph D’Amelio, many Americans on the coasts are increasingly finding themselves cleaning up garages, basements, and living rooms that have been flooded by coastal storms. Studies using observational data have shown that coastal cities around the U.S. are experiencing an increasing frequency of high-water events. These events can lead to costly and deadly floods when water overtops natural and engineered defenses, such as sand dunes or seawalls. The increase in the frequency of high-water events is largely a result of rising mean sea levels due to global warming. The problem is only going to get worse. In our research, my colleagues and I have found that local sea levels in New York City are expected to rise by almost a foot by the middle of this century.

Over the past decade, several U.S. cities have begun devising plans for flood protection. Following Hurricane Sandy, New York City began to investigate several proposals in earnest, including a flood structure across the Rockaway Inlet, much like the one Joseph D’Amelio found in his garage. However, after roughly eight years of deliberation, many of these plans again failed to progress beyond the drawing board, including the second attempt at the Jamaica Bay Barrier.

An artist’s rendering of the most recent proposal for a storm surge barrier at the Rockaway Inlet/Jamaica Bay (2019). Source: New York District of the U.S. Army Corps of Engineers (New York, New York).

New York City’s experience highlights a disturbing trend for coastal climate adaptation efforts nationwide. Projects have continually struggled to gain political traction. Years of detailed planning are wasted, as is valuable time (project construction can take multiple decades). As the cliché goes, the can gets kicked down the road. Nothing gets done, and the risk of a major flood disaster remains.

Detectives on the case: enter two curious Princetonians

Over the course of a few meetings in my advisor’s office, he and I started to contemplate the reasons why technically feasible Army Corps projects rarely progressed beyond the drawing board. Specifically, we wondered what non-technical factors might explain the variation in whether projects got built or not. Was it simply a question of having the money at the right time and place? Or was it more complicated than that? For example, were politics and social conflict the reason? If so, what specific factors contributed? We wanted to know the answers.

It slowly became apparent that these questions had not been studied before. A multi-month literature search turned up little information on the subject. I called experts around the country and wrote letters to the Army Corps, but no one seemed to know the reasons. Freedom of Information Act requests turned up nothing, sometimes after waiting an entire year for a response. Frustrated by how little success I had, I contemplated more drastic measures, including visiting archives (cue the ominous music).

One could describe archives as repositories for all types of media (letters, memos, newspaper articles, photographs, microfilm, etc.) that provide information about the activities of an individual, a group of people, or an organization. I figured that archives might have more information about past projects, such as the failed Jamaica Bay Barrier pamphlet that Joseph D’Amelio had found in his garage.

The answers are in the archives

I began contacting archives across the country to see if they had any information on historical coastal flood protection projects. After calling and visiting roughly a half-dozen locations, I finally got some promising news. An archivist called to tell me that there was a handful of archives in Massachusetts and Rhode Island that had several boxes of documents relating to two coastal flood protection megaprojects. These two megaprojects had emerged simultaneously in Rhode Island following Hurricane Carol in 1954, which at the time was the third major hurricane to hit the region in 16 years.

One of the two projects was the Fox Point hurricane barrier, a closeable flood gate intended to protect downtown Providence, Rhode Island from major floods. It was completed in 1966 and has been deployed several times over the years. The other project was a series of storm surge barriers across the mouth of Narragansett Bay. The Bay barriers ultimately failed to progress beyond the planning stage, after more than ten years of study and deliberation. I thought that understanding the political and social reasons for these two outcomes might offer hope for understanding why some contemporary climate adaptation efforts break ground and others do not.

The completed Fox Point Hurricane Barrier in March 1966 (Providence, Rhode Island). Photo taken by the New England Division of the U.S. Army Corps of Engineers (Waltham, Massachusetts).
A map showing the Army Corps’ proposal for storm surge barriers at the entrance to Narragansett Bay, Rhode Island. Source: New England Division of the U.S. Army Corps of Engineers (Waltham, Massachusetts).

While archives may contain treasure troves of information about a subject, it is often not in a coherent format that can be easily interpreted. There are no published books or summaries. Each document is like a random puzzle piece, and not necessarily a piece of the puzzle you are trying to put together. To make matters more complicated, you may not even know which puzzle you are trying to put together! Information is chaotically scattered. One must sift through thousands of documents in order to produce a coherent story. Ultimately, you never really know what you will find in an archive. While it is time consuming, in some ways the uncertainty creates an air of suspense that makes digging through boxes all day a bit more bearable.

A photo showing a sampling of a few letters from local residents in opposition to the Narragansett Bay Barriers. Hundreds of letters were sent to elected officials and the U.S. Army Corps of Engineers, nearly all in opposition to the project. Photo by the author.

What can modern climate adaptation efforts learn from the two Rhode Island projects?

After months of collecting and analyzing materials from the archives, I was able to piece together the first ever historical account of the two Rhode Island projects, from the point of their inception all the way through their ultimate fates (completion and cancellation). Informing the historical account were hundreds of newspaper articles, memos between the Army Corps and elected officials, and letters from businesses and residents from around Rhode Island.

I learned that the Fox Point barrier progressed beyond the planning stage as a result of minimal environmental concerns and strong, sustained support from both the public and Rhode Island’s elected officials. At the time, the waterways around Providence were so polluted that there wasn’t much natural life in the water to protect. I also concluded that the Narragansett Bay barrier project failed to break ground due to strong public opposition. The public opposition was related, in part, to projected increases in channel currents that had the potential to complicate maritime navigation and cause uncertain impacts on marine life. Unlike the waterways around Providence, Narragansett Bay was a rich ecological system on which many communities depended. Further hampering the Bay barrier’s chances was an almost decade long planning period during which the public’s flooding concerns declined.

While these findings had historical significance, by themselves they were not obviously relevant to modern-day projects. Much has changed since the 1960s. Most notably, the emergence of environmental laws has made it much easier to legally challenge government projects that have the potential to impact the natural environment. After assessing a handful of coastal climate adaptation projects that have recently been considered by the Army Corps, some common political and social factors emerged. Most notably, these factors included 1) environmental protection concerns (including modern environmental laws that elevate oppositional viewpoints), 2) a lack of leadership and support from elected officials to help carry projects forward, 3) the allure of alternative options that are more aesthetically pleasing to residents and also cheaper and faster to implement (think beach nourishment and buried levees with promenades on top for biking and walking), and 4) lengthy and complex decision-making procedures that coincide with fading memories of floods.

Looking backwards to go forward

As sea levels continue to rise in coastal cities around the country, the urgency grows for protecting populations and both the built and natural environment. Since the mid-20th century, years of planning have gone into a number of proposed coastal flood protection projects around the country. While some of these projects have been built, many have failed to advance beyond the drawing board. My work has shown that archival research is a viable option for better understanding the political complexity faced by contemporary climate adaptation projects. The more we learn about this complexity, the more we can improve the efficiency with which coastal risk reduction strategies are deployed. For example, instead of investing years of planning into massive, environmentally harmful, and financially risky megaprojects, scarce planning resources could be allocated towards projects that are deemed more palatable by the public, elected officials, and organized interests.

Thanks to the archives, we now have some answers. But the question remains whether the Army Corps and governments will learn from the past and change their behavior. Hopefully Joseph D’Amelio’s grandchildren will one day live in a better prepared version of New York City, one in which Army Corps projects don’t end up as forgotten relics in flooded garages.

D.J. Rasmussen is an engineer, climate scientist, and policy scholar. He studies coastal floods, sea-level rise, and public works strategies for managing their economic and social costs. His research has informed the UN’s Intergovernmental Panel on Climate Change and has been published in Science Magazine as well as other academic journals. He is completing his PhD in the Science, Technology, and Environmental Policy (STEP) Program at the School of Public and International Affairs at Princeton University. A portfolio of his research can be viewed at https://www.djrasmussen.co. He can be reached at dmr2@princeton.edu

Offsetting your greenhouse gas emissions can impact more than just your carbon footprint

By Tim Treuer

This Giving Tuesday, I decided to offset my 2020 carbon footprint. And help protect endangered biodiversity. And help eliminate poverty. And improve air, water, and soil quality. And support gender equality. And empower historically marginalized communities. And maybe even decrease the risk of killer diseases like COVID-19 and malaria.

But I only made one donation. And its price tag was the equivalent of about a dollar a day.

How? I’m donating to an organization that will use the funds to restore tropical rainforests. I may be biased as a restoration ecologist, but in my mind there are few ways to offset your emissions that carry as many co-benefits to nature and society as regrowing rainforests. More on that below, but first I want to address the elephant in the room when it comes to carbon offsetting: most offsets don’t offset anything.

Seedling nursery

Per a 2016 European Commission report, 85% of carbon offsets fail to offset carbon. A big problem is scam organizations that simply do less than they promise. But even well-intentioned, reputable groups can fall short. The two main problems are failures to account for ‘additionality’ and ‘leakage’. Additionality means that the carbon that is pulled out of the atmosphere wouldn’t have been pulled out anyway. Some organizations offer to do things like establish tree plantations in areas that would otherwise be recovering forest–forest that would, in many cases, store more carbon than the tree plantation!

Leakage becomes an issue when a group’s actions to draw down greenhouse gases from the atmosphere lead to increased emissions elsewhere. This is a pernicious problem with many efforts, even ones that have huge positive local benefits. Protecting stands of old-growth forest or using farms to produce biofuels can be really great in theory, but if you don’t address the demand side of the equation, economics dictates that you’ll end up with compensatory logging or farming elsewhere. (Side note: one pet peeve of mine is that biofuel studies sometimes come up with rosy predictions because they simply assume we will produce less food and eat fewer calories in the future.)

If you pick carefully enough, however, tropical forest restoration projects often evade these two pitfalls. Many (if not most) involve jumpstarting the recovery of land that would not heal on its own because of challenges like invasive vegetation that chokes out seedlings, absent seed sources because of widespread forest clearing, or heavily degraded soils from overgrazing or nutrient depletion. So you can go ahead and tick that box for additionality. And so long as the restoration activities take place in protected settings like national parks or community forests, you shouldn’t see compensatory carbon emissions elsewhere. Ergo, no more leakage.
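For intuition, here is a minimal back-of-the-envelope sketch in Python, with hypothetical numbers, of how additionality and leakage shrink the carbon a project can honestly claim:

```python
# Illustrative offset accounting with hypothetical numbers (not real project data).
# "Gross" tons are what the project draws down; the baseline is what the land
# would have stored anyway (additionality); leakage is carbon emitted elsewhere
# because the project displaced logging or farming.

def net_offset(gross_tons, baseline_tons, leakage_fraction):
    """Net tons of CO2 actually offset, after additionality and leakage."""
    additional = gross_tons - baseline_tons        # only carbon beyond the no-project baseline counts
    return additional * (1.0 - leakage_fraction)   # discount for displaced emissions elsewhere

# A plantation on land that was recovering anyway, with some displaced land use:
print(net_offset(gross_tons=100, baseline_tons=80, leakage_fraction=0.25))  # 15.0

# Restoration of land that would not heal on its own, in a protected setting:
print(net_offset(gross_tons=100, baseline_tons=5, leakage_fraction=0.0))    # 95.0
```

The numbers are invented, but the asymmetry is the point: two projects claiming the same 100 gross tons can differ more than sixfold in what they actually offset.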

But carbon offsetting is only the tip of the iceberg when it comes to tropical forest restoration. Pound for pound there are more species in tropical forests than any other ecosystem on Earth. In places like Madagascar or lowland Borneo, many of those species are in serious danger of disappearing forever because of past habitat loss. We are talking millions of species hanging on by a thread. Most don’t even have scientific names yet. Their only lifeline is the resurrection of lost habitat. The biodiversity benefits of forest restoration alone can and do justify restoration projects across the tropics.

Caterpillar at reforestation site

Forest restoration through planting seedlings and controlling weeds is a super labor-intensive activity. My 22- and 23-year-old self can definitely attest to that fact after spending two seasons co-managing a reforestation project in Gunung Palung National Park on the Indonesian side of Borneo. But the required blood, sweat, and tears are a feature, not a bug. Labor means employment opportunities, and in impoverished tropical communities, that means poverty alleviation. One of the top requests from the villages where I worked was for more opportunities to be paid to reforest. And it’s equitable work too. It was common at our site to see planting teams led by women from an indigenous ethnic group – women who would be less likely to get jobs with the commercial oil palm plantations, the main local employers.

Reforestation site

There’s another reason those communities love reforestation work. They know healthy forests mean less smoke and haze from invasive grass fires, less flooding, and more consistent and cleaner water in the streams running out of the forest. The water perks span the wet and dry seasons. Forests decrease rainy-season flooding by increasing root-mediated infiltration of rainwater into the water table, which then feeds streams during drier periods.

At this point I’m starting to feel like Billy Mays: “But wait! There’s more!” I think it’s safe to say that most people would relish the opportunity to kick an anthropomorphized version of this global pandemic right in the nu… uhh, somewhere really painful. Tropical forest restoration is actually the next best thing. One of the big insights of the scientists in the emerging discipline of eco-epidemiology is that unhealthy ecosystems tend to yield greater risk of wildlife diseases crossing into human populations. There’s a whole slew of mechanisms for this–from stressed animals like bats shedding more virus to many malaria-spreading mosquitoes preferring open habitat to closed canopy forest– but the punchline is that when scientists looked at how to prevent the next pandemic, halving deforestation made the list of cost-effective preventative measures. My current research looks at how reforestation could protect against malaria in Madagascar, and past work I’ve been a part of showed that deforestation upstream of impoverished rural communities leads to more cases of diarrhea in kids and infants.

There are many organizations that need your support for their tropical forest restoration work, and many online tools for calculating your annual carbon footprint. I’m choosing to donate to the organization I worked with in Borneo. Not only do I know from firsthand experience that they are doing truly additional and leakage-free offsetting, but they also are super transparent about how they calculate and track their offsets.

There are tons of great organizations out there, though. You could even pick one in a country you plan on visiting once the pandemic is over– maybe they’d even show you the forest you helped replant. Just make sure you are asking three questions: will the trees you help plant cause forest clearing elsewhere? Would the replanted forest recover on its own anyway? And finally, how are they calculating their emissions reductions?

If you’re happy with your answers, congratulations! You’ve found a way to give that really does keep on giving.

Tim completed his PhD at Princeton in Ecology and Evolutionary Biology (*18), where he studied large-scale tropical forest restoration. He was a 2018 AAAS Mass Media Fellow and is currently a Gund Postdoctoral Fellow at the University of Vermont, where he studies whether and how reforestation can be used as a tool for combatting malaria in Madagascar. You can find him on Twitter (@treuer) and at www.timothytreuer.com.

Sustainability: That Ain’t Country?

Written by Ashford King

In the US, the fight against climate change often looks more like a fight to achieve public recognition that climate change is real. Flat-out denial of science by the dominant strain of conservative politics and the reluctance of moderates to take bold action, combined with the self-interested, well-funded, and short-sighted survival instinct of the fossil fuel industry, continue to hamper sustainable development in our country. We stagnate at home even as we attempt to export models for sustainable development to other parts of the world.

In our national culture, broadly speaking, we still uphold the rugged cowboy individual as the model for how to exist in the world. Recently, researchers at the University of Virginia pointed out the degree to which Americans’ individualism hindered our collective response to the coronavirus. Lately, science and individualism haven’t seemed able to get along.

A good cultural marker for this is country music. In the US, recent years have given us country songs like “Coal Keeps the Lights On” by Jimmy Rose (championing a phrase that has been used widely in the coal industry’s propaganda campaign) and “Coal Town” by Taylor Ray Holbrook (the music video for which was produced in partnership with the United Mine Workers of America). It is worth noting that these are rather marginal country artists, little known and both hailing from Appalachia, but their songs have taken on specific significance in the debate around the political and cultural value of coal. More widely popular country music artists, at least those that produce popular music that is marketed as “country,” eschew the specifically political in favor of a few main themes: booze, romance, and general patriotism (guns, religion, troops, sports, farming, hunting, the paterfamilias, etc.). The wildly popular band Florida Georgia Line, in their summer 2020 hit “I Love My Country,” exalts the use of styrofoam plates while rattling off a list of American stuff: “Barbecue, steak fries / styrofoam plate date night.” It seems that, regarding sustainability, American country music either takes a hard pro-fossil fuels stance or nonchalantly implies approval of the status quo. As far as the market is concerned, apathy towards climate change reigns. This is not entirely surprising, given the political climate.

What is surprising is how the analogous genre in Mexico, música regional, compares. Many of the themes heard in contemporary American country music are still present: the good (importance of family, romantic love), the bad (binge drinking, misogyny), and the more complicated (guns, dogmatic religion). Mexican country music is even starting to incorporate Latin hip-hop and pop, similar to how bands like Florida Georgia Line imitate rap lyricism in their own vocals. This all makes sense; to paint with broad strokes, it’s safe to say that Mexican society and cowboy culture developed in a manner parallel to the development of their American counterparts, and pop musical trends, such as the increasing relevance of hip-hop forms across the boundaries of genre, are increasingly global phenomena. However, Mexican country music, despite its conservatism, finds it within itself to engage with climate change.

At approximately the same time Florida Georgia Line was working on “I Love My Country,” Edén Muñoz, the lead singer of the Mexican group Calibre 50 (“50-Caliber”), was working with fellow artists Alfredo Olivas, la Arrolladora Banda el Limón (“the Irresistible Lemon Band”), Pancho Barraza and C-Kan on a song called “Corazón Verde” (“Green Heart”). The song amounts to an impassioned plea for the listener to become conscious of climate change, understand how it is detrimental to human society, and actually do something about it. Pancho Barraza sings: “Estamos cavando nuestra propia tumba / y no es por asustarlos, viene lo peor” (“We are digging our own grave / and not to scare you, but the worst is yet to come”). He goes on: “Falta de conciencia y no es coincidencia / que todos los días haga más calor” (“[There is] a lack of awareness, and it’s no coincidence / that every day it gets a little bit hotter”). Tough solutions are not proposed, just tough rhetoric about what is happening right now. The music video shows the artists planting trees (which is more symbolically important than it is effective as a long-term strategy). Still, it’s a fine start, at least rhetorically.

Perhaps most importantly, the artists express concern for future generations: “¿Para qué esperarnos? Limpiemos el mundo / y cuidemos la casa. O ya se preguntaron / a tus hijos y a los míos / ¿qué les vamos a dejar?” (“What are we waiting for? Let’s clean up the world and take care of our home. Or have you already asked yourselves what’s going to be left for your children and mine?”). In the words of the singers, recognizing and fighting climate change is an urgent civic duty. The fact that this urgency is absent from cultural representations of American patriotism is baffling. 

I mention this song not to hold up Mexico as an exemplar of environmental or cultural sustainability, or as an example of a society that always leverages science to increase the public good. Certainly, in the context of the coronavirus pandemic, Mexico has hardly stood out as successful in its response. What’s more, these artists don’t exactly have a blank check to claim the moral high ground on whatever topic they choose. The same artists who here sing about making cultural and political shifts to fight climate change also sing in a glorifying way about guns, corruption, and cheating on their wives and partners. I make this comparison between American and Mexican country music to illustrate that, outside the US, even politically conservative cultures and ideologies pass the very low bar of urgently believing in science. It is a bar that the US needs to pass soon. Coal may “keep the lights on” for now, but it will eventually burn down the house.

Ashford King is a PhD student in Spanish and Portuguese at Princeton University. He is also a musician and poet. He is originally from Kentucky.

It’s Past Time for Princeton to Divest from Fossil Fuels

Written by Ryan Warsing of Divest Princeton

If you’re reading this, you probably don’t need to be persuaded that the planet is on fire, and we need to do something to put it out fast.  We see evidence all around us:  California is again in the throes of a record wildfire season, glaciers the size of Manhattan are sliding into the sea, and in some of the most densely populated parts of the world, massive cities are being swallowed by the tide.  There is little dispute that these disasters stem from our burning of fossil fuels, and that by most any measure, we are failing to prevent the worst.

(Sources: Nik Gaffney / Flickr; Pixabay; Don Becker, USGS / Flickr; CraneStation / Flickr – Creative Commons)

Meanwhile, in balmy Princeton, New Jersey, the university’s Carbon Mitigation Initiative (CMI) and Andlinger Center for Energy and the Environment have signed splashy agreements with BP and Exxon (respectively) to fund research into renewable fuels, carbon capture and storage, and other climate innovations.  Since 2000, these companies have pumped over $30 million into CMI and the Andlinger Center, with the latter recently extending its Exxon contract for another five years.  

To put it politely, we of Divest Princeton say these partnerships do more harm than good.  True, they may create new and valuable knowledge, but that isn’t really why they exist.  In one leaked exchange from 1998, Exxon representatives strategized about the need to “identify and establish cooperative relationships with all major scientists whose research in the field supports our position,” and to “monitor and serve as an early warning system for scientific development with the potential to impact on the climate science debate, pro and con.”

Taking this statement literally — and why shouldn’t we? — BP and Exxon’s support for Princeton is more than simple altruism.  It’s more than good PR.  Rather, it’s part of a years-long effort not to aid, but to manage climate research toward ends not in conflict with their extractive business model.  Tellingly, these do-gooder oil companies plan to increase production 35% by 2030.  This would be cataclysmic.

Their schemes are made possible by funding and power gifted by Princeton.  We cannot tolerate, let alone enable, these activities any longer.  Not when they pose such obvious conflicts with our university’s core values and threaten our fellow students and faculty working around the world.  Princeton must stand up for itself.  How better than by divesting from fossil fuels?

The divestment movement has grown rapidly in recent years, with institutions like Georgetown University, Brown, Cornell, and Oxford recently joining its ranks.  Collective actions have taken a toll — Goldman Sachs says that divestment is partly to blame for widespread credit de-ratings in the coal industry, and Shell is on record saying divestment will present “a material adverse effect on the price of our securities and our ability to access equity capital markets.”  Essentially, divestment works.

We argue that the moral imperative of divestment should be compelling enough on its own; if Princeton moved to divest and the markets didn’t budge an inch, at least then our conscience would be clean.  At least then we could call ourselves “sustainable” with a straight face and live honestly by our motto: “in the nation’s service, and the service of humanity.”  

Detractors maintain that any “demands” on Princeton’s endowment would constrain its ability to earn huge returns, depriving students of the financial support they need to prosper.  This is absurd.  Billion-dollar endowments like the Rockefeller Brothers Fund have demonstrated that divestment can be a net positive.  Fossil fuel stocks have also been declining for years.  It looks increasingly clear that an investor gains little by “diversifying” into fossil fuels, and that the risks of divestment have been well overblown.  Shareholders — especially shareholders with a fiduciary responsibility like Princeton’s — should be looking for the exit.

In order to keep global warming within 1.5°C by mid-century — the threshold beyond which the IPCC and Princeton’s own Sustainability Action Plan say “catastrophic consequences” will be unavoidable — the fossil fuel industry’s ambitious exploration and development will need to be mothballed.  Undrilled oil fields and unmined coal will become stranded assets, or dead weight on their companies’ books.  To have faith in these investments, Princeton must believe those stranded assets will actually be put to use; in that case, Princeton ignores its own scientists and legitimizes the activities central to our climate crisis.

Video showing a progression of changing global surface temperature anomalies from 1951-2019. The average temperatures over 1951-1980 are the baseline shown in white. Higher than normal temperatures are shown in red and lower than normal temperatures are shown in blue. The final frame represents the 5 year global temperature anomalies from 2015-2019. Scale in degrees Celsius. (Source: Lori Perkins / NASA and NOAA)

Others have argued that regardless of donors’ ulterior motives, divesting would only leave good money and research on the table.  To these people, the “greenwashing” corporations seek from partnering with elite institutions is both inevitable and of little consequence compared to the novel scholarship their funding provides.  The catch here is that quality research and a morally invested endowment are not mutually exclusive.  There isn’t a rule saying our research must be funded by BP or Exxon — if Princeton truly valued this knowledge, it would channel its creative energies toward finding funding elsewhere.

“Elsewhere” could very easily be the university’s own wallet.  Princeton is quick to remind us it holds the biggest per-student endowment in the country.  The endowment today is a bit larger than $26 billion, roughly the size of Iceland’s GDP and larger than the GDPs of half the world’s countries.  In the last ten years alone, Princeton’s endowment has more than doubled.  In this light, the money needed to sustain current research is practically a rounding error.  If just a few Trustees put their donations together, they could recoup Exxon’s latest $5 million donation in under five seconds!

We tried to anticipate these doubts in our divestment proposal, which was given to Princeton’s administration last February.  Since then, we have met with Princeton’s Resources Committee and invited experts — former Committee Member Shannon Osaka, President of the Rockefeller Brothers Fund Stephen Heintz, and Stanford researcher Dr. Ben Franta — to help present our case.  Discussions will continue through the end of 2020, culminating in a forum with 350.org’s Bill McKibben in November.

As a reward for our persistence, the Resources Committee has indicated it might decide on our proposal by Christmas.  If it approves, the proposal goes to the Board of Trustees, and the clock starts over.  This, dear readers, is the “fast track.”

It has been demoralizing to watch Princeton, one of the world’s great centers of higher learning and a temple to empirical evidence, run interference for companies that have scorned the truth, knowingly endangered billions, and literally confessed to their ill intent.  From its byzantine system for proposing divestments to its arbitrary requirement saying divestment must take the form of complete dissociation (a prohibitively high bar), Princeton’s strategy is to frustrate and outlast causes like ours.  Most of the time, it succeeds.

But our cause is different from the others.  With climate change, waiting is simply not an option.  The immovable object will meet an unstoppable force, and the unstoppable force will win.

The longer we delay, the longer we allow fossil fuel companies to weaponize Princeton’s gravitas, spreading disinformation and quack science while purporting to be part of “the solution.”  Until Princeton inevitably divests from these bad actors, we will continue to withhold our donations, continue to protest, and continue to organize, fighting fire with fire.


Divest Princeton is a volunteer movement of Princeton students, alumni, parents, faculty, and staff.  Sign their “No Donations Until Divestment” petition and learn more here.

Inside a Solar Energy Company

Written by Molly Chaney

Finding an internship as a Ph.D. student is hard. Finding one at a company you have legitimate interest in is even harder. In search of a more refined answer to the dreaded question, “so what do you want to do after you get your Ph.D.?” I started looking for opportunities in what is very broadly and vaguely referred to as “industry.” I stepped into Dillon gym on a muggy August day in the only pair of dress pants I own and looked around. Finance, biotech, management consulting, and oil & gas companies filled the room with tables and recruiters.

After a string of conversations that turned out to be dead ends, I decided to check out one last table before leaving. A far cry from the multi-table, multi-recruiter teams with tons of free swag to give away like Exxon and Shell, Momentum Solar had a table with some flyers, business cards, and one recruiter. I didn’t wait in line or crowd around like at the others, and immediately got to talking with Peter Clark. What I remember most was his message that they were simply looking for “intellectual horsepower,” something that the CFO would repeat to a group of students who went to their South Plainfield HQ for an information session later that school year. I came away from my conversation not exactly sure what I would be doing if I worked there, but excited about joining a small, quickly growing company founded on sustainability.

At that info session some months later, I was impressed that the CFO, Sung Lee, took the time out of his schedule to speak directly with the group of prospective interns, and gave us all some background about where Momentum has been, and where it’s going:

Momentum Solar is a residential solar power installation company that was founded in New Jersey in 2009 by Cameron Christensen and Arthur Souritzidis. In 2011, they had just four employees. In 2013, six. They were ranked on the Inc. 5000 most successful companies in 2016 (with 250 employees), Inc. 500 fastest growing companies in 2017 (700 employees), and Inc. 5000 most successful again in 2018 (950 employees). They doubled their revenue from 2017-2018, and doubled again 2018-2019. Currently, Momentum has operations in seven states, from California to Connecticut, and shows no signs of slowing down. The solar industry as a whole also shows promising trends: since 2008, solar installations in the US have grown 35-fold, and since 2014, the cost of solar panels has dropped by nearly 50%.

After hearing this pitch, we toured the office, which, while full of diligent employees in front of huge screens, also boasts two ping pong tables and a dartboard. The energy in the space was palpable, and Sung’s enthusiasm was contagious: I was sold.

Fast forward a couple of months, and I was about to have my first day there. I *still* didn’t know exactly what I would be doing. On day one, my supervisor presented me with a few different projects I could choose from. While I wasn’t using the specific skills related to my research area here at Princeton, I was using crucial skills I developed along the way during my PhD research: programming and exploratory data analysis. I jumped right into their fast-paced, quick-turnaround style of work, and had check-ins with Sung nearly every day. He made a concerted effort to include me and all the other interns on calls and in meetings, even if it was just to observe. The main project I worked on was writing a program to optimize appointment scheduling and driving routes, with the goals of improving efficiency from both a time and a fossil fuel standpoint: a great example of a sustainability practice helping a company’s bottom line.

People had told me before starting my Ph.D. that, unless I was planning on taking the academic route, the most valuable things I would learn would not be in my dissertation, but in the skills developed along the way. This rang true during my first professional experience in industry. Problem solving and independence are probably the two most valuable qualities that a graduate student can bring to an internship. Somewhat unexpectedly, teaching skills proved useful as well: it wasn’t enough to prove a point through a certain statistical test; it was crucial that a room full of people with diverse backgrounds understood what a certain figure or result meant.

Momentum continues to grow, regularly setting and breaking records. To date, Momentum has installed 174 MW of residential solar energy, enough capacity to power the equivalent of more than 33,000 average American homes. I know my experience was unique: I was treated as an equal, was mentored thoughtfully and intentionally, and had regular interaction with corporate-level executives. Working there was rewarding, and Momentum’s success is a glimmer of hope during an ever-worsening climate crisis. 

Graduate and undergraduate students who are interested in internship opportunities with Momentum Solar should contact Peter Clark, Director of Talent Acquisition, at pclark@momentumsolar.com.

Sources: energy.gov

Molly Chaney is a fifth year Ph.D. candidate in Civil & Environmental Engineering. Advised by Jim Smith, her research focuses on the use of polarimetric radar to study tropical cyclones and other extreme weather events. Originally from Chicago, she is a die-hard Cubs (and deep dish pizza) fan. In her spare time she enjoys cuddling her dog, playing videogames, and indulging in good food and wine with her friends and family. If you have more questions about her experience at Momentum Solar you can contact her at mchaney@princeton.edu.

Integrating Renewable Energy Part 2: Electricity Market & Policy Challenges

Written by Kasparas Spokas

The rising popularity and falling capital costs of renewable energy make its integration into the electricity system appear inevitable. However, major challenges remain. In part one of our ‘integrating renewable energy’ series, we introduced key concepts of the physical electricity system and some of the physical challenges of integrating variable renewable energy. In this second installment, we introduce how electricity markets function and relevant policies for renewable energy development.

Modern electricity markets were first mandated by the Federal Energy Regulatory Commission (FERC) in the United States at the turn of the millennium to allow market forces to drive down the price of electricity. Until then, most electricity systems were managed by regulated vertically-integrated utilities. Today, these markets serve two-thirds of the country’s electricity demand (Figure 1), and the price of wholesale electricity in these regions is historically low due to cheap natural gas prices and subsidized renewable energy deployment.

The primary objective of electricity markets is to provide reliable electricity at least cost to consumers. This objective can be further broken down into several sub-objectives. The first is short-run efficiency: making the best of the existing electricity infrastructure. The second is long-run efficiency: ensuring that the market provides the proper incentives for investment in electricity system infrastructure to guarantee that future electricity demand can be met. Other objectives are fairness, transparency, and simplicity. This is no easy task; there is uncertainty in both the supply and demand of electricity, and many physical constraints need to be considered.

While the specific structure of electricity markets varies slightly by region, they all provide a competitive market structure where electricity generators can compete to sell their electricity. The governance of these markets can be broken down into several actors: the regulator, the board, participant committees, an independent market monitor, and a system operator. FERC is the regulator for all interstate wholesale electricity markets (all except ERCOT in Texas). In addition, reliability standards and regulations are set by the North American Electric Reliability Council (NERC), to which FERC granted authority in 2006. Lastly, markets are operated by Independent System Operators (ISOs) or Regional Transmission Organizations (RTOs) (Figure 1). In tandem, regulations set by FERC, NERC, and system operators drive the design of wholesale markets.

Wholesale energy market ISO/RTO locations (colored areas) and vertically-integrated utilities (tanned area). Source: https://isorto.org/

Before we get ahead of ourselves, let’s first learn about how electricity markets work. A basic electricity market works as follows: electricity generators (i.e., power plants) submit bids to a centralized market to generate a given amount of electricity. In a perfectly competitive market, the price of these bids is based on the costs an individual power plant incurs to generate electricity. Generally, costs are grouped by technology and organized along a “supply stack” (Figure 2). Once all bids are placed, the ISO/RTO accepts the cheapest assortment of generation bids that satisfies electricity demand while also meeting physical system and reliability constraints (Figure 2a). The price of the most expensive accepted bid becomes the market-clearing price and sets the price of electricity that all accepted generators receive as compensation (Figure 2a). In reality it is a bit more complicated: the ISO/RTOs operate day-ahead, real-time, and ancillary services markets and facilitate forward contract trading to better orchestrate the system and lower physical and financial risks.

Figure 2. Schematics of electricity supply stacks (a) before low natural gas prices, (b) after natural gas prices declined, (c) after renewable deployment.
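To make the supply-stack logic concrete, here is a minimal sketch of uniform-price market clearing in Python. The bids are hypothetical, and a real ISO/RTO would layer network, reliability, and ramping constraints on top of this:

```python
# Toy uniform-price clearing: sort bids by price (the "supply stack"), accept
# the cheapest bids until demand is met, and let the marginal (last accepted)
# bid set the price that every accepted generator receives.

def clear_market(bids, demand_mw):
    """bids: list of (price_per_mwh, quantity_mw) offers. Returns (price, dispatch)."""
    accepted, remaining = [], demand_mw
    for price, quantity in sorted(bids):          # merit order: cheapest first
        if remaining <= 0:
            break
        take = min(quantity, remaining)           # the marginal bid may be partially accepted
        accepted.append((price, take))
        remaining -= take
    if remaining > 0:
        raise ValueError("demand exceeds offered supply")
    return accepted[-1][0], accepted              # marginal bid sets the clearing price

bids = [(0, 300),    # wind/solar: near-zero operating cost
        (25, 400),   # efficient combined-cycle natural gas
        (40, 300),   # older natural gas
        (90, 200)]   # fast-ramping peaker
price, dispatch = clear_market(bids, demand_mw=800)
print(price)      # 40: the marginal unit sets the price for everyone
print(dispatch)   # [(0, 300), (25, 400), (40, 100)]
```

Rerunning with demand_mw=1100 makes the $90/MWh peaker the marginal unit, a miniature version of the price spikes discussed below.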

Because real electricity markets are not completely efficient and competitive (due to a number of reasons), some regions have challenges providing enough incentives for the long-run investment objective. As a result, several ISO/RTOs have designed an additional “capacity market.” In capacity markets, power plants bid for the ability to generate electricity in the future (1-3 years ahead). If the generator clears this market, it will receive extra compensation for the ability to generate electricity in the future (regardless of whether it is called upon to generate electricity) or will face financial penalties if it cannot. While experts continue to debate the merits of these secondary capacity markets, some ISO/RTOs argue capacity markets provide the necessary additional financial incentives to ensure a reliable electricity system in the future.
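As a rough sketch of that incentive structure, consider a toy capacity settlement in Python; the payment and penalty rates here are invented, and actual capacity-market rules vary by ISO/RTO:

```python
# Toy capacity-market settlement: a plant that clears the auction is paid for
# promising future capability, and is penalized for any shortfall when called.
# All rates are hypothetical.

def capacity_revenue(cleared_mw, payment_per_mw_year, delivered_fraction, penalty_per_mw_year):
    """Annual capacity payment net of non-performance penalties."""
    payment = cleared_mw * payment_per_mw_year
    shortfall_mw = cleared_mw * (1.0 - delivered_fraction)
    return payment - shortfall_mw * penalty_per_mw_year

print(capacity_revenue(100, 50_000, 1.0, 120_000))  # fully available: 5,000,000
print(capacity_revenue(100, 50_000, 0.9, 120_000))  # 10 MW short when called: 3,800,000
```

Because the penalty rate exceeds the payment rate, promising capacity you cannot deliver is a losing proposition, which is exactly the point.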

Sound complicated? It is! Luckily, ISO/RTOs have sophisticated tools to continuously model the electricity system and orchestrate the purchasing and transmission of wholesale electricity. Two key features of electricity markets are time and location. First, market clearing prices are time dependent because of continuously changing demand and supply. During periods of high electricity demand, prices can rise because more expensive electricity generators are needed to meet demand, which increases the settlement price (Figure 2a). In extreme cases, these are referred to as price spikes. Second, market-clearing prices are regional because of electricity transmission constraints. In regions where supply is low and the transmission capacity to import electricity from elsewhere is limited, electricity prices can increase even more.

Several recent developments have complicated the economics of generating electricity in wholesale markets. First, low natural gas prices and the greater efficiency of combined cycle power plants have resulted in low electricity bids, restructuring the supply stack and lowering market settlement prices (Figure 2b). Second, renewable power plants, which have almost-zero operating costs, submit almost-zero electricity market bids. As such, renewables fall at the beginning of the supply stack and push other technologies towards the right (higher-demand periods that are less utilized), further depressing settlement prices (Figure 2c). A recent study by the National Renewable Energy Laboratory expects these trends to continue with increasing renewable deployment.

In combination, these developments have reduced revenues and challenged the operation of less competitive generation technologies, such as coal and nuclear energy, and elicited calls for government intervention to save financial investments. While the shutdown of coal plants is welcome news for climate advocates, nuclear power provided 60% of the U.S. carbon-free electricity in 2016. Several states have already instated credits or subsidies to prevent these low-emission power plants from going bankrupt. However, some experts argue that the retirement of uneconomic resources is a welcome indication that markets are working properly.

As traditional fossil-fuel power plants struggle to remain in operation, the development of new renewable energy continues to thrive. This development has been aided by both capital cost reductions and federal- and state-level policies that provide out-of-market economic benefits. To better achieve climate goals, some have argued that states need to write policies that align with wholesale market structures. Proposed mechanisms include in-market carbon pricing, such as a carbon tax or stronger cap-and-trade programs, and additional clean-energy markets. Until now, however, political-economy constraints have limited policies to weak cap-and-trade programs, investment and production tax credits, and renewable portfolio standards.

While renewable energy advocates support such policies, system operators and private investors argue these out-of-market policies could potentially distort wholesale electricity markets by suppressing prices and imposing regulatory risks on investors. Importantly, they argue that this leads to inefficient resource investment decisions and reduced competition that ultimately increases costs for consumers. As a result, several ISO/RTOs are attempting to reform electricity capacity market rules to satisfy these complaints but are having difficulty finding a solution that satisfies all stakeholders. How future policies will be dealt with by FERC, operators and stakeholders remains to be resolved.

As states continue to enact new renewable energy mandates, and as technologies not yet well integrated with wholesale markets, such as battery storage, continue to evolve and show promise, wholesale market structures and policies will need to adapt. In the end, the evolution of electricity market rules and policies will depend on a complex interplay between technological innovation, stakeholder engagement, regulation, and politics. Exciting!

 

Kasparas Spokas is a Ph.D. candidate in the Civil & Environmental Engineering Department and a policy-fellow in the Woodrow Wilson School of Public & International Affairs at Princeton University. Broadly, he is interested in the challenge of developing low-emissions energy systems from a techno-economic perspective. Follow him on Twitter @KSpokas.

Integrating Renewable Energy Part 1: Physical Challenges

Written by Kasparas Spokas

Meeting climate change mitigation targets will require rapidly reducing greenhouse gas emissions from electricity generation, which is responsible for a quarter of all U.S. greenhouse gas emissions. The prospect of electrifying other sectors, such as transportation, further underscores the necessity to reduce electricity emissions to meet climate goals. To address this, much attention and political capital have been spent on developing renewable energy technologies, such as wind or solar power. This is partly because recent reductions of the capital costs of these technologies and government incentives have made this strategy cost-effective. Another reason is simply that renewable energy technologies are popular. Today, news articles about falling renewable energy costs and increasing renewable mandates are not uncommon.

While capital cost reductions and popularity are key to driving widespread deployment of renewables, there remain significant challenges for integrating renewables into our electricity system. This two-part series introduces key concepts of electricity systems and identifies the challenges and opportunities of integrating renewables.

Figure 1. Schematic of the physical elements of electricity systems. Source: https://www.eia.gov/energyexplained/index.php?page=electricity_delivery

What are electricity systems? Physically, they are composed of four main interacting elements: electricity generation, transmission grids, distribution grids, and end users (Figure 1). In addition to the physical elements, regulatory and governance structures guide the operation and evolution of electricity systems (these are the focus of part two in this series). These include the U.S. Federal Energy Regulatory Commission (FERC), the North American Electric Reliability Council (NERC), and numerous state-level policies and laws. The interplay between the physical and regulatory elements has guided electricity systems to where they are today.

In North America, the electricity system is segmented into three interconnected regions (Figure 2). These regions are linked by only a few low-capacity transmission wires and often operate independently. They are then further segmented into areas where independent organizations operate wholesale electricity markets and areas where regulated, vertically-integrated utilities manage all the physical elements (Figure 2). Roughly two-thirds of U.S. electricity demand is now located in wholesale electricity markets. Lastly, some of these broad areas are further subdivided into smaller balancing authorities that are responsible for supplying electricity to meet demand under regulations set by FERC and NERC.

Figure 2. Left: North American Electric Reliability Corporation Interconnections. Right: Wholesale market areas (colored area) and vertically-integrated utilities areas (tanned area). Source: https://www.energy.gov/sites/prod/files/oeprod/DocumentsandMedia/NERC_Interconnection_1A.pdf & https://isorto.org/

An electricity system’s main objective is to orchestrate electricity generation, transmission, and distribution so that supply instantaneously balances continuously changing demand. To maintain this balance, the coordination of electricity system operations is vital: electricity must be provided where and when it is needed.

Historically, electricity systems have been built to suit conventional electricity generation technologies, such as coal, oil, natural gas, nuclear, and hydropower. These technologies rely on fuel that can be transported to power plants, allowing them to be sited in locations where electricity demand is present. The one exception is hydropower, which requires that plants be sited along rivers. In addition, the timing of electricity generation at these power plants can be controlled. The ability to control where and when electricity is generated simplifies the process by which an electricity system is orchestrated.

Enter solar and wind power. These technologies lack both features of conventional electricity generation, the ability to control where and when to generate electricity, and they make the objective of instantaneously balancing supply and demand even more challenging. For starters, solar and wind technologies are dependent on natural resources, which can limit where they are situated. The areas that are best for sun and wind do not always coincide with where electricity demand is highest. As an example, the most productive region for on-shore wind stretches along a “wind belt” through the middle of the U.S. (Figure 3). For solar, the sparsely populated southwest region presents the most attractive sunny skies (Figure 3). As of now, long-distance transmission infrastructure to transport electricity from renewable resource-rich regions to high electricity demand regions is limited.

Figure 3. Maps of wind speed (left) and solar energy potential (right) in the U.S. Source: https://www.nrel.gov/

In addition, the timing of electricity generation from wind and solar cannot be controlled: solar panels produce electricity only when the sun is shining, and wind turbines only when the wind is blowing. Scaling up renewables alone would therefore create periods when renewable supply does not match customer demand (Figure 4). When renewable production suddenly drops (due to cloud cover or a lull in wind), the electricity system must coordinate other generators to quickly make up the difference. In the inverse situation, when renewable generation suddenly surges, system operators often curtail the excess rather than absorb the variability. The difficulty of forecasting how much sun and wind there will be adds further uncertainty to the enterprise.

Figure 4. Electricity demand and wind generation in Texas. The wind generation is scaled up to 100% of demand to emphasize possible supply-demand mismatches. Source: http://www.ercot.com/gridinfo/generation

A well-known challenge in solar-rich regions is the “duck-curve” (Figure 5), so called because the curve resembles a duck. It depicts the electricity demand left over after subtracting solar generation at each hour of the day. In other words, the graph shows the demand that must be met with power plants other than solar, called “net-load.” During the day, the sun shines and solar panels generate electricity, resulting in low net-loads. But as the sun sets and people return home from work and switch on appliances, the net-load rises quickly. Electricity systems often respond by calling on natural gas power plants to rapidly ramp up their generation. Unfortunately, natural gas plants that can ramp quickly are less efficient and have higher emission rates than slower-ramping plants.
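For readers who like to see the mechanics, here is a minimal sketch in Python of how a net-load (duck) curve is computed. The hourly demand and solar profiles are made-up illustrative numbers, not CAISO or ERCOT data.

```python
# Net-load is simply demand minus solar generation at each hour.
# Both profiles below are hypothetical, for illustration only.
demand = [22, 21, 20, 20, 21, 23, 26, 28, 28, 27, 26, 26,
          26, 26, 26, 27, 28, 31, 33, 34, 33, 30, 26, 23]   # GW, hours 0-23
solar  = [0, 0, 0, 0, 0, 0, 1, 3, 6, 9, 11, 12,
          12, 11, 9, 6, 3, 1, 0, 0, 0, 0, 0, 0]             # GW, hours 0-23

net_load = [d - s for d, s in zip(demand, solar)]

# The steepest hourly rise in net-load is the evening "ramp" that
# other power plants must climb as the sun sets.
max_ramp = max(b - a for a, b in zip(net_load, net_load[1:]))
print(f"Minimum net-load: {min(net_load)} GW")
print(f"Steepest hourly ramp: {max_ramp} GW/h")
```

The two printed numbers correspond to the duck’s midday “belly” and the steep evening “neck” described above.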


Figure 5. The original duck-curve presented by the California Independent System Operator. Source: http://www.caiso.com/

These challenges carry economic costs. One study of California concluded that increasing renewable deployment could deliver only modest emission reductions at very high abatement costs ($300–400 per ton of CO2). This is because the added variability and uncertainty of more renewables would require higher-emitting, fast-ramping natural gas power plants to balance sudden supply-demand imbalances. In addition, more renewable power would be curtailed to maintain stability (Figure 6), reducing the return on investment and raising costs.
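To make the units concrete, an abatement cost is just the extra system cost divided by the emissions actually avoided. The sketch below shows that arithmetic with hypothetical inputs chosen only to land in the study’s quoted range; they are not figures from the study itself.

```python
# Abatement cost = extra cost of the measure / CO2 emissions avoided.
# Both inputs are hypothetical, for illustration only.
extra_cost = 3.5e9       # $ per year of added balancing and curtailment costs
co2_avoided = 1.0e7      # tonnes of CO2 displaced per year

abatement_cost = extra_cost / co2_avoided
print(f"abatement cost: ${abatement_cost:.0f} per tonne of CO2")  # $350/t
```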

Figure 6. Renewable curtailment (MWh) and cumulative solar photovoltaic (PV) and wind power capacity in California from 2014 to 2018. Source: CAISO

Although solar and wind power pose these physical challenges, technological advances and electricity system design enhancements can ease their integration. Key strategies include: economical energy storage that can hold energy for later use (a toy sketch follows below); demand response technologies that help consumers reduce electricity demand during periods of high net-load; and expanded long-distance transmission to transport electricity from resource-rich areas (sun and wind) to demand centers (cities). Which solutions succeed will depend on the interplay of future innovation, state and federal incentives, and improvements in electricity market design and regulation. For example, regulations that facilitate long-distance transmission could significantly reduce the technical challenges of integrating renewables with current-day technologies. To ensure efficient integration of renewable energy, regulatory and energy market reform will likely be necessary. For more about this topic, check out part two of our series here!
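As a closing illustration of the storage strategy, here is a minimal Python sketch of a battery that charges when net-load is below its daily average and discharges when it is above, flattening the duck curve. The capacity and power limits are hypothetical, and real dispatch is driven by markets rather than this simple rule.

```python
def dispatch_storage(net_load, energy_cap=20.0, power_cap=4.0):
    """Smooth an hourly net-load profile (GW) with a simple battery rule:
    charge below the daily mean, discharge above it, within the battery's
    energy (GWh) and power (GW) limits. All limits are hypothetical."""
    target = sum(net_load) / len(net_load)
    stored = energy_cap / 2                    # start the day half full
    smoothed = []
    for load in net_load:
        if load > target:                      # discharge toward the target
            out = min(load - target, power_cap, stored)
            stored -= out
            smoothed.append(load - out)
        else:                                  # charge toward the target
            inn = min(target - load, power_cap, energy_cap - stored)
            stored += inn
            smoothed.append(load + inn)
    return smoothed

# e.g., applied to the net_load list from the earlier duck-curve sketch:
# smoothed = dispatch_storage(net_load)
```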


Kasparas Spokas is a Ph.D. candidate in the Civil & Environmental Engineering Department and a policy-fellow in the Woodrow Wilson School of Public & International Affairs at Princeton University. Broadly, he is interested in the challenge of developing low-emissions energy systems from a techno-economic perspective. Follow him on Twitter @KSpokas.

Evaluating the geoengineering treatment

Written by Xin Rong Chua

Might there be a remedy for the worldwide temperature and rainfall changes caused by humanity’s emissions? If so, what would the cure cost? We watch as Mr. Human grapples with these questions with the help of Dr. Planet.

Dr. Planet was about to put an end to a long, hard day of work when the distress call came in.

“Dr. Planet! Dr. Planet! Our planet Earth needs your help!”

Dr. Planet quickly boarded his medical spaceship and sped towards the solar system. As the ship passed through Earth’s atmosphere, his instruments began to gather the planet’s climate records. The temperature indicator began to blink red. Then the indicator for circulation changes in its atmosphere and oceans. Then the sea ice indicator.

The moment Mr. Human boarded his spaceship, Dr. Planet knew why the planet was ill.

Mr. Human was holding a long, black cigar labelled ‘Fossil Fuels’. It was still smoking at the tip. In front of him, the reading on his carbon dioxide indicator soared.

“I advise you to cut down on your emissions,” said Dr. Planet. “Otherwise, your planet will experience sea level rise, ocean acidification, and stronger storms.”

“We know that,” said Mr. Human. He sounded as if he had not slept for days. “We’ve known about it for decades. I was so excited after the Paris meeting, when the world first agreed on concrete pledges to cut emissions. Then we did our sums and realized that even if every country fulfilled its promised reductions, global mean temperatures were still set to increase by more than 2 degrees Celsius come 2100. And then the United States announced that it would pull out of the agreement, which was…

Mr. Human’s gaze fell as he trailed off. He then straightened and looked Dr. Planet in the eye. “Dr. Planet, you are a renowned planetary climate surgeon. Do you have a geoengineering treatment that might be able to cure our Earth?”

Mr. Human took out a few geoengineering brochures and laid them on Dr. Planet’s desk. They had been produced by the hospital’s marketing department.

Dr. Planet resolved to have a chat with the marketing department about a more moderate portrayal. He was getting tired of patients either believing that geoengineering was a panacea or cursing him for attempting to play God. In fact, the carbon dioxide removal and solar geoengineering tools he possessed only allowed for a limited range of outcomes. More importantly, all of the choices involved tradeoffs and risks. However, experience had taught him that it was best to begin by explaining the science.

Schematic depiction of climate engineering methods (Source: Climate Central)

Carbon dioxide removal

Dr. Planet picked up the first brochure. It was about Canadian entrepreneur Russ George, who in 2012 dumped a hundred tons of iron into the ocean to trigger a massive plankton bloom. There were record hauls of salmon right after the fertilization. George also pointed out that the plankton removed carbon dioxide from the air as they grew.

“It’s easy to remove carbon dioxide from the atmosphere,” began Dr. Planet. “The problem is keeping the carbon dioxide out. If the fish are harvested and eaten, the carbon makes its way back into the air. Also, when the plankton respire, or are eaten by organisms higher up the food chain, most of that carbon is released once again. In addition, the sudden phytoplankton growth triggered by fertilization robs other organisms of the iron or phosphorus they might have used. If you are looking for a long-term solution, don’t get tricked into looking only at the initial gains.”

“Besides, iron fertilization can’t be the only solution. In the most optimistic scenarios, the bulk of the carbon uptake would be used to form the shells of marine organisms such as diatoms. Since the shells would eventually fall to the bottom of the ocean, there would be a net removal of carbon from the surface. But based on the availability of iron-deficient waters around your planet, I estimate that iron fertilization can sequester at most 10% of human annual emissions.”

“Our clinic also has some options to store carbon underground by pumping it into porous rock,” said Dr. Planet, taking a brochure from a nearby shelf and handing it over. “However, the technology is still experimental and expensive.”

Mr. Human brightened as he saw that this technology could store about 1,600 billion tonnes of carbon dioxide. If humanity continued emitting at 2014 levels, this would lock up about 45 years of carbon dioxide emissions. When he came to the section on costs, his jaw dropped. “Double the cost of our existing power plants?” He took out his bulging wallet and removed a stack of bills. Dr. Planet wondered if Mr. Human considered this so cheap that he was willing to pay upfront.

Mr. Human waved the bills. “Look at all the IOUs! There is no way we can afford that cost. I’ll bet the aerosol plan is cheaper than that.”

Solar radiation management

Mr. Human pointed to a printout explaining how particles called aerosols could be placed high in the atmosphere. Choosing aerosols that reflected solar radiation would help cool the Earth’s surface.

Dr. Planet understood why Mr. Human liked the aerosol plan. It made sense to place the aerosols high above the surface: that way, many months would pass before they settled down to the clouds, where rain could flush the particles from the air. Furthermore, after the eruption of Mount Pinatubo in 1991, global-mean temperatures in the Northern Hemisphere fell by half a degree Celsius. With such a natural analog in mind, it was no wonder that Mr. Human thought he knew what to expect. He was even correct about the costs. Starting in 2040, dedicating 6,700 flights a day to sulfate injection would keep global-mean warming to 2 degrees Celsius. This would involve a mass of sulfates roughly similar to that of the Pinatubo eruption and would cost about US$20 billion per year.

Volcanic ash after the eruption of Mount Pinatubo in 1991 (Source: USGS)

“It would be cheaper,” agreed Dr. Planet. “But tell me, is global mean surface temperature all you care about?”

“Of course not,” said Mr. Human. “Rainfall is important too. Also, I want to make sure we keep the West Antarctic Ice Sheet, and reduce…”

“Then I should let you know that using aerosols means making a choice between overcorrecting for temperature or precipitation,” said Dr. Planet. He used the same serious tone a human doctor might use to explain that chemotherapy might remove the tumor, but would also cause you to vomit and lose all your hair.

Mr. Human folded his arms. He looked most unconvinced.

As Dr. Planet cast about for a good explanation, his eyes fell on Mr. Human’s wallet. It was still on the table and still full of the IOUs. He picked up a stack of name cards from his table.

“What if I asked you to place all of the cards into your wallet?”

Mr. Human frowned at the thick wad of paper. “I would have to remove some of my old receipts, or the wallet wouldn’t close.”

“Think of the Earth’s surface as the full wallet,” Dr. Planet said. “If we put in energy from increasing sunlight, your Earth has to throw out some energy. Because we’re trying to keep the temperature unchanged, the surface can’t radiate more longwave radiation by warming. It therefore has to transport heat, which mostly happens through evaporation. In the atmosphere, what comes up must come back down eventually, so increasing evaporation increases rainfall.”

“So, increasing radiation towards the surface increases rainfall,” said Mr. Human. “Don’t sunlight and carbon dioxide both do that?”

“They do,” said Dr. Planet. “But the atmosphere is mostly transparent to solar radiation and mostly opaque to longwave radiation from carbon dioxide. Energy entering via solar radiation thus has a stronger impact on the surface and on rainfall. Hence, correcting the temperature change from carbon dioxide with stratospheric aerosols is expected to lead to an overcorrection in precipitation.”

Mr. Human was silent for a while, before he perked up. “Well, a slight change in the weather we’re used to isn’t that bad, especially if it avoids a worse outcome. Besides, you’ve only talked about the global-mean. With some fine-tuning, I’m sure we could come up with an aerosol distribution that delivers a good balance.”

“We have produced hypothetical simulations that investigate a range of outcomes. As a case in point, tests on a virtual Earth show that we can control the global-mean surface temperature, as well as the temperature differences between the Northern and Southern Hemispheres and from equator to pole. This was achieved by injecting sulfate aerosols at four different locations in a computer simulation.”

“However, given the lack of rigorous clinical trials on planets like your Earth, I must warn you that it will remain a highly uncertain procedure,” said Dr. Planet. “For one, we will encounter diminishing marginal returns as we increase the sulfate load to achieve more cooling. Added sulfate could condense onto existing particles, forming bigger particles that reflect sunlight less efficiently, rather than creating new ones.”

“The treatment itself – sustaining thousands of aerosol-injection flights – will require the commitment and coordination of all the peoples of your planet. A disruption due to conflict could be catastrophic. If the aerosol concentrations are not maintained, the decades’ worth of greenhouse-gas warming they are holding back would manifest within a couple of years. The change would be so sudden that there would be little time for you to adapt.”

Mr. Human paled. Countries might well balk at paying the geoengineering bill. After all, that was money that could go to feeding the poor or to reducing a budget deficit. A rogue country might threaten to disrupt the injections unless sanctions were lifted. Or a country that might benefit from warming could sabotage the flights…

“I think you already know what I’m about to say,” said Dr. Planet as Mr. Human buried his face in his hands. “There’s no magic pill here. There never has been. I can help perform some stopgap surgery by removing carbon dioxide or provide some symptomatic relief through solar radiation management. Ultimately, though, your species has to stop lighting up in the way it has.”

Mr. Human sighed; he would have to deliver the sobering news that geoengineering was riskier and more complicated than he and his colleagues had expected. As he rose from his chair, he realized that he was still holding his smoking cigar. The numbers on Dr. Planet’s carbon dioxide detector were still rising. He watched the readout as it climbed past 400 ppm, then 410 ppm. With a regretful sigh, he ground the lit end of his cigar into an ashtray and stepped out to continue the long journey ahead.

Acknowledgments: This article was inspired by a group discussion with Dr. Simone Tilmes at the 2017 Princeton Atmospheric and Oceanic Sciences Workshop on Climate Engineering. Katja Luxem and Ben Zhang read an early draft and helped improve the clarity of the article.

Xin is a PhD candidate in Princeton’s Program in Atmospheric and Oceanic Sciences, a collaboration between the Department of Geosciences and the NOAA Geophysical Fluid Dynamics Laboratory. She combines high-resolution models and theory to better understand the changes in tropical rainfall extremes as the atmosphere warms. She is also interested in innovative approaches to science communication.


Carbon Capture and Sequestration: A key player in the climate fight

Written by Kasparas Spokas and Ryan Edwards

The world faces an urgent need to drastically reduce climate-warming CO2 emissions. At the same time, however, reliance on the fossil fuels that produce CO2 emissions appears inevitable for the foreseeable future. One existing technology enables fossil fuel use without emissions: Carbon Capture and Sequestration (CCS). Instead of allowing CO2 emissions to freely enter the atmosphere, CCS captures emissions at the source and disposes of them at a long-term storage site. CCS is what makes “clean coal” – the only low-carbon technology promoted in President Donald Trump’s new Energy Plan – possible. The debate around the role of CCS in our energy future often includes questions such as: why do we need CCS? Can’t we simply replace fossil fuels with renewables? Where can we store CO2? Is storage safe? Is the technology affordable and available?

Source: https://saferenvironment.wordpress.com/2008/09/05/coal-fired-power-plants-and-pollution/

The global climate-energy problem

The Paris Agreement called the globe to action: limit global warming to 2°C above pre-industrial temperatures. To reach this goal, CO2 and other greenhouse gas emissions must be cut by at least 50% in the next 40 years and reach zero later this century (see Figure 1). This is a challenging task, especially since global emissions continue to increase and existing operating fossil fuel wells and mines already contain more than enough carbon to exceed the emissions budget set by the 2°C target.

Fossil fuels are abundant, cheap, and flexible. They currently fuel around 80% of the global energy supply and create 65% of greenhouse gas emissions. While renewable energy production from wind and solar has grown rapidly in recent years, these sources still account for less than 2.1% of global energy supply. Wind and solar also face challenges in replacing fossil fuels, such as cost and intermittency, and cannot replace all fossil fuel-dependent processes. The other major low-carbon energy sources, nuclear and hydropower, face physical, economic, and political constraints that make major expansion unlikely. Thus, we find ourselves in a dilemma: fossil fuels will likely remain integral to our energy supply for the foreseeable future.

Figure 1: Global CO2 emissions (billion tonnes of CO2 per year): historical emissions, the emission pathway implied by the current Paris Agreement pledges, and a 2°C emissions pathway (RCP2.6) (Sources: IIASA & CDIAC; MIT & UNFCCC; IIASA)

CO2 storage and its role in the energy transition

CCS captures CO2 emissions from industrial sources (e.g. electric power plants) and transports them, usually by pipeline, to long-term storage sites. The ideal places for CO2 sequestration are porous rock formations more than half a mile below the surface. (Target rocks are filled with water, but don’t worry, it’s saltwater, not freshwater!) Chosen formations are overlain, or “capped,” by impermeable caprocks that do not allow fluid to flow through them. The caprocks effectively trap buoyant CO2 in the target rocks (see Figure 2).

Figure 2: Diagram of a typical geological CO2 storage site (Source: Global CCS Institute)

Scientists estimate that suitable rock formations have the potential to store more than 1,600 billion tonnes of CO2. This amounts to 70 years of storage for current global emissions from capturable sources (which are 50% of all emissions). Large-scale CCS could serve as a “bridge,” buying time for carbon-free energy technologies to develop to the stage where they are economically and technically ready to replace fossil fuels. CCS could even help us increase the amount of intermittent renewable energy by providing a flexible and secure “back-up” with low emissions. Bioenergy combined with CCS (BECCS) can also deliver “negative emissions” that may be needed to stabilize the climate. Furthermore, industrial processes such as steel, cement, and fertilizer production have significant CO2 emissions and few options besides CCS to reduce them.
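For readers who want to check the arithmetic behind that 70-year figure, here is the calculation; the annual-emissions number is an assumption back-calculated to be consistent with the article’s figures, not an independent estimate.

```python
# Years of storage = capacity / (annual emissions x capturable share).
storage_capacity = 1.6e12   # tonnes CO2, the >1,600 billion tonnes quoted above
annual_emissions = 46e9     # tonnes CO2/yr, assumed global total (implied by the text)
capturable_share = 0.5      # half of all emissions come from capturable sources

years = storage_capacity / (annual_emissions * capturable_share)
print(f"~{years:.0f} years of storage")   # ~70 years
```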

In short, CCS is a crucial tool for mitigating the worst effects of global warming while minimizing disruption to our existing energy infrastructure and buying time for renewables to improve. Most proposed global pathways to achieve our targets include large-scale CCS, and the United States’ recently released 2050 decarbonization strategy includes CCS as a key component.

While our summary makes CCS seem like an obvious technology to implement, important questions about safety, affordability, and availability remain.


Is CCS Safe?

For CCS to contribute substantially to global emissions reduction, huge amounts of emissions must be stored underground for hundreds to thousands of years. That’s a long time, which means the storage must be very secure. Some worry that CO2 might leak upward through caprock formations and infiltrate aquifers or escape to the atmosphere.

But evidence shows that CO2 can be safely and securely stored underground. For example, the Sleipner project has injected almost 1 million tonnes of CO2 per year under the North Sea for the past 20 years. (For scale, that’s roughly a quarter of the emissions from a large coal power plant.) The oil industry injects even larger amounts of CO2 – approximately 20 million tonnes per year – into various geological formations in the United States for enhanced oil recovery operations, without issue. Indeed, the oil and gas deposits we currently exploit demonstrate how buoyant fluids (like CO2) can be securely stored in the subsurface for a very long time.

Still, there are risks and uncertainties. Trial CO2 injections operate at much lower rates than will be needed to meet our climate targets. Higher injection rates require pressure management to prevent the caprock from fracturing and, consequently, the CO2 from leaking. The CO2 injection wells and any nearby oil and gas wells also present possible leakage pathways from the subsurface to the atmosphere (although studies suggest this is likely to be negligible). Leading practices in design and maintenance can minimize well leakage risks.

Subsurface CO2 storage has risks, but experience suggests the risks can be mitigated. So, if CCS has such promise for addressing our climate-energy problem, why has it not been widely implemented?


The current state of CCS

CCS development has lagged, and deployment remains far from the scale required to meet our climate targets. Only a handful of projects have been built over the past decade. Why? High costs and a lack of economic incentives.

Adding CCS to coal- and gas-fired electricity generation plants is expensive (approximately doubling the upfront cost of a new plant with current technology). Greenhouse gases are free (or cheap) to emit in most of the world, so emitters have no reason to make large investments to capture and store their emissions. Incentivizing industry to invest in CCS would require a strong carbon price, which is politically unpopular in many countries. (There are exceptions – Norway’s carbon tax incentivized the Sleipner project.) In the United States, the main existing economic incentive for capturing CO2 is enhanced oil recovery, but the demand for CO2 from these operations is relatively small, geographically localized, and fluctuates with the oil price.
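The incentive problem is easy to state as arithmetic: a rational emitter captures CO2 only when the carbon price exceeds the cost of capture and storage. The sketch below uses a hypothetical $60-per-tonne capture cost purely for illustration; it is not a figure from this article.

```python
# An emitter compares the carbon price with the cost of capture:
# it captures only when emitting is the more expensive option.
capture_cost = 60.0                          # $ per tonne CO2 (hypothetical)

for carbon_price in (0, 25, 50, 75, 100):    # $ per tonne CO2 emitted
    choice = "capture and store" if carbon_price > capture_cost else "emit and pay"
    print(f"carbon price ${carbon_price:>3}/t -> cheaper to {choice}")
```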

Inconsistent and insufficient government policies have also thwarted development of CCS (the prime example being the UK government’s last-minute cancellation of CCS funding). Another challenge is ownership of, and liability for, injected CO2, since storage must be guaranteed over long timeframes. Government regulations clarifying liability, long-term responsibility for stored CO2, and monitoring and verification measures will be required to satisfy investors.


The future of CCS

The ambitious target of the Paris Agreement will require huge cuts in CO2 emissions in the coming decades. The targets are achievable, but probably not without CCS. Thus, incentives must increase, and costs must decrease, for CCS to be employed on a large scale.

As with most new technologies, CCS costs will decrease as more projects are built. For example, the Petra Nova coal plant retrofit near Houston, a commercial CCS project for enhanced oil recovery that was recently completed on time and on budget, is promising for future success. New technologies also have great potential: a pilot natural gas electricity generation technology promises to capture CO2 emissions at no additional cost. A technology that could capture CO2 from power plant emissions while also generating additional electricity is also in the works.

Despite its current troubles, CCS is an important part of solving our energy and climate problem. The recent United States election has created much uncertainty about future climate policy, but CCS is one technology that could gain support from the new administration. In July 2016, a bipartisan group of senators introduced a bill to support CCS development. If passed, this bill would satisfy Republican goals to support the future of fossil fuel industries while helping the United States achieve its climate goals. Strong and stable supporting policies must be enacted by Congress – and governments around the world – to help CCS play its key role in the climate fight.


Kasparas Spokas is a Ph.D. candidate in the Civil & Environmental Engineering Department at Princeton University studying carbon storage and enhanced oil recovery environments. More broadly, he is interested in studying the challenges of developing low-carbon energy systems from a techno-economic perspective. Follow him on Twitter @KSpokas


Ryan Edwards is a 5th year PhD candidate in Princeton’s Department of Civil & Environmental Engineering. His research focuses on questions related to geological carbon storage, hydraulic fracturing, and shale gas. He is interested in finding technical and policy solutions to the energy-climate problem. Follow him on Twitter @ryanwjedwards.

How Do Scientists Know Human Activities Impact Climate? A brief look into the assessment process

Written by Levi Golston

On the subject of climate change, one of the most widely cited numbers is that humans have increased the net radiation balance of the Earth’s lower atmosphere by approximately 2.3 W m-2 (watts per square meter) since pre-industrial times, as determined by the Intergovernmental Panel on Climate Change (IPCC) in its most recent Fifth Assessment Report (AR5). This change is termed radiative forcing and represents a basic physical driver of the higher average surface temperatures resulting from human activities. In short, it elegantly captures the intensity of climate change in a single number: the higher the radiative forcing, the larger the human influence on climate and the higher the rate of increase of surface temperatures. Radiative forcing is also significant because it forms the basis of equivalence metrics used in international environmental treaties, defines the endpoint of the future scenarios commonly used for climate change simulations, and is physically simple enough that it should be possible to calculate without relying on global climate models.

Given its widespread use, it is important to understand where estimates of radiative forcing come from. Answering this question is not straightforward because AR5 is a lengthy report published in three separate volumes. Chapter 8 of Volume 1, more than any other, quantitatively describes why climate change is occurring due to natural and anthropogenic causes and is, therefore, the primary source for how radiative forcing is assessed by the IPCC. One of the key figures is reproduced below, illustrating that the basic drivers of climate change are human-driven changes to aerosols (particles suspended in the air) and greenhouse gases, along with their relative strengths and uncertainties:

Fig. 1: Assessments of aerosol, greenhouse gas, and total anthropogenic forcing evaluated between 1750 and 2011. Lines at the top show the 5-95% confidence range, with a slight change in definition from AR4 to AR5 [Source: Figure 8.16 in IPCC AR5].
This post seeks to answer two questions: how is the 2.3 W m-2 best estimate determined in AR5? And why is total anthropogenic forcing not known more precisely than shown in Figure 1, given the numerous observations currently available?

1. Variations on the meaning of radiative forcing

Fundamental laws of physics say that if the Earth is in equilibrium, its average temperature is set by the balance between the energy the Earth receives and the energy it radiates. When this balance is disturbed, the climate responds to the additional energy in the system and continues to change until the forcing has fully propagated through the climate system, at which point a new equilibrium (average temperature) is reached. This response is controlled by processes with a range of timescales (e.g. the surface ocean over several years and glaciers over many hundreds of years), so radiative forcing depends on when exactly it is calculated. This leads to several subtly differing definitions. While the IPCC distinguishes between radiative forcing and effective radiative forcing, I do not attempt to separate the two here and refer to both as radiative forcing.
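The forcing-response idea above can be made concrete with a zero-dimensional energy balance sketch. The sensitivity parameter and response timescale below are illustrative assumptions, not IPCC values; only the 2.3 W m-2 forcing is taken from the report.

```python
import math

forcing = 2.3        # W m-2, AR5 best-estimate total anthropogenic forcing
sensitivity = 0.8    # K per (W m-2), assumed climate sensitivity parameter
tau = 30.0           # years, assumed single response timescale

def warming(t_years):
    """Surface warming at time t after a constant forcing is switched on."""
    return sensitivity * forcing * (1.0 - math.exp(-t_years / tau))

for t in (10, 50, 200):
    print(f"after {t:3d} years: {warming(t):.2f} K")
# The response approaches its equilibrium value, sensitivity * forcing
# (about 1.8 K with these numbers), only after several multiples of tau.
```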

Figure 2 shows the general framework used by the IPCC for assessing human-driven change, which is divided into four major components. First, the direct impact of human activities through the release (emission) of greenhouse gases and particulates into the atmosphere is estimated, along with changes to the land surface through construction and agriculture. These changes cause the accumulation of long-lived gases in the atmosphere, including carbon dioxide, the indirect formation of gases through chemical reactions, and an increase in the number of aerosols in the atmosphere (abundance). Each of these agents influences the radiation balance of the Earth (forcing) and over time causes warming near the surface (climate response).

Fig. 2: Linear [uncoupled] framework for modeling climate change shown with solid arrows. Dashed arrows indicate climate feedback mechanisms driven by future changes in temperature. Credit: Levi Golston

2. Individual drivers of change

The two major agents (aerosols and greenhouse gases) are further sub-divided by the IPCC as shown below. Each component is assessed independently, and the components are then summed using various statistical techniques to produce the best estimate and range shown in Figure 1.

Fig. 3: Estimates of radiative forcing (dotted lines) and effective radiative forcing (solid lines) for each anthropogenic and natural agent considered in AR5. [Source: Figure 8.15 in IPCC AR5].
Since the report itself is an assessment, each of the estimates in Figure 3 was derived directly from the peer-reviewed literature and is not the result of new model runs or observations. I have identified the specific sources behind this figure elsewhere, for readers who want to know exactly how the individual bars were calculated. More generally, it can be seen that the level of confidence varies by agent, with the most uncertainty attached to aerosol-radiation and aerosol-cloud interactions. Warming is driven most strongly by carbon dioxide, followed by the other greenhouse gases and ozone. Changes in solar intensity are also accounted for by the IPCC, but are believed to be small compared to the human-driven processes.

3. Can net radiative forcing be more directly calculated?

Besides adding together individual processes, is it also possible to independently assess the total forcing itself, at least over recent decades when satellite and widespread ground-based observations are available? In principle, changes in the Earth’s energy balance – seen primarily as reduced thermal radiation escaping to space and as heat uptake by the oceans – should relate back to the net forcing causing them, providing an alternate means of calculating humanity’s influence on the climate. To use this approach, one needs a good estimate of how sensitively the Earth responds to a given level of forcing. However, this sensitivity is as uncertain as the forcing itself, or more so, making it difficult to improve on the process-by-process result. Closing the Earth’s overall energy budget and quantifying radiative imbalances over time also remains a challenge. Longer data records and improved knowledge of climate sensitivity may eventually make it possible to determine total radiative forcing directly.
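In equation form, the energy-budget route works as follows: if N is the planetary heat uptake and alpha the climate feedback parameter, the forcing is F = N + alpha * dT. The sketch below uses rough illustrative values, not AR5’s, to show how the uncertainty in alpha alone keeps the inferred forcing loose.

```python
# Energy-budget estimate of total forcing: F = N + alpha * dT.
dT = 0.85   # K, approximate observed surface warming since pre-industrial
N = 0.6     # W m-2, approximate planetary heat uptake (mostly the oceans)

for alpha in (1.0, 1.5, 2.0):   # W m-2 K-1, spanning its broad uncertainty
    F = N + alpha * dT
    print(f"alpha = {alpha:.1f} -> inferred forcing F = {F:.2f} W m-2")
# The spread in alpha alone moves the inferred forcing by ~0.9 W m-2,
# which is why this route has not yet beaten the process-by-process sum.
```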

Fig. 4: Simulation of CO2 concentrations over North America on Feb 12th, 2006 by an ultra-high-resolution computer model developed by NASA. Photo Credit: NASA

4. Summary

The most widely cited number takes an abundance-based perspective, with step changes for each forcing agent from 1750 to 2011, resulting in an estimated total forcing of 2.3 W m-2. This number does not come from an average of global climate models, as might be imagined, but is instead the sum of eight independent components (seven human-driven, one natural), each derived and assessed from selected recent sources in the peer-reviewed literature.

Radiative forcing is complex and requires models to translate how abundances of greenhouse gases and aerosols actually affect global climate. For gases like carbon dioxide, documented records are available going back to pre-industrial times and earlier, but in other cases additional modelling is needed to determine the natural state of the land surface and atmosphere. The total human-driven radiative forcing (Figure 1) is still surprisingly poorly constrained in AR5 (1.1 to 3.3 W m-2 with 90% confidence), a reminder that while we are certain human activities are causing more energy to be retained by the atmosphere, continued work is needed on the physical science of climate change to determine exactly how much.
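For carbon dioxide itself, the translation from abundance to forcing is often approximated with the simplified expression of Myhre et al. (1998), F = 5.35 ln(C/C0). A quick sketch, using round concentration values I have assumed for 1750 and 2011:

```python
import math

C0 = 278.0   # ppm, assumed pre-industrial CO2 concentration (~1750)
C = 391.0    # ppm, assumed CO2 concentration around 2011

F_co2 = 5.35 * math.log(C / C0)   # simplified expression of Myhre et al. (1998)
print(f"CO2-only forcing: {F_co2:.2f} W m-2")   # ~1.8 W m-2
# Close to the carbon dioxide bar in Figure 3; the other agents (methane,
# ozone, aerosols, land use, ...) push the total up or down from there.
```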


Levi Golston is a PhD candidate in Princeton’s Environmental Engineering and Water Resources program. His research develops laser-based sensors coupled with new atmospheric measurement techniques to quantify localized sources, particularly methane and reactive nitrogen from livestock, along with methane and ethane from natural gas systems. He blogs at lgsci.wordpress.com/