We might be running a Ponzi Scheme for groundwater extraction

This blog post was compiled and edited by Shashank Kumar Anand (Associate Editor, Highwire Earth; Graduate Student, Department of Civil and Environmental Engineering, Princeton University). It is based on the work of Sara Cerasoli and Amilcare Porporato that appeared in the Journal of Hydrology early this year (see the peer-reviewed article here).

In the 1920s, Charles Ponzi gained notoriety for running a fraudulent scheme that now bears his name. Imagine a magic trick where someone promises to turn a small amount of money into a fortune in no time. This sounds amazing, right? Well, that is the trick a Ponzi scheme plays – except it is not magic; it is deception. 

Here is how it works: Imagine you meet someone who says they have a fantastic investment opportunity. They say that if you give them $100, they will double it in just a few weeks. That sounds like an incredible deal, doesn’t it? You might think, “Why not? I will get back $200 so quickly!” 

Now, to make this trick seem real, the person might even show you some “proof” by giving you $200 after a few weeks. But here is the trick – that $200 did not come from any real investment or profit. Instead, they used money from new people who joined the scheme after you. So, when new folks give them $100 each, they use that money to pay you back. It looks like you made a profit, but they just used someone else’s money to do it. 

This is where the scheme snowballs. People who got paid back tell their friends and family about this amazing investment. More people join, each handing over their money, and they too are paid with money from the newcomers. This cycle continues, with the person at the center collecting money from new members to pay off the earlier ones. It creates an illusion of profit and success, and more people are attracted, hoping to get rich quickly.

Thinking through this cycle makes clear that a Ponzi scheme is unsustainable: it relies on an ever-growing number of new members to keep paying off the earlier ones. As more people join, there comes a point where it is impossible to find enough new members to cover the payments. When that happens, the scheme collapses. The person running the scheme might disappear with whatever money is left, leaving the many people who invested with nothing but empty pockets. This can be seen as “an example of market failure with an unsustainable economic trajectory”.

Cerasoli & Porporato draw a parallel between this investment fraud and groundwater extraction. Past generations of over-exploiters play the role of the early investors, profiting from excessive overdraft and higher agricultural productivity, simply because more water input results in more agricultural output. But this over-extraction leaves increasingly depleted aquifers for the next generation (the late investors). If we focus only on the short-term benefits of maximizing agricultural output for a few years, in the long term we may be on an unsustainable path that can only end with the exhaustion of the water resource. Using the case study of Kern County in California, the authors suggest that we might be running a Ponzi scheme of groundwater extraction. This finding resonates with the news we have been hearing regarding the Central Valley groundwater depletion crisis (1,2). Just imagine how many more Kern Counties there might be around the world that we do not know of!

The original Ponzi scheme collapsed after nine months. In that sense, we are fortunate that aquifer dynamics operate on much longer time scales, leaving us room to act. The question hence arises: what can be done to get us off this dangerous path? The authors suggest two different policy approaches based on their optimality framework of groundwater extraction.

The hard policy scenario assumes that state/federal regulations immediately reduce pumping rates to a value lower than the natural recharge. On the environmental side, the plus is a speedy recovery of water tables, and the cumulative profit over several decades from the hard policy path is the highest. Still, the challenges are obvious: such abrupt human adaptation does not seem plausible, and the upfront investment needed to make that change now may not be practical.

The soft policy option is a more realistic approach to tackling the Ponzi scheme of groundwater extraction. Pumping rates are gradually decreased by some percentage each year (say, 5% of current values) until they reach the long-term sustainable pumping rate by some target year (say, 2040), so that we arrive at the long-term sustainable path slowly but surely.
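
To make the two paths concrete, here is a minimal sketch of how aquifer storage might evolve under each policy. This is my own illustration, not the authors’ optimality framework, and every number in it is invented:

```python
# A minimal sketch (illustration only, not the authors' model): compare
# aquifer storage under a "hard" policy (immediate cut below natural
# recharge) and a "soft" policy (5% annual reduction until pumping
# equals recharge). All numbers are invented.

RECHARGE = 1.0    # natural recharge (arbitrary volume units per year)
PUMPING0 = 2.0    # current pumping rate, double the recharge
STORAGE0 = 50.0   # current aquifer storage

def simulate(policy, years=50):
    storage, pumping = STORAGE0, PUMPING0
    path = []
    for _ in range(years):
        if policy == "hard":
            pumping = 0.8 * RECHARGE                 # immediate deep cut
        else:  # "soft"
            pumping = max(RECHARGE, 0.95 * pumping)  # 5% reduction per year
        storage += RECHARGE - pumping                # annual water balance
        path.append(storage)
    return path

hard, soft = simulate("hard"), simulate("soft")
print(f"storage after 50 years: hard={hard[-1]:.1f}, soft={soft[-1]:.1f}")
```

Under these made-up numbers, the hard path starts refilling the aquifer immediately, while the soft path keeps drawing storage down for roughly a decade and a half before stabilizing. That is exactly the trade-off between environmental recovery and human adaptability described above.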

Whether we will manage to escape this scheme is a question that can only be addressed if we first realize that short-term gain can hurt in the long term. The challenge also arises because the best, or rather the long-term optimal, way of extracting groundwater will yield lower gains initially. Safeguarding the interests of future generations will require us to avoid being our own Ponzi.

Authors of the scientific publication:

Sara Cerasoli is a graduate student in the Department of Civil and Environmental Engineering at Princeton University.

Prof. Amilcare Porporato is affiliated with the Department of Civil and Environmental Engineering and High Meadows Environmental Institute at Princeton University.

What can climate adaptation learn from what’s in Grandpa’s garage? A historical tale of two flood protection megastructures

When the U.S. Army Corps of Engineers started building coastal flood protection over 60 years ago, they weren’t thinking about climate change, but a PhD student at Princeton University shows that old Army Corps projects may hold valuable insights for future climate adaptation efforts.

By D.J. Rasmussen (STEP PhD student)

Grandpa’s flooded garage

On the third day of November 2012, Joseph D’Amelio sat on his front porch looking down 14th Street. For over five decades, he and his wife had shared coffee on countless Saturday mornings in that same spot. It normally would have been unusual for D’Amelio to be out on the porch this late in the year, but this was no ordinary morning. A cacophony of construction noises rang throughout the Broad Channel neighborhood of Queens. Diesel engines idled loudly. The back-up alarms on a half dozen dump trucks blared simultaneously. Workers were attending to two homes in the neighborhood that had collapsed into Jamaica Bay just days earlier. Joseph D’Amelio sat and thought about the families who lived in those two homes, the once-in-a-lifetime storm that caused their destruction, and something he had discovered in his garage just a few hours ago.

Those who have never visited Broad Channel might be surprised by what they see. This corner of New York City resembles little of what someone might imagine when they picture the Big Apple. Instead of skyscrapers, taxis, and bright lights, visitors to Broad Channel are greeted by something resembling a small, quiet New England fishing community.

Broad Channel is situated in the middle of Jamaica Bay, an 18,000-acre wetland estuary filled with islands, meadowlands, and a labyrinth of waterways. Jamaica Bay is also home to several Queens neighborhoods and John F. Kennedy International Airport, the busiest airport on the east coast. D’Amelio often took his fishing skiff out to troll for striped bass on the edge of the bay’s marsh banks, sometimes with his three grandchildren.

The aftermath of Hurricane Sandy in the Broad Channel neighborhood of Queens (New York City). Photo by Richard York/The Forum Newsgroup

Broad Channel is also one of New York City’s lowest-lying neighborhoods. To prevent severe flood damage, the first floors of many residences in Broad Channel sit atop either concrete pilings or a single-car garage. Joseph D’Amelio was no stranger to floods. Over the past half century, he had personally witnessed many of the strongest nor’easters and hurricanes, including Hurricane Gloria in 1985, the December 1992 nor’easter, and Hurricane Irene in 2011. Each time, his property had survived with little damage. But all that changed earlier that week, when the D’Amelios’ garage was flooded by Hurricane Sandy.

The D’Amelios’ home sat atop a garage just big enough to squeeze in a 1950s Chevrolet. But instead of accommodating a car, the D’Amelios’ garage now served as a storage unit filled with fishing gear, dusty equipment from Joseph’s firefighting career, and boxes of old, forgotten junk.

Joseph D’Amelio had spent the past few days cleaning up the aftermath of the flood in his garage. In the process, something peculiar popped out at him. It was a small booklet, nearly fifty years old. The paper of the booklet had browned over the years, giving it a distinct vintage look. In it, a proposal was described for a massive barrier across the Rockaway Inlet, the western entrance to Jamaica Bay. In 1960, Hurricane Donna had reminded New Yorkers how exposed the Jamaica Bay region was to flood waters produced by coastal storms.

The report D’Amelio had found was written by the U.S. Army Corps of Engineers, the primary government agency tasked with managing flood risks. The intention of the Army Corps proposal was to keep storm water out of the Bay. The plan had been distributed in the mid 1960s, just after the D’Amelios had first moved into their 14th Street home. At the time, Joseph and his wife hadn’t paid much attention to the plan; they were too busy taking care of their three young children. After years of planning and deliberation, the barrier was never built. The exact reason why remains a mystery.

Clasping his coffee close to his chest, Joseph D’Amelio sat quietly on his front porch, staring at nothing in particular, and wondered how the past few days would have been different had the Jamaica Bay flood barrier been built.

A pamphlet from the U.S. Army Corps of Engineers describing a proposal for a flood barrier across the Rockaway Inlet, the western entrance to Jamaica Bay, the same pamphlet that Joseph D’Amelio found in his garage. Photo by the author.

The status of coastal flood adaptation today: multiple plans, but little action

Just like Joseph D’Amelio, many Americans on the coasts increasingly find themselves cleaning up garages, basements, and living rooms that have been flooded by coastal storms. Studies using observational data have shown that coastal cities around the U.S. are experiencing an increasing frequency of high-water events. These events can lead to costly and deadly floods when water overtops natural and engineered defenses, such as sand dunes or seawalls. The increase in the frequency of high-water events is largely a result of rising mean sea levels due to global warming. The problem is only going to get worse. Through our research, my colleagues and I have found that local sea levels in New York City are expected to rise by almost a foot by the middle of this century.

Over the past decade, several cities around the country have begun devising plans for flood protection. Following Hurricane Sandy, New York City began to investigate several proposals in earnest, including a flood structure across the Rockaway Inlet, much like the one Joseph D’Amelio found in his garage. However, after roughly eight years of deliberation, many of these plans have again failed to progress beyond the drawing board, including the second attempt at the Jamaica Bay Barrier.

An artist’s rendering of the most recent proposal for a storm surge barrier at the Rockaway Inlet/Jamaica Bay (2019). Source: New York District of the U.S. Army Corps of Engineers (New York, New York).

New York City’s experience highlights a disturbing trend for coastal climate adaptation efforts nationwide. Projects have continually struggled to gain political traction. Years of detailed planning go to waste, as does valuable time (project construction can take multiple decades). As the cliché goes, the can gets kicked down the road. Nothing gets done, and the risk of a major flood disaster remains.

Detectives on the case: enter two curious Princetonians

Over the course of a few meetings in my advisor’s office, he and I started to contemplate the reasons why technically feasible Army Corps projects rarely progressed beyond the drawing board. Specifically, we wondered what non-technical factors might explain the variation in whether projects got built or not. Was it simply a question of having the money at the right time and place? Or was it more complicated than that? For example, were politics and social conflict the reason? If so, what specific factors were contributing? We wanted to know the answers.

It slowly became apparent that these questions had not been studied before. A multi-month literature search turned up little information on the subject. I called experts around the country and wrote letters to the Army Corps, but no one seemed to know the reasons. Freedom of Information Act requests turned up nothing, sometimes after waiting an entire year for a response. Frustrated with the little success I had, I contemplated more drastic measures, including visiting archives (cue the ominous music).

One could describe archives as repositories for all types of media (letters, memos, newspaper articles, photographs, microfilm, etc.) that provide information about the function of an individual person, a group of people, or an organization. I figured that archives might have more information about past projects, such as the failed Jamaica Bay Barrier pamphlet that Joseph D’Amelio had found in his garage.

The answers are in the archives

I began contacting archives across the country to see if they had any information on historical coastal flood protection projects. After calling and visiting roughly a half-dozen locations, I finally got some promising news. An archivist called to tell me that there was a handful of archives in Massachusetts and Rhode Island that had several boxes of documents relating to two coastal flood protection megaprojects. These two megaprojects had emerged simultaneously in Rhode Island following Hurricane Carol in 1954, which at the time was the third major hurricane to hit the region in 16 years.

One of the two projects was the Fox Point hurricane barrier, a closeable flood gate intended to protect downtown Providence, Rhode Island from major floods. It was completed in 1966 and has been deployed several times over the years. The other project was a series of storm surge barriers across the mouth of Narragansett Bay. The Bay barriers ultimately failed to progress beyond the planning stage, after more than ten years of study and deliberation. I thought that if I was able to understand the political and social reasons for these project outcomes, it might offer hope for understanding why some contemporary climate adaptation efforts break ground and why others do not.

The completed Fox Point Hurricane Barrier in March 1966 (Providence, Rhode Island). Photo taken by the New England Division of the U.S. Army Corps of Engineers (Waltham, Massachusetts).
A map showing the Army Corps’ proposal for storm surge barriers at the entrance to Narragansett Bay, Rhode Island. Source: New England Division of the U.S. Army Corps of Engineers (Waltham, Massachusetts).

While archives may contain treasure troves of information about a subject, it is often not in a coherent format that can be easily interpreted. There are no published books or summaries. Each document is like a random puzzle piece, and not necessarily a piece of the puzzle you are trying to put together. To make matters more complicated, you may not even know which puzzle you are trying to put together! Information is chaotically scattered. One must sift through thousands of documents in order to produce a coherent story. Ultimately, you never really know what you will find in an archive. While it is time-consuming, in some ways the uncertainty creates an air of suspense that makes digging through boxes all day a bit more bearable.

A photo showing a sampling of a few letters from local residents in opposition to the Narragansett Bay Barriers. Hundreds of letters were sent to elected officials and the U.S. Army Corps of Engineers, nearly all in opposition to the project. Photo by the author.

What can modern climate adaptation efforts learn from the two Rhode Island projects?

After months of collecting and analyzing materials from the archives, I was able to piece together the first ever historical account of the two Rhode Island projects, from the point of their inception all the way through their ultimate fates (completion and cancellation). Informing the historical account were hundreds of newspaper articles, memos between the Army Corps and elected officials, and letters from businesses and residents from around Rhode Island.

I learned that the Fox Point barrier progressed beyond the planning stage as a result of minimal environmental concerns and strong, sustained support from both the public and Rhode Island’s elected officials. At the time, the waterways around Providence were so polluted that there wasn’t much natural life in the water to protect. I also concluded that the Narragansett Bay barrier project failed to break ground due to strong public opposition. The public opposition was related, in part, to projected increases in channel currents that had the potential to complicate maritime navigation and cause uncertain impacts on marine life. Unlike the waterways around Providence, Narragansett Bay was a rich ecological system on which many communities depended. Further hampering the Bay barrier’s chances was an almost decade-long planning period during which the public’s flooding concerns declined.

While these findings had historical significance, by themselves they were not obviously relevant to modern-day projects. Much has changed since the 1960s. Most notably, the emergence of environmental laws has made it much easier to legally challenge government projects that have the potential to impact the natural environment. After assessing a handful of coastal climate adaptation projects that have recently been considered by the Army Corps, some common political and social factors emerged. Most notably, these factors included 1) environmental protection concerns (including modern environmental laws that elevate oppositional viewpoints), 2) a lack of leadership and support from elected officials to help carry projects forward, 3) the allure of alternative options that are more aesthetically pleasing to residents and also cheaper and faster to implement (think beach nourishment and buried levees with promenades on top for biking and walking), and 4) lengthy and complex decision-making procedures that coincide with fading memories of floods.

Looking backwards to go forward

As sea levels continue to rise in coastal cities around the country, the urgency of protecting populations and both the built and natural environment grows. Since the mid-20th century, years of planning have gone into a number of proposed coastal flood protection projects around the country. While some of these projects have been built, many have failed to advance beyond the drawing board. My work has shown that archival research is a viable way to better understand the political complexity faced by contemporary climate adaptation projects. The more we learn about this complexity, the more we can improve the efficiency with which coastal risk reduction strategies are deployed. For example, instead of investing years of planning in massive, environmentally harmful, and financially risky megaprojects, scarce planning resources could be allocated toward projects deemed more palatable by the public, elected officials, and organized interests.

Thanks to the archives, we now have some answers. But the question remains whether the Army Corps and governments will learn from the past and change their behavior. Hopefully Joseph D’Amelio’s grandchildren will one day live in a better prepared version of New York City, one in which Army Corps projects don’t end up as forgotten relics in flooded garages.

D.J. Rasmussen is an engineer, climate scientist, and policy scholar. He studies coastal floods, sea-level rise, and public works strategies for managing their economic and social costs. His research has informed the UN’s Intergovernmental Panel on Climate Change and has been published in Science Magazine as well as other academic journals. He is completing his PhD in the Science, Technology, and Environmental Policy (STEP) Program at the School of Public and International Affairs at Princeton University. A portfolio of his research can be viewed at https://www.djrasmussen.co. He can be reached at dmr2@princeton.edu

Offsetting your greenhouse gas emissions can impact more than just your carbon footprint

By Tim Treuer

This Giving Tuesday, I decided to offset my 2020 carbon footprint. And help protect endangered biodiversity. And help eliminate poverty. And improve air, water, and soil quality. And support gender equality. And empower historically marginalized communities. And maybe even decrease the risk of killer diseases like COVID-19 and malaria.

But I only made one donation. And its price tag was the equivalent of about a dollar a day.

How? I’m donating to an organization that will use the funds to restore tropical rainforests. I may be biased as a restoration ecologist, but in my mind there are few ways to offset your emissions that carry as many co-benefits for nature and society as regrowing rainforests. More on that below, but first I want to address the elephant in the room when it comes to carbon offsetting: most offsets don’t offset anything.

Seedling nursery

Per a 2016 European Commission report, 85% of carbon offsets fail to offset carbon. A big problem is scam organizations that simply do less than they promise. But even well-intentioned, reputable groups can fall short. The two main problems are failures to account for ‘additionality’ and ‘leakage’. Additionality means that the carbon that is pulled out of the atmosphere wouldn’t have been pulled out anyway. Some organizations offer to do things like establish tree plantations in areas that would otherwise be recovering forest–forest that would, in many cases, store more carbon than the tree plantation!

Leakage becomes an issue when a group’s actions to draw down greenhouse gases from the atmosphere lead to increased emissions elsewhere. This is a pernicious problem with many efforts, even ones that have huge positive local benefits. Protecting stands of old-growth forest or using farms to produce biofuels can be really great in theory, but if you don’t address the demand side of the equation, economics dictates that you’ll end up with compensatory logging or farming elsewhere. (Side note: one pet peeve of mine is that biofuel studies sometimes come up with rosy predictions because they simply assume we will produce less food and eat fewer calories in the future.)

If you pick carefully enough, however, tropical forest restoration projects often evade these two pitfalls. Many (if not most) involve jumpstarting the recovery of land that would not heal on its own because of challenges like invasive vegetation that chokes out seedlings, absent seed sources because of widespread forest clearing, or heavily degraded soils from overgrazing or nutrient depletion. So you can go ahead and tick that box for additionality. And so long as the restoration activities take place in protected settings like national parks or community forest, you shouldn’t see compensatory carbon emissions elsewhere. Ergo, no more leakage.

But carbon offsetting is only the tip of the iceberg when it comes to tropical forest restoration. Pound for pound there are more species in tropical forests than any other ecosystem on Earth. In places like Madagascar or lowland Borneo, many of those species are in serious danger of disappearing forever because of past habitat loss. We are talking millions of species hanging on by a thread. Most don’t even have scientific names yet. Their only lifeline is the resurrection of lost habitat. The biodiversity benefits of forest restoration alone can and do justify restoration projects across the tropics.

Caterpillar at reforestation site

Forest restoration through planting seedlings and controlling weeds is a super labor-intensive activity. 22- to 23-year-old me can definitely attest to that fact after spending two seasons co-managing a reforestation project in Gunung Palung National Park on the Indonesian side of Borneo. But the required blood, sweat, and tears are a feature, not a bug. Labor means employment opportunities, and in impoverished tropical communities, that means poverty alleviation. One of the top requests from the villages where I worked was for more opportunities to be paid to reforest. And it’s equitable work, too. It was common at our site to see planting teams led by women from an indigenous ethnic group – women who would be less likely to get jobs with the commercial oil palm plantations, the main local employers.

Reforestation site

There’s another reason those communities love reforestation work. They know healthy forests mean less smoke and haze from invasive grass fires, less flooding, and more consistent and cleaner water in the streams running out of the forest. The water perks span the wet and dry seasons: forests decrease rainy-season flooding by boosting root-mediated infiltration of rainwater into the water table, which then feeds streams during drier periods.

At this point I’m starting to feel like Billy Mays: “But wait! There’s more!” I think it’s safe to say that most people would relish the opportunity to kick an anthropomorphized version of this global pandemic right in the nu… uhh, somewhere really painful. Tropical forest restoration is actually the next best thing. One of the big insights from scientists in the emerging discipline of eco-epidemiology is that unhealthy ecosystems tend to yield greater risk of wildlife diseases crossing into human populations. There’s a whole slew of mechanisms for this – from stressed animals like bats shedding more virus, to many malaria-spreading mosquitoes preferring open habitat over closed-canopy forest – but the punchline is that when scientists looked at how to prevent the next pandemic, halving deforestation made the list of cost-effective preventative measures. My current research looks at how reforestation could protect against malaria in Madagascar, and past work I’ve been a part of showed that deforestation upstream of impoverished rural communities leads to more cases of diarrhea in kids and infants.

There are many organizations that need your support for their tropical forest restoration work, and many online tools for calculating your annual carbon footprint. I’m choosing to donate to the organization I worked with in Borneo. Not only do I know from firsthand experience that they are doing truly additional and leakage-free offsetting, but they also are super transparent about how they calculate and track their offsets.

There are tons of great organizations out there, though. You could even pick one in a country you plan on visiting once the pandemic is over– maybe they’d even show you the forest you helped replant. Just make sure you are asking three questions: will the trees you help plant cause forest clearing elsewhere? Would the replanted forest recover on its own anyway? And finally, how are they calculating their emissions reductions?

If you’re happy with your answers, congratulations! You’ve found a way to give that really does keep on giving.

Tim completed his PhD at Princeton in Ecology and Evolutionary Biology (*18), where he studied large-scale tropical forest restoration. He was a 2018 AAAS Mass Media Fellow and is currently a Gund Postdoctoral Fellow at the University of Vermont, where he studies whether and how reforestation can be used as a tool for combatting malaria in Madagascar. You can find him on Twitter (@treuer) and at www.timothytreuer.com.

Evaluating the geoengineering treatment

Written by Xin Rong Chua

Might there be a remedy for the worldwide temperature and rainfall changes caused by humanity’s emissions? If so, what would the cure cost? We watch as Mr. Human grapples with these questions with the help of Dr. Planet.

Dr. Planet was about to put an end to a long, hard day of work when the distress call came in.

“Dr. Planet! Dr. Planet! Our planet Earth needs your help!”

Dr. Planet quickly boarded his medical spaceship and sped towards the solar system. As the ship passed through Earth’s atmosphere, his instruments began to gather the planet’s climate records. The temperature indicator began to blink red. Then the indicator for circulation changes in its atmosphere and oceans. Then the sea ice indicator.

The moment Mr. Human boarded his spaceship, Dr. Planet knew why the planet was ill.

Mr. Human was holding a long, black cigar labelled ‘Fossil Fuels’. It was still smoking at the tip. In front of him, the reading on his carbon dioxide indicator soared.

“I advise you to cut down on your emissions,” said Dr. Planet. “Otherwise, your planet will experience sea level rise, ocean acidification, and stronger storms.”

“We know that,” said Mr. Human. He sounded as if he had not slept for days. “We’ve known about it for decades. I was so excited after the Paris meeting, when the world first agreed on concrete pledges to cut down emissions. Then we did our sums and realized that even if every country fulfilled its promised reductions, global mean temperatures were still set to increase by more than 2 degrees Celsius come 2100. And then the United States announced that they would pull out of the agreement, which was…”

Mr. Human’s gaze fell as he trailed off. He then straightened and looked Dr. Planet in the eye. “Dr. Planet, you are a renowned planetary climate surgeon. Do you have a geoengineering treatment that might be able to cure our Earth?”

Mr. Human took out a few geoengineering brochures and laid them on Dr. Planet’s desk. They had been produced by the hospital’s marketing department.

Dr. Planet resolved to have a chat with the marketing department about a more moderate portrayal. He was getting tired of patients either believing that geoengineering was a panacea or cursing him for attempting to play God. In fact, the carbon dioxide removal and solar geoengineering tools he possessed only allowed for a limited range of outcomes. More importantly, all of the choices involved tradeoffs and risks. However, experience had taught him that it was best to begin by explaining the science.

Schematic depiction of climate engineering methods (Source: Climate Central)

Carbon dioxide removal

Dr. Planet picked up the first brochure. It was about Canadian entrepreneur Russ George, who in 2012 dumped a hundred tons of iron into the ocean to trigger a massive plankton bloom. There were record hauls of salmon right after the fertilization. George also pointed out that the plankton removed carbon dioxide from the air as they grew.

“It’s easy to remove carbon dioxide from the atmosphere,” began Dr. Planet. “The problem is keeping the carbon dioxide out. If the fish are harvested and used as food, the carbon makes its way back into the air. Also, when the plankton respire, or are eaten by organisms higher up the food chain, most of that carbon is released once again. In addition, the immediate phytoplankton growth triggered by fertilization robs the iron or phosphorus that might have been used by other organisms. If you are looking for a long-term solution, don’t get tricked into looking only at the initial gains.”

“Besides, iron fertilization can’t be the only solution. In the most optimistic scenarios, the bulk of the carbon uptake would be used to form the shells of marine organisms such as diatoms. Since the shells would eventually fall to the bottom of the ocean, there would be a net removal of carbon from the surface. But based on the availability of iron-deficient waters around your planet, I estimate that iron fertilization can sequester at most 10% of human annual emissions.”

“Our clinic also has some options to store carbon underground by pumping it into porous rock,” said Dr. Planet, taking a brochure from a nearby shelf and handing it over. “However, the technology is still experimental and expensive.”

Mr. Human brightened as he saw that this technology could store about 1,600 billion tonnes of carbon dioxide. If humanity continued emitting at 2014 levels, this would lock up about 45 years of carbon dioxide emissions. When he came to the section on costs, his jaw dropped. “Double the cost of our existing power plants?” He took out his bulging wallet and removed a stack of bills. Dr. Planet wondered if Mr. Human considered this so cheap that he was willing to pay upfront.

Mr. Human waved the bills. “Look at all the IOUs! There is no way we can afford that cost. I’ll bet the aerosol plan is cheaper than that.”

Solar radiation management

Mr. Human pointed to a printout explaining how particles called aerosols could be placed high in the atmosphere. Choosing aerosols that reflected solar radiation would help cool the Earth’s surface.

Dr. Planet understood why Mr. Human liked the aerosol plan. It made sense to place the aerosols far above the surface. That way, it would take many months before the aerosols settled below the clouds, where rain could flush the particles from the air. Furthermore, after the eruption of Mount Pinatubo in 1991, global-mean temperatures in the Northern Hemisphere fell by half a degree Celsius. With such a natural analog in mind, it was no wonder that Mr. Human thought he knew what to expect. He was even correct about the costs. Starting in 2040, dedicating 6,700 flights a day to sulfate injection would keep global-mean warming to 2 degrees Celsius. This would involve a mass of sulfates roughly similar to that of the Pinatubo eruption and would cost about US$20 billion per year.
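
As a quick back-of-envelope check on those figures (my own arithmetic, using only the numbers quoted above):

```python
# Back-of-envelope check using only the figures stated in the text.
annual_cost_usd = 20e9    # about US$20 billion per year
flights_per_day = 6700
flights_per_year = flights_per_day * 365
print(f"flights per year: {flights_per_year:,}")   # ~2.4 million flights
print(f"implied cost per flight: ${annual_cost_usd / flights_per_year:,.0f}")
# -> roughly $8,000 per flight, cheap compared with underground storage
```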

Volcanic ash after the eruption of Mount Pinatubo in 1991 (Source: USGS )

“It would be cheaper,” agreed Dr. Planet. “But tell me, is global mean surface temperature all you care about?”

“Of course not,” said Mr. Human. “Rainfall is important too. Also, I want to make sure we keep the West Antarctic Ice Sheet, and reduce…”

“Then I should let you know that using aerosols means making a choice between overcorrecting for temperature or precipitation,” said Dr. Planet. He used the same serious tone a human doctor might use to explain that chemotherapy might remove the tumor, but would also cause you to vomit and lose all your hair.

Mr. Human folded his arms. He looked most unconvinced.

As Dr. Planet cast about for a good explanation, his eyes fell on Mr. Human’s wallet. It was still on the table and still full of the IOUs. He picked up a stack of name cards from his table.

“What if I asked you to place all of the cards into your wallet?”

Mr. Human frowned at the thick wad of paper. “I would have to remove some of my old receipts, or the wallet wouldn’t close.”

“Think of the Earth’s surface as the full wallet,” Dr. Planet said. “If we put in energy from increasing sunlight, your Earth has to throw out some energy. Because we’re trying to keep the temperature unchanged, the surface can’t radiate more longwave radiation by warming. It therefore has to transport heat, which mostly happens through evaporation. In the atmosphere, what comes up must come back down eventually, so increasing evaporation increases rainfall.”

“So, increasing radiation towards the surface increases rainfall,” said Mr. Human. “Don’t sunlight and carbon dioxide both do that?”

“They do,” said Dr. Planet. “But the atmosphere is mostly transparent to solar radiation and mostly opaque to longwave radiation from carbon dioxide. Energy entering via solar radiation thus has a stronger impact on the surface and rainfall. Hence, trying to correct for the temperature change from carbon dioxide with stratospheric aerosols is expected to lead to an overcorrection in precipitation.”
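
For readers who want to see Dr. Planet’s argument in numbers, here is a toy version of it. The sensitivity values are illustrative assumptions of mine, not figures from the IPCC or any particular study:

```python
# Toy emulator of the dialogue's argument (illustrative numbers only).
# Per W/m2 of forcing, sunlight drives global-mean rainfall more strongly
# than CO2 does, because solar energy is deposited mostly at the surface
# while CO2 acts largely within the atmosphere.
k_solar = 0.015   # rainfall change per W/m2 of solar forcing (assumed)
k_co2 = 0.005     # rainfall change per W/m2 of CO2 forcing (assumed)

f_co2 = 4.0       # W/m2, roughly a doubling of CO2
f_solar = -f_co2  # aerosols tuned so global-mean temperature is unchanged

dP = k_co2 * f_co2 + k_solar * f_solar
print(f"global-mean precipitation change: {100 * dP:+.1f}%")
# -> -4.0%: temperature is corrected, but rainfall is overcorrected
```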

Mr. Human was silent for a while, before he perked up. “Well, a slight change in the weather we’re used to isn’t that bad, especially if it avoids a worse outcome. Besides, you’ve only talked about the global-mean. With some fine-tuning, I’m sure we could come up with an aerosol distribution that delivers a good balance.”

“We have produced hypothetical simulations that investigate a range of outcomes. As a case in point, tests on a virtual Earth show that we can control the global-mean surface temperature, as well as the temperature differences between the North and South hemispheres and from the equator to pole. This was achieved by injecting sulfate aerosols at four different locations in a computer simulation.”

“However, given the lack of rigorous clinical trials on planets like your Earth, I must warn you that it will remain a highly uncertain procedure,” said Dr. Planet. “For one, we will encounter diminishing marginal returns as we increase the sulfate load to achieve more cooling. Added sulfate could condense onto existing particles, forming bigger particles that reflect sunlight less efficiently, rather than creating new ones.”

“The treatment procedure of sustaining the thousands of aerosol-injection flights will require the commitment and coordination of all the peoples of your planet. A disruption due to conflicts could be catastrophic. If the aerosol concentrations are not maintained, the decades’ worth of change from greenhouse gases that they are holding back would manifest in a couple of years. The change would be so sudden that there would be little time for you to adapt.”

Mr. Human paled. Countries might well balk at paying the geoengineering bill. After all, that was money that could go to feeding the poor or to reducing a budget deficit. A rogue country might threaten to disrupt the injections unless sanctions were lifted. Or a country that might benefit from warming could sabotage the flights…

“I think you already know what I’m about to say,” said Dr. Planet as Mr. Human buried his face in his hands. “There’s no magic pill here. There never has been. I can help perform some stopgap surgery by removing carbon dioxide or provide some symptomatic relief through solar radiation management. Ultimately, though, your species has to stop lighting up in the way it has.”

Mr. Human sighed; he had to deliver the sobering news that geoengineering was riskier and more complicated than he and his colleagues had expected. As he rose from his chair, he realized that he was still holding his smoking cigar. The numbers on Dr. Planet’s carbon dioxide detector were still rising. He watched the readout as it went past 400 ppm, then 410 ppm. With a regretful shake of his head, he ground the lit end of the cigar into an ashtray and stepped out to continue the long journey ahead.

Acknowledgments: This article was inspired by a group discussion with Dr. Simone Tilmes at the 2017 Princeton Atmospheric and Oceanic Sciences Workshop on Climate Engineering. Katja Luxem and Ben Zhang read an early draft and helped improve the clarity of the article.

Xin is a PhD candidate in Princeton’s Program in Atmospheric and Oceanic Sciences, a collaboration between the Department of Geosciences and the NOAA Geophysical Fluid Dynamics Laboratory. She combines high-resolution models and theory to better understand the changes in tropical rainfall extremes as the atmosphere warms. She is also interested in innovative approaches to science communication.


Pulp Non-fiction

Written by Timothy Treuer

A story (but careful, there’s a twist):

In 1998, the Costa Rican Sala Cuarta (their highest judicial body) issued a ruling against a company that had dumped 12,000 tonnes of waste orange peels in one of the country’s flagship protected areas, Área de Conservación Guanacaste (ACG). The ruling came at the urging of some members of the Costa Rican environmental community, and studies had found elevated levels of d-limonene–a suspected carcinogen–in local waterways as a result of the company’s actions, raising tensions with neighboring Nicaragua over the possible pollution of their downstream eponymous lake. The court ruling demanded the immediate removal of the orange peels from where they lay–a site that some had labeled ‘an open air dump.’

A keen observer at the time would have noted one immediate hiccup with the court’s order: those 12,000 tonnes of orange waste? They didn’t exist anymore.

Six months of unfathomable ecstasy on the part of four species of flies had converted the mega pile o’ peels into several inches of black, loamy soil, smothering the invasive African grass that had previously dominated the heavily degraded corner of the national park. Oh, and d-limonene? Turns out it’s more of a cancer-fighter than a cancer-causer (see Asamoto et al., 2002, “Mammary carcinomas induced in human c-Ha-ras proto-oncogene transgenic rats are estrogen-independent, but responsive to d-limonene treatment,” Japanese Journal of Cancer Research), and it can now be purchased on Amazon for $0.16/gram (note that I do NOT endorse herbal supplements as a general rule – talk to your doctor if you or your transgenic rat suffer from mammary carcinomas).

See, the orange peel dumping was actually part of a grand plan hatched by rockstar ecologist turned conservationist, Dan Janzen (best known for his hit singles like ‘Herbivores and the Number of Tree Species in Tropical Forests’ and ‘Why Mountain Passes Are Higher in the Tropics’, but I prefer his deep tracks ‘How to be a fig’ and ‘Mice, big mammals, and seeds: it matters who defecates what where’). He and his partner Winnie Hallwachs had noted the following upon observing the development of a huge new orange juice processing facility on ACG’s northern border by a company called Del Oro: (1) most people don’t like peels in their orange juice, (2) megatonnes of orange peels probably weren’t the easiest thing to deal with on the cheap, and (3) of the 170,000+ species of creature in ACG’s forests, at least one probably would nosh some citrus rind. Upon discovering that Del Oro planned to construct a multi-million dollar plant to turn their waste into low-grade cattle feed, Dan and Winnie engineered the following plan:

  1. Dump orange peels on former cattle ranches recently incorporated into ACG.
  2. Fly orgy.
  3. Profit.

Amazingly, this plan very nearly worked perfectly! Del Oro was all over the idea of getting a little weird with ACG. After a promising test deposition of 100 truckloads of orange peels in 1996, Del Oro and ACG signed a contract wherein the park would provide waste disposal (and, interestingly, formalized water-provisioning and pest-management ecosystem services that Del Oro enjoyed by virtue of being neighbors with a fat block of mountainous rain-, cloud-, and dry forest) in exchange for donating a huge amount of still-forested land that the company owned on the ACG border. Janzen threw in some ecological consultation and help in getting eco-friendly certifications as a sweetener. A seemingly beautiful win-win deal.

But of course, we can’t have nice things.

You may have already pieced together what happens next: after the first year of the contract, in which Del Oro trucked ~12,000 metric tonnes of peels and pulp into a heavily degraded corner of ACG that was seemingly caught in a state of arrested succession, a rival orange juice company caught wind of the party and did as one does when spurned by a guest-list omission: they sued.

And won.

What seemed to get lost in the debates that raged at the time though, was what effect all these orange peels would have on the forest itself. Dan and Winnie had the intuition that killing off the fire-prone grass and adding nutrients to a plot of land that had been continuously trampled by bovid beasties for a couple hundred years would be a positive change for an aspiring forest, but that wasn’t a certainty.

In 1998, 1000 truckloads of orange peels were deposited in a degraded section of Costa Rica’s Área de Conservación Guanacaste (ACG). (Photo courtesy of Daniel Janzen and Winnie Hallwachs)

After the fallout from the lawsuit and the court ruling, it’s understandable that Dan, Winnie, and ACG’s staff didn’t want to draw too much attention to the site (a couple of ACG officials were nearly thrown in jail for failing to adhere to the court order). They visited a few times early on to photograph the progress, and sent a botanist in the very early years to record which plant species were occurring in the fertilized area and the surrounding pasture, but other than that the project was more or less consigned to the quirky annals of ACG history (alongside such fascinating historical tidbits as a starring role in the Iran-Contra Affair – read the book Green Phoenix by Bill Allen for the full fascinating history of the park).

The reason I’m relating this story is that some collaborators and I started revisiting this site a few years ago, and we were so blown away by what we saw that we had to tell the world. The area where the orange peels had been? It had become just about the lushest forest I’d ever seen. Literally, vines on vines on vines. And the surrounding pasture? Still pretty much looked the same as in old photos.

In the summer of 2014, I set up Princeton senior thesis student Jon Choi ‘15 at the site, and let me just say, he scienced the crap out of it. We set up some vegetation transects and developed a soil sampling regime, and then he went full Tasmanian Devil in a labcoat. We’re talking camera traps, audio recorders, pitfall traps, and theoretical modelling of ecological state transitions–the whole nine meters. It truly impresses me that he managed to say so much about what ultimately boils down to a very simple observation: orange peels jump-started forest recovery–where there would otherwise be a stunted savanna, there’s now forest so thick you literally have to hack your way through with a machete.

Images from early 2014 of the unfertilized, control site (left) and the site that had been fertilized with orange peels in the 1990s (right). (Photos courtesy of Timothy Treuer)

After a few years of trying to distill this work into something palatable to reviewers, journal editors, and our team of co-authors, we are proud to finally drop our LP: ‘Low-cost agricultural waste accelerates tropical forest regeneration,’ available exclusively from Restoration Ecology.

In all seriousness, I really do believe there’s an incredibly exciting idea at the core of this project: it wasn’t just a win-win initiative. It was win-win-WIN. Carbon was sucked out of the atmosphere, biodiversity was increased, and soil quality improved. All FOR A PROFIT! Despite this, we couldn’t find a single other example of ag waste being used to speed forest recovery. We hope that changes. The world really shouldn’t contain both nutrient-starved degraded lands and nutrient-rich waste streams.

Tim is a PhD candidate in Ecology and Evolutionary Biology studying large-scale tropical forest restoration. More broadly, he is interested in the effective communication of and policy solutions to complex environmental challenges in an era of global change. He’s on Twitter (@treuer) and tumblr (treuer.tumblr.com).

Carbon Capture and Sequestration: A key player in the climate fight

Written by Kasparas Spokas and Ryan Edwards

The world faces an urgent need to drastically reduce climate-warming CO2 emissions. At the same time, however, reliance on the fossil fuels that produce CO2 emissions appears inevitable for the foreseeable future. One existing technology enables fossil fuel use without emissions: Carbon Capture and Sequestration (CCS). Instead of allowing CO2 emissions to freely enter the atmosphere, CCS captures emissions at the source and disposes of them at a long-term storage site. CCS is what makes “clean coal” – the only low-carbon technology promoted in President Donald Trump’s new Energy Plan – possible. The debate around the role of CCS in our energy future often includes questions such as: why do we need CCS? Can’t we simply replace fossil fuels with renewables? Where can we store CO2? Is storage safe? Is the technology affordable and available?

Source: https://saferenvironment.wordpress.com/2008/09/05/coal-fired-power-plants-and-pollution/

The global climate-energy problem

The Paris Agreement called the globe to action: limit global warming to 2°C above pre-industrial temperatures. To reach this goal, CO2 and other greenhouse gas emissions need to be reduced by at least 50% in the next 40 years and reach zero later this century (see Figure 1). This is a challenging task, especially since global emissions continue to increase, and existing operating fossil fuel wells and mines contain more than enough carbon to exceed the emissions budget set by the 2°C target.

Fossil fuels are abundant, cheap, and flexible. They currently fuel around 80% of the global energy supply and create 65% of greenhouse gas emissions. While renewable energy production from wind and solar has grown rapidly in recent years, these sources still account for less than 2.1% of global energy supply. Wind and solar also face challenges in replacing fossil fuels, such as cost and intermittency, and cannot replace all fossil fuel-dependent processes. The other major low-carbon energy sources, nuclear and hydropower, face physical, economic, and political constraints that make major expansion unlikely. Thus, we find ourselves in a dilemma: fossil fuels will likely remain integral to our energy supply for the foreseeable future.

Figure 1: Global CO2 emissions (billion tonnes of CO2 per year): historical emissions, the emission pathway implied by the current Paris Agreement pledges, and a 2°C emissions pathway (RCP2.6) (Sources: IIASA & CDIAC; MIT & UNFCCC; IIASA)

CO2 storage and its role in the energy transition

CCS captures CO2 emissions from industrial sources (e.g. electric power plants) and transports them, usually by pipeline, to long-term storage sites. The ideal places for CO2 sequestration are porous rock formations more than half a mile below the surface. (Target rocks are filled with water, but don’t worry, it’s saltwater, not freshwater!) Chosen formations are overlain, or “capped,” by impermeable caprocks that do not allow fluid to flow through them. The caprocks effectively trap buoyant CO2 in the target rocks (see Figure 2).

Figure 2: Diagram of a typical geological CO2 storage site (Source: Global CCS Institute)

Scientists estimate that suitable rock formations have the potential to store more than 1,600 billion tonnes of CO2. This amounts to 70 years of storage for current global emissions from capturable sources (which are 50% of all emissions). Large-scale CCS could serve as a “bridge,” buying time for carbon-free energy technologies to develop to the stage where they are economically and technically ready to replace fossil fuels. CCS could even help us increase the amount of intermittent renewable energy by providing a flexible and secure “back-up” with low emissions. Bioenergy combined with CCS (BECCS) can also deliver “negative emissions” that may be needed to stabilize the climate. Furthermore, industrial processes such as steel, cement, and fertilizer production have significant CO2 emissions and few options besides CCS to reduce them.
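
The arithmetic behind those figures, using the numbers as stated, is straightforward:

```python
# Quick check of the storage arithmetic above (figures as stated in the text).
capacity_gt = 1600.0    # potential storage capacity, Gt CO2
storage_years = 70      # stated years of storage for capturable sources

capturable = capacity_gt / storage_years  # Gt CO2 per year
total = capturable / 0.5                  # capturable sources are ~50% of all
print(f"implied capturable emissions: {capturable:.0f} Gt CO2/yr")
print(f"implied total emissions: {total:.0f} Gt CO2/yr")
```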

In short, CCS is a crucial tool for mitigating the worst effects of global warming while minimizing disruption to our existing energy infrastructure and buying time for renewables to improve. Most proposed global pathways to achieve our targets include large-scale CCS, and the United States’ recently released 2050 decarbonization strategy includes CCS as a key component.

While our summary makes CCS seem like an obvious technology to implement, important questions about safety, affordability, and availability remain.


Is CCS Safe?

For CCS to contribute substantially to global emissions reduction, huge amounts of emissions must be stored underground for hundreds to thousands of years. That’s a long time, which means the storage must be very secure. Some worry that CO2 might leak upward through caprock formations and infiltrate aquifers or escape to the atmosphere.

But evidence shows that CO2 can be safely and securely stored underground. For example, the Sleipner project has injected almost 1 million tonnes of CO2 per year under the North Sea for the past 20 years. (For scale, that’s roughly a quarter of the emissions from a large coal power plant.) The oil industry injects even larger amounts of CO2 – approximately 20 million tonnes per year – into various geological formations in the United States without issue in enhanced oil recovery operations to increase oil production. Indeed, the oil and gas deposits we currently exploit demonstrate how buoyant fluids (like CO2) can be securely stored in the subsurface for a very long time.

Still, there are risks and uncertainties. Trial CO2 injections operate at much lower rates than will be needed to meet our climate targets. Higher injection rates require pressure management to prevent the caprock from fracturing and, consequently, the CO2 from leaking. The CO2 injection wells and any nearby oil and gas wells also present possible leakage pathways from the subsurface to the atmosphere (although studies suggest this is likely to be negligible). Leading practices in design and maintenance can minimize well leakage risks.

Subsurface CO2 storage has risks, but experience suggests the risks can be mitigated. So, if CCS has such promise for addressing our climate-energy problem, why has it not been widely implemented?


The current state of CCS

CCS development has lagged, and deployment remains far from the scale required to meet our climate targets. Only a handful of projects have been built over the past decade. Why? High costs and a lack of economic incentives.

Adding CCS to coal and gas-fired electricity generation plants has large costs (approximately doubling the upfront cost of a new plant using current technology). Greenhouse gases are free (or cheap) to emit in most of the world, which means emitters have no reason to make large investments to capture and store their emissions. In order to incentivize industry to invest in CCS, we would need to implement a strong carbon price, which is politically unpopular in many countries. (There are exceptions – Norway’s carbon tax incentivized the Sleipner project.) In the United States, the main existing economic incentive for capturing CO2 is for enhanced oil recovery operations. However, the demand for CO2 from these operations is relatively small, geographically localized, and fluctuates with the oil price.

Inconsistent and insufficient government policies have thwarted significant development of CCS (the prime example being the UK government’s last-minute cancellation of CCS funding). Another challenge will be ownership and liability of injected CO2. Storage must be guaranteed for long timeframes. Government regulations clarifying liability, long-term responsibility for stored CO2, and monitoring and verification measures will be required to satisfy investors.


The future of CCS

The ambitious targets of the Paris Agreement will require huge cuts in CO2 emissions in the coming decades. The targets are achievable, but probably not without CCS. Thus, incentives must increase, and costs must decrease, for CCS to be employed on a large scale.

As with most new technologies, CCS costs will decrease as more projects are built. For example, the Petra Nova coal plant retrofit near Houston, a commercial CCS project for enhanced oil recovery that was recently completed on time and on budget, is promising for future success. New technologies also have great potential: a pilot natural gas electricity generation technology promises to capture CO2 emissions at no additional cost. A technology that could capture CO2 from power plant emissions while also generating additional electricity is also in the works.

Despite its current troubles, CCS is an important part of solving our energy and climate problem. The recent United States election has created much uncertainty about future climate policy, but CCS is one technology that could gain support from the new administration. In July 2016, a bipartisan group of senators introduced a bill to support CCS development. If passed, this bill would satisfy Republican goals to support the future of fossil fuel industries while helping the United States achieve its climate goals. Strong and stable supporting policies must be enacted by Congress – and governments around the world – to help CCS play its key role in the climate fight.

 

Kasparas Spokas is a Ph.D. candidate in the Civil & Environmental Engineering Department at Princeton University studying carbon storage and enhanced oil recovery environments. More broadly, he is interested in studying the challenges of developing low-carbon energy systems from a techno-economic perspective. Follow him on Twitter @KSpokas

 

Ryan Edwards is a 5th year PhD candidate in Princeton’s Department of Civil & Environmental Engineering. His research focuses on questions related to geological carbon storage, hydraulic fracturing, and shale gas. He is interested in finding technical and policy solutions to the energy-climate problem. Follow him on Twitter @ryanwjedwards.

How Do Scientists Know Human Activities Impact Climate? A brief look into the assessment process

Written by Levi Golston

On the subject of climate change, one of the most widely cited numbers is that humans have increased the net radiation balance of the Earth’s lower atmosphere by approximately 2.3 W m⁻² (watts per square meter) since pre-industrial times, as determined by the Intergovernmental Panel on Climate Change (IPCC) in their most recent Fifth Assessment Report (AR5). This change is termed radiative forcing and represents a basic physical driver of higher average surface temperatures resulting from human activities. In short, it elegantly captures the intensity of climate change in a single number – the higher the radiative forcing, the larger the human influence on climate and the higher the rate of increase of surface temperatures. Radiative forcing is also significant because it forms the basis of equivalence metrics used in international environmental treaties, defines the endpoint of the future scenarios commonly used for climate change simulations, and is physically simple enough that it should be possible to calculate it without relying on global climate models.

Given its widespread use, it is important to understand where estimates of radiative forcing come from. Answering this question is not straightforward because AR5 is a lengthy report published in three separate volumes. Chapter 8 of Volume 1, more than any other, quantitatively describes why climate change is occurring due to natural and anthropogenic causes and is, therefore, the primary source for how radiative forcing is assessed by the IPCC. One of the key figures is reproduced below, illustrating that the basic drivers of climate change are human-driven changes to aerosols (particles suspended in the air) and greenhouse gases, along with their relative strengths and uncertainties:

Fig. 1: Assessments of aerosol, greenhouse gas, and total anthropogenic forcing evaluated between 1750 and 2011. Lines at the top show the 5-95% confidence range, with a slight change in definition from AR4 to AR5 [Source: Figure 8.16 in IPCC AR5].
This post seeks to answer two questions: how is the 2.3 W m⁻² best estimate determined in AR5? And further, why is total anthropogenic forcing not known more precisely than shown in Figure 1 given the numerous observations currently available?

1. Variations on the meaning of radiative forcing

Fundamental laws of physics say that if the Earth is in equilibrium, its average temperature is set by a balance between the energy the Earth receives and the energy it radiates. When this balance is disturbed, the climate responds to the additional energy in the system and continues to change until the forcing has fully propagated through the climate system, at which point a new equilibrium (average temperature) is reached. This response is controlled by processes with a range of timescales (e.g. the surface ocean over several years and glaciers over many hundreds of years), so radiative forcing depends on when exactly it is calculated. This leads to several subtly differing definitions. While the IPCC distinguishes between radiative forcing and effective radiative forcing, I do not attempt to distinguish between the two definitions here and refer to both as radiative forcing.
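To make the equilibrium argument concrete, here is a minimal sketch in Python. It combines the textbook Stefan–Boltzmann balance with the simplified CO2 forcing expression of Myhre et al. (1998) that is widely used in assessments; the numbers are illustrative, and this is of course not the IPCC’s full calculation.

```python
import math

# Planetary energy balance at equilibrium: absorbed sunlight equals emitted
# thermal radiation, (1 - albedo) * S / 4 = sigma * T_eff**4.
S = 1361.0       # solar constant, W m^-2
albedo = 0.3     # fraction of incoming sunlight reflected back to space
sigma = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

T_eff = ((1 - albedo) * S / 4 / sigma) ** 0.25
print(f"Effective radiating temperature: {T_eff:.0f} K")  # ~255 K

# A radiative forcing perturbs this balance. For CO2, the simplified
# expression of Myhre et al. (1998) relates the forcing to the concentration
# change from the pre-industrial level:
def co2_forcing(c_ppm, c0_ppm=278.0):
    """Approximate CO2 radiative forcing in W m^-2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# ~391 ppm in 2011 gives ~1.8 W m^-2, consistent with the CO2 contribution
# assessed in AR5.
print(f"CO2 forcing, 1750-2011: {co2_forcing(391):.2f} W m^-2")
```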

Figure 2 shows the general framework used by the IPCC for assessing human-driven change, which is divided into four major components. First, the direct impact of human activities through the release (emission) of greenhouse gases and particulates into the atmosphere is estimated, along with changes to the land surface through construction and agriculture. These changes cause the accumulation of long-lived gases in the atmosphere, including carbon dioxide, the indirect formation of gases through chemical reactions, and an increase in the number of aerosols in the atmosphere (abundance). Each of these agents influences the radiation balance of the Earth (forcing) and over time causes warming near the surface (climate response).

Fig. 2: Linear [uncoupled] framework for modeling climate change shown with solid arrows. Dashed arrows indicate climate feedback mechanisms driven by future changes in temperature. Credit: Levi Golston

2. Individual drivers of change

The two major agents (aerosols and greenhouse gases) are further sub-divided by the IPCC as shown below. Each component is assessed independently, and the components are then summed using various statistical techniques to produce the best estimate and range shown in Figure 1.

Fig. 3: Estimates of radiative forcing (dotted lines) and effective radiative forcing (solid lines) for each anthropogenic and natural agent considered in AR5. [Source: Figure 8.15 in IPCC AR5].
Since the report itself is an assessment, each of the estimates in Figure 3 was derived directly from the peer-reviewed literature rather than from new model runs or observations. I have recently traced the specific sources behind this figure elsewhere, for readers who want to know exactly how any of the individual bars was calculated. More generally, the level of confidence varies for each agent, with the most uncertainty attached to aerosol-radiation and aerosol-cloud interactions. Warming is driven most strongly by carbon dioxide, followed by the other greenhouse gases and ozone. Changes in solar intensity are also accounted for by the IPCC, but are believed to be small compared to the human-driven processes.
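Because the total in Figure 1 is built by statistically combining independent components like these, a toy Monte Carlo sum is a useful way to see where the final range comes from. The means and uncertainties below are illustrative round numbers loosely patterned on Figure 3, not the actual AR5 values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo samples

# Illustrative forcing components as independent normals: (mean, std) in W m^-2.
components = {
    "CO2":        (1.8, 0.2),
    "other GHGs": (1.0, 0.15),
    "ozone":      (0.35, 0.15),
    "aerosols":   (-0.9, 0.5),   # net of aerosol-radiation and aerosol-cloud
    "land use":   (-0.15, 0.1),
    "solar":      (0.05, 0.05),
}

total = sum(rng.normal(mu, sd, n) for mu, sd in components.values())
lo, med, hi = np.percentile(total, [5, 50, 95])
print(f"Total forcing: {med:.1f} W m^-2 (5-95% range: {lo:.1f} to {hi:.1f})")
```

Even with made-up numbers, the structure of the answer mirrors AR5: the aerosol term, with by far the widest distribution, contributes most of the spread in the total.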

3. Can net radiative forcing be more directly calculated?

Besides adding together individual processes, is it also possible to independently assess the total forcing itself, at least over recent decades where satellite and widespread ground-based observations are available? In principle, changes in the Earth’s energy balance – seen primarily as reduced thermal radiation escaping to space and as heat uptake by the oceans – should relate back to the net forcing causing them, providing an alternate means of calculating the human influence on climate. To use this approach, however, one needs a good idea of how sensitively the Earth responds to a given level of forcing. This sensitivity is equally or more uncertain than the forcing itself, making it difficult to improve on the process-by-process result. Closing the Earth’s overall energy budget precisely enough to quantify radiative imbalances over time also remains a challenge. Longer data records and improved knowledge of climate sensitivity may eventually make it possible to determine total radiative forcing directly.
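The logic of that inverse approach can be written as F = N + λΔT, where N is the planetary heat uptake (mostly by the oceans), ΔT the observed surface warming, and λ the climate feedback parameter. A minimal sketch, with every number an illustrative assumption rather than an assessed value, shows how strongly the inferred forcing depends on the assumed sensitivity:

```python
# Inverse estimate of total forcing from the energy balance: F = N + lambda * dT.
# All values below are illustrative assumptions for demonstration only.
N = 0.6    # planetary heat uptake, W m^-2
dT = 0.85  # observed warming since pre-industrial, K

# The feedback parameter (inversely related to climate sensitivity) is itself
# poorly constrained; sweeping it shows how much the inferred forcing moves.
for lam in (0.8, 1.2, 1.8):  # W m^-2 K^-1
    print(f"lambda = {lam:.1f} -> inferred total forcing = {N + lam * dT:.1f} W m^-2")
```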

Fig. 4: Simulation of CO2 concentrations over North America on Feb 12th, 2006 by an ultra-high-resolution computer model developed by NASA. Photo Credit: NASA

4. Summary

The most widely cited number is based on an abundance-based perspective with step changes for each forcing agent from 1750 to 2011, resulting in an estimated total forcing of 2.3 W m⁻². This number does not come from an average of global climate models, as might be imagined, but instead is the sum of eight independent components (seven human-driven, one natural), each derived and assessed from selected recent sources in the peer-reviewed literature.

Radiative forcing is complex and requires models to translate how abundances of greenhouse gases and aerosols actually affect global climate. For gases like carbon dioxide, documented records are available going back to pre-industrial times and earlier, but in other cases additional modelling is needed to help determine the natural state of the land surface and atmosphere. The total human-driven radiative forcing (Figure 1) is still surprisingly poorly constrained in AR5 (1.1 to 3.3 W m⁻² with 90% confidence), which is a reminder that while we are certain human activities are causing more energy to be retained by the atmosphere, continued work is needed on the physical science of climate change to determine by exactly how much.


Levi Golston is a PhD candidate in Princeton’s Environmental Engineering and Water Resources program. His research develops laser-based sensors coupled with new atmospheric measurement techniques for measuring localized sources, particularly for methane and reactive nitrogen from livestock, along with methane and ethane from natural gas systems. He blogs at lgsci.wordpress.com/

Conservation Crossroads in Ecuador: Tiputini Biodiversity Station and the Yasuní oil fields

Written by Justine Atkins

On an early morning boat, mist still rises off the water and the Amazonian air is thick with the characteristic dampness of tropical rainforests. We’re heading out in search of a nearby clay-lick where many parrot species congregate. In the partial slumber of any graduate student awake before 6 am, we sleepily scan the riverbank and tree line for any signs of life. It’s from this reluctantly awake state that our guide Froylan suddenly jolts us to the present and directs our gaze to a small clearing alongside the river. There, out in the open, a female jaguar sits in the grass near the river’s edge. By what seems like sheer luck, we have seen one of the most elusive Amazonian species – a sighting our second guide José says he himself has managed only five times in seven years.

This majestic female jaguar watches us closely from the safety of the grassy river bank, perhaps waiting for our boats to move on so she can continue her route across the Tiputini River. Photo credit: Alex Becker

Of course, luck is only part of the story. The river we’re traveling down is the Tiputini River, which forms one edge of Ecuador’s Yasuní National Park — an area of some 3,800 square miles of pristine rainforest, historically left untouched by human development, that is practically overflowing with biodiversity. There are more species of plants, reptiles, insects, mammals and birds here than almost anywhere else in the Amazon and, by extension, the world.

Nestled in the dense array of kapok, ficus, Cecropia and Socratea or “walking palm” trees is Tiputini Biodiversity Station (TBS). Established in 1993 and chosen specifically for its isolated location, the research station at Tiputini is a collaborative venture between Universidad San Francisco de Quito and Boston University. TBS supports ecological research at all levels, hosting everyone from visiting undergraduate students to PhD candidates to senior academics.

Almost everything about TBS and its surroundings reinforces the feeling that this is truly one of the most pristine and isolated centers of biodiversity in the world. As visitors to TBS for our Tropical Ecology field course, the first-year graduate students in Princeton’s Department of Ecology and Evolutionary Biology travelled by multiple planes, boats, buses and trucks over five hours from the nearest city (Coca) just to reach the field station itself. Photo credit: Alex Becker

Yet, as unfortunately seems inevitable whenever anyone talks about these last remaining ‘untouched’ areas, the pristine nature of TBS and Yasuní National Park comes with its own caveat. On our journey to the station, we are, probably naïvely, surprised to have to go through a security checkpoint run by the national oil company Petroamazonas. The mere presence of Petroamazonas indicates that the as yet undisturbed area surrounding TBS is up against a rapidly ticking clock. And with only a cursory glance over the basic facts of this situation, the sound of that ticking clock becomes deafening.

*     *     *

There are hundreds of millions, possibly billions, of barrels of Amazon crude oil lying beneath Yasuní National Park. For any nation, but particularly Ecuador – a relatively poor, developing country – the temptation to drill is immense. (Ecuador’s per capita GDP in 2013 was $6,003, compared to US per capita GDP of $53,042 in the same year.) For example, the government stood to make over $7 billion in net profit (at 2007 prices) from the extraction and sale of 850 million barrels of oil from these reserves.

Yasuní had the potential to be a model for innovative environmental policy. It possesses unparalleled species richness, is located in a nation dependent on the extraction of non-renewable resources, and is home to the indigenous Waorani and two uncontacted groups, the Tagaeri and Taromenane. In many ways, the variety of stakeholders and the conflicts of interests and aims among them represent one of the most daunting conservation and sustainable development challenges the world faces today. How do we balance the needs of biodiversity maintenance, socioeconomic parity and protection of indigenous people when the goals of each seem to fundamentally misalign with one another?

The attempt to resolve this conflict was compellingly detailed in a National Geographic feature in January 2013. In 2007, President Correa proposed the so-called Yasuní-ITT Initiative (named after the three oil fields in the area it encompasses: Ishpingo, Tambococha, and Tiputini). The Yasuní-ITT sought $3.6 billion in compensation (to be contributed by international donors, both countries and corporations) in exchange for a complete ban on oil extraction and biodiversity protection for the ‘ITT block’ in the northeast corner of Yasuní.

With this initiative officially instated in 2010, Ecuador became one of the first nations to attempt sustainable development and action against climate change based on a model of truly worldwide cooperation. For this model to be successful, the government relied on other countries to recognize that an international desire to preserve the ecological value of Yasuní also meant an international responsibility to contribute to the opportunity cost of this preservation. There was a groundswell of support for this proposal within Ecuador, and initially it was met with enthusiasm abroad as well. However, by mid-2012 the Ecuadorian government had received only $200 million in pledges; contributions stalled, and the Yasuní-ITT initiative was officially abandoned in August 2013.

Similar sustainability issues were at the forefront of the recent UN Climate Change Conference 2015 in Paris, also known as COP 21 (21st session of the Conference of Parties). Much of the prolonged negotiation and disagreement among the attending countries stemmed from the divergence of priorities between developed and developing nations. The former group was, by and large, pushing for uncompromising targets on emissions reduction and renewable energy use from the current highest emissions contributors, chief among which are developing nations like China and Brazil.

But developing nations felt strongly that they should not be excluded from the full benefits of industrialization, which developed nations have profited from in the past. One potential solution to this conflict, and one which led to part of the Paris Agreement, is for developed nations to support developing nations in the transition from fossil fuels to renewable, lower emission energy sources through financial compensation. Sound familiar? This was exactly the logic behind the Yasuní-ITT, so the failure of this initiative represents more than just a threat to Yasuní — it symbolically threatens action against climate change worldwide.

A closer look at the failure of the Yasuní-ITT reveals that there were in fact more complex considerations at play than simply a lack of pledged contributions. In an essay evaluating the decision to abandon the initiative, Ariana Keyman, an associate at the Busara Center for Behavioral Economics in Nairobi, assessed the particular political, economic and social factors that contributed to the Yasuní-ITT’s demise. Due to his dogged pursuit of a ‘New Latin American Left’, Ecuador’s President Correa was determined to increase spending on pro-poor socioeconomic development while also preserving the status of Ecuador’s environment and biodiversity. Unfortunately, as is often the case, something had to give and it was ultimately the environment that was compromised. This was only exacerbated by the historic dependency of this country on the oil industry and the ‘closed-door’ manner in which the Yasuní-ITT was both adopted and abandoned by the government. In this light, perhaps the case for international collaboration and economic cooperation on tackling the challenges of biodiversity conservation and climate change is not so hopeless, but it is still likely to be a bumpy road ahead.

*     *     *

Tiputini Biodiversity Station itself still seemed largely untouched during our trip in January 2016. Part of this was surely due to our unfamiliarity with the oil extraction process, but it’s clear that the continued tireless efforts of environmental groups are at least holding off the worst of the potential destruction for now. The founding director of TBS, Kelly Swing, wrote in a guest blog post in National Geographic in 2012 that the incursion of oil companies in this area has also in some ways helped scientists learn more about the incredible ecological communities in this region, thanks to increased funding and accessibility.

More than the literal isolation, the overwhelming presence of a brilliant array of mammals, birds, reptiles, amphibians and insects that seemed to be almost dripping from the trees was a constant reminder of how far from urbanization we were and of the sheer uniqueness of TBS’s location. Every morning, we awoke to the reverberating booms of howler monkeys and the screeching calls of caracaras and macaws high above us. Walking to and from the dining area, we routinely spotted roosting bats and several species of anole lizards, and learned to recognize the squeaking communications and rustling branches around us as the local woolly monkey troop on their morning or evening commute.

All of these wonderfully unique species (clockwise from top left: white-necked jacobin, motmot, woolly monkey, and tree frog) are threatened in some capacity by the oil industry. Photos credit: Alex Becker.

It appears, however, that the benefits are unlikely to outweigh the costs, particularly when the long-term consequences of the oil industry in Yasuní will remain unknown for years to come. Swing was quick to point out that there are already documented negative impacts: insects are drawn to huge gas flares and eviscerated in large numbers, eliminating important food resources for frogs, birds and bats, and industrial noise pollution disrupts the communication channels of calling birds and primates, potentially limiting their ability to find mates, locate food, and avoid predators.

In establishing the research station along the Tiputini River, Swing said that their goal was “to be able to study and teach about nature itself, not human impacts on nature.” From our experience there, this goal was definitely realized in the most fantastic way possible, but we cannot say with any certainty how many of the visitors who come after us will be able to say the same. As global citizens, this is a concern that we should all be dedicated to addressing.

Justine is a first-year PhD student in the Ecology and Evolutionary Biology department at Princeton University. She is interested in the interaction between animal movement behavior and environmental heterogeneity, particularly in relation to individual and collective decision-making processes, as well as conservation applications.

Losing the Climate War to Methane? The role of methane emissions in the global warming puzzle

Written by Dr. Arvind Ravikumar

There is much to cheer about the recent climate agreement signed last December at the 21st Conference of Parties (COP 21) in Paris, France, to reduce greenhouse gas emissions and limit global temperature rise to below 2 °C. Whether countries will implement effective policies to achieve this agreement is a different question. Leading up to the Conference in Paris, countries proposed their intended nationally determined contributions (INDCs) – the targets and proposed policies pledged by each country that signed the United Nations Framework Convention on Climate Change, describing its intended contribution to reducing global warming. The United States, among other things, is banking on the recently finalized Clean Power Plan by the Environmental Protection Agency (EPA) – this policy aims to reduce US greenhouse gas (GHG) emissions from the power sector by 26 to 28% in 2030, partly by replacing high-emitting coal-fired power plants with low-emitting natural gas-fired plants and increased renewable generation (primarily wind and solar). Electricity production by natural gas-fired plants is therefore expected to increase over the next few decades, with natural gas acting as a ‘bridge fuel’ to a carbon-free economy. Even though the US Supreme Court recently halted the implementation of the Clean Power Plan, the EPA anticipates that it will eventually be upheld.

The primary component of natural gas is methane, a highly potent greenhouse gas whose global warming potential (i.e. its ability to increase the Earth’s surface temperature through the greenhouse effect) is 36 times that of carbon dioxide over the long term (100-year impact) and over 80 times over the near term (20-year impact). Although carbon dioxide makes up the bulk of US greenhouse gas emissions (see Fig. 1), methane is estimated to contribute around 10% of the total. Given its significantly higher global warming potential, methane emissions and leakage can potentially erode the climate benefits of declining coal production.
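As a quick illustration of what those warming potentials mean in practice, the snippet below converts a hypothetical methane release into CO2-equivalent terms on both time horizons (the tonnage is made up for the example; the potentials are the values cited above):

```python
# Global warming potentials for methane cited above (AR5, with feedbacks).
GWP_100 = 36  # 100-year horizon
GWP_20 = 86   # 20-year horizon ("over 80")

ch4_tonnes = 1_000.0  # hypothetical methane release, metric tons
print(f"100-year CO2-eq: {ch4_tonnes * GWP_100:,.0f} t")  # 36,000 t
print(f" 20-year CO2-eq: {ch4_tonnes * GWP_20:,.0f} t")   # 86,000 t
```

The same physical leak thus looks more than twice as damaging on the 20-year horizon, which is why the choice of time frame matters so much in the coal-versus-gas debate.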

Figure 1. US greenhouse gas inventory (2013). Source: EPA.

Methane emissions are fairly diversified across natural and man-made sources. Figure 2 shows the sources of methane emissions in the US (2013) as estimated by the EPA through its GHG monitoring program. While 50% of emissions can be attributed to agriculture and waste-disposal activities, we can see that about 30% of methane emissions come from the oil and gas industry. Much of this can be attributed to the recent boom in non-conventional or shale gas production through fracking technology. The combination of low natural gas prices and higher demand from the power sector makes it imperative to reduce methane emissions as much as technologically feasible.

Figure 2. US methane emissions by source (2013). Source: EPA.

Currently, methane leaks occur at all stages of the natural gas infrastructure – from production and processing to transmission and distribution lines in major cities. While the global warming effects of higher methane concentrations are fairly well understood, there is currently little consensus on the magnitude of emissions from the natural gas infrastructure. For example, a recent study found that the average methane loss in distribution pipelines around Boston was about 2.7%, significantly higher than the 1.1% reported in inventory estimates to the EPA. Another study, published in the academic journal Science, showed that independent measurements of the methane leakage rate across the US infrastructure varied from about 1% to over 6%. The climate benefits of switching from coal to natural gas fired power plants would critically depend on this leakage rate.
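A rough back-of-the-envelope sketch shows why. With illustrative plant parameters (assumptions for this example, not values from the studies above), one can compare the near-term CO2-equivalent intensity of gas-fired power, including leaked methane, against a typical coal plant:

```python
# All plant parameters below are illustrative assumptions.
GWP_20 = 86                   # near-term warming potential of CH4 (AR5)
coal_g_per_kwh = 1000         # typical coal plant, g CO2 per kWh
gas_g_per_kwh = 450           # combined-cycle gas plant, combustion only
gas_burned_kg_per_kwh = 0.13  # kg CH4 burned per kWh (~50%-efficient plant)

# Leak fractions spanning the ~1% to >6% range reported above.
for leak in (0.01, 0.027, 0.04, 0.06):
    leaked_kg = gas_burned_kg_per_kwh * leak / (1 - leak)  # leaked per kWh delivered
    total = gas_g_per_kwh + leaked_kg * 1000 * GWP_20      # g CO2-eq per kWh
    verdict = "worse than" if total > coal_g_per_kwh else "better than"
    print(f"leakage {leak:5.1%}: {total:5.0f} g CO2-eq/kWh ({verdict} coal)")
```

Under these assumed numbers, the climate advantage of gas shrinks steadily with leakage and vanishes somewhere above a few percent on a 20-year basis – hence the importance of pinning down the true leakage rate.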


To better estimate methane leakage, the Environmental Defense Fund (EDF), a non-profit organization based in Washington, DC, organized and recently concluded a series of 16 studies to find and measure leaks in the US natural gas supply chain. While some of the results are still being analyzed, much of the data show that the conventional inventory estimates maintained by the EPA have consistently underestimated leakage from various sources. The Barnett shale region in Texas, which produces about 7% of the nation’s natural gas, was shown to emit 90% more methane than EPA estimates suggest. To complicate matters further, until recently, estimates from atmospheric top-down data measured using satellites and aircraft significantly exceeded land-based bottom-up measurements using methane sensors. On a similar note, detailed measurements from the Barnett shale region showed that just 2% of the facilities in the region account for 50% of all the methane emissions. Such a concentration of emissions in a few large sources further complicates direct measurement campaigns, which typically sample only a small fraction of the facilities in a region. While the EDF and other studies have been instrumental in shaping our current understanding of methane leaks in the US and their contribution to greenhouse gas emissions, much work is required to understand the sources and, most importantly, to find ways to cost-effectively monitor, detect and repair such leaks.
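The sampling problem is easy to demonstrate with a toy simulation: draw facility-level emissions from a heavy-tailed distribution (the lognormal below is an illustrative choice, not a fit to the Barnett data) and watch how erratic small random surveys become.

```python
import numpy as np

rng = np.random.default_rng(1)

# 10,000 synthetic facilities with strongly skewed emissions.
emissions = rng.lognormal(mean=0.0, sigma=2.5, size=10_000)

top_2pct = np.sort(emissions)[::-1][: len(emissions) // 50].sum()
print(f"Top 2% of facilities emit {top_2pct / emissions.sum():.0%} of the total")

# Bottom-up campaigns measure only a small subset; repeated 2% samples
# scatter widely because they may or may not catch a super-emitter.
for trial in range(3):
    sample = rng.choice(emissions, size=200, replace=False)
    est = sample.mean() * len(emissions) / emissions.sum()
    print(f"2% random sample, trial {trial + 1}: {est:.0%} of the true total")
```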

Aerial footage of the recent natural gas leak from a storage well in Aliso Canyon near LA. The leak is estimated to have released 96,000 metric tons of methane, equivalent to about 900 million gallons of gasoline burnt and $15 million worth of natural gas. Source: Environmental Defense Fund, 2015.

Methane leakage in the context of global warming has only recently caught public attention – see here, here and here. In addition to greater awareness in business and policy circles, significant efforts are required to identify economically viable leak detection and repair programs. Currently, the industry-standard options for detecting methane leaks are high-sensitivity but high-cost sensors, or low-cost but low-sensitivity infrared cameras. There is an immediate need to develop techniques that can cost-effectively detect leaks over large areas (e.g. thousands of square miles). From a regulatory perspective, the EPA has released proposed regulations to limit methane leaks from the oil and gas industry. This comes on the heels of the goals set by the Obama administration’s Climate Action Plan to reduce methane emissions from the oil and gas sector by 40 to 45% from 2012 levels by 2025. These regulations require oil and gas companies involved in the entire natural gas life cycle to periodically undertake leak detection and repair procedures, depending on overall leakage levels. The final rule is expected to be out sometime in 2016.

The success of the Clean Power Plan in reducing greenhouse gas emissions will depend significantly on the strength of the proposed regulations to curb methane leaks. We now have a better estimate of fugitive emissions (leaks) of methane from the US natural gas infrastructure. Concurrently, there should be a greater focus on developing cost-effective programs to detect and repair such leaks. It was recently reported that replacing old pipelines with newer ones in a city’s gas distribution network is effective in reducing leaks and improving public safety. With a considerably higher global warming potential than carbon dioxide, methane has the potential to erode the climate benefits earned by switching from high-emitting coal plants to low-emitting natural gas power plants. Ensuring that does not happen will take a coordinated effort and commitment from both industry and government agencies.


Arvind graduated with a PhD in Electrical Engineering from Princeton University in 2015 and is currently a postdoctoral researcher in Energy Resources Engineering at Stanford University. Somewhere later in grad school, he became interested in the topics of energy, climate change and policy. Arvind is an Associate Editor at Highwire Earth. You can read more about his work at his personal website.

Human Impacts on Droughts: How these hazards stopped being purely natural phenomena

Written by Dr. Niko Wanders

We often hear about droughts around the world, including recent ones in the U.S. and Brazil, the latter threatening the water supply for this year’s Olympic Games. Despite their natural occurrence, there is still a lot that we do not fully understand about the processes that cause droughts and about how they impact our society and natural ecosystems. These topics are of great interest to scientists and engineers, and of great importance to policy makers and stakeholders.

The elusive definition of a drought

A drought can be broadly defined as a decrease in water availability below levels that are considered normal within a region. This means that droughts do not only occur in warm, sunny, dry countries but can take place essentially anywhere. What makes it hard to come up with a single, precise definition of a drought is that this below-normal water availability can be found at the different stages of the water cycle: precipitation, soil moisture (i.e. how much water there is in the soil), snow accumulation, groundwater, reservoirs and streamflow. Therefore, more useful definitions of drought conditions have to be tailored for specific sectors (e.g. agriculture or power generation) by focusing on the stage of the water cycle that is relevant for them (e.g. soil moisture for farmers, and streamflow for controllers of hydroelectric and thermoelectric plants).
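One common way to make “below normal” operational is a percentile threshold: flag a period as being in drought when, say, streamflow falls below its 20th percentile for that time of year. Here is a minimal sketch with synthetic data (purely illustrative, not the algorithm of any particular monitor):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 30-year daily streamflow record: a seasonal cycle plus noise.
years, days = 30, 365
seasonal = 50 + 30 * np.sin(2 * np.pi * np.arange(days) / days)
flows = seasonal + rng.gamma(shape=2.0, scale=10.0, size=(years, days))

# Day-of-year specific threshold: the 20th percentile of the historical
# record, so "normal" varies with the season.
threshold = np.percentile(flows, 20, axis=0)

current_year = flows[-1]
in_drought = current_year < threshold
print(f"Days in drought this year: {in_drought.sum()} of {days}")
```

Swapping streamflow for soil moisture or reservoir storage gives the sector-specific definitions mentioned above.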

Droughts can cover areas that range from a few thousand square miles to large portions of a continent and can last anywhere from weeks to multiple years. Normally they start after a prolonged period of below-normal precipitation, sometimes in combination with increased evaporation due to high temperatures. This then causes a reduction in water availability in the soil, which can lead to lower groundwater and river levels as a result of decreased recharge from groundwater aquifers into rivers. Snowfall is another important factor because it provides a steady release of water into streams throughout the spring. When most of the precipitation falls as rain instead, it runs off quickly, leaving the spring dry once again. The evolution of a drought through the water cycle is called drought propagation and normally takes multiple weeks to several months.

So far this season, El Niño has been bringing some relief to the California drought. The current snow accumulation is above normal, which is good news for this drought-stricken region. The forecasts for the upcoming months look hopeful, and it is likely that California will see some relief from the drought in the coming months. Nevertheless, it will take multiple years before groundwater and reservoir levels are back to normal, so the drought and its impacts will remain for at least the coming years.

Figure 1. U.S. Seasonal Drought Outlook provided by NOAA.

Droughts’ impacts on society

Extensive and long-lasting droughts can accumulate huge costs for the affected regions over time. For example, the ongoing California drought caused $2.2 billion in damage in 2014 alone. This is only an estimate of the damage to society in monetary terms; the severe impacts on the region’s ecosystems are difficult to measure and quantify. As a result of the drought conditions, reservoirs in most of California are at record-low levels, and strict water conservation policies have been implemented.

The severity of a drought’s impacts, however, depends greatly on the wealth, vulnerability, and resilience of the affected region, including the degree to which the local economy and services rely on water. Despite the huge costs of the California drought, the U.S. is more capable of mitigating the effects and eventually recovering, given the country’s general financial strength compared to many developing nations. According to reports by the United Nations and the Inter-Agency Standing Committee, an estimated 50,000 to 260,000 people lost their lives in the severe 2011 drought in the Horn of Africa, because the financial means to provide food aid were not present and outside help started too late.

To have better tools to deal with these extreme events, several government agencies and institutes around the world have created drought monitors to track current drought conditions and to forecast their evolution. Examples are the Princeton Flood and Drought Monitors for Latin America and Africa, the U.S. Drought Monitor and the European Drought Observatory. These websites provide information on current drought conditions, which can be used to take preventive measures by governments and other stakeholders. Additionally, they can be used to inform the general public on current conditions and the need for preventive measures, such as conservation.

Figure 2. Latin American and African Flood and Drought Monitors developed at Princeton University. Credit: Terrestrial Hydrology Research Group at Princeton University.

The power to affect a drought

Traditionally, droughts have been thought of as purely natural phenomena that we have to endure from time to time. However, a recent commentary in Nature Geoscience that included two Princeton contributors argued that we can no longer ignore how humans affect drought occurrence. For example, when conditions get drier from lack of rainfall, people are more likely to use water from the ground, rivers and channels for irrigation. These actions can impact the water cycle over large areas, affecting the water resources of communities downstream and of the local communities themselves in the near future. In the case of California, the severe drop in groundwater levels has escalated in the last three years due to a combination of the extreme drought conditions and the resulting heavy pumping for irrigating crops. The extra water that becomes available from pumping groundwater is only a temporary and unsustainable solution that alleviates the drought conditions in the soil locally and only for a short period of time. Most of the irrigated water will evaporate and only a small portion will return to the groundwater. In the long run, these depleted groundwater resources need to be replenished to recharge rivers and reservoirs – a process that can take multiple years to decades. Furthermore, extracting groundwater in large amounts can lead to subsidence – a lowering of the ground level – that can sometimes be irreversible and have permanent effects on future water availability in the region. Thus, through our actions we have the power to affect how a drought develops, making it necessary to rethink the concept of a drought to include our role in enhancing and mitigating it.

Figure 3. On the left: Measurement of subsidence (i.e. lowering of the ground levels) in the San Joaquin Valley during the past three decades, Photo Credit: USGS. On the right: Measured subsidence in the San Joaquin Valley between May 3, 2014 and January 22, 2015 by satellite, Photo Credit: NASA.

But it’s not all bad news. Last year I carried out a study with my collaborator, Dr. Yoshihide Wada, which found that human interventions can sometimes lessen the impact of natural drought conditions. This is most clear when we look at the reservoirs built on many river systems around the world. By storing water behind these structures, river discharge is spread more evenly throughout the year: high flows or floods can be dampened by holding back some of the water, and that water can then be used in the dry season or during a drought event to reduce the impact of low flows. This in itself opens up opportunities for regional water management that can help reduce a region’s vulnerability to droughts. Reservoirs have three main limitations: their large surface areas increase the amount of evaporation; their benefits are limited in prolonged drought conditions simply because their storage is not infinite; and they have a large impact on plants and animals in downstream ecosystems (e.g. migrating fish species that need to swim upstream).
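A minimal water-balance sketch illustrates the smoothing effect; the monthly inflows, capacity, and target release below are arbitrary illustrative numbers.

```python
# Monthly inflows with a wet season (high values) and a dry season (low values).
inflow = [5, 40, 60, 10, 2, 1, 1, 2, 30, 50, 8, 3]
capacity, storage, target_release = 80.0, 40.0, 15.0

for month, q_in in enumerate(inflow, start=1):
    release = min(target_release, storage + q_in)      # cannot release more than held
    storage = min(capacity, storage + q_in - release)  # excess above capacity spills
    print(f"month {month:2d}: inflow {q_in:4.0f}, release {release:4.1f}, "
          f"storage {storage:5.1f}")
```

Despite inflows swinging from 1 to 60, the release stays at the 15-unit target all year – until a drought lasts long enough to empty the finite storage, which is exactly the limitation noted above.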

Figure 4. Impact of human intervention on future hydrological drought, as a result of irrigation, reservoir operations and groundwater pumping. Darker colors indicate higher levels of confidence (Figure adapted from Wanders and Wada, 2015).

Drought in the future

Scientists have carried out many studies to explore what will happen to the characteristics and impacts of droughts in the future. Multiple research publications show that droughts will most likely increase in severity compared to current conditions in many of the world’s regions, while human water demand is projected to rise – painting a stressful future. This requires an adjustment in the way we deal with drought conditions, how we monitor and forecast these extremes, and how we consume water in general.

A short-term solution is trying to improve our monitoring and forecasting of these events so that we are better prepared. For example, additional improvements in meteorological and hydrological forecasts for conditions 3-6 months in advance would help operators manage their reservoirs in a way that would reduce the impact of upcoming drought events. These improvements require scientists to become more aware of the impact that humans have on the water cycle, which is a growing area of interest in recent years, but is definitely not standard practice.

Apart from improving our ability to forecast upcoming drought events, we could also change our response to ongoing droughts by being more efficient with the remaining available water. This could be achieved by using more efficient irrigation systems, building separate sewage systems for rainwater (which could be used for drinking water) and for domestic and industrial wastewater (which is only reusable after extensive treatment), and by not cultivating water-demanding crops in areas with naturally low water availability. All these measures require long-term planning, and government agencies and societies willing to push for and achieve these goals. Often a severe event (with significant damage) is needed to create the awareness that these measures are a necessity, as was the case in California, which has resulted in new water laws, and in Australia a few years ago.

Humans and the natural water system are strongly intertwined, especially in hydrological extreme conditions. Our impact on the water cycle is significant and cannot be neglected, both in normal conditions and under extreme hydrological ones. It will be important in the coming decades for us to learn how to responsibly manage our valuable water resources within a changing environment.

 


Dr. Niko Wanders is a Postdoctoral Research Fellow in the Civil and Environmental Engineering Department at Princeton, working together with Prof. Eric Wood. His research interests include the study of the physical processes behind droughts, as well as the factors that influence their magnitude and impact on society. Niko received an NWO-Rubicon Fellowship to work on the development of a global sub-seasonal drought forecasting system. The aim of the project is to develop a system that can not only forecast upcoming drought events, but also make reliable forecasts of drought impacts on agricultural production, water demand and water availability for human activities.