Climate Explained is a collaboration between The Conversation, Stuff and the New Zealand Science Media Centre to answer your questions about climate change.
If you have a question you’d like an expert to answer, please send it to email@example.com
I recently read an article stating the atomic bomb testing in the Pacific destroyed so much of the upper atmosphere that the US could no longer bounce communications off the atmosphere and had to deploy artificial satellites for communication. Is this true? And just how much damage did they do?
The article the question refers to doesn’t mention satellites, so let’s focus on the atmospheric damage part of the question. Indeed, surface and atmospheric (high-altitude) detonations of nuclear weapons can have short-term and long-term effects.
One short-term effect was a temporary blackout of long-distance high-frequency (HF) radio communication over the surrounding area. But this radio communication blackout was not a result of the nuclear explosions destroying the ionosphere.
On the contrary, the nuclear detonations temporarily increased the natural level of ionisation in the upper atmosphere.
The Earth’s ionosphere is a natural layer of charged particles at approximately 80-1,000km altitude. This ionised portion of the Earth’s upper atmosphere largely owes its existence to solar radiation, which strips electrons from neutral atoms and molecules.
The ionosphere consists of three major layers, known as D, E and F layers. The lower D and E layers typically exist only during daylight hours, while the highest F layer always exists.
These layers have distinct characteristics. The E and F layers are very reflective to HF radio waves. The D layer, on the other hand, is more like a sponge and absorbs HF waves.
In long-distance HF radio communications, the radio waves are bounced back and forth between the ionosphere and the Earth’s surface. This means you don’t need to establish a line of sight for HF radio communication.
But this radio communication scheme only works well when there is a reflective E or F layer, and when the absorbing D layer is not dominant.
During regular daytime hours, the D layer often becomes a nuisance because it weakens radio wave intensity in the lower HF spectrum. However, by changing to higher frequencies you can regain broken communication links.
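Moving up in frequency helps because non-deviative absorption in the D region falls off roughly as the inverse square of the wave frequency. Here is a minimal Python sketch of that scaling; the constant k, which lumps together path length and ionisation level, is a hypothetical value:

```python
def d_layer_absorption_db(freq_mhz, k=100.0):
    """Illustrative inverse-square scaling of D-region absorption with
    wave frequency. k (hypothetical) lumps together path length and
    the level of D-layer ionisation."""
    return k / freq_mhz ** 2

# Doubling the operating frequency cuts absorption to a quarter.
print(d_layer_absorption_db(5.0))   # 4.0 dB
print(d_layer_absorption_db(10.0))  # 1.0 dB
```

When extra ionisation boosts the D layer, k effectively jumps, so the lowest usable frequency rises and operators must shift upwards, provided the frequency still reflects off the E or F layer.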
The D layer may become even more dominant when intense X-ray emissions from solar flares or energetic particles are impacting the atmosphere. The absorbing D layer then breaks any HF communication links that traverse it.
Nuclear detonations also produce X-ray radiation, which leads to additional ionisation in all layers of the ionosphere. This makes the F layer more reflective to HF radio waves, but, alas, the D layer also becomes more absorptive.
This makes it difficult to bounce radio waves off the ionosphere for long-distance communication soon after a nuclear explosion, even though the ionosphere stays intact.
Beyond additional ionisation, shock waves from nuclear detonations produce waves and ripples in the upper atmosphere called “atmospheric gravity waves” (AGWs).
These waves travel in all directions, even reaching the ionosphere where they cause what are known as “travelling ionospheric disturbances” (TIDs), which can be observed for thousands of kilometres.
Bomb blasts are not the only things that cause disturbances in the atmosphere.
In September 1979, there were reports of bright flashes of light off the South African coast, igniting theories South Africa had nuclear weapon capabilities.
Analysis of ionospheric data from the Arecibo Observatory, in Puerto Rico, confirmed the presence of waves in the ionosphere that corroborated the theory of an atmospheric detonation. But whether the detonation was artificial or natural could not be determined.
The reason for the ambiguity is that meteor explosions and nuclear detonations in the atmosphere both generate AGWs with similar characteristics.
Volcanic eruptions, such as the 1980 Mount St Helens eruption in the US, and large earthquakes, such as the 2011 Tohoku earthquake in Japan, are other examples of energetic processes at the ground impacting the upper atmosphere.
Another well-known source of ionospheric disturbances is the geomagnetic storm, typically caused by coronal mass ejections from the Sun or solar wind disturbances impacting Earth’s magnetosphere.
In summary, nuclear detonations can impact the upper atmosphere in many ways, as do many other non-nuclear terrestrial and solar events that carry enormous energy. But the damage (so to speak) isn’t permanent.
Did the impact of these nuclear tests on the ionosphere specifically lead to the immediate launch of communications satellites? Not directly, because the impacts were temporary.
But in the Cold War setting, the potential for adversaries to even briefly interrupt over-the-horizon communications would certainly have been a motivating factor in developing communications satellites as backup.
While greenhouse gases are warming Earth’s surface, they’re also causing rapid cooling far above us, at the edge of space. In fact, the upper atmosphere about 90km above Antarctica is cooling at a rate ten times faster than the average warming at the planet’s surface.
Our new research has precisely measured this cooling rate, and revealed an important discovery: a new four-year temperature cycle in the polar atmosphere. The results, based on 24 years of continuous measurements by Australian scientists in Antarctica, were published in two papers this month.
The findings show Earth’s upper atmosphere, in a region called the “mesosphere”, is extremely sensitive to rising greenhouse gas concentrations. This provides a new opportunity to monitor how well government interventions to reduce emissions are working.
Our project also monitors the spectacular natural phenomenon known as “noctilucent” or “night shining” clouds. While beautiful, the more frequent occurrence of these clouds is considered a bad sign for climate change.
Since the 1990s, scientists at Australia’s Davis research station have taken more than 600,000 measurements of the temperatures in the upper atmosphere above Antarctica. We’ve done this using sensitive optical instruments called spectrometers.
These instruments analyse the infrared glow radiating from so-called hydroxyl molecules, which exist in a thin layer about 87km above Earth’s surface. This “airglow” allows us to measure the temperature in this part of the atmosphere.
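The temperature retrieval rests on the fact that the relative brightness of emission lines from molecules in thermal equilibrium follows a Boltzmann distribution set by the local temperature. A simplified two-line sketch of the idea, using invented line constants and energies rather than real hydroxyl spectroscopic values:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def rotational_temperature(i1, i2, e1, e2, c1=1.0, c2=1.0):
    """Estimate temperature from the ratio of two emission lines in
    Boltzmann equilibrium, where intensity I is proportional to
    C * exp(-E / (k*T)). e1, e2 are upper-state energies in joules;
    c1, c2 lump together degeneracies and transition probabilities
    (hypothetical values here)."""
    return (e2 - e1) / (K_B * math.log((i1 * c2) / (i2 * c1)))

# Illustrative line pair: energies chosen so a ~200 K atmosphere
# yields an intensity ratio of about e^2, i.e. roughly 7.4.
e1 = 1.0e-21
e2 = e1 + 2 * K_B * 200
print(round(rotational_temperature(7.389, 1.0, e1, e2)))  # ≈ 200
```

Real retrievals use several well-characterised hydroxyl lines and careful instrument calibration; this two-line ratio only illustrates the principle behind turning airglow spectra into temperatures.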
Our results show that in the high atmosphere above Antarctica, carbon dioxide and other greenhouse gases do not have the warming effect they do in the lower atmosphere (where the energy they absorb is passed on through collisions with other molecules). Instead, the excess energy is radiated to space, causing a cooling effect.
Our new research more accurately determines this cooling rate. Over 24 years, the upper atmosphere temperature has cooled by about 3℃, or 1.2℃ per decade. That is about ten times greater than the average warming in the lower atmosphere – about 1.3℃ over the past century.
Rising greenhouse gas emissions are contributing to the temperature changes we recorded, but a number of other influences are also at play. These include the seasonal cycle (warmer in winter, colder in summer) and the Sun’s 11-year activity cycle (which involves quieter and more intense solar periods) in the mesosphere.
One challenge of the research was untangling all these merged “signals” to work out the extent to which each was driving the changes we observed.
In this process, we made a surprising discovery: a natural cycle not previously identified in the polar upper atmosphere. This four-year cycle, which we called the Quasi-Quadrennial Oscillation (QQO), saw temperatures vary by 3-4℃ in the upper atmosphere.
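Separating overlapping cycles of known periods from a long temperature record is commonly done with a least-squares fit of sinusoids plus a linear trend. A sketch on synthetic data (the amplitudes, noise level and random seed are invented for illustration; numpy is assumed available):

```python
import numpy as np

# Monthly time axis over 24 years (in years).
t = np.arange(24 * 12) / 12.0
rng = np.random.default_rng(0)

# Synthetic temperatures: a cooling trend of -1.2 degC per decade,
# an annual cycle, a 4-year (quasi-quadrennial) cycle, and noise.
temps = (200.0 - 0.12 * t
         + 5.0 * np.cos(2 * np.pi * t)
         + 2.0 * np.sin(2 * np.pi * t / 4.0)
         + rng.normal(0.0, 0.5, t.size))

# Design matrix: intercept, trend, and a sine/cosine pair per cycle.
cols = [np.ones_like(t), t]
for period in (1.0, 4.0):
    cols += [np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period)]
coeffs, *_ = np.linalg.lstsq(np.column_stack(cols), temps, rcond=None)

print(round(coeffs[1] * 10, 2))  # recovered trend, degC per decade, ≈ -1.2
```

With the periods held fixed, the regression recovers each component's amplitude and the underlying trend simultaneously, which is how merged signals such as the seasonal cycle, the solar cycle and the QQO can be untangled from one record.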
Discovering this cycle was like stumbling across a gold nugget in a well-worked claim. More work is needed to determine its origin and full importance.
But the finding has big implications for climate modelling. The physics that drive this cycle are unlikely to be included in global models currently used to predict climate change. But a variation of 3-4℃ every four years is a large signal to ignore.
We don’t yet know what’s driving the oscillation. But whatever the answer, it also seems to affect the winds, sea surface temperatures, atmospheric pressure and sea ice concentrations around Antarctica.
Our research also monitors how cooling temperatures are affecting the occurrence of noctilucent or “night shining” clouds.
Noctilucent clouds are very rare – from Australian Antarctic stations we’ve recorded about ten observations since 1998. They occur at an altitude of about 80km in the polar regions during summer. You can only see them from the ground when the sun is below the horizon during twilight, but still shining on the high atmosphere.
The clouds appear as thin, pale blue, wavy filaments. They are composed of ice crystals and require temperatures around minus 130℃ to form. While impressive, noctilucent clouds are considered a “canary in the coalmine” of climate change. Further cooling of the upper atmosphere as a result of greenhouse gas emissions will likely lead to more frequent noctilucent clouds.
There is already some evidence the clouds are becoming brighter and more widespread in the Northern Hemisphere.
Human-induced climate change threatens to alter radically the conditions for life on our planet. Over the next several decades – less than one lifetime – the average global air temperature is expected to increase, bringing with it sea level rise, weather extremes and changes to ecosystems across the world.
Long term monitoring is important to measure change and test and calibrate ever more complex climate models. Our results contribute to a global network of observations coordinated by the Network for Detection of Mesospheric Change for this purpose.
The accuracy of these models is critical to determining whether government and other interventions to curb climate change are indeed effective.
John French, Atmospheric physicist, University of Tasmania; Andrew Klekociuk, Principal Research Scientist, Australian Antarctic Division and Adjunct Senior Lecturer, University of Tasmania, and Frank Mulligan, National University of Ireland Maynooth
There is ongoing debate about whether to prioritise native or non-native plants to fight climate change. As our recent research shows, non-native plants often grow faster than native plants, but they also decompose faster, driving the release of 150% more carbon dioxide from the soil.
Our results highlight a challenging gap in our understanding of carbon cycling in newly planted or regenerating forests.
It is relatively easy to measure plant biomass (how quickly a plant grows) and to estimate how much carbon dioxide it has removed from the atmosphere. But measuring carbon release is more difficult because it involves complex interactions between the plant, plant-eating insects and soil microorganisms.
This lack of an integrated carbon cycling model that includes species interactions makes predictions for carbon budgeting exceedingly difficult.
There is uncertainty in our climate forecasting because we don’t fully understand how the factors that influence carbon cycling – the process in which carbon is both accumulated and lost by plants and soils – differ across ecosystems.
Carbon sequestration projects typically use fast-growing plant species that accumulate carbon in their tissues rapidly. Few projects focus on what goes on in the soil.
Non-native plants often accelerate carbon cycling. They usually have less dense tissues and can grow and incorporate carbon into their tissues faster than native plants. But they also decompose more readily, increasing carbon release back to the atmosphere.
Our research, recently published in the journal Science, shows that when non-native plants arrive in a new place, they establish new interactions with soil organisms. So far, research has mostly focused on how this resetting of interactions with soil microorganisms, herbivorous insects and other organisms helps exotic plants to invade a new place quickly, often overwhelming native species.
Invasive non-native plants have already become a major problem worldwide, and are changing the composition and function of entire ecosystems. But it is less clear how the interactions of invasive non-native plants with other organisms affect carbon cycling.
We established 160 experimental plant communities, with different combinations of native and non-native plants. We collected and reared herbivorous insects and created identical mixtures which we added to half of the plots.
We also cultured soil microorganisms to create two different soils that we split across the plant communities. One soil contained microorganisms familiar to the plants and another was unfamiliar.
Herbivorous insects and soil microorganisms feed on live and decaying plant tissue. Their ability to grow depends on the nutritional quality of that food. We found that non-native plants provided a better food source for herbivores compared with native plants – and that resulted in more plant-eating insects in communities dominated by non-native plants.
Similarly, exotic plants also raised the abundance of soil microorganisms involved in the rapid decomposition of plant material. This synergy of multiple organisms and interactions (fast-growing plants with less dense tissues, high herbivore abundance, and increased decomposition by soil microorganisms) means that more of the plant carbon is released back into the atmosphere.
In a practical sense, these soil treatments (soils with microorganisms familiar vs. unfamiliar to the plants) mimic the difference between reforestation (replanting an area) and afforestation (planting trees to create a new forest).
Reforested areas are typically replanted with native species that occurred there before, whereas afforested areas are planted with new species. Our results suggest planting non-native trees into soils with microorganisms they have never encountered (in other words, afforestation with non-native plants) may lead to more rapid release of carbon and undermine the effort to mitigate climate change.
The alarming rate of carbon dioxide flowing into our atmosphere is affecting plant life in interesting ways – but perhaps not in the way you’d expect.
Despite large losses of vegetation to land clearing, drought and wildfires, carbon dioxide is absorbed and stored in vegetation and soils at a growing rate.
This is called the “land carbon sink”, a term describing how vegetation and soils around the world absorb more carbon dioxide from photosynthesis than they release. And over the past 50 years, the sink (the difference between uptake and release of carbon dioxide by those plants) has been increasing, absorbing at least a quarter of human emissions in an average year.
So, to put it simply, humans are producing more carbon dioxide. This carbon dioxide is causing more plant growth, and a higher capacity to suck up carbon dioxide. This process is called the “carbon dioxide fertilisation effect” – a phenomenon when carbon emissions boost photosynthesis and, in turn, plant growth.
What we didn’t know until our study is just how much the carbon dioxide fertilisation effect contributes to the increase in global photosynthesis on land.
But don’t be confused: our discovery doesn’t mean emitting carbon dioxide is a good thing and we should pump out more of it, or that land-based ecosystems are removing more carbon dioxide emissions than we previously thought (we already know how much this is from scientific measurements).
And it definitely doesn’t mean we should, as climate sceptics have done, use the concept of carbon dioxide fertilisation to downplay the severity of climate change.
Rather, our findings provide a new and clearer explanation of what causes vegetation around the world to absorb more carbon than it releases.
What’s more, we highlight the capacity of vegetation to absorb a proportion of human emissions, slowing the rate of climate change. This underscores the urgency to protect and restore terrestrial ecosystems like forests, savannas and grasslands and secure their carbon stocks.
And while more carbon dioxide in the atmosphere does allow landscapes to absorb more carbon dioxide, almost half (44%) of our emissions remain in the atmosphere.
Since the beginning of the last century, photosynthesis on a global scale has increased in nearly constant proportion to the rise in atmospheric carbon dioxide. Both are now around 30% higher than in the 19th century, before industrialisation began to generate significant emissions.
Carbon dioxide fertilisation is responsible for at least 80% of this increase in photosynthesis. Most of the rest is attributed to a longer growing season in the rapidly warming boreal forest and Arctic.
So how does more carbon dioxide lead to more plant growth anyway?
Higher concentrations of carbon dioxide make plants more productive because photosynthesis relies on using the sun’s energy to synthesise sugar out of carbon dioxide and water. Plants and ecosystems use the sugar both as an energy source and as the basic building block for growth.
When the concentration of carbon dioxide in the air outside a plant leaf goes up, it can be taken up faster, super-charging the rate of photosynthesis.
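Leaf-level photosynthesis is often described with a saturating response to CO₂: uptake rises as the concentration rises, but with diminishing returns at high levels. A minimal sketch of such a response; a_max and k_half are hypothetical illustrative parameters, not fitted values:

```python
def assimilation_rate(co2_ppm, a_max=30.0, k_half=400.0):
    """Michaelis-Menten-style response: assimilation approaches a_max
    as CO2 saturates the carbon-fixing reactions. a_max and k_half
    are hypothetical illustrative parameters."""
    return a_max * co2_ppm / (co2_ppm + k_half)

# Higher ambient CO2 means faster uptake, though not proportionally so.
print(round(assimilation_rate(280.0), 1))  # pre-industrial level: 12.4
print(round(assimilation_rate(410.0), 1))  # recent level: 15.2
```

The exact shape of this curve varies between plant types and growing conditions, which is one reason the global strength of the fertilisation effect had to be estimated from observations rather than assumed.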
More carbon dioxide also means water savings for plants. More carbon dioxide available means pores on the surface of plant leaves regulating evaporation (called the stomata) can close slightly. They still absorb the same amount or more of carbon dioxide, but lose less water.
The resulting water savings can benefit vegetation in semi-arid landscapes that dominate much of Australia.
We saw this happen in a 2013 study, which analysed satellite data measuring changes in the overall greenness of Australia. It showed more leaf area in places where the amount of rain hadn’t changed over time. This suggests water efficiency of plants increases in a carbon dioxide-richer world.
In other research published recently, we mapped the carbon uptake of forests of different ages around the world. We showed forests regrowing on abandoned agricultural land occupy a larger area, and draw down even more carbon dioxide than old-growth forests, globally. But why?
In a mature forest, the death of old trees balances the amount of new wood grown each year. The old trees lose their wood to the soil and, eventually, to the atmosphere through decomposition.
A regrowing forest, on the other hand, is still accumulating wood, and that means it can act as a considerable sink for carbon until tree mortality and decomposition catch up with the rate of growth.
This age effect is superimposed on the carbon dioxide fertilisation effect, making young forests potentially very strong sinks.
In fact, globally, we found such regrowing forests are responsible for around 60% of the total carbon dioxide removal by forests overall. Their expansion by reforestation should be encouraged.
Forests are important to society for so many reasons – biodiversity, mental health, recreation, water resources. By absorbing emissions they are also part of our available arsenal to combat climate change. It’s vital we protect them.
Vanessa Haverd, Principal research scientist, CSIRO; Benjamin Smith, Director of Research, Hawkesbury Institute for the Environment, Western Sydney University; Matthias Cuntz, Research Director INRAE, Université de Lorraine, and Pep Canadell, Chief research scientist, CSIRO Oceans and Atmosphere; and Executive Director, Global Carbon Project, CSIRO
This week brought news that atmospheric carbon dioxide (CO₂) levels at the Mauna Loa atmospheric observatory in Hawaii have risen steeply for the seventh year in a row, reaching a May 2019 average of 414.7 parts per million (ppm).
It was the highest monthly average in 61 years of measurements at that observatory, and comes five years after CO₂ concentrations first breached the 400ppm milestone.
But in truth, the amount of greenhouse gas in our atmosphere is higher still. If we factor in the presence of other greenhouse gases besides carbon dioxide, we find that the world has already ticked past yet another milestone: 500ppm of what we call “CO₂-equivalent”, or CO₂-e.
In July 2018, the combination of long-lived greenhouse gases measured in the “cleanest air in the world” at Cape Grim Baseline Atmospheric Pollution Station surpassed 500ppm CO₂-e.
As the atmosphere of the Southern Hemisphere contains less pollution than the north, this means the global average atmospheric concentration of greenhouse gases is now well above this level.
Although CO₂ is the most abundant greenhouse gas, dozens of other gases – including methane (CH₄), nitrous oxide (N₂O) and the synthetic greenhouse gases – also trap heat. Many of them are more powerful greenhouse gases than CO₂, and some linger for longer in the atmosphere. That means they have a significant influence on how much the planet is warming.
Atmospheric scientists use CO₂-e as a convenient way to aggregate the effect of all the long-lived greenhouse gases.
As all the major greenhouse gases (CO₂, CH₄ and N₂O) are rising in concentration, so too is CO₂-e. It has climbed at an average rate of 3.3ppm per year during this decade – faster than at any time in history. And it is showing no sign of slowing.
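One common way to aggregate the gases is to sum their radiative forcings and invert the simplified logarithmic CO₂ forcing expression RF = 5.35 ln(C/C₀). A sketch of that inversion; the total forcing figure used below is illustrative, chosen to land near the 500ppm mark rather than taken from measurements:

```python
import math

C0 = 278.0  # approximate pre-industrial CO2 concentration, ppm

def co2_equivalent(total_forcing_wm2):
    """CO2 concentration that would produce the same total radiative
    forcing, inverting the simplified expression RF = 5.35 * ln(C/C0)."""
    return C0 * math.exp(total_forcing_wm2 / 5.35)

# An illustrative total forcing of ~3.14 W/m2 from all long-lived
# greenhouse gases (hypothetical figure) corresponds to 500 ppm CO2-e.
print(round(co2_equivalent(3.14)))  # → 500
```

In practice each gas's forcing is computed from its measured concentration and absorption properties before the total is converted back to an equivalent CO₂ concentration.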
This milestone, like so many others, is symbolic. The difference between 499 and 500ppm CO₂-e is marginal in terms of the fate of the climate and the life it sustains. But the fact that the cleanest air on the planet has now breached this threshold should elicit deep concern.
The Paris climate agreement is aimed at limiting global warming to less than 2℃ above pre-industrial levels, to avoid the most dangerous effects of climate change. But the task of predicting how human greenhouse emissions will perturb the climate system on a scale of decades to centuries is complex.
This is partly because industrial smog and other tiny particles (together called aerosols) reflect sunlight out to space, offsetting some of the expected warming. What’s more, the climate system responds slowly to rising atmospheric greenhouse gas concentrations because much of the excess heat is taken up by the oceans.
The amount of heat each greenhouse gas can trap depends on its absorption spectrum – how strongly it can absorb energy at different wavelengths, particularly in the infrared range. Methane, the second-biggest component of CO₂-e, has a simple molecular structure, yet there is still much to learn about its heat-absorbing properties.
Studies published in 2016 and 2018 led to the estimate of methane’s warming potential being revised upwards by 15%, meaning methane is now considered to be 32 times more efficient at trapping heat in the atmosphere than CO₂, on a per-molecule basis over a 100-year time span.
Considering this new evidence, we calculate that greenhouse gas concentrations at Cape Grim crossed the 500ppm CO₂-e threshold in July 2018.
This is higher than the official estimate based on the previous formulation for calculating CO₂-e, which remains in widespread use. For instance, the US National Oceanic and Atmospheric Administration is reporting 2018 CO₂-e as 496ppm.
The graph below shows the two curves for the time evolution of CO₂-e in the atmosphere as measured at Cape Grim, using the old and new formulae.
Some greenhouse gases, such as chlorofluorocarbons (CFCs), also deplete the ozone layer. CFCs are in decline thanks to the Montreal Protocol, which bans the production and use of these chemicals, despite reports that indicate some recent production of CFC-11 in China.
But unfortunately their ozone-safe replacements, hydrofluorocarbons (HFCs), are very potent greenhouse gases, and are on the rise. The recently enacted Kigali Amendment to the protocol means that consumption controls on HFCs are now in place, and this will see the growth rate of HFCs slow significantly and then reverse in the coming decades.
Australia is at the forefront of initiating measures to curb the impact of HFCs on climate change.
Methane is another low-hanging fruit for climate action, while we undertake the slower and more difficult transition away from CO₂-emitting energy sources.
The significant human methane emissions from leaks in reticulated gas systems, landfills, waste water treatment, and fugitive emissions from coal mining and oil and gas production can be monitored and reduced. We have the science and technology to do this now.
It’s a classic win-win that saves money and reduces climate change, and something we should be implementing in Australia in the near future.
Zoe Loh, Research Scientist, CSIRO; Blagoj Mitrevski, Research scientist, CSIRO; David Etheridge, Principal Research Scientist, CSIRO; Nada Derek, Research Projects Officer, Oceans and Atmosphere, Climate Science Centre, CSIRO; Paul Fraser, Honorary Fellow, CSIRO; Paul Krummel, Research Group Leader, CSIRO; Paul Steele, Honorary Fellow, CSIRO; Ray Langenfelds, Scientist at CSIRO Atmospheric Research, CSIRO, and Sam Cleland, Officer in Charge, Cape Grim Baseline Air Pollution Station, Australian Bureau of Meteorology
Picture the causes of air pollution in a major city and you are likely to visualise pollutants spewing out of cars, trucks and buses.
For some types of air pollutants, however, transportation is only half as important as the chemicals in everyday consumer products like cleaning agents, printer ink, and fragrances, according to a study published today in Science.
Air pollution is a serious health concern, responsible for millions of premature deaths each year, with even more anticipated due to climate change.
Although we typically picture pollution as coming directly from cars or power plants, a large fraction of air pollution actually comes from chemical reactions that happen in the atmosphere. One necessary starting point for that chemistry is a group of hundreds of molecules collectively known as “volatile organic compounds” (VOCs).
VOCs in the atmosphere can come from many different sources, both man-made and natural. In urban areas, VOCs have historically been blamed largely on vehicle fuels (both gasoline and diesel) and natural gas.
Thanks in part to more stringent environmental regulations and in part to technological advances, VOCs released into the air by vehicles have dropped dramatically.
In this new study, the researchers used detailed energy and chemical production records to figure out what fraction of the VOCs from oil and natural gas are released by vehicle fuels versus other sources. They found that the decline in vehicle emissions means that – in a relative sense – nearly twice as much comes from chemical products as comes from vehicle fuel, at least in the US. Those chemicals include cleaning products, paints, fragrances and printer ink – all things found in modern homes.
The VOCs from these products get into the air because they evaporate easily. In fact, in many cases, this is exactly what they are designed to do. Without evaporating VOCs, we wouldn’t be able to smell the scents wafting by from perfumes, scented candles, or air fresheners.
Overall, this is a good news story: VOCs from fuel use have decreased, so the air is cleaner. Since the contribution from fuels has dropped, it is not surprising that chemical products, which have not been as tightly regulated, are now responsible for a larger share of the VOCs.
An important finding from this work is that these chemical products have largely been ignored when constructing the models that we use to predict air pollution – which impacts how we respond to and regulate pollutants.
The researchers found that ignoring the VOCs from chemical products had significant impacts on predictions of air quality. In outdoor environments, they found that these products could be responsible for as much as 60% of the particles that formed chemically in the air above Los Angeles.
The effects were even larger indoors – a major concern as we spend most of our time indoors. Without accounting for chemical products, a model of indoor air pollutants under-predicted measurements by a whopping 87%. Including the consumer products really helped to fix this problem.
In Australia we do a stocktake of our VOC emissions to the air every few years. Our vehicle-related VOC emissions have also been dropping and are now only about a quarter as large as they were in 1990.
Nonetheless, the most recent check suggests most of our VOCs still come from cars and trucks, factories and fires. Still, consumer products can’t be ignored – especially as our urban population continues to grow. Because these sources are spread out across the city, their contributions can be difficult to estimate accurately.
We need to make sure our future VOC stocktakes include sources from consumer products such as cleaning fluids, indoor fragrances and home office items like printing ink. The stocktakes are used as the basis for our models, and comparing models to measurements helps us understand what affects our air quality and how best to improve it. It was a lack of model-to-measurement agreement that helped to uncover the VW vehicle emissions scandal, where the manufacturer was deliberately understating how much nitrogen oxide was being released through the exhaust.
If we can’t get our predictions to agree with the indoor measurements, we’ll need to work harder to identify all the emission sources correctly. This means going into typical Australian homes, making air quality measurements, and noting what activities are happening at the same time (like cooking, cleaning or decorating).
If we want to keep air pollution to a minimum, it will become increasingly important to take into account the VOCs from chemical products, both in our models of air pollution and in our regulatory actions.
In the meantime, as we spend so much of our time indoors, it makes sense to try to limit our personal exposure to these VOCs. There are several things we can do, such as choosing fragrance-free cleaning products and keeping our use of scented candles and air fresheners to a minimum. Research from NASA has also shown that growing house plants like weeping figs and spider plants can help to remove some of the VOCs from indoor air.
And of course, we can always open a window (as long as we keep the outdoor air clean, too).
Getting climate change under control is a formidable, multifaceted challenge. Analysis by my colleagues and me suggests that staying within safe warming levels now requires removing carbon dioxide from the atmosphere, as well as reducing greenhouse gas emissions.
The technology to do this is in its infancy and will take years, even decades, to develop, but our analysis suggests this must be a priority. If development is pushed hard, operational large-scale systems should be available by 2050.
We created a simple climate model and looked at the implications of different levels of carbon in the ocean and the atmosphere. This lets us make projections about greenhouse warming, and see what we need to do to limit global warming to within 1.5℃ of pre-industrial temperatures – one of the ambitions of the 2015 Paris climate agreement.
To put the problem in perspective, here are some of the key numbers.
Humans have emitted 1,540 billion tonnes of carbon dioxide gas since the industrial revolution. To put it another way, that’s equivalent to burning enough coal to form a square tower 22 metres wide that reaches from Earth to the Moon.
Half of these emissions have remained in the atmosphere, causing a rise of CO₂ levels that is at least 10 times faster than any known natural increase during Earth’s long history. Most of the other half has dissolved into the ocean, causing acidification with its own detrimental impacts.
Although nature does remove CO₂, for example through growth and burial of plants and algae, we emit it at least 100 times faster than it’s eliminated. We can’t rely on natural mechanisms to handle this problem: people will need to help as well.
The Paris climate agreement aims to limit global warming to well below 2℃, and ideally no higher than 1.5℃. (Others argue that 1℃ is what we should really be aiming for, although the world is already reaching and breaching this milestone.)
In our research, we considered 1℃ a better safe warming limit because any more would take us into the territory of the Eemian period, 125,000 years ago. For natural reasons, during this era the Earth warmed by a little more than 1℃. Looking back, we can see the catastrophic consequences of global temperatures staying this high over an extended period.
Sea levels during the Eemian period were up to 10 metres higher than present levels. Today, the zone within 10m of sea level is home to 10% of the world’s population, and even a 2m sea-level rise today would displace almost 200 million people.
Clearly, pushing towards an Eemian-like climate is not safe. In fact, with 2016 having been 1.2℃ warmer than the pre-industrial average, and extra warming locked in thanks to heat storage in the oceans, we may already have crossed the 1℃ average threshold. To keep warming below the 1.5℃ goal of the Paris agreement, it’s vital that we remove CO₂ from the atmosphere as well as limiting the amount we put in.
So how much CO₂ do we need to remove to prevent global disaster?
Currently, humanity’s net emissions amount to roughly 37 gigatonnes of CO₂ per year, which represents 10 gigatonnes of carbon burned (a gigatonne is a billion tonnes). We need to reduce this drastically. But even with strong emissions reductions, enough carbon will remain in the atmosphere to cause unsafe warming.
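The relationship between those two figures is simple molar-mass arithmetic: CO₂ weighs 44 g/mol and carbon 12 g/mol, so each tonne of CO₂ contains 12/44 of a tonne of carbon. A quick check in Python:

```python
# Convert annual CO2 emissions (gigatonnes of CO2) to gigatonnes of carbon.
# Molar masses: CO2 = 44 g/mol, C = 12 g/mol, so each tonne of CO2
# contains 12/44 of a tonne of carbon.
CO2_TO_C = 12 / 44

annual_co2_gt = 37                      # net emissions, Gt CO2 per year
annual_c_gt = annual_co2_gt * CO2_TO_C
print(round(annual_c_gt, 1))            # → 10.1, matching the "10 gigatonnes" figure
```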
Using these facts, we identified two rough scenarios for the future.
The first scenario is pessimistic. It has CO₂ emissions remaining stable after 2020. To keep warming within safe limits, we then need to remove almost 700 gigatonnes of carbon from the atmosphere and ocean, which freely exchange CO₂. To start, reforestation and improved land use can lock away up to 100 gigatonnes of carbon in trees and soils. This leaves a further 600 gigatonnes to be extracted via technological means by 2100.
Technological extraction currently costs at least US$150 per tonne of carbon. At this price, removing 600 gigatonnes over the rest of the century would cost US$90 trillion. This is similar in scale to current global military spending, which – if it holds steady at around US$1.6 trillion a year – will add up to roughly US$132 trillion over the same period.
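These cost figures can be reproduced with back-of-the-envelope arithmetic (the 83-year span, 2017 to 2100, is an assumption chosen to match the quoted military total):

```python
# Pessimistic scenario: cost of technologically extracting 600 Gt of carbon.
extraction_tonnes = 600e9          # 600 gigatonnes of carbon
cost_per_tonne = 150               # US$ per tonne of carbon (lower-bound estimate)
print(extraction_tonnes * cost_per_tonne / 1e12)   # → 90.0 (US$90 trillion)

# Comparison: global military spending held at US$1.6 trillion a year.
years = 2100 - 2017                # rest of the century (assumed span)
print(round(1.6e12 * years / 1e12, 1))             # → 132.8 (≈ US$132 trillion)
```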
The second scenario is optimistic. It assumes that we reduce emissions by 6% each year starting in 2020. We then still need to remove about 150 gigatonnes of carbon.
As before, reforestation and improved land use can account for 100 gigatonnes, leaving 50 gigatonnes to be technologically extracted by 2100. The cost for that would be US$7.5 trillion by 2100 – only 6% of the global military spend.
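The optimistic scenario's numbers follow from the same unit price:

```python
# Optimistic scenario: 50 Gt of carbon at the same US$150 per tonne.
optimistic_cost = 50e9 * 150
print(optimistic_cost / 1e12)                  # → 7.5 (US$7.5 trillion)

# As a share of century-scale military spending (~US$132 trillion):
print(round(optimistic_cost / 132e12 * 100))   # → 6 (per cent)
```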
Of course, these numbers are a rough guide. But they do illustrate the crossroads at which we find ourselves.
Right now is the time to choose: without action, we’ll be locked into the pessimistic scenario within a decade. Nothing can justify burdening future generations with this enormous cost.
For success in either scenario, we need to do more than develop new technology. We also need new international legal, policy, and ethical frameworks to deal with its widespread use, including the inevitable environmental impacts.
Releasing large amounts of iron or mineral dust into the oceans could remove CO₂ by changing environmental chemistry and ecology. But doing so requires revision of international legal structures that currently forbid such activities.
Similarly, certain minerals can help remove CO₂ by increasing the weathering of rocks and enriching soils. But large-scale mining for such minerals will impact on landscapes and communities, which also requires legal and regulatory revisions.
And finally, direct CO₂ capture from the air relies on industrial-scale installations, with their own environmental and social repercussions.
Without new legal, policy, and ethical frameworks, no significant advances will be possible, no matter how great the technological developments. Progressive nations may forge ahead toward delivering this combined package of technology and governance.
The costs of this are high. But countries that take the lead stand to gain technology, jobs, energy independence, better health, and international gravitas.
Methane emissions are one of the major concerns surrounding coal seam gas. But we should also be paying attention to other sources of methane, in particular those from coal mining. By dealing with these we could make significant progress on reducing Australia’s greenhouse gas emissions.
Some coal mines already run power plants, or pilot studies, that use the vented methane and so reduce emissions. But recent ground-level mapping of atmospheric methane concentrations by UNSW Australia, in association with the Royal Holloway University of London Greenhouse Gas Laboratory, shows that we need to do much better.
Methane is a colourless and odourless gas, but, like carbon dioxide, it contributes to global warming. In fact it is more potent: methane released into the atmosphere has a global warming potential 25 times greater than carbon dioxide over 100 years.
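That 100-year global warming potential is how methane emissions get converted into CO₂-equivalents in greenhouse gas accounts. A minimal sketch (the example tonnage is hypothetical):

```python
# Convert a methane emission into CO2-equivalent tonnes using the
# 100-year global warming potential quoted above.
GWP_CH4_100YR = 25

methane_tonnes = 1_000                       # hypothetical emission
co2e_tonnes = methane_tonnes * GWP_CH4_100YR
print(co2e_tonnes)                           # → 25000 tonnes CO2-equivalent
```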
Apart from energy, major sources of methane include municipal solid waste, municipal waste water, agriculture (predominantly cattle and rice cultivation), bushfires, termites, wetlands and natural seeps from the Earth.
It may be invisible, but we can now measure and see the distribution of methane in the atmosphere. Portable laser-based gas analysers allow us to measure in real time the concentration of the methane in the atmosphere in parts per billion (ppb).
Methane is a natural part of our world, but human activities over the past two centuries have increased its concentration in the atmosphere from a base global average of 722 ppb in 1750 to a global average of 1,823 ppb in 2015.
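In relative terms, that is more than a 150% increase since pre-industrial times:

```python
# Relative growth in global average methane concentration since 1750.
pre_industrial_ppb = 722    # global average in 1750
recent_ppb = 1823           # global average in 2015
increase_pct = (recent_ppb - pre_industrial_ppb) / pre_industrial_ppb * 100
print(round(increase_pct))  # → 152 (per cent)
```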
Thanks to lower population densities and less industrial activity, the southern hemisphere has cleaner air. Until last year the southern hemisphere had methane concentrations below 1,800 ppb. However, Australia passed that significant benchmark in 2015.
As we can see from the internationally important Cape Grim data collected by CSIRO, methane concentration stabilised between the years 2000 and 2006. Methane concentration oscillates with the seasons (as does carbon dioxide), peaking in September.
Between the years 2000 and 2006 the annual peak was about 1,740 ppb. But since 2007 it has increased by 4-11 ppb per year, peaking at 1,803 ppb in September 2015. Since 2007, methane in the atmosphere has steadily increased worldwide. Just why it started rising again is poorly understood.
To better understand why methane is increasing in the atmosphere, over the past three years we have been undertaking extensive measurements of greenhouse gases in the ground-level atmosphere throughout New South Wales and Queensland. The focus of our research has been mapping methane in all landscape settings to determine significant sources.
We have travelled many thousands of kilometres to measure greenhouse gas emissions in urban, rural and mining landscapes using a portable greenhouse gas analyser. The methane analyser is simply placed inside a car, and air is drawn into the analyser via a tube which has an inlet mounted on the roof. We then measure the concentration of methane in the atmosphere as we drive along roads.
From the figure above you can see that Hunter Valley coal mines are a major source of methane released into the atmosphere. Most of the methane above background concentrations in the atmosphere is due to venting of methane from underground coal mines to make them a safe place to work – if the mines weren’t vented, the methane could ignite and explode.
While some mines capture vented methane to generate power or flare the methane, this image shows that a lot more work needs to be done if we are to satisfactorily reduce the greenhouse gas footprint of coal mining, even before the coal is used to produce electricity.
On some days methane concentration above 2,000 ppb extends for 50 kilometres near the coal mines. We have not encountered any other landscape with elevated readings extending for kilometres, with the exception of days when there are bushfires.
Current estimates of methane emissions to the atmosphere combine direct measurements with calculated estimates. This has resulted in considerable uncertainty in the values reported to government and tallied in Australia’s greenhouse gas accounts.
Australia needs a more extensive greenhouse gas monitoring network, so that we can reduce the uncertainty in our National Greenhouse Accounts and better track progress on our international emission reduction commitments.
Our research is focused on measuring what is actually being released into the atmosphere. This is vital for properly understanding how large our greenhouse gas emissions are, and where to focus our efforts to reduce these. Clearly, further reducing emissions from coal mining is a good place to start.
This article was co-authored by Elisa Ginty, an honours candidate at UNSW.
Over 190 countries are negotiating in Paris a global agreement to stabilise climate change at less than 2℃ above pre-industrial global average temperatures.
For a reasonable chance of keeping warming under 2℃ we can emit a further 865 billion tonnes of carbon dioxide (CO2). The climate commitments to reduce greenhouse gas emissions to 2030 are a first step, but recent analyses show they are not enough.
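To see how tight that budget is, divide it by current net emissions. The annual figure below is an assumption on my part (roughly 37 Gt CO2 a year, a commonly cited value at the time):

```python
# How long the remaining 2°C carbon budget lasts at a constant emission rate.
budget_gt_co2 = 865       # remaining budget, Gt CO2 (figure above)
annual_gt_co2 = 37        # assumed current net emissions, Gt CO2 per year
print(round(budget_gt_co2 / annual_gt_co2))   # → 23 (years at current rates)
```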
So what are the options if we cannot limit emissions to remain within our carbon budget?
Emitting more than the allowance would mean we have to remove carbon from the atmosphere. The more carbon we emit over the coming years, the more we will need to remove in future.
In fact, out of 116 scenarios consistent with 2℃ published by the Intergovernmental Panel on Climate Change, 101 scenarios require the removal of CO2 from the atmosphere during the second half of this century. That’s on top of the large emission reductions required.
So how do we remove carbon from the atmosphere? Several technologies have been proposed to this effect. These are often referred to as “negative emissions technologies” because the carbon is being removed from the atmosphere (in the opposite direction to emissions).
In a study published today in Nature Climate Change, which is part of a broader release by the Global Carbon Project, we investigate how big a role these technologies could play in halting global warming.
We find that these technologies might play a role in climate mitigation. However, the large scales of deployment currently used in most pathways that limit warming to 2℃ will be severely constrained by environmental and socio-economic factors. This increases the pressure to raise the level of ambition in reducing fossil fuel emissions now.
The technologies range from relatively simple options, such as planting more trees, which lock up CO2 as they grow, or crushing rocks that naturally absorb CO2 and spreading them on soils and oceans so they remove CO2 more rapidly.
There are also higher-tech options such as using chemicals to absorb CO2 from the air, or burning plants for energy and capturing the CO2 that would otherwise be released, then storing it permanently deep below the ground (called bioenergy with carbon capture and storage).
We examined the impacts of negative emission technologies on land use, greenhouse gas emissions, water use, earth’s reflectivity (or albedo) and soil nutrient loss, as well as the energy and cost requirements for each technology.
One major limitation that we identified is the vast requirements for land.
About 700 million hectares of land would be required to grow biomass for bioenergy with carbon capture and storage at the scale needed in many 2℃ pathways. This would remove more than 3 billion tonnes of carbon from the atmosphere every year and would help to compensate for an overshoot in emissions earlier this century.
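Those two figures imply a removal rate per hectare, which gives a feel for the biomass yields being assumed:

```python
# Implied carbon removal per hectare for BECCS at the scale above.
area_ha = 700e6                   # 700 million hectares
removal_t_per_yr = 3e9            # about 3 billion tonnes of carbon a year
print(round(removal_t_per_yr / area_ha, 1))   # → 4.3 (tonnes of carbon per hectare per year)
```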
The area required is close to half of current global arable land plus permanent crop area. If bioenergy with carbon capture and storage were deployed at this scale there would be intense competition with food, water and conservation needs.
This land requirement has made other negative emissions technologies attractive, such as direct air capture. However, current cost estimates for such technologies are between US$1,600 and US$2,000 per tonne of carbon removed from the atmosphere. In contrast, the majority of emissions covered by a carbon price across 40 national jurisdictions are priced at less than US$10 per tonne of carbon dioxide.
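Note that the two prices are quoted in different units (per tonne of carbon versus per tonne of CO2). Converting to a common basis makes the gap explicit:

```python
# Compare direct-air-capture costs with typical carbon prices on a
# per-tonne-of-carbon basis. One tonne of carbon corresponds to 44/12
# tonnes of CO2 (molar masses 12 and 44 g/mol).
CO2_PER_C = 44 / 12

dac_cost = 1600                          # US$ per tonne of carbon (low end quoted above)
carbon_price_co2 = 10                    # US$ per tonne of CO2 (typical priced emission)
carbon_price_c = carbon_price_co2 * CO2_PER_C
print(round(carbon_price_c, 1))          # → 36.7 (US$ per tonne of carbon)
print(round(dac_cost / carbon_price_c))  # → 44 (DAC is roughly 44 times dearer)
```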
The study shows that there are many such impacts that vary across technologies. These impacts will need to be addressed and should determine the level at which negative emission technologies can play a role in achieving climate mitigation goals.
We conclude that, given the uncertainties around large-scale deployment of negative emissions technologies, we would be taking a big gamble if actions today were based on the expectation of heavy use of unproven technologies tomorrow.
The use of these technologies will likely be limited due to any combination of the environmental, economic or energy constraints we examined. We conclude that “Plan A” must be to reduce greenhouse gas emissions aggressively now. A failure to initiate such a level of emissions cuts may leave us with no “Plan B” to stabilise the climate within the 2℃ target.
The technologies of today are not the technologies of tomorrow. However, a prudent approach must be based on the level of climate abatement required with available technologies, while strongly investing in the research and development that might lead to breakthroughs that will ease the formidable challenge ahead of us.