Dishing the dirt: Australia’s move to store carbon in soil is a problem for tackling climate change




Robert Edwin White, University of Melbourne and Brian Davidson, University of Melbourne

To slow climate change, humanity has two main options: reduce greenhouse gas emissions directly or find ways to remove them from the atmosphere. On the latter, storing carbon in soil – or carbon farming – is often touted as a promising way to offset emissions from other sources such as energy generation, industry and transport.

The Morrison government’s Technology Investment Roadmap, now open for public comment, identifies soil carbon as a potential way to reduce emissions from agriculture and to offset other emissions.

In particular, it points to so-called “biochar” – plant material transformed into carbon-rich charcoal then applied to soil.

But the government’s plan contains misconceptions about both biochar, and the general effectiveness of soil carbon as an emissions reduction strategy.

Emissions rising from a coal plant.
Soil carbon storage is touted as a way to offset emissions from industry and elsewhere.
Shutterstock

What is biochar?

Through photosynthesis, plants turn carbon dioxide (CO₂) into organic material known as biomass. When that biomass decomposes in soil, CO₂ is produced and mostly ends up in the atmosphere.

This is a natural process. But if we can intervene by using technology to keep carbon in the soil rather than in the atmosphere, in theory that will help mitigate climate change. That’s where biochar comes in.

Making biochar involves heating waste organic materials in a reduced-oxygen environment to create a charcoal-like product – a process called “pyrolysis”. The carbon from the biomass is stored in the charcoal, which is very stable and does not decompose for decades.

Plant matter is the predominant material or “feedstock” used to make biochar, but livestock manure can also be used. The biochar is applied to the soil, purportedly to boost soil fertility and productivity. This has been tested on grassland, cropping soils and in vineyards.

A handful of biochar.
Biochar is produced by heating organic material in a low-oxygen environment.
Shutterstock

But there’s a catch

So far, so good. But there are a few downsides to consider.

First, the pyrolysis process produces combustible gases and uses energy – to the extent that when all energy inputs and outputs are considered in a life cycle analysis, the net energy balance can be negative. In other words, the process can create more greenhouse gas emissions than it saves. The balance depends on many factors including the type and condition of the feedstock and the rate and temperature of pyrolysis.
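The life-cycle point can be made concrete with a toy energy budget. The sketch below is purely illustrative: every figure in it is an assumption chosen to show how the balance can tip negative, not a number from any actual life cycle analysis.

```python
# Toy life-cycle energy balance for a biochar pyrolysis run.
# Every figure below is an assumption for illustration only.
feedstock_t = 1.0           # tonnes of dry biomass processed
drying_energy_gj = 2.0      # assumed energy to dry and transport feedstock, GJ/t
pyrolysis_heat_gj = 3.0     # assumed process heat for pyrolysis, GJ/t
syngas_recovered_gj = 4.0   # assumed combustible gas recovered, GJ/t

# Net balance: energy recovered minus energy spent.
net_gj = feedstock_t * (syngas_recovered_gj - drying_energy_gj - pyrolysis_heat_gj)
print(f"Net energy balance: {net_gj:+.1f} GJ")
```

With these assumed inputs the balance comes out negative, meaning the process consumes more energy than it returns; different feedstocks, moisture contents and pyrolysis temperatures shift each term, which is why the sign of the balance varies from case to case.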

Second, while biochar may improve the soil carbon status at a new site, the sites from which the carbon residues are removed, such as farmers’ fields or harvested forests, will be depleted of soil carbon and associated nutrients. Hence there may be no overall gain in soil fertility.




Read more:
A pretty good start but room for improvement: 3 experts rate Australia’s emissions technology plan


Third, the government roadmap claims increasing soil carbon can reduce emissions from livestock farming while increasing productivity. Theoretically, increased soil carbon should lead to better pasture growth. But the most efficient way for farmers to take advantage of the growth, and increase productivity, is to keep more livestock per hectare.

Livestock such as cows and sheep produce methane – a much more potent greenhouse gas than carbon dioxide. Our analysis suggests the methane produced by the extra stock would exceed the offsetting effect of storing more soil carbon. This would lead to a net increase, not a decrease, in greenhouse gas emissions.
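A back-of-envelope comparison shows how easily the methane side can dominate. The figures below are assumptions for illustration (they are not from the authors' analysis): a soil carbon gain of 0.5 tonnes of carbon per hectare per year, roughly 100 kg of enteric methane per extra beef animal per year, and the IPCC AR5 100-year global warming potential of 28 for methane.

```python
# Illustrative comparison of soil carbon offset vs extra-livestock methane.
# All input figures are assumptions, not the authors' numbers.
CH4_GWP100 = 28          # IPCC AR5 100-year warming potential of methane
C_TO_CO2 = 44.0 / 12.0   # molecular-mass ratio converting carbon to CO2

# Assumed soil carbon gain from improved pasture: 0.5 t C/ha/yr
soil_offset_t_co2e = 0.5 * C_TO_CO2             # t CO2-e per hectare per year

# Assumed enteric methane from one extra beef animal: ~100 kg CH4/yr
extra_head_t_co2e = 0.100 * CH4_GWP100          # t CO2-e per head per year

# Extra stocking density at which the methane cancels the soil carbon gain
break_even_head_per_ha = soil_offset_t_co2e / extra_head_t_co2e

print(f"Soil offset: {soil_offset_t_co2e:.2f} t CO2-e/ha/yr")
print(f"Extra head:  {extra_head_t_co2e:.2f} t CO2-e/head/yr")
print(f"Break-even:  {break_even_head_per_ha:.2f} extra head/ha")
```

Under these assumptions, adding less than one extra animal per hectare is enough to wipe out the soil carbon gain, so even a modest increase in stocking rate can push the system into net-positive emissions.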

Beef cattle grazing in a field
Farmers would have to increase stock numbers to benefit from pasture growth.
Dan Peled/AAP

A policy failure

The government plan refers to the potential to build on the success of the Emissions Reduction Fund. Among other measures, the fund pays landholders to increase the amount of carbon stored in soil through carbon credits issued through the Carbon Farming Initiative.

However, since 2014, the Emissions Reduction Fund has not significantly reduced Australia’s greenhouse gas emissions – and agriculture’s contribution has been smaller still.




Read more:
Carbon dioxide levels over Australia rose even after COVID-19 forced global emissions down. Here’s why


So far, the agriculture sector has been contracted to provide about 9.5% of the overall abatement, or about 18.3 million tonnes. To date, it’s supplied only 1.54 million tonnes – 8.4% of the sector’s commitment.
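The tonnage and percentage figures above can be cross-checked with simple arithmetic; the sketch below uses only the numbers quoted in the text, with the overall contracted total back-calculated from agriculture's 9.5% share.

```python
# Consistency check of the Emissions Reduction Fund figures quoted above.
contracted_mt = 18.3   # agriculture's contracted abatement, million tonnes
delivered_mt = 1.54    # abatement delivered to date, million tonnes
ag_share = 0.095       # agriculture's share of overall contracted abatement

delivered_share = delivered_mt / contracted_mt      # fraction of commitment met
total_contracted_mt = contracted_mt / ag_share      # implied scheme-wide total

print(f"Delivered: {delivered_share:.1%} of the sector's commitment")
print(f"Implied overall contracted abatement: {total_contracted_mt:.0f} Mt")
```

The delivered fraction comes out at 8.4%, matching the figure in the text, and implies a scheme-wide contracted total of roughly 190 Mt.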

The initiative has largely failed because several factors have made it uneconomic for farmers to take part. They include:

  • overly complex regulations
  • requirements for expensive soil sampling and analysis
  • the low value of carbon credits (averaging $12 per tonne of CO₂-equivalent since the scheme began).

A farmer inspecting crops.
For many farmers, taking part in the Emissions Reduction Fund is uneconomic.
Shutterstock

A misguided strategy

We believe the government is misguided in considering soil carbon as an emissions reduction technology.

Certainly, increasing soil carbon at one location can boost soil fertility and potentially productivity, but these are largely private landholder benefits – paid for by taxpayers in the form of carbon credits.




Read more:
Climate explained: are we doomed if we don’t manage to curb emissions by 2030?


If emissions reduction is seen as a public benefit, then the payment to farmers becomes a subsidy. But it’s highly questionable whether the public benefit (in the form of reduced emissions) is worth the cost. The government has not yet done this analysis.

To be effective, future emissions technology in Australia should focus on improving energy efficiency in industry, the residential sector and transport, where big gains are to be made. The Conversation

Robert Edwin White, Professor Emeritus, University of Melbourne and Brian Davidson, Senior Lecturer, Department of Agriculture and Food Systems, University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Satellite measurements of slow ground movements may provide a better tool for earthquake forecasting



The 2016 Kaikoura earthquake shattered the surface and twisted railway lines.
Simon Lamb, CC BY-ND

Simon Lamb, Victoria University of Wellington

It was a few minutes past midnight on 14 November 2016, and I was drifting into sleep in Wellington, New Zealand, when a sudden jolt began rocking the bed violently back and forth. I knew immediately this was a big one. In fact, I had just experienced the magnitude 7.8 Kaikoura earthquake.

Our research, published today, shows how the slow build-up to this earthquake, recorded by satellite GPS measurements, foreshadowed its size and character. This could provide a better tool for earthquake forecasting.




Read more:
New Zealand’s Alpine Fault reveals extreme underground heat and fluid pressure


Shattering the landscape

The day after the quake, I heard there had been huge surface breaks in a region extending for more than 170 km along the eastern part of the northern South Island. In some places, the ground had shifted by 10 metres, resulting in a complex pattern of fault ruptures.

In effect, the region had been shattered, much like a fractured sheet of glass. The last time anything like this had happened was more than 150 years ago, in 1855.

Quite independently, I had been analysing another extraordinary feature of New Zealand. Over the past century or so, land surveyors had revealed that the landscape is moving all the time, slowly changing shape.

These movements are no more than a few centimetres each year – but they build with time, relentlessly driven by the same forces that move the Earth’s tectonic plates. Like any stiff material subjected to excessive stress, the landscape will eventually break, triggering an earthquake.

I was studying measurements made with state-of-the-art global positioning system (GPS) techniques – and they recorded in great detail the build-up to the 2016 Kaikoura earthquake over the previous two decades.

A mobile crust

GPS measurements for regions at the edges of the tectonic plates, such as New Zealand, have become widely available in the last 15 years or so. Here, the outer part of the Earth (the crust) is broken up by faults into numerous small blocks that are moving over geological time. But it is widely thought that even over periods as short as a few decades, the GPS measurements still record the motion of these blocks.

New Zealand straddles the boundary between the Australian and Pacific tectonic plates, with numerous active faults. Note the locked portion of the underlying megathrust.
Simon Lamb, CC BY

The idea is that at the surface, where the rocks are cold and strong, a fault only moves in sudden shifts during earthquakes, with long intervening periods of inactivity when it is effectively “locked”. During the locked phase, the rocks behave like a piece of elastic, slowly changing shape over a wide region without breaking.

But deeper down, where the rocks are much hotter, there is the possibility that the fault is slowly slipping all the time, gradually adding to the forces in the overlying rocks until the elastic part suddenly breaks. In this case, the GPS measurements could tell us something about how deep one has to go to reach this slipping region, and how fast it is moving.

From this, one could potentially estimate how frequently each fault is likely to rupture during an earthquake, and how big that rupture will be – in other words, the “when and what” of an earthquake. But to achieve this understanding, we would need to consider every major fault when analysing the GPS data.
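The locked-at-the-surface, slipping-at-depth picture described above is commonly captured by the classic screw-dislocation model of Savage and Burford (1973), in which a strike-slip fault locked down to depth D, with steady slip at rate V below, produces a surface velocity profile v(x) = (V/π)·arctan(x/D). The sketch below implements that standard textbook model with illustrative numbers (a 40 mm/yr deep slip rate and 30 km locking depth, both assumed, not taken from the paper).

```python
import math

def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    """Fault-parallel surface velocity at distance x from a strike-slip fault
    locked to depth D and slipping steadily at rate V below it
    (Savage & Burford 1973 screw-dislocation model)."""
    return (slip_rate_mm_yr / math.pi) * math.atan(x_km / locking_depth_km)

# Illustrative numbers (assumed): deep slip of 40 mm/yr, locking depth 30 km.
for x in (0, 15, 30, 60, 120, 300):
    v = interseismic_velocity(x, 40.0, 30.0)
    print(f"{x:4d} km from fault: {v:5.1f} mm/yr")
```

Far from the fault the velocity flattens toward half the deep slip rate on each side, while the width of the transition zone scales with the locking depth, so fitting this arctangent shape to a GPS profile constrains both how fast the deep fault is slipping and how deep the locked zone extends.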

Invisible faults

Current earthquake forecasting “reverse engineers” past distortions of the Earth’s surface by finding all the faults that could trigger an earthquake, working out their earthquake histories and projecting this pattern into the future in a computer model. But there are some big challenges.

The most obvious is that it is probably impossible to characterise every fault. They are too numerous and many are not visible at the surface. In fact, most historical earthquakes have occurred on faults that were not known before they ruptured.

Our analysis of the GPS measurements has revealed a more fundamental problem that at the same time opens new avenues for earthquake forecasting. Working with statistician Richard Arnold and geophysicist and modeller James Moore, we found the GPS measurements could be better explained if the numerous faults that might rupture in earthquakes were simply ignored. In other words, surface faults seemed to be invisible when looking at the slow movements recorded by GPS.

There was only one fault that mattered – the megathrust that runs under much of New Zealand. It separates the Australian and Pacific tectonic plates and only reaches the surface underwater, about 50 to 100 km offshore. Prior to the Kaikoura earthquake, the megathrust was locked at depths shallower than about 30 km. Here, the overlying Australian plate had been slowly changing shape like a single piece of elastic.

Slip at depth on the megathrust drives earthquakes in New Zealand, including the M7.8 Kaikoura Earthquake.
Simon Lamb, CC BY

The pacemaker for future quakes

In the conventional view, every big fault has its own inbuilt earthquake driver or pacemaker – the continuously slipping part of the fault deep in the crust. But our analysis suggests that these faults play no role in the driving mechanism of an earthquake, and the pacemaker is the underlying megathrust.

We think the 2016 Kaikoura earthquake provides the vital clue that we are right. The key observation is that numerous ruptures were involved, busting up the boundary between the two plates in a zone that ran more-or-less parallel to the line of locking on the underlying megathrust. This is exactly what we would anticipate if the slow build-up in stress was only driven by slip on the megathrust and not the deeper parts of individual crustal faults.

I remember once watching a documentary about the making of the Boeing 777 aircraft. The engineers were very confident about its design limits under flying conditions, but the Civil Aviation Authority wanted it tested to destruction. In one test, the vast wings were twisted so that their tips arced up to the sky at a weird angle. Suddenly, there was a bang and the wings snapped, greeted by loud cheering because this had occurred almost exactly when predicted. But the details of how this happened, such as where the metal-fatigue cracks formed, were something only the experiment could show.

I think this is a good analogy for realistic goals with earthquake prediction. The Herculean task of identifying every fault and its past earthquake history may be of only limited use. In fact, it is becoming clear that earthquake ruptures on individual faults are far from regular. Big faults may never rupture in one go, but instead break bit by bit, together with many other faults.

But it might well be possible to forecast when there will be severe shaking in a region near you – surely something just as valuable. The Conversation

Simon Lamb, Associate Professor in Geophysics, Victoria University of Wellington

This article is republished from The Conversation under a Creative Commons license. Read the original article.