Yes, a few climate models give unexpected predictions – but the technology remains a powerful tool



Nerilie Abram, Australian National University; Andrew King, The University of Melbourne; Andy Pitman, UNSW; Christian Jakob, Monash University; Julie Arblaster, Monash University; Lisa Alexander, UNSW; Sarah Perkins-Kirkpatrick, UNSW; Shayne McGregor, Monash University, and Steven Sherwood, UNSW

The much-awaited new report from the Intergovernmental Panel on Climate Change (IPCC) is due later today. Ahead of the release, debate has erupted about the computer models at the very heart of global climate projections.

Climate models are one of many tools scientists use to understand how the climate changed in the past and what it will do in future.

A recent article in the eminent US magazine Science questioned how the IPCC will deal with some climate models which “run hot”. Some models, it said, have projected global warming rates “that most scientists, including the model makers themselves, believe are implausibly fast”.


Read more: Monday’s IPCC report is a really big deal for climate change. So what is it? And why should we trust it?


Some commentators, including in Australia, interpreted the article as proof climate modelling had failed.

So should we be using climate models? We are climate scientists from Australia’s Centre of Excellence for Climate Extremes, and we believe the answer is a firm yes.

Our research uses and improves climate models so we can help Australia cope with extreme events, now and in future. We know when climate models are running hot or cold. And identifying an error in some climate models doesn’t mean the science has failed – in fact, it means our understanding of the climate system has advanced.

So let's look at what you should know about climate models ahead of the IPCC findings.

What are climate models?

Climate models comprise millions of lines of computer code representing the physics and chemistry of the processes that make up our climate system. The models run on powerful supercomputers and have simulated and predicted global warming with remarkable accuracy.

They unequivocally show that warming of the planet since the Industrial Revolution is due to human-caused emissions of greenhouse gases. This confirms our understanding of the greenhouse effect, known since the 1850s.

Models also show the intensity of many recent extreme weather events around the world would be essentially impossible without this human influence.


Scientists do not use climate models in isolation, or without considering their limitations.

For a few years now, scientists have known some new-generation climate models probably overestimate global warming, and others underestimate it.

This realisation is based on our understanding of Earth’s climate sensitivity – how much the climate will warm when carbon dioxide (CO₂) levels in the atmosphere double.

Before industrial times, CO₂ levels in the atmosphere were 280 parts per million. So a doubling of CO₂ will occur at 560 parts per million. (For context, we’re currently at around 415 parts per million).

The latest scientific evidence, using observed warming, paleoclimate data and our physical understanding of the climate system, suggests global average temperatures will very likely increase by between 2.2℃ and 4.9℃ if CO₂ levels double.
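
As a rough back-of-the-envelope illustration (not part of the IPCC analysis), the equilibrium warming from CO₂ alone is commonly approximated as scaling with the logarithm of the concentration ratio, so the sensitivity figure applies once per doubling. A minimal Python sketch using that textbook rule of thumb and the numbers above:

```python
import math

def equilibrium_warming(co2_ppm, sensitivity_c, co2_preindustrial_ppm=280.0):
    # Rule of thumb: warming grows with log2 of the CO2 ratio, so one full
    # doubling (280 -> 560 ppm) gives exactly the chosen climate sensitivity.
    doublings = math.log2(co2_ppm / co2_preindustrial_ppm)
    return sensitivity_c * doublings

# Warming implied by today's ~415 ppm for the low and high ends of the
# "very likely" sensitivity range quoted above (2.2 C to 4.9 C per doubling).
for sensitivity in (2.2, 4.9):
    warming = equilibrium_warming(415.0, sensitivity)
    print(f"Sensitivity {sensitivity} C/doubling -> ~{warming:.1f} C at 415 ppm")
```

This rule of thumb ignores other greenhouse gases, aerosols and the slow response of the oceans – full climate models are needed for realistic projections.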

The large majority of climate models run within this climate sensitivity range. But some don't – instead suggesting a temperature rise as low as 1.8℃ or as high as 5.6℃.

It’s thought the biases in some models stem from the representations of clouds and their interactions with aerosol particles. Researchers are beginning to understand these biases, building our understanding of the climate system and how to further improve models in future.

With all this in mind, scientists use climate models cautiously, giving more weight to projections from climate models that are consistent with other scientific evidence.

The following graph shows how most models are within the expected climate sensitivity range – and having some running a bit hot or cold doesn’t change the overall picture of future warming. And when we compare model results with the warming we’ve already observed over Australia, there’s no indication the models are over-cooking things.

Rapid warming in Australia under a very high greenhouse gas emission future (red) compared with climate change stabilisation in a low emission future (blue). Author provided.

What does the future look like?

Future climate projections are produced by giving models different possibilities for greenhouse gas concentrations in our atmosphere.

The latest IPCC models use a set of possibilities called “Shared Socioeconomic Pathways” (SSPs). These pathways match expected population growth, and where and how people will live, with plausible levels of atmospheric greenhouse gases that would result from these socioeconomic choices.

The pathways range from low-emission scenarios that also require considerable atmospheric CO₂ removal – giving the world a reasonable chance of meeting the Paris Agreement targets – to high-emission scenarios where temperature goals are far exceeded.


Nerilie Abram, based on Riahi et al. 2017, CC BY-ND

Ahead of the IPCC report, some say the high-emission scenarios are too pessimistic. But likewise, it could be argued the lack of climate action over the past decade, and absence of technology to remove large volumes of CO₂ from the atmosphere, means low-emission scenarios are too optimistic.

If countries meet their existing emissions reduction commitments under the Paris Agreement, we can expect to land somewhere in the middle of the scenarios. But the future depends on our choices, and we shouldn’t dismiss any pathway as implausible.

There is considerable value in knowing both the future risks to avoid, and what’s possible under ambitious climate action.


Read more: The climate won’t warm as much as we feared – but it will warm more than we hoped


The future climate depends on our choices today. Unsplash

Where to from here?

We can expect the IPCC report to be deeply worrying. And unfortunately, 30 years of IPCC history tells us the findings are more likely to be too conservative than too alarmist.

An enormous global effort – both scientifically and in computing resources – is needed to ensure climate models can provide even better information.

Climate models are already phenomenal tools at large scales. But increasingly, we'll need them to produce fine-scale projections to help answer questions such as: Where to plant forests to draw down carbon? Where to build flood defences? Where might crops best be grown? Where would renewable energy resources be best located?

Climate models will continue to be an important tool for the IPCC, policymakers and society as we attempt to manage the unavoidable risks ahead. The Conversation

Nerilie Abram, Chief Investigator for the ARC Centre of Excellence for Climate Extremes; Deputy Director for the Australian Centre for Excellence in Antarctic Science, Australian National University; Andrew King, ARC DECRA fellow, The University of Melbourne; Andy Pitman, Director of the ARC Centre of Excellence for Climate Extremes, UNSW; Christian Jakob, Professor in Atmospheric Science, Monash University; Julie Arblaster, Chief Investigator, ARC Centre of Excellence for Climate Extremes; Chief Investigator, ARC Securing Antarctica’s Environmental Future; Professor, Monash University; Lisa Alexander, Chief Investigator ARC Centre of Excellence for Climate Extremes and Professor Climate Change Research Centre, UNSW; Sarah Perkins-Kirkpatrick, ARC Future Fellow, UNSW; Shayne McGregor, Associate professor, Monash University, and Steven Sherwood, Professor of Atmospheric Sciences, Climate Change Research Centre, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Satellite measurements of slow ground movements may provide a better tool for earthquake forecasting



The 2016 Kaikoura earthquake shattered the surface and twisted railway lines.
Simon Lamb, CC BY-ND

Simon Lamb, Victoria University of Wellington

It was a few minutes past midnight on 14 November 2016, and I was drifting into sleep in Wellington, New Zealand, when a sudden jolt began rocking the bed violently back and forth. I knew immediately this was a big one. In fact, I had just experienced the magnitude 7.8 Kaikoura earthquake.

Our research, published today, shows how the slow build-up to this earthquake, recorded by satellite GPS measurements, predicted what it would be like. This could potentially provide a better tool for earthquake forecasting.




Read more: New Zealand's Alpine Fault reveals extreme underground heat and fluid pressure


Shattering the landscape

The day after the quake, I heard there had been huge surface breaks in a region extending for more than 170 km along the eastern part of the northern South Island. In some places, the ground had shifted by 10 metres, resulting in a complex pattern of fault ruptures.

In effect, the region had been shattered, much like a fractured sheet of glass. The last time anything like this had happened was more than 150 years ago, in 1855.

Quite independently, I had been analysing another extraordinary feature of New Zealand. Over the past century or so, land surveyors had revealed that the landscape is moving all the time, slowly changing shape.

These movements are no more than a few centimetres each year – but they build with time, relentlessly driven by the same forces that move the Earth’s tectonic plates. Like any stiff material subjected to excessive stress, the landscape will eventually break, triggering an earthquake.

I was studying measurements made with state-of-the-art global positioning system (GPS) techniques – and they recorded in great detail the build-up to the 2016 Kaikoura earthquake over the previous two decades.

A mobile crust

GPS measurements for regions at the edges of the tectonic plates, such as New Zealand, have become widely available in the last 15 years or so. Here, the outer part of the Earth (the crust) is broken up by faults into numerous small blocks that are moving over geological time. But it is widely thought that even over periods as short as a few decades, the GPS measurements still record the motion of these blocks.

New Zealand straddles the boundary between the Australian and Pacific tectonic plates, with numerous active faults. Note the locked portion of the underlying megathrust.
Simon Lamb, CC BY

The idea is that at the surface, where the rocks are cold and strong, a fault only moves in sudden shifts during earthquakes, with long intervening periods of inactivity when it is effectively “locked”. During the locked phase, the rocks behave like a piece of elastic, slowly changing shape over a wide region without breaking.

But deeper down, where the rocks are much hotter, there is the possibility that the fault is slowly slipping all the time, gradually adding to the forces in the overlying rocks until the elastic part suddenly breaks. In this case, the GPS measurements could tell us something about how deep one has to go to reach this slipping region, and how fast it is moving.
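
As a rough illustration of that idea (a standard textbook model, not the specific analysis in our study), the interseismic surface velocity across a long, locked strike-slip fault is often approximated as v(x) = (V/π)·arctan(x/D), where V is the slip rate of the deep, continuously creeping part of the fault and D is the locking depth. Fitting that curve to hypothetical GPS velocities recovers both numbers:

```python
import numpy as np
from scipy.optimize import curve_fit

def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    # Screw-dislocation (Savage-Burford) model: fault-parallel velocity at
    # distance x from a long strike-slip fault that creeps steadily at
    # slip_rate below locking_depth and is locked above it.
    return (slip_rate_mm_yr / np.pi) * np.arctan(x_km / locking_depth_km)

# Hypothetical station velocities (mm/yr) at distances from the fault (km).
distance_km = np.array([-100.0, -50.0, -20.0, -5.0, 5.0, 20.0, 50.0, 100.0])
velocity_mm_yr = np.array([-18.0, -16.0, -11.5, -4.0, 4.5, 11.5, 16.5, 18.5])

# Fit the model to estimate the deep slip rate and the locking depth.
(slip_rate, locking_depth), _ = curve_fit(
    interseismic_velocity, distance_km, velocity_mm_yr, p0=[40.0, 15.0]
)
print(f"Deep slip rate ~{slip_rate:.0f} mm/yr, locked above ~{locking_depth:.0f} km")
```

New Zealand's plate boundary involves a dipping megathrust rather than a single vertical fault, so the real models are more elaborate, but the principle – inverting slow surface motion for slip rate and locking depth – is the same.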

From this, one could potentially estimate how frequently each fault is likely to rupture during an earthquake, and how big that rupture will be – in other words, the “when and what” of an earthquake. But to achieve this understanding, we would need to consider every major fault when analysing the GPS data.

Invisible faults

Current earthquake forecasting “reverse engineers” past distortions of the Earth’s surface by finding all the faults that could trigger an earthquake, working out their earthquake histories and projecting this pattern into the future in a computer model. But there are some big challenges.

The most obvious is that it is probably impossible to characterise every fault. They are too numerous and many are not visible at the surface. In fact, most historical earthquakes have occurred on faults that were not known before they ruptured.

Our analysis of the GPS measurements has revealed a more fundamental problem that at the same time opens new avenues for earthquake forecasting. Working with statistician Richard Arnold and geophysicist and modeller James Moore, we found the GPS measurements could be better explained if the numerous faults that might rupture in earthquakes were simply ignored. In other words, surface faults seemed to be invisible when looking at the slow movements recorded by GPS.

There was only one fault that mattered – the megathrust that runs under much of New Zealand. It separates the Australian and Pacific tectonic plates and only reaches the surface underwater, about 50 to 100km offshore. Prior to the Kaikoura earthquake, the megathrust was locked at depths shallower than about 30km. Here, the overlying Australian plate had been slowly changing shape like a single piece of elastic.

Slip at depth on the megathrust drives earthquakes in New Zealand, including the M7.8 Kaikoura Earthquake.
Simon Lamb, CC BY

The pacemaker for future quakes

In the conventional view, every big fault has its own inbuilt earthquake driver or pacemaker – the continuously slipping part of the fault deep in the crust. But our analysis suggests that these faults play no role in the driving mechanism of an earthquake, and the pacemaker is the underlying megathrust.

We think the 2016 Kaikoura earthquake provides the vital clue that we are right. The key observation is that numerous ruptures were involved, busting up the boundary between the two plates in a zone that ran more-or-less parallel to the line of locking on the underlying megathrust. This is exactly what we would anticipate if the slow build-up in stress was only driven by slip on the megathrust and not the deeper parts of individual crustal faults.

I remember once watching a documentary about the making of the Boeing 777 aircraft. The engineers were very confident about its design limits under flying conditions, but the Civil Aviation Authority wanted it tested to destruction. In one test, the vast wings were twisted so that their tips arced up to the sky at a weird angle. Suddenly, there was a bang and the wings snapped, greeted by loud cheering because this had occurred almost exactly when predicted. But the details of how this happened, such as where the cracks of metal fatigue twisted the metal, were something that only the experiment could show.

I think this is a good analogy for realistic goals with earthquake prediction. The Herculean task of identifying every fault and its past earthquake history may be of only limited use. In fact, it is becoming clear that earthquake ruptures on individual faults are far from regular. Big faults may never rupture in one go, but rather bit by bit, together with many other faults.

But it might well be possible to forecast when there will be severe shaking in a region near you – surely something that is just as valuable. The Conversation

Simon Lamb, Associate Professor in Geophysics, Victoria University of Wellington

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Australia: Indigenous Language Map


The link below is to an article that reports on the recently developed ‘Indigenous Language Map,’ which is a crucial tool in preserving Aboriginal culture and languages in Australia.

For more visit:
http://www.australiangeographic.com.au/journal/mapping-aboriginal-language.htm