Yes, a few climate models give unexpected predictions – but the technology remains a powerful tool



Nerilie Abram, Australian National University; Andrew King, The University of Melbourne; Andy Pitman, UNSW; Christian Jakob, Monash University; Julie Arblaster, Monash University; Lisa Alexander, UNSW; Sarah Perkins-Kirkpatrick, UNSW; Shayne McGregor, Monash University, and Steven Sherwood, UNSW

The much-awaited new report from the Intergovernmental Panel on Climate Change (IPCC) is due later today. Ahead of the release, debate has erupted about the computer models at the very heart of global climate projections.

Climate models are one of many tools scientists use to understand how the climate changed in the past and what it will do in future.

A recent article in the eminent US magazine Science questioned how the IPCC will deal with some climate models which “run hot”. Some models, it said, have projected global warming rates “that most scientists, including the model makers themselves, believe are implausibly fast”.


Read more: Monday’s IPCC report is a really big deal for climate change. So what is it? And why should we trust it?


Some commentators, including in Australia, interpreted the article as proof climate modelling had failed.

So should we be using climate models? We are climate scientists from Australia’s Centre of Excellence for Climate Extremes, and we believe the answer is a firm yes.

Our research uses and improves climate models so we can help Australia cope with extreme events, now and in future. We know when climate models are running hot or cold. And identifying an error in some climate models doesn’t mean the science has failed – in fact, it means our understanding of the climate system has advanced.

So let’s look at what you should know about climate models ahead of the IPCC findings.

What are climate models?

Climate models comprise millions of lines of computer code representing the physics and chemistry of the processes that make up our climate system. The models run on powerful supercomputers and have simulated and predicted global warming with remarkable accuracy.

They unequivocally show that warming of the planet since the Industrial Revolution is due to human-caused emissions of greenhouse gases. This confirms our understanding of the greenhouse effect, known since the 1850s.

Models also show the intensity of many recent extreme weather events around the world would be essentially impossible without this human influence.


Scientists do not use climate models in isolation, or without considering their limitations.

For a few years now, scientists have known some new-generation climate models probably overestimate global warming, and others underestimate it.

This realisation is based on our understanding of Earth’s climate sensitivity – how much the climate will warm when carbon dioxide (CO₂) levels in the atmosphere double.

Before industrial times, CO₂ levels in the atmosphere were 280 parts per million. So a doubling of CO₂ will occur at 560 parts per million. (For context, we’re currently at around 415 parts per million).

The latest scientific evidence, using observed warming, paleoclimate data and our physical understanding of the climate system, suggests global average temperatures will very likely increase by between 2.2℃ and 4.9℃ if CO₂ levels double.
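The numbers above can be checked with a back-of-envelope calculation. This sketch is our illustration, not a method from the article: to a first approximation, equilibrium warming scales with the logarithm of the CO₂ concentration ratio, so a climate sensitivity S (warming per doubling) implies roughly S × log₂(C/C₀) degrees of warming once the climate has fully adjusted.

```python
import math

# Pre-industrial CO2 concentration quoted in the article (parts per million).
PREINDUSTRIAL_PPM = 280.0

def equilibrium_warming(co2_ppm, sensitivity_per_doubling):
    """Rough equilibrium warming (deg C) relative to pre-industrial CO2,
    assuming warming scales with the log of the concentration ratio."""
    doublings = math.log2(co2_ppm / PREINDUSTRIAL_PPM)
    return sensitivity_per_doubling * doublings

# At a full doubling (560 ppm), warming equals the sensitivity by definition.
print(round(equilibrium_warming(560, 3.0), 2))  # 3.0

# The likely sensitivity range quoted above, applied to today's ~415 ppm:
low = equilibrium_warming(415, 2.2)
high = equilibrium_warming(415, 4.9)
print(f"{low:.1f} to {high:.1f} deg C")  # 1.2 to 2.8 deg C
```

Note that this is *equilibrium* warming: the real climate lags today’s CO₂ level because the oceans take decades to catch up, so this is a sketch of the scaling, not a climate projection.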

The large majority of climate models run within this climate sensitivity range. But some don’t – instead suggesting a temperature rise as low as 1.8℃ or as high as 5.6℃.

It’s thought the biases in some models stem from the representations of clouds and their interactions with aerosol particles. Researchers are beginning to understand these biases, building our understanding of the climate system and how to further improve models in future.

With all this in mind, scientists use climate models cautiously, giving more weight to projections from climate models that are consistent with other scientific evidence.

The following graph shows that most models fall within the expected climate sensitivity range – and a few running a bit hot or cold doesn’t change the overall picture of future warming. And when we compare model results with the warming we’ve already observed over Australia, there’s no indication the models are over-cooking things.

Rapid warming in Australia under a very high greenhouse gas emission future (red) compared with climate change stabilisation in a low emission future (blue). Author provided.

What does the future look like?

Future climate projections are produced by giving models different possibilities for greenhouse gas concentrations in our atmosphere.

The latest IPCC models use a set of possibilities called “Shared Socioeconomic Pathways” (SSPs). These pathways match expected population growth, and where and how people will live, with plausible levels of atmospheric greenhouse gases that would result from these socioeconomic choices.

The pathways range from low-emission scenarios that also require considerable atmospheric CO₂ removal – giving the world a reasonable chance of meeting the Paris Agreement targets – to high-emission scenarios where temperature goals are far exceeded.


Nerilie Abram, based on Riahi et al. 2017, CC BY-ND

Ahead of the IPCC report, some say the high-emission scenarios are too pessimistic. But likewise, it could be argued the lack of climate action over the past decade, and absence of technology to remove large volumes of CO₂ from the atmosphere, means low-emission scenarios are too optimistic.

If countries meet their existing emissions reduction commitments under the Paris Agreement, we can expect to land somewhere in the middle of the scenarios. But the future depends on our choices, and we shouldn’t dismiss any pathway as implausible.

There is considerable value in knowing both the future risks to avoid, and what’s possible under ambitious climate action.


Read more: The climate won’t warm as much as we feared – but it will warm more than we hoped


The future climate depends on our choices today. Unsplash

Where to from here?

We can expect the IPCC report to be deeply worrying. And unfortunately, 30 years of IPCC history tells us the findings are more likely to be too conservative than too alarmist.

An enormous global effort – both scientifically and in computing resources – is needed to ensure climate models can provide even better information.

Climate models are already phenomenal tools at large scales. But increasingly, we’ll need them to produce fine-scale projections to help answer questions such as: where should forests be planted to sequester carbon? Where should flood defences be built? Where might crops best be grown? And where would renewable energy resources be best located?

Climate models will continue to be an important tool for the IPCC, policymakers and society as we attempt to manage the unavoidable risks ahead.

Nerilie Abram, Chief Investigator for the ARC Centre of Excellence for Climate Extremes; Deputy Director for the Australian Centre for Excellence in Antarctic Science, Australian National University; Andrew King, ARC DECRA fellow, The University of Melbourne; Andy Pitman, Director of the ARC Centre of Excellence for Climate Extremes, UNSW; Christian Jakob, Professor in Atmospheric Science, Monash University; Julie Arblaster, Chief Investigator, ARC Centre of Excellence for Climate Extremes; Chief Investigator, ARC Securing Antarctica’s Environmental Future; Professor, Monash University; Lisa Alexander, Chief Investigator ARC Centre of Excellence for Climate Extremes and Professor Climate Change Research Centre, UNSW; Sarah Perkins-Kirkpatrick, ARC Future Fellow, UNSW; Shayne McGregor, Associate professor, Monash University, and Steven Sherwood, Professor of Atmospheric Sciences, Climate Change Research Centre, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Making climate models open source makes them even more useful



MiMA: an open source way to model the climate.
Martin Jucker, Author provided

Martin Jucker, University of Melbourne

Designing climate experiments is all but impossible in the real world. We can’t, for instance, study the effects of clouds by taking away all the clouds for a set period of time and seeing what happens.

Instead, we have to design our experiments virtually, by developing computer models. Now, a new open-source set of climate models has allowed this research to become more collaborative, efficient and reliable.




Read more: Why scientists adjust temperature records, and how you can too


Full climate models are designed to be as close to nature as possible. They are representations of the combined knowledge of climate science and are without a doubt the best tools to understand what the future might look like.

However, many research projects focus on small parts of the climate, such as sudden wind changes, the temperature in a given region, or ocean currents. For these studies, concentrating on a small detail in a full climate model is like trying to find a needle in a haystack.

It is therefore common practice in such cases to take away the haystack by using simpler climate models. Scientists usually write these models for specific projects. A quote commonly attributed to Albert Einstein perhaps best summarises the process: “Everything should be made as simple as possible, but not simpler.”

Here’s an example. In a paper from last year I looked at the temperature and wind changes in the upper atmosphere close to the Equator. I didn’t need to know what happened in the ocean, and I didn’t need any chemistry, polar ice, or even clouds in my model. So I wrote a much simpler model without these ingredients. It’s called “MiMA” (Model of an idealised Moist Atmosphere), and is freely available on the web.


The drawbacks of simpler models

Of course, using simpler models comes with its own problems.

The main issue is that researchers have to be very clear what the limits are for each model. For instance, it would be hard to study thunderstorms with a model that doesn’t reproduce clouds.

The second issue is that whereas the scientific results may be published, the code itself is typically not. Everyone has to believe that the model does indeed do what the author claims, and to trust that there are no errors in the code.

The third issue with simpler models is that anyone else trying to duplicate or build on published work would have to rebuild a similar model themselves. But given that the two models will be written by two (or more) different people, it is highly unlikely that they will be exactly the same. Also, the time the first author spends on building their model is then spent a second time by a second author, to achieve at best the same result. This is very inefficient.

Open-source climate models

To remedy some (if not all) of these issues, some colleagues and I have built a framework of climate models called Isca. Isca contains models that are easy to obtain, completely free, documented, and come with software to make installation and running easier. All changes are documented and can be reverted. Therefore, it is easy for everyone to use exactly the same models.

The time it would take for everyone to build their own version of the same model can now be used to extend the existing models. More sets of eyes on one model means that errors can be quickly identified and corrected. The time saved could also be used to build new analysis software, which can extract new information from existing simulations.

As a result, the climate models and their resulting scientific experiments become both more flexible and reliable. All of this only works because the code is publicly available and because any changes are continuously tracked and documented.

An example is my own code, MiMA, which is part of Isca. I have been amazed at the breadth of research it is used for. I wrote it to look at the tropical upper atmosphere, but others have since used it to study the life cycle of weather systems, the Indian monsoon, the effect of volcanic eruptions on climate, and so on. And that’s only one year after its first publication.




Read more: Climate models too complicated? Here’s one that everyone can use


Making models openly available in this way has another advantage: openly verifiable code can help counter the mistrust of climate science that is still prevalent in some quarters.

The burden of proof automatically falls on the sceptics. As all the code is there and all changes are trackable, it is up to them to point out errors. And if someone does find an error, even better! Correcting it is just another step to make the models even more reliable.

Going open source with scientific code has many more benefits than drawbacks. It allows collaboration between people who don’t even know one another. And, most importantly, it will make our climate models more flexible, more reliable and generally more useful.

Martin Jucker, Maritime Continent Research Fellow, University of Melbourne

This article was originally published on The Conversation. Read the original article.