To fight the catastrophic fires of the future, we need to look beyond prescribed burning




AAP Image/ Darren Pateman

James Furlaud, University of Tasmania and David Bowman, University of Tasmania

California is burning – a sentence we’ve heard far too often this year. Sydney is currently on bushfire alert, as firefighters battle a fire in the Hunter Valley region and temperatures are set to top 40℃.

A cocktail of factors, from climate change to centuries of ignoring Indigenous burning practices, means that catastrophic fires are likely to become more common.


Read more: Dry winter primes Sydney Basin for early start of bushfire season


One of Australia’s favourite fire prevention measures is prescribed burning – using carefully controlled fires to clear out flammable materials. We’re almost obsessed with it. Indeed, it seems the outcome of every major inquiry is that we need to do more of it.

The Royal Commission inquiry that followed Victoria’s 2009 Black Saturday fires recommended that 5% of all public land in Victoria be treated per year – a doctrine that was subsequently dropped due to impracticality.

Yet our research, published today in the International Journal of Wildland Fire, modelled thousands of fires in Tasmania and found that nearly a third of the state would have to be burned to effectively lower the risk of bushfires.

The question of how much to burn, and where, is a puzzle we must solve, especially given the inherent risk, the problems caused by smoke, and the shrinking weather windows for safe burning under climate change.

Why use computer simulations?

The major problem fire science faces is gathering data. Landscape-scale experiments involving extreme fire are rare, for obvious reasons of risk and cost. When a major bushfire happens, all the resources go into putting it out and protecting people. Nobody has the time to painstakingly collect data on how fast it is moving and what it is burning. We are therefore restricted to a few limited data sources to reconstruct the behaviour and impact of fire: we can analyse the scar on the landscape after a fire, look at case studies, or run simulations of computer models.

Most research on the effectiveness of prescribed burning has been at a local scale. We need to start thinking bigger: how can we mitigate the effect of multiple large fires in a region like Tasmania or Southeastern Australia? What is the cumulative effect of different prescribed burning strategies?

A large fuel reduction burn off on Hobart’s eastern shore.
Flickr/Mike Rowe, CC BY-NC

To answer these questions, we create models using mathematical equations to simulate the behaviour of fires across actual landscapes. These models include the effects of vegetation type, terrain and fuel loads, under specific weather conditions. If we simulate thousands of these fires we can get an idea of where fire risk is the highest, and how effective prescribed burning is at reducing that risk.
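To give a flavour of how such simulations work (this is a deliberately simplified illustration, not the actual fire behaviour model used in our study), the short Python sketch below treats the landscape as a grid of cells, each with a fuel load that prescribed burning can reduce, and spreads fire from random ignition points with a probability that rises with the fuel in neighbouring cells. Every number and rule in it is an assumption for illustration only.

```python
import numpy as np

# Illustrative toy model only: a cellular-automaton fire spread sketch,
# not the simulator used in the study. All parameters are assumptions.
rng = np.random.default_rng(0)

SIZE = 100                                    # grid cells per side
fuel = rng.uniform(0.4, 1.0, (SIZE, SIZE))    # relative fuel load per cell

# Hypothetical prescribed-burn block: fuel reduced in a treated strip
fuel[40:60, :] *= 0.3

def simulate_fire(fuel, ignition, spread_coeff=0.6, steps=200):
    """Spread fire from an ignition cell; the chance of spreading to a
    neighbour scales with that neighbour's fuel load."""
    burning = np.zeros_like(fuel, dtype=bool)
    burnt = np.zeros_like(fuel, dtype=bool)
    burning[ignition] = True
    for _ in range(steps):
        if not burning.any():
            break
        new = np.zeros_like(burning)
        ys, xs = np.where(burning)
        for y, x in zip(ys, xs):
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < SIZE and 0 <= nx < SIZE
                        and not burnt[ny, nx] and not burning[ny, nx]
                        and rng.random() < spread_coeff * fuel[ny, nx]):
                    new[ny, nx] = True
        burnt |= burning
        burning = new
    return burnt

# Many random ignitions give a crude map of burn likelihood
likelihood = np.zeros((SIZE, SIZE))
for _ in range(200):
    ignition = (rng.integers(SIZE), rng.integers(SIZE))
    likelihood += simulate_fire(fuel, ignition)
likelihood /= 200
print("Mean burn probability across the grid:", likelihood.mean())
```

Real fire behaviour models replace the simple spread rule above with physically based equations for vegetation, terrain, fuel load and weather, but the overall logic of simulating many ignitions to map risk is the same.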

The island of Tasmania offers the perfect study system. Self-contained, with a wide array of vegetation types and fire regimes, it is an ideal place to see how fire behaves across a diverse landscape. Perhaps more interestingly, the island contains large areas of flammable landscape surrounding globally unique ecosystems and numerous towns and villages. Obviously, we cannot set fire to all of Tasmania in real life, but computer simulations make it possible!

So, encouraged by the Tasmanian Fire Service, who initiated our research, we simulated tens of thousands of fires across Tasmania under a range of prescribed burning scenarios.

Prescribed fire can be effective, in theory

The first scenario we looked at was the best-case scenario: what happens if we perform prescribed burning on all the vegetation that can handle it, given theoretically unlimited resources? It is possible this approximates the sustained and skillful burning by Tasmanian Aboriginal peoples.

Wildfire simulations following this scenario suggested that such an approach would be extremely effective. Importantly, we saw significant reductions in fire activity even in areas where prescribed burning is impossible (for example, due to the presence of people).

Unfortunately, this best-case approach, while interesting from a theoretical perspective, would require prescribed burning over more than 30% of Tasmania in one year.

We also analysed the effects of 12 more realistic scenarios. These realistic plans were less than half as efficient as the best-case scenario at reducing fire activity.

On average, three hectares of prescribed burning would reduce wildfire extent by roughly one hectare in grasslands and dry forests.
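To put that ratio in context with a purely hypothetical example: a program that treated 30,000 hectares of dry forest in a year would, on this estimate, be expected to reduce the area burnt by wildfire by only about 10,000 hectares.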

In other flammable Tasmanian vegetation types like buttongrass sedgelands and heathlands, the reduction in wildfire was even smaller. This is obviously better than no prescribed burning, but it highlights the fact that this is a relatively inefficient tool, and given the costs and potential drawbacks, should be used only where it is most needed.

This is a fundamental conundrum of prescribed burning: though it is quite effective in theory, the extent to which we would need to implement it to affect fire behaviour across the entire state is completely unachievable.

Therefore, it is imperative that we not just blindly burn a pre-ordained fraction of the landscape. Rather, we must carefully design localised prescribed burning interventions to reduce risk to communities.

We need a multi-tool approach

Our study has shown that while prescribed burning can be quite effective in certain scenarios, it has serious constraints. Additionally, while we analysed these scenarios under bad fire weather, we were not able to analyse the kind of catastrophic days in which the effect of prescribed burning is seriously reduced, with howling dry winds and stupefying heat.

Unfortunately, due to climate change, we are going to see a lot more catastrophic days in the future in Tasmania and indeed globally.

In Hobart this is of particular concern, as the city is surrounded by tall, wet eucalypt forests that have had 50 years to grow dense understoreys since the 1967 Black Tuesday fires. These have the potential to cause some of the most intense fires on the planet should conditions get dry enough. Prescribed burning is impossible in these forests.


Read more: Where to take refuge in your home during a bushfire


To combat fire risk we must take a multi-pronged approach that includes innovative strategies, such as designing new spatial patterns for prescribed burning, manually removing fuels from areas in which prescribed burning is not possible, improving the standards for buildings and defensible spaces, and most importantly, engaging the community in all of this.

Only by attacking this problem from multiple angles, and through close collaboration with the community and all levels of government, can we effectively face our fiery future.

James Furlaud, PhD Student in Fire Ecology, University of Tasmania and David Bowman, Professor, Environmental Change Biology, University of Tasmania

This article was originally published on The Conversation. Read the original article.


Burning fossil fuels is responsible for most sea-level rise since 1970


Aimée Slangen, Utrecht University and John Church, CSIRO

Global average sea level rose by about 17 cm between 1900 and 2005. This is a much faster rate than in the previous 3,000 years.

The sea level changes for several reasons, including rising temperatures as fossil fuel burning increases the amount of greenhouse gases in the atmosphere. In a warming climate, the seas are expected to rise at faster rates, increasing the risk of flooding along our coasts. But until now we didn’t know what fraction of the rise was the result of human activities.

In research published in Nature Climate Change, we show for the first time that the burning of fossil fuels is responsible for the majority of sea level rise since the late 20th century.

As the amount of greenhouse gases we are putting into the atmosphere continues to increase, we need to understand how sea level responds. This knowledge can be used to help predict future sea level changes.



Measuring sea level

Nowadays, we can measure the sea surface height using satellites, so we have an accurate idea of how the sea level is changing, both regionally and in the global mean.

Prior to this (before 1993), sea level was measured by tide gauges, which are spread unevenly across the world. As a result, we have a poorer knowledge of how sea level has changed in the past, particularly before 1960 when there were fewer gauges.

Nevertheless, the tide gauge measurements indicate that global mean sea level has increased by about 17 cm between 1900 and 2005.

What drives sea level rise?

The largest contributors to rising seas are the expansion of the oceans as temperatures rise, the loss of mass from glaciers and ice sheets, and changes in other stores of water on land. Although we now know what the most important contributions to sea-level rise are, until now we did not know what was driving these changes.

Changes in sea level are driven by natural factors such as climate variability (for example, El Niño), the ongoing response to past climate change (such as regional warming after the Little Ice Age), volcanic eruptions, and changes in the sun’s activity.

Volcanic eruptions and changes in the sun affect sea level across years to decades. Large volcanic eruptions can cause a temporary sea-level fall because the volcanic ash reduces the amount of solar radiation reaching the ocean, thus cooling the ocean.

Humans have also contributed to sea level rise by burning fossil fuels and increasing the concentration of greenhouse gases in the atmosphere.

Separating the causes

We used climate models to estimate ocean expansion and loss of mass from glaciers and ice sheets for each of the individual factors responsible for sea level change (human and natural). To this we added best estimates of all other known contributions to sea level change, such as groundwater extraction and additional ice sheet contributions.

We then compared these model results to the observed global mean 20th century sea-level change to figure out which factor was responsible for a particular amount of sea level change.
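The bookkeeping behind that comparison is simple to illustrate. The Python sketch below sums hypothetical contributions from each driver over a period and expresses the modelled total, and the human-caused part, as fractions of an observed rise. The numbers are made-up placeholders, chosen only to be roughly consistent with the percentages quoted below; they are not the values from our paper.

```python
# Illustrative only: the attribution bookkeeping, with placeholder numbers.
contributions_1970_2005_mm = {
    "greenhouse_gases": 60.0,    # thermal expansion + glacier/ice-sheet melt
    "aerosols": -15.0,           # cooling influence, partly offsetting
    "post_little_ice_age": 5.0,  # delayed natural response
    "other_natural": 2.0,        # solar and volcanic influences
    "other_terms": 10.0,         # e.g. groundwater extraction
}
observed_rise_mm = 70.0          # placeholder for the observed estimate

modelled_total = sum(contributions_1970_2005_mm.values())
print(f"Modelled total: {modelled_total:.0f} mm "
      f"({modelled_total / observed_rise_mm:.0%} of observed rise)")

anthropogenic = (contributions_1970_2005_mm["greenhouse_gases"]
                 + contributions_1970_2005_mm["aerosols"])
print(f"Human-caused share of observed rise: "
      f"{anthropogenic / observed_rise_mm:.0%}")
```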

Over the 20th century as a whole, the impact of natural influences is small and explains very little of the observed sea-level trend.

The delayed response of the glaciers and ice sheets to the warmer temperatures after the Little Ice Age (1300-1870 AD) caused a sea-level rise in the early 20th century. This explains much of the observed sea-level change before 1950 (almost 70%), but very little after 1970 (less than 10%).

The human factor

The largest contributions to sea-level rise after 1970 are from ocean thermal expansion and the loss of mass from glaciers in response to the warming from increasing greenhouse gas concentrations. This rise is partly offset by the impact of aerosols, which on their own would cause a cooling of the ocean and less melting of glaciers.

The combined influence of these two factors (greenhouse gases and aerosols) is small in the beginning of the century, explaining only about 15% of the observed rise. However, after 1970, we find that the majority of the observed sea-level rise is a direct response to human influence (nearly 70%), with a slightly increasing percentage up to the present day.

When all factors are considered, the models explain about three quarters of the observed rise since 1900 and almost all of the rise over recent decades (almost 90% since 1970).

The reason for this difference can be found either in the models or in the observations. The models could underestimate the observed rise before 1970 due to, for instance, an underestimated ice sheet contribution. However, the quality and number of sea-level observations before the satellite altimeter record are also lower.

Tipping the scales

Our paper shows that the driving factors of sea-level change have shifted over the course of the 20th century.

Past natural variations in climate were the dominant factor at the start of the century, as a result of glaciers and ice sheets taking decades to centuries to adapt to climate change.

In contrast, by the end of the 20th century, human influence has become the dominant driving factor for sea-level rise. This will probably continue until greenhouse gas emissions are reduced and ocean temperatures, glaciers and ice sheets are in equilibrium with climate again.




Aimée Slangen, Postdoctoral research fellow, Institute for Marine and Atmospheric Research, Utrecht University and John Church, CSIRO Fellow, CSIRO

This article was originally published on The Conversation. Read the original article.

Huge fires are burning northern Australia every year: it’s time to get them under control


Owen Price, University of Wollongong

On October 1, 2015, a fire lit to manage weeds at the Ranger Uranium Mine burned through 14,000 hectares of Kakadu National Park, threatening important rock-art sites and closing several tourist attractions.

The Northern Territory Government and Energy Resources of Australia (the mine operators) are conducting enquiries to work out what went wrong and how to prevent similar accidents in the future, because, like all natural disasters, each one is an opportunity to learn.

As it happens, this fire coincided with the publication this month of my research that helps us to understand the problem posed by unplanned fires in the savannas of northern Australia.

That research highlights a 60-day window, between August 9 and October 7 each year, when huge fires can occur; these contribute an inordinate amount to the total area burnt across the north.

Going up in smoke

Before August, mild weather and moisture in the vegetation constrain fires. After October, rain and high humidity do the same. Natural fires, caused by lightning, occur from November onwards, and although these account for more than 60% of unplanned fires started, they cause less than 10% of the total area burnt.

Rather, it is fires in the high-risk window that are the real problem for fire management over a vast slab of Australia, and they are neither natural nor planned.

My study used MODIS satellite mapping to examine the ignition date, duration and eventual size of 126,000 fires in Arnhem Land over a 10-year period. The largest fire ignited in late August 2004 and burned 445,000 hectares – 30 times the area burnt by the Ranger Mine fire, or about a quarter of the size of Kakadu, our country’s largest national park.
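For a sense of the kind of bookkeeping involved (a simplified illustration, not the actual mapping method), the Python sketch below takes dated burned-area detections that have already been grouped into fire events and derives each fire’s ignition date, duration and approximate size. The records, fire labels and per-pixel area are assumptions for illustration.

```python
from collections import defaultdict
from datetime import date

# Illustrative only: summarising burned-area detections into per-fire
# statistics. A 500 m MODIS pixel covers roughly 25 hectares.
PIXEL_AREA_HA = 25.0

# (fire_id, detection_date, number_of_newly_burnt_pixels) - hypothetical data
detections = [
    ("fire_A", date(2004, 8, 28), 120),
    ("fire_A", date(2004, 9, 3), 900),
    ("fire_A", date(2004, 9, 20), 400),
    ("fire_B", date(2004, 11, 15), 30),
]

by_fire = defaultdict(list)
for fire_id, day, pixels in detections:
    by_fire[fire_id].append((day, pixels))

for fire_id, records in by_fire.items():
    days = [d for d, _ in records]
    size_ha = sum(p for _, p in records) * PIXEL_AREA_HA
    duration = (max(days) - min(days)).days + 1
    print(f"{fire_id}: ignited {min(days)}, lasted {duration} days, "
          f"burned about {size_ha:,.0f} ha")
```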

But other regions have it worse: an accidental fire in the northern Tanami, ignited on August 4, 2011, burnt an area of at least 5 million hectares. There are 22 European countries smaller than that. So the Ranger Mine fire is in no way unique: it just happened to occur in a highly visible area.

Let’s take a step back and consider what is at stake with fires such as these. Research over many decades has shown that many species of fire-sensitive plants and animals are in decline across the north. This is at least partially related to the loss of traditional burning practices, which has led to an increase in fire frequency and a predominance of high-intensity late dry season fires (such as the Ranger Mine fire).

It is not certain whether high fire frequency or high fire intensity is the main problem, but it is probably a bit of both.

Stop fires starting

Managers of country such as Kakadu National Park and Arnhem Land recognise this problem and have taken steps to wrest back control of the fire regime. Their main tool is planned burning in the early dry season (April–July), akin to traditional burning practices.

By treating the land with a patchwork of these low-intensity, low-impact fires, subsequent fires can be prevented or constrained to protect sensitive areas. In some areas, including Kakadu and Western Arnhem Land, between 10% and 30% of the country is burnt each year by planned burns. This year 31% of Kakadu National Park had been treated in this way, and this patchwork of burnt areas was the context in which the Ranger Mine fire started.

This approach has been successful at reducing the area burnt by unplanned late dry season fires, but it is only a partial fix. The Ranger Mine fire illustrates the main problem: that fires will burn around previously burnt patches.

This fire spread through a small gap between previous patches (at point A on the map below), enlarging its size five-fold. It is fair to say that the southern and western progress of the fire was contained by the planned burning. This protected Nourlangie Rock and Jabiru township and the fire could have been much larger without it.

The large fire that threatened Aboriginal heritage burned around previous burnoffs.
Owen Price, Author provided

The consequence of this “leaky” patchwork of protection is that early dry season burning on its own does not do much to reduce the overall area burnt. Rather, it replaces high-intensity late dry season fires with low-intensity fires (which is itself a good thing).

This replacement phenomenon has been demonstrated using fine-scale fire mapping in Western Arnhem Land. My new study points out that if reducing fire frequency across the north is a goal (and it ought to be), then we need to place more focus on stopping fires starting in the main danger period (mid-August to mid-October).

Achieving substantial reduction in these ignitions is a huge challenge, and I don’t have any easy answers, but it would help if burn-offs such as the one that started the Ranger Mine fire were not allowed at this risky time of year.


Owen Price, Senior Research Fellow, Centre for Environmental Risk Management of Bushfires, University of Wollongong

This article was originally published on The Conversation. Read the original article.