Computing faces an energy crunch unless new technologies are found

The tools on our smartphones are enabled by a huge network of mobile phone towers, Wi-Fi networks and server farms.

Daisy Wang, UNSW and Jared Cole, RMIT University

There’s little doubt the information technology revolution has improved our lives. But unless we find a new form of electronic technology that uses less energy, computing will become limited by an “energy crunch” within decades.

Even the most common events in our daily life – making a phone call, sending a text message or checking an email – use computing power. Some tasks, such as watching videos, require a lot of processing, and so consume a lot of energy.

Because of the energy required to power the massive, factory-sized data centres and networks that connect the internet, computing already consumes 5% of global electricity. And that electricity load is doubling every decade.
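To see why this trend is alarming, a back-of-envelope projection helps. This sketch simply compounds the article's two figures (a 5% share today, doubling every decade) and assumes, for illustration only, that total electricity generation stays flat:

```python
# Illustrative projection of computing's share of global electricity,
# assuming one doubling per decade from a 5% starting point
# (figures from the article; total generation assumed constant).
def projected_share(start_share: float, decades: int) -> float:
    """Share of global electricity after compounding doublings."""
    return start_share * 2 ** decades

for d in range(4):
    print(f"after {d} decades: {projected_share(0.05, d):.0%}")
```

After just three doublings the notional share reaches 40% of all electricity, which is why, absent an efficiency breakthrough, the word "crunch" is apt.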

Fortunately, there are new areas of physics that offer promise for massively reduced energy use.

Read more:
Bitcoin’s high energy consumption is a concern – but it may be a price worth paying

The end of Moore’s Law

Humans have an insatiable demand for computing power.

Smartphones, for example, have become one of the most important devices of our lives. We use them to access weather forecasts, plot the best route through traffic, and watch the latest season of our favourite series.

And we expect our smartphones to become even more powerful in the future. We want them to translate language in real time, transport us to new locations via virtual reality, and connect us to the “Internet of Things”.

The computing required to make these features a reality doesn’t actually happen in our phones. Rather it’s enabled by a huge network of mobile phone towers, Wi-Fi networks and massive, factory-sized data centres known as “server farms”.

For the past five decades, our increasing need for computing was largely satisfied by incremental improvements in conventional, silicon-based computing technology: ever-smaller, ever-faster, ever-more efficient chips. We refer to this constant shrinking of silicon components as “Moore’s Law”.

Moore’s law is named after Intel co-founder Gordon Moore, who observed in 1965 that:

the number of transistors on a chip doubles roughly every year (a pace he later revised to every two years), while the cost per transistor falls.

But as we hit the limits of basic physics and economics, Moore’s law is winding down. We could see the end of efficiency gains in current, silicon-based technology as soon as 2020.

Our growing demand for computing capacity must be met with gains in computing efficiency; otherwise the information revolution will be throttled by its own hunger for power.

Achieving this sustainably means finding a new technology that uses less energy in computation. This is referred to as a “beyond CMOS” solution, in that it requires a radical shift from the silicon-based CMOS (complementary metal–oxide–semiconductor) technology that has been the backbone of computing for the last five decades.

Read more:
Moore’s Law is 50 years old but will it continue?

Why does computing consume energy at all?

Processing of information takes energy. When using an electronic device to watch TV, listen to music, model the weather or any other task that requires information to be processed, there are millions and millions of binary calculations going on in the background. There are zeros and ones being flipped, added, multiplied and divided at incredible speeds.

The fact that a microprocessor can perform these calculations billions of times a second is exactly why computers have revolutionised our lives.

But information processing doesn’t come for free. Physics tells us that every time we perform an operation – for example, adding two numbers together – we must pay an energy cost.

And the cost of doing calculations isn’t the only energy cost of running a computer. In fact, anyone who has ever used a laptop balanced on their legs will attest that most of the energy gets converted to heat. This heat comes from the resistance that electricity meets when it flows through a material.

It is this wasted energy due to electrical resistance that researchers are hoping to minimise.

Recent advances point to solutions

Running a computer will always consume some energy, but we are a long way (several orders of magnitude) away from computers that are as efficient as the laws of physics allow. Several recent advances give us hope for entirely new solutions to this problem via new materials and new concepts.
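The physical floor usually cited for this kind of claim is Landauer's principle: erasing one bit of information costs at least kT·ln 2 of energy. The article doesn't name the principle, so treat the following as added context; the quick calculation below shows the scale of that floor at room temperature:

```python
import math

BOLTZMANN = 1.380649e-23  # J/K (exact value in the 2019 SI definition)

def landauer_limit(temp_kelvin: float) -> float:
    """Minimum energy in joules to erase one bit at a given temperature,
    per Landauer's principle: k * T * ln(2)."""
    return BOLTZMANN * temp_kelvin * math.log(2)

room = landauer_limit(300)  # roughly 2.9e-21 J per bit
print(f"Landauer limit at 300 K: {room:.2e} J/bit")
```

Typical switching events in today's silicon cost very roughly 10,000 to 1,000,000 times this amount, consistent with the "several orders of magnitude" of headroom mentioned above.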

Very thin materials

One recent step forward in physics and materials science is being able to build and control materials that are only one or a few atoms thick. When a material forms such a thin layer, and the movement of electrons is confined to this sheet, it is possible for electricity to flow without resistance.

A range of different materials show this property, or are predicted to show it. Our research at the ARC Centre for Future Low-Energy Electronics Technologies (FLEET) is focused on studying these materials.

The study of shapes

There is also an exciting conceptual leap that helps us understand this property of electricity flow without resistance.

This idea comes from a branch of mathematics called “topology”. Topology tells us how to compare shapes: what makes them the same and what makes them different.

Imagine a coffee cup made from soft clay. You could slowly squish and squeeze this shape until it looks like a donut. The hole in the handle of the cup becomes the hole in the donut, and the rest of the cup gets squished to form part of the donut.

Topology tells us that donuts and coffee cups are equivalent because we can deform one into the other without cutting it, poking holes in it, or joining pieces together.

It turns out that the strange rules that govern how electricity flows in thin layers can be understood in terms of topology. This insight was the focus of the 2016 Nobel Prize, and it’s driving an enormous amount of current research in physics and engineering.

Read more:
Physicists explore exotic states of matter inspired by Nobel-winning research

We want to take advantage of these new materials and insights to develop the next generation of low-energy electronics devices, which will be based on topological science to allow electricity to flow with minimal resistance.

This work creates the possibility of a sustainable continuation of the IT revolution – without the huge energy cost.

Daisy Wang, Postdoctoral Fellow, UNSW School of Physics, UNSW and Jared Cole, Professor of Physics, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Technology is making cities ‘smart’, but it’s also costing the environment

A smart city is usually one connected and managed through computing — sensors, data analytics and other information and communications technology.

Mark Sawyer, University of Western Australia

The Australian government has allocated A$50 million for the Smarter Cities and Suburbs Program to encourage projects that “improve the livability, productivity and sustainability of cities and towns across Australia”.

One project funded under the program is installation of temperature, lighting and motion sensors in buildings and bus interchanges in Woden, ACT. This will allow energy systems to be automatically adjusted in response to people’s use of these spaces, with the aim of reducing energy use and improving safety and security.

In similar ways, governments worldwide are partnering with technology firms to make cities “smarter” by retrofitting various city objects with technological features. While this might make our cities safer and potentially more user-friendly, we shouldn’t place blind faith in technology, which, without proper design, can break down and leave a city full of electronic waste.

Read more:
Can a tech company build a city? Ask Google

How cities are getting smarter

A “smart city” is an often vague term that usually describes one of two things. The first is a city that takes a knowledge-based approach to its economy, transport, people and environment. The second is a city connected and managed through computing — sensors, data analytics and other information and communications technology.

It’s the second definition that aligns with the interests of multinational tech firms. IBM, Serco, Cisco, Microsoft, Philips and Google are among those active in this market. Each is working with local authorities worldwide to provide the hardware, software and technical know-how for complex, urban-scale projects.

In Rio de Janeiro, a partnership between the city government and IBM has created an urban-scale network of sensors, bringing data from thirty agencies into a single centralised hub. Here it is examined by algorithms and human analysts to help model and plan city development, and to respond to unexpected events.

Tech giants provide expertise for a city to become “smart” and then keep its systems running afterwards. In some cases, tech-led smart cities have risen from the ground up. Songdo, in South Korea, and Masdar, UAE, were born smart by integrating advanced technologies at the masterplanning and construction stages.

Read more:
How does a city get to be ‘smart’? This is how Tel Aviv did it

More often, though, existing cities are retrofitted with smart systems. Barcelona, for instance, has gained a reputation as one of the world’s top smart cities, after its existing buildings and infrastructure were fitted with sensors and processors to monitor and maintain infrastructure, as well as for planning future development.

The city is dotted with electric vehicle charging points and smart parking spaces. Sensors and a data-driven irrigation system monitor and manage water use. The public transport system has interactive touch screens at bus stops and USB chargers on buses.


Suppliers of smart systems claim a number of benefits for smart cities, arguing these will result in more equitable, efficient and environmentally sustainable urban centres. Other advocates claim smart cities are more “happy and resilient”. But there are also hidden costs to smart cities.

The downsides of being smart

Cyber-security and technology ethics are important topics. Smart cities represent a complex new field for governments, citizens, designers and security experts to navigate.

The privatisation of civic space and public services is a hidden cost too. The complexity of smart city systems and their need for ongoing maintenance could lead to long-term reliance on a tech company to deliver public services.

Read more:
Sensors in public spaces can help create cities that are both smart and sociable

Many argue that, by improving data collection and monitoring and allowing for real-time responses, smart systems will lead to better environmental outcomes. For instance, waste bins that alert city managers when they need collecting or that prompt recycling through tax credits, and street lamps that track movement and adjust lighting levels, all have the potential to reduce energy use.

But this runs contrary to studies that show more information and communication technology actually leads to higher energy use. At best, smart cities may end up a zero-sum game in terms of sustainability because their “positive and negative impacts tend to cancel each other out”.

And then there’s the less-talked-about issue of e-waste, which is a huge global challenge. Adding computers to objects could create what one writer has termed a new “internet of trash” — products designed to be thrown away as soon as their batteries run down.

Computer technology is short-lived and needs frequent upgrading.

As cities become smart they need more and more objects — bollards, street lamps, public furniture, signboards — to integrate sensors, screens, batteries and processors. Objects in our cities are usually built with durable materials, which means they can be used for decades.

Computer processors and software systems, on the other hand, are short-lived and may need upgrading every few years. Adding technology to products that didn’t have this in the past effectively shortens their life-span and makes servicing, warranties and support contracts more complex and unreliable. One outcome could be a landscape of smart junk — public infrastructure that has stopped working, or that needs ongoing patching, maintenance and upgrades.

Read more:
Does not compute: Australia is still miles behind in recycling electronic products

In Barcelona, many of the gadgets that made it one of the world’s smartest cities no longer work properly. The smart streetlights on the Passatge de Mas de Roda, which were put in place in 2011 to improve energy efficiency by detecting human movement, noise and climatic conditions, later fell into disrepair.

If smart objects aren’t designed so they can be disassembled at the end of their useful life, electronic components are likely to be left inside where they hamper recycling efforts. Some digital components contain toxic materials. Disposing of these through burning or in landfill can contaminate environments and threaten human health.

These are not insurmountable challenges. Information and communications technology, data and networks have an important place in our shared urban future. But this future will be determined by our attitudes toward these technologies. We need to make sure that instead of being short-term gimmicks, thrown away when their novelty wears off, they are thoughtfully designed, and that they put the needs of citizens and environments first.

Mark Sawyer, Lecturer in Architecture, University of Western Australia

This article was originally published on The Conversation. Read the original article.

From drone swarms to tree batteries, new tech is revolutionising ecology and conservation

Eyes in the sky: drone footage is becoming a vital tool for monitoring ecosystems.
Deakin Marine Mapping Group

Euan Ritchie, Deakin University and Blake Allan, Deakin University

Understanding Earth’s species and ecosystems is a monumentally challenging scientific pursuit. But with the planet in the grip of its sixth mass extinction event, it has never been a more pressing priority.

To unlock nature’s secrets, ecologists turn to a variety of scientific instruments and tools. Sometimes we even repurpose household items, with eyebrow-raising results – whether it’s using a tea strainer to house ants, or tackling botfly larvae with a well-aimed dab of nail polish.

But there are many more high-tech options becoming available for studying the natural world. In fact, ecology is on the cusp of a revolution, with new and emerging technologies opening up new possibilities for insights into nature and applications for conserving biodiversity.

Our study, published in the journal Ecosphere, tracks the progress of this technological development. Here we highlight a few examples of these exciting advances.

Tiny tracking sensors

Electronically recording the movement of animals was first made possible by VHF radio telemetry in the 1960s. Since then even more species, especially long-distance migratory animals such as caribou, shearwaters and sea turtles, have been tracked with the help of GPS and other satellite data.

But our understanding of what affects animals’ movement and other behaviours, such as hunting, is being advanced further still by the use of “bio-logging” – equipping the animals themselves with miniature sensors.

Bio-logging is giving us new insight into the lives of animals such as mountain lions.

Many types of miniature sensors have now been developed, including accelerometers, gyroscopes, magnetometers, micro cameras, and barometers. Together, these devices make it possible to track animals’ movements with unprecedented precision. We can also now measure the “physiological cost” of behaviours – that is, whether an animal is working particularly hard to reach a destination, or within a particular location, to capture and consume its prey.

Taken further, placing animal movement paths within spatially accurate 3D-rendered (computer-generated) environments will allow ecologists to examine how individuals respond to each other and their surroundings.

These devices could also help us determine whether animals are changing their behaviour in response to threats such as invasive species or habitat modification. In turn, this could tell us what conservation measures might work best.

Autonomous vehicles

Remotely piloted vehicles, including drones, are now a common feature of our skies, land, and water. Beyond their more typical recreational uses, ecologists are deploying autonomous vehicles to measure environments, observe species, and assess changes through time, all with a degree of detail that was never previously possible.

There are many exciting applications of drones in conservation, including surveying cryptic and difficult-to-reach wildlife such as orangutans.

Coupling autonomous vehicles with sensors (such as thermal imaging) now makes it easier to observe rare, hidden or nocturnal species. It also potentially allows us to catch poachers red-handed, which could help to protect animals like rhinoceros, elephants and pangolins.

3D printing

Despite 3D printing having been pioneered in the 1980s, we are only now beginning to realise the potential uses for ecological research. For instance, it can be used to make cheap, lightweight tracking devices that can be fitted onto animals. Or it can be used to create complex and accurate models of plants, animals or other organisms, for use in behavioural studies.

3D printing is shedding new light on animal behaviour, including mate choice.


Bio-batteries

Keeping electronic equipment running in the field can be a challenge. Conventional batteries have limited life spans, and can contain toxic chemicals. Solar power can help with some of these problems, but not in dimly lit areas, such as deep in the heart of rainforests.

“Bio-batteries” may help to overcome this challenge. They convert naturally occurring sources of chemical energy, such as starch, into electricity using enzymes. “Plugging-in” to trees may allow sensors and other field equipment to be powered cheaply for a long time in places without sun or access to mains electricity.

Combining technologies

All of the technologies described above sit on a continuum from previous (now largely mainstream) technological solutions, to new and innovative ones now being trialled.

Illustrative timeline of new technologies in ecology and environmental science. Source and further details at DOI: 10.1002/ecs2.2163.
Euan Ritchie

Emerging technologies are exciting by themselves, but when combined with one another they can revolutionise ecological research. Here is a modified excerpt from our paper:

Imagine research stations fitted with remote cameras and acoustic recorders equipped with low-power computers for image and animal call recognition, powered by trees via bio-batteries. These devices could use low-power, long-range telemetry both to communicate with each other in a network, potentially tracking animal movement from one location to the next, and to transmit information to a central location. Swarms of drones working together could then be deployed to map the landscape and collect data from a central location wirelessly, without landing. The drones could then land in a location with an internet connection and transfer data into cloud-based storage, accessible from anywhere in the world.

Visualisation of a future smart research environment, integrating multiple ecological technologies. The red lines indicate data transfer via the Internet of things (IoT), in which multiple technologies are communicating with one another. The gray lines indicate more traditional data transfer. Broken lines indicate data transferred over long distances. (1) Bio-batteries; (2) The Internet of things (IoT); (3) Swarm theory; (4) Long-range low-power telemetry; (5) Solar power; (6) Low-power computer; (7) Data transfer via satellite; and (8) Bioinformatics. Source and further details at DOI: 10.1002/ecs2.2163.
Euan Ritchie

These advancements will not only generate more accurate research data, but should also minimise the disturbance to species and ecosystems in the process.

Not only will this minimise the stress to animals and the inadvertent spread of diseases, but it should also provide a more “natural” picture of how plants, animals and other organisms interact.

Read more:
‘Epic Duck Challenge’ shows drones can outdo people at surveying wildlife

Realising the techno-ecological revolution will require better collaboration across disciplines and industries. Ecologists should ideally also be exposed to relevant technology-based training (such as engineering or IT) and industry placements early in their careers.

Several initiatives, such as Wildlabs, the Conservation Technology Working Group and TechnEcology, are already addressing these needs. But we are only just at the start of what’s ultimately possible.

Euan Ritchie, Associate Professor in Wildlife Ecology and Conservation, Centre for Integrative Ecology, School of Life & Environmental Sciences, Deakin University and Blake Allan, Deakin University

This article was originally published on The Conversation. Read the original article.

Charging ahead: how Australia is innovating in battery technology

Since sodium is abundant, battery technology that uses it side-steps many of the issues associated with lithium batteries.
Paul Jones/UOW, Author provided

Jonathan Knott, University of Wollongong

Lithium-ion remains the most widespread battery technology in use today, thanks to the fact that products that use it are both portable and rechargeable. It powers everything from your smartphone to the “world’s biggest battery” in South Australia.

Demand for batteries is expected to accelerate in coming decades with the increase in deployment of electric vehicles and the need to store energy generated from renewable sources, such as solar photovoltaic panels. But rising concerns about mining practices and shortages in raw materials for lithium-ion batteries – as well as safety issues – have led to a search for alternative technologies.

Many of these technologies aren’t being developed to replace lithium-ion batteries in portable devices; rather, they aim to take the pressure off by providing alternatives for large-scale, stationary energy storage.

Australian companies and universities are leading the way in developing innovative solutions, but the path to commercial success has its challenges.

Read more:
A month in, Tesla’s SA battery is surpassing expectations

Australian alternatives

Flow batteries

In flow batteries the cathode and anode are liquids, rather than solid as in other batteries. The advantage of this is that the stored energy is directly related to the amount of liquid. That means if more energy is needed, bigger tanks can be easily fitted to the system. Also, flow batteries can be completely discharged without damage – a major advantage over other technologies.
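The volume-to-energy relationship described above is linear, which is the whole appeal. A toy calculation makes the scaling concrete; note the energy-density figure below is a rough assumed value for illustration, not one from the article or from Redflow:

```python
# Flow-battery capacity scales with electrolyte volume: doubling the
# tanks doubles the stored energy, with no change to the cell stack.
ASSUMED_ENERGY_DENSITY_WH_PER_L = 70  # illustrative figure only

def stored_energy_kwh(tank_litres: float) -> float:
    """Stored energy in kWh for a given electrolyte tank volume."""
    return tank_litres * ASSUMED_ENERGY_DENSITY_WH_PER_L / 1000

print(stored_energy_kwh(100))  # 100 L of electrolyte
print(stored_energy_kwh(200))  # double the tank, double the energy
```

Contrast this with a conventional sealed battery, where adding capacity means adding whole cells, electrodes and all.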

ASX-listed battery technology company Redflow has been developing zinc-bromine flow batteries for residential and commercial energy storage. Meanwhile, VSUN Energy is developing a vanadium-based flow battery for large-scale energy storage systems.

Flow batteries have been receiving considerable attention and investment due to their inherent technical and safety advantages. A recent survey of 500 energy professionals saw 46% of respondents predict flow battery technology will soon become the dominant utility-scale battery energy storage method.

Redflow ZBM2 zinc-bromine flow battery cell.
from Redflow


Lead-acid batteries

Lead-acid batteries were invented in 1859 and have been the backbone of energy storage applications ever since. One major disadvantage of traditional lead-acid batteries is that the faster they are discharged, the less energy they can supply. Additionally, the lifetime of lead-acid batteries decreases significantly the more deeply they are discharged.

Energy storage company Ecoult has been formed around CSIRO-developed Ultrabattery technology – the combination of a lead-acid battery and a carbon ultracapacitor. One key advantage of this technology is that it is highly sustainable – essentially all components in the battery are recyclable. Ultrabatteries also address the issue of rate-dependent energy capacity, taking advantage of the ultracapacitor characteristics to allow high discharge (and charge) rates.

These batteries are showing excellent performance in grid-scale applications. Ecoult has also recently received funding to expand to South Asia and beyond.

Ecoult Ultrabatteries photographed during installation on site.

Read more:
Politically charged: do you know where your batteries come from?

Repurposed storage solutions

Rechargeable batteries are considered to have reached their “end of life” when they can only be charged to 80% of their initial capacity. This makes sense for portable applications – a Tesla Model S would have a range of 341 km compared to the original 426 km. However, these batteries can still be used where reduced capacity is acceptable.
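The 80% retirement threshold quoted above can be checked directly against the Model S figures in the text:

```python
END_OF_LIFE_FRACTION = 0.8  # batteries deemed "end of life" at 80% capacity

def end_of_life_range(original_range_km: float) -> float:
    """Driving range once the battery has degraded to end-of-life capacity."""
    return original_range_km * END_OF_LIFE_FRACTION

print(round(end_of_life_range(426)))  # ~341 km, matching the article
```

A pack that's no longer acceptable in a car can still be entirely adequate for stationary storage, where weight and range don't matter.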

Startup Relectrify has developed a battery management system that allows end-of-life electric vehicle batteries to be used in residential energy storage. This offers a solution to mounting concerns about the disposal of lithium-ion batteries – reportedly, less than 5% of lithium-ion batteries in Europe are recycled. Relectrify has recently secured an A$1.5 million investment.

Relectrify’s smart battery management system.
from Relectrify

Thermal energy storage

Energy can be stored in many forms – including as electrochemical, gravitational, and thermal energy. Thermal energy storage can be a highly efficient process, particularly when the sun is the energy source.

Renewable energy technology company Vast Solar has developed a thermal energy storage solution based on concentrated solar power (CSP). This technology gained attention in Australia with the announcement of the world’s largest CSP facility to be built in Port Augusta. CSP combines both energy generation and storage technologies to provide a complete and efficient solution.

1414 degrees is developing a technology for large-scale applications that stores energy as heat in molten silicon. This technology has the potential to deliver very high energy densities and efficiencies in applications where both heat and electricity are required, such as manufacturing facilities and shopping centres.

Research and development

Sodium-ion batteries

At the University of Wollongong I’m part of the team heading the Smart Sodium Storage Solution (S4) Project. It’s an A$10.5 million project to develop sodium-ion batteries for renewable energy storage. This ARENA-funded project builds upon previous research undertaken at the University of Wollongong and involves three key battery manufacturing companies in China.


We’ve selected the sodium-ion chemistry for the S4 project because it sidesteps many of the raw materials issues associated with lithium-ion batteries. One of the main materials we use to manufacture our batteries is sodium chloride – better known as “table salt” – which is not only abundant, but also cheap.

We’ll be demonstrating the sodium-ion batteries in a residential application at University of Wollongong’s Illawarra Flame House and in an industrial application at Sydney Water’s Bondi Sewage Pumping Station.

Sydney’s iconic Bondi Beach – the location for the demonstration of sodium-ion batteries.
Paul Jones/UOW

Gel-based zinc-bromine batteries

Gelion, a spin-off company from the University of Sydney, is developing gel-based zinc-bromine batteries – similar to the Redflow battery technology. They are designed for use in residential and commercial applications.

The Gelion technology is claimed to have performance comparable with lithium-ion batteries, and the company has attracted significant funding to develop its product. Gelion is still in the early stages of commercialisation; however, plans are in place for large-scale manufacturing by 2019.

Challenges facing alternatives

While this paints a picture of a vibrant landscape of exciting new technologies, the path to commercialisation is challenging.

Not only does the product have to be designed and developed, but so do the manufacturing process, production facility and entire supply chain – all of which can complicate bringing a product to market. Lithium-ion batteries have a 25-year head start in these areas. Combine that with consumer familiarity with lithium-ion, and it’s difficult for alternative technologies to gain traction.

One way of mitigating these issues is to piggyback on established manufacturing and supply chain processes. That’s what we’re doing with the S4 Project: leveraging the manufacturing processes and production techniques developed for lithium-ion batteries to produce sodium-ion batteries. Similarly, Ecoult is drawing upon decades of lead-acid battery manufacturing expertise to produce its Ultrabattery product.

Read more:
How to make batteries that last (almost) forever

Some challenges, however, are intrinsic to the particular technology.

For example, Relectrify has no control over the quality or history of the cells it uses for its energy storage, making it difficult to produce a consistent product. Likewise, 1414 degrees faces engineering challenges in working with very high temperatures.

Forecasts by academics, government officials, investors and tech billionaires all point to an explosion in the future demand for energy storage. While lithium-ion batteries will continue to play a large part, it is likely these innovative Australian technologies will become critical in ensuring energy demands are met.

Jonathan Knott, Associate Research Fellow in Battery R&D, University of Wollongong

This article was originally published on The Conversation. Read the original article.

All hail new weather radar technology, which can spot hailstones lurking in thunderstorms

Joshua Soderholm, The University of Queensland; Alain Protat, Australian Bureau of Meteorology; Hamish McGowan, The University of Queensland; Harald Richter, Australian Bureau of Meteorology, and Matthew Mason, The University of Queensland

An Australian spring wouldn’t be complete without thunderstorms and a visit to the Australian Bureau of Meteorology’s weather radar website. But a new type of radar technology is aiming to make weather radar even more useful, by helping to identify those storms that are packing hailstones.

Most storms just bring rain, lightning and thunder. But others can produce hazards including destructive flash flooding, winds, large hail, and even the occasional tornado. For these potentially dangerous storms, the Bureau issues severe thunderstorm warnings.

For metropolitan regions, warnings identify severe storm cells and their likely path and hazards. They provide a predictive “nowcast”: forecasts up to three hours ahead of impact for suburbs in harm’s way.

Read more: To understand how storms batter Australia, we need a fresh deluge of data

When monitoring thunderstorms, weather radar is the primary tool for forecasters. Weather radar scans the atmosphere at multiple levels, building a 3D picture of thunderstorms, with a 2D version shown on the bureau’s website.

This is particularly important for hail, which forms several kilometres above ground in towering storms where temperatures are well below freezing.

Bureau of Meteorology 60-minute nowcast showing location and projected track of severe thunderstorms in 10-minute steps.
Australian Bureau of Meteorology

Hailstorms have caused more insured losses than any other type of severe weather event in Australia. Brisbane’s November 2014 hailstorms cost an estimated A$1.41 billion, while Sydney’s April 1999 hailstorm, at A$4.3 billion, remains the nation’s most costly natural disaster.

Breaking the ice

Despite their impact, accurately detecting and estimating hail size from weather radar remains a challenge for scientists. This challenge stems from the diversity of hail: hailstones can be large or small, densely or sparsely distributed, mixed with rain, or any combination of the above.

Conventional radars measure the scattering of a radar beam as it passes through precipitation. However, a few large hailstones can return the same signal as lots of small ones, making it hard to determine hailstone size.

A new type of radar technology called “dual-polarisation” or “dual-pol” can solve this problem. Rather than using a single radar beam, dual-pol uses two simultaneous beams aligned horizontally and vertically. When these beams scatter off precipitation, they provide relative measures of horizontal and vertical size.

An observer can therefore tell the difference between the flattened shapes of falling raindrops and the rounder shapes of hailstones. Dual-pol can also more accurately measure the size and density of rain droplets, and determine whether precipitation is a rain-hail mixture or just rain.

Together, these capabilities mean that dual-pol is a game-changer for hail detection, size estimation and nowcasting.
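To make the idea concrete, the horizontal/vertical comparison is usually expressed as “differential reflectivity” (ZDR), the ratio of the two returns in decibels. The sketch below is a toy illustration only: the thresholds are invented for the example and are not the Bureau’s operational algorithm.

```python
import math

def differential_reflectivity(z_h, z_v):
    """Differential reflectivity (ZDR, in dB) from the horizontal and
    vertical linear reflectivity returns."""
    return 10.0 * math.log10(z_h / z_v)

def classify_echo(reflectivity_dbz, zdr_db):
    """Toy classifier: large flattened raindrops return more power
    horizontally (high ZDR), while tumbling hailstones look roughly
    round on average (ZDR near zero) even at strong reflectivity."""
    if reflectivity_dbz > 50 and zdr_db < 0.5:
        return "hail likely"
    if zdr_db > 1.0:
        return "rain"
    return "uncertain / mixed"

print(classify_echo(55, 0.2))  # strong echo, near-round targets: "hail likely"
print(classify_echo(40, 2.0))  # oblate targets: "rain"
```

A spherical target returns equal power in both polarisations, so its ZDR is zero; the further ZDR rises above zero, the flatter (on average) the targets in the radar volume.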

Into the eye of the storm

Dual-pol information is now streaming from the recently upgraded operational radars in Adelaide, Melbourne, Sydney and Brisbane. It allows forecasters to detect hail earlier and with more confidence.

However, more work is needed to accurately estimate hail size using dual-pol. The ideal place for such research is undoubtedly southeast Queensland, the hail capital of the east coast.

When it comes to thunderstorm hazards, nothing is closer to reality than scientific observations from within the storm. In the past, this approach was considered too costly, risky and demanding. Instead, researchers resorted to models or historical reports.

The Atmospheric Observations Research Group at the University of Queensland (UQ) has developed a unique capacity in Australia to deploy mobile weather instrumentation for severe weather research. In partnership with the UQ Wind Research Laboratory, Guy Carpenter and staff in the Bureau of Meteorology’s Brisbane office, the Storms Hazards Testbed has been established to advance the nowcasting of hail and wind hazards.

Over the next two to three years, the testbed will take a mobile weather radar, meteorological balloons, wind measurement towers and hail size sensors into and around severe thunderstorms. Data from these instruments provide high-resolution case studies and ground-truth verification data for hazards observed by the Bureau’s dual-pol radar.

Since the start of October, we have intercepted and sampled five hailstorms. If you see a convoy of UQ vehicles heading for ominous dark clouds, head in the opposite direction and follow us on Facebook instead.

UQ mobile radar deployed for thunderstorm monitoring.
Kathryn Turner

Unfortunately, the UQ storm-chasing team can’t get to every severe thunderstorm, so we need your help! The project needs citizen scientists in southeast Queensland to report hail through #UQhail. Keep a ruler or object for scale (coins are great) handy and, when a hailstorm has safely passed, measure the largest hailstone.

Submit reports via email, Facebook or Twitter. We greatly appreciate photos with a ruler or reference object, and the approximate location of the hail.

How to report for #UQhail.

Combining measurements, hail reports and the Bureau of Meteorology’s dual-pol weather radar data, we are working towards developing algorithms that will allow hail to be forecast more accurately. This will provide greater confidence in warnings and those vital extra few minutes when cars can be moved out of harm’s way, reducing the impact of storms.

Read more: Tropical thunderstorms are set to grow stronger as the world warms

Advanced techniques developed from storm-chasing and citizen science data will be applied across the Australian dual-pol radar network in Sydney, Melbourne and Adelaide.

The ConversationWho knows, in the future if the Bureau’s weather radar shows a thunderstorm heading your way, your reports might even have helped to develop that forecast.

Joshua Soderholm, Research scientist, The University of Queensland; Alain Protat, Principal Research Scientist, Australian Bureau of Meteorology; Hamish McGowan, Professor, The University of Queensland; Harald Richter, Senior Research Scientist, Australian Bureau of Meteorology, and Matthew Mason, Lecturer in Civil Engineering, The University of Queensland

This article was originally published on The Conversation. Read the original article.

New technology offers hope for storing carbon dioxide underground

Dom Wolff-Boenisch, Curtin University

To halt climate change and prevent dangerous warming, we ultimately have to stop pumping greenhouse gases into the atmosphere. While the world is making slow progress on reducing emissions, there are more radical options, such as removing greenhouse gases from the atmosphere and storing them underground.

In a paper published today in Science, my colleagues and I report on a successful trial converting carbon dioxide (CO₂) to rock and storing it underground in Iceland. Although we trialled only a small amount of CO₂, this method has enormous potential.

Here’s how it works.

Turning CO₂ to rock

Our paper is the culmination of a decade of scientific field and laboratory work known as CarbFix in Iceland, working with a group of international scientists, among them Wallace Broecker, who coined the expression “global warming” in the 1970s. We also worked with the Icelandic geothermal energy company Reykjavik Energy.

The idea of converting CO₂ into carbonate minerals, the basis of limestone, is itself not new. In fact, Earth has been using this conversion technique for aeons to control atmospheric CO₂ levels.

However, until now scientific opinion held that converting CO₂ from a gas to a solid (known as mineralisation) would take thousands, or even tens of thousands, of years, far too slow to be used on an industrial scale.

To settle this question, we prepared a field trial using Reykjavik Energy’s injection and monitoring wells. In 2012, after many years of preparation, we injected 248 tonnes of CO₂ in two separate phases into basalt rocks around 550m underground.

Most CO₂ sequestration projects inject and store “supercritical CO₂”, which is CO₂ gas that has been compressed under pressure to considerably decrease its volume*. However, supercritical CO₂ is buoyant, like a gas, and this approach has thus proved controversial due to the possibility of leaks from the storage reservoir upwards into groundwater and eventually back to the atmosphere.

In fact, some European countries, such as the Netherlands, have stopped their efforts to store supercritical CO₂ on land because of a lack of public acceptance, driven by the fear of possible leaks in the unforeseeable future. Austria went so far as to ban underground storage of carbon dioxide outright.

The injection well with monitoring station in the background.
Dom Wolff-Boenisch, Author provided

Our Icelandic trial worked in a different way. We first dissolved CO₂ in water to create sparkling water. This carbonated water has two advantages over supercritical CO₂ gas.

First, it is acidic, and attacks basalt which is prone to dissolve under acidic conditions.

Second, the CO₂ cannot escape: because it is dissolved, it will not rise to the surface as long as it remains under pressure (the same effect keeps the fizz in a soda can; only when you crack it open is the dissolved CO₂ released back into the air).

Dissolving basalt releases elements such as calcium, magnesium and iron into the pore water. Basaltic rocks are rich in these metals, which team up with the dissolved CO₂ to form solid carbonate minerals.

Through observations and tracer studies at the monitoring well, we found that over 95% of the injected CO₂ (around 235 tonnes) was converted to carbonate minerals in less than two years. While the initial amount of injected CO₂ was small, the Icelandic field trial clearly shows that mineralisation of CO₂ is feasible and, more importantly, fast.
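The mass balance above can be checked with a line of arithmetic, taking 95% as the nominal conversion fraction from the trial:

```python
injected_tonnes = 248.0      # total CO2 injected in the 2012 trial
mineralised_fraction = 0.95  # over 95% converted, per the monitoring data

mineralised_tonnes = injected_tonnes * mineralised_fraction
print(round(mineralised_tonnes))  # 236 tonnes, consistent with the ~235 reported
```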

Storing CO₂ under the oceans

The good news is this technology need not be exclusive to Iceland. Mineralisation of CO₂ requires basaltic or peridotitic rocks because these types of rocks are rich in the metals required to form carbonates and bind the CO₂.

As it turns out, the entire vast ocean floor is made up of kilometre-thick basaltic oceanic crust, as are large areas of the continental margins. There are also vast land areas covered with basalt (so-called large igneous provinces) or peridotite (so-called ophiolitic complexes).

The overall potential storage capacity for CO₂ is much larger than the global CO₂ emissions of many centuries. The mineralisation process removes the crucial problem of buoyancy and the need for permanent monitoring of the injected CO₂ to stop and remedy potential leakage to the surface, an issue that supercritical CO₂ injection sites will face for centuries or even millennia to come.

On the downside, CO₂ mineralisation with carbonated water requires substantial amounts of water, meaning that this mineralisation technique can only succeed where vast supplies of water are available.

However, there is no shortage of seawater on the ocean floor or continental margins. Rather, the costs involved present a major hurdle to this kind of permanent storage option, for the time being at least.

In the case of our trial, a tonne of mineralised CO₂ via carbonated water cost about US$17, roughly twice that of using supercritical CO₂ for storage.

It means that as long as there are no financial incentives such as a carbon tax or higher price on carbon emissions, there is no real driving force for carbon storage, irrespective of the technique we use.

*Correction: The sentence has been corrected to note that gas volume rather than density decreases when it is compressed. Thankyou to the readers who pointed out the error.

The Conversation

Dom Wolff-Boenisch, Senior Lecturer, Western Australian School of Mines, Curtin University

This article was originally published on The Conversation. Read the original article.

Combating Illegal Logging

The link below is to an article that looks at how technology is combating illegal logging.

For more visit: