Genome and satellite technology reveal recovery rates and impacts of climate change on southern right whales



University of Auckland tohorā research team, Department of Conservation permit DJI

Emma Carroll

After close to a decade of globe-spanning effort, the genome of the southern right whale has been released this week, giving us deeper insights into the histories and recovery of whale populations across the southern hemisphere.

Up to 150,000 southern right whales were killed between 1790 and 1980. This whaling drove the global population from perhaps 100,000 to as few as 500 whales in 1920. A century on, we estimate there are 12,000 southern right whales globally. It’s a remarkable conservation success story, but one facing new challenges.

A southern right whale calf breaches in the subantarctic Auckland Islands.
University of Auckland tohorā research team, Author provided

The genome represents a record of the different impacts a species has faced. With statistical models we can use genomic information to reconstruct historical population trajectories and patterns of how species interacted and diverged.

We can then link that information with historical habitat and climate patterns. This look back into the past provides insights into how species might respond to future changes. Work on penguins and polar bears has already shown this.

But we also have a new and surprising short-term perspective on the population of whales breeding in the subantarctic Auckland Islands group — Maungahuka, 450km south of New Zealand.

Spying on whales via satellite

Known as tohorā in New Zealand, southern right whales once wintered in the bays and inlets of the North and South Islands of Aotearoa, where they gave birth and socialised. Today, the main nursery ground for this population is Port Ross, in the subantarctic Auckland Islands.

Adult whales socialise at both the Auckland and Campbell Islands during the austral winter. Together these subantarctic islands are internationally recognised as an important marine mammal area.

In August 2020, I led a University of Auckland and Cawthron Institute expedition to the Auckland Islands. We collected small skin samples for genetic and chemical analysis and placed satellite tags on six tohorā. These tags allowed us to follow their migrations to offshore feeding grounds.

It matters where tohorā feed and how their populations recover from whaling because the species is recognised as a sentinel for climate change throughout the Southern Hemisphere. They are what we describe as “capital” breeders — they fast during the breeding season in wintering grounds like the Auckland Islands, living off fat reserves gained in offshore feeding grounds.

Females need a lot in the “bank” because their calves need a lot of energy. At 4-5m at birth, these calves can grow up to a metre a month. This investment costs the mother 25% of her size over the first few months of her calf’s life. It’s no surprise that calf growth depends on the mother being in good condition.




Read more:
I measure whales with drones to find out if they’re fat enough to breed


Females can only breed again once they’ve regained their fat capital. Studies in the South Atlantic show that wintering grounds in Brazil and Argentina produce more calves when prey is more abundant, or when environmental conditions suggest it should be.

The first step in understanding the relationship between recovery and prey in New Zealand is to identify where and on what tohorā feed. The potential feeding areas for our New Zealand population could cover roughly a third of the Southern Ocean. That’s why we turn to technologies like satellite tags to help us understand where the whales are going and how they get there.

Where tohorā go

So far, all tracked whales have migrated west, away from the historical whaling grounds to the east near the Chatham Islands. As they left the Auckland Islands, two whales visited other oceanic islands — skirting around Macquarie Island and visiting Campbell Island.

It also seems one whale (Bill or Wiremu, identified as male using genetic analysis of his skin sample) may have reached his feeding grounds, likely at the subtropical convergence. The clue is in the pattern of his tracks: rather than the continuous straight line of a whale migrating, it shows the doughnuts of a whale that has found a prey patch.

Migratory track of southern right whale Bill/Wiremu, where the convoluted track could indicate foraging behaviour.

The subtropical convergence is an area of the ocean where temperature and salinity can change rapidly, and this can aggregate whale prey. Two whales we tracked offshore from the Auckland Islands in 2009 visited the subtropical convergence, but hundreds of kilometres to the east of Bill’s current location.
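One common way to separate the “doughnuts” of a foraging whale from the straight line of a migrating one is to measure track straightness: the straight-line distance between the first and last satellite fixes in a stretch of track, divided by the distance actually travelled. Values near one suggest directed travel; values near zero suggest the whale is circling within a prey patch. The Python sketch below illustrates the idea with made-up coordinates; it is a simplified illustration, not the tracking team’s actual analysis.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def straightness(track):
    """Net displacement divided by total path length for a list of (lat, lon) fixes.

    Values near 1 indicate directed travel (migration); values near 0 indicate
    convoluted movement, such as circling within a prey patch.
    """
    if len(track) < 2:
        return 1.0
    path = sum(haversine_km(*track[i], *track[i + 1]) for i in range(len(track) - 1))
    net = haversine_km(*track[0], *track[-1])
    return net / path if path > 0 else 1.0

# Hypothetical satellite fixes (lat, lon): a straight run, then tight loops.
migrating = [(-50.5, 166.0), (-50.6, 165.0), (-50.7, 164.0), (-50.8, 163.0)]
foraging = [(-51.0, 160.0), (-51.05, 160.1), (-51.0, 160.2), (-50.95, 160.1), (-51.0, 160.0)]

print(f"migrating straightness: {straightness(migrating):.2f}")  # close to 1
print(f"foraging straightness:  {straightness(foraging):.2f}")   # close to 0
```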

As Bill and his compatriots migrate, we’ve begun analysing data that will tell us about the recovery of tohorā in the past decade. The most recent population size estimate we have is from 2009, when there were about 2,000 whales.




Read more:
Humans threaten the Antarctic Peninsula’s fragile ecosystem. A marine protected area is long overdue


I am using genomic markers to learn about the kin relationships of the whales and, in doing so, the population’s size and growth rate. Think of it like this: everybody has two parents, and in a small population, say a small town, you are more likely to find those parents than in a big population, say a city.

This nifty statistical trick is known as the “close kin” approach to estimating population size. It relies on detailed understanding of the kin relationships of the whales — something we have only really been able to do recently using new genomic sequencing technology.
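To make the intuition concrete: every calf has exactly two parents, so the chance that a randomly sampled adult is a parent of a randomly sampled calf is roughly two divided by the number of breeding adults. Counting parent-offspring pairs among genotyped whales therefore lets you work backwards to population size. The toy calculation below sketches that logic; the sample sizes and pair count are hypothetical, chosen only so the estimate lands near the 2009 figure of about 2,000 whales, and are not the study’s data.

```python
def close_kin_abundance(n_adults_sampled, n_offspring_sampled, parent_offspring_pairs):
    """Naive close-kin mark-recapture estimate of adult population size.

    Each offspring has two parents, so the chance that a randomly sampled adult
    is a parent of a randomly sampled offspring is roughly 2 / N_adults.
    The expected number of parent-offspring pairs among all sampled
    adult-offspring comparisons is then 2 * n_a * n_o / N, which rearranges to:
        N_hat = 2 * n_a * n_o / observed_pairs
    """
    if parent_offspring_pairs == 0:
        raise ValueError("no parent-offspring pairs found; cannot estimate N")
    return 2 * n_adults_sampled * n_offspring_sampled / parent_offspring_pairs

# Hypothetical numbers for illustration only (not the study's data):
# 300 adults and 200 calves genotyped, 60 parent-offspring pairs detected.
print(f"Estimated adults: {close_kin_abundance(300, 200, 60):.0f}")  # 2000
```

Real close-kin analyses are considerably more involved, accounting for mortality, breeding intervals and half-sibling pairs, but the core idea is this inverse relationship between population size and the chance of sampling close relatives.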

Global effort to understand climate change impacts

Globally, southern right whales in South Africa and Argentina have bred less often over the past decade, leading to a lower population growth rate in Argentina.

Concern over this slowdown in recovery has prompted researchers from around the world to work together to understand the relationship between climate change, foraging ecology and recovery of southern right whales as part of the International Whaling Commission Southern Ocean Research Partnership.

The genome helps by giving us that long view of how the whales responded to climate fluctuations in the past, while satellite tracking gives us the short view of how they are responding on a day-to-day basis. Both will help us understand the future of these amazing creatures.

Emma Carroll, Rutherford Discovery Fellow

This article is republished from The Conversation under a Creative Commons license. Read the original article.

‘A dose of reality’: Morrison government’s new $1.9 billion techno-fix for climate change is a small step



Dean Lewins/AAP

Frank Jotzo, Australian National University

The Morrison government today announced A$1.9 billion over ten years to develop clean technology in industry, agriculture and transport. In some ways it’s a step in the right direction, but a far cry from what’s needed to drive Australia’s shift to a low emissions economy.

The big change involves what the money is for. The new funding will enable the Australian Renewable Energy Agency (ARENA) to support technologies such as green steel production, industrial processes to reduce energy consumption and, somewhat controversially, carbon capture and storage and soil-carbon sequestration.

This is a big move away from ARENA’s current investment priorities. Importantly, it means ARENA will continue to operate, as it is now running out of money.

However, technology development alone is not enough to cut Australia’s emissions deeply and quickly – which is what’s needed to address the climate threat. Other policies and more money will be needed.

Interior of steelworks
Cutting emissions from industry will be a focus of the new spending.
Dean Lewins/AAP

New role for ARENA

ARENA will receive the lion’s share of the money: A$1.4 billion over ten years in guaranteed baseline funding. ARENA has spent A$1.6 billion since it was established in 2012. So the new funding is lower on an annual basis. It’s also far less than what’s needed to properly meet the challenge, in a country with a large industrial sector and huge opportunities for zero carbon production.

To date, ARENA’s investments have focused on renewable energy supply. Prime Minister Scott Morrison today said the renewables industry was enjoying a “world-leading boom” and no longer needed government subsidies. Critics may be dismayed to see ARENA steered away from its original purpose. But it is true that solar parks and wind farms are now commercially viable, and technologies to integrate large amounts of renewables into the grid are available.

So it makes sense to spend new research and development (R&D) funding on the next generation of low-emissions technologies. But how to choose what to spend the money on?

A few simple principles should inform those choices. The spending should help develop new zero- or low-emissions technologies or make them cheaper. It should also enable the shift to a net-zero emissions future, rather than locking in structures that continue to emit. The investment choices should be made by independent bodies such as ARENA’s board, based on research and expert judgement, rather than politically determined priorities.

For the industrial sector, the case for supporting zero-emissions technologies is clear. A sizeable share of Australia’s total emissions stem from fossil fuel use in industry.




Read more:
Government targets emerging technologies with $1.9 billion, saying renewables can stand on own feet


In some cases, government-supported R&D could help lay the foundation for zero-emissions industries of the future. But in others, what’s needed is a financial incentive for businesses to switch to clean energy or zero-emissions production methods, or regulation to require cleaner processes.

Green steel is a perfect example of the positive change that is possible. Steel can be made using clean hydrogen and renewable electricity, and the long term possibility of a green steel industry in Australia is tantalising.

Steel being made
Steel could be made cleanly using hydrogen instead of coking coal.
Dean Lewins/AAP

A future for fossil fuels?

The government’s support for carbon capture and storage (CCS) will be highly contested, because it’s a way to continue using fossil fuels at reduced – though not zero – emissions. This is achieved by capturing carbon dioxide before it enters the atmosphere and storing it underground, a technically feasible but costly process.

CCS will not perpetuate fossil fuel use in the energy sector, because renewables combined with energy storage are now much cheaper. Rather, CCS can be an option in specific processes that do not have ready alternatives, such as the production of cement, chemicals and fertiliser.

One step further is so-called “carbon capture and use” (CCU), where carbon dioxide is not pumped underground but turned into products, such as building materials. One program announced is for pilot projects of that kind.




Read more:
Yes, carbon emissions fell during COVID-19. But it’s the shift away from coal that really matters


A different proposition is the idea of hydrogen produced from coal or gas, in which some resulting emissions are captured. This method competes with “green” hydrogen produced using renewable electricity. It seems the government for now intends to support fossil fuel-derived hydrogen.

Reducing fossil fuel use, and using CCS/CCU where it makes sense, will not get the world to net-zero emissions. Emissions from other sources must be cut by as much as technically possible, at justifiable cost. Remaining emissions must then be negated by drawing carbon dioxide from the atmosphere. Such “negative emissions” can be achieved through technological means, and also by permanently increasing the amount of carbon stored in plants and soil.

The new funding includes support for increasing the amount of soil carbon. This method may hold promise in principle, but in practice its effectiveness is uncertain, and hard to measure. At the same time, the large emissions from agriculture are not yet addressed.

Gas flaring from an industrial plant
Reducing the burning of fossil fuels is not enough to get to net-zero emissions.
Matt Black Productions

A piecemeal effort

The spending amounts to A$140 million per year for ARENA, plus about A$500 million all up through other programs. A dose of reality is needed about what this money can achieve. It will create better understanding of options, some technological progress across the board and surely the occasional highlight. But a much greater effort is likely needed to achieve fundamental technological breakthroughs. And crucially, new technologies must be widely deployed.

For a sense of scale, consider that the Snowy 2.0 scheme is costed at around A$5 billion, and a single 1 gigawatt gas power plant, as mooted by the government for the Hunter Valley, would cost in the order of A$1.5 billion to build.

As well as additional spending, policies will be needed to drive the uptake of low-emissions technologies. The shift to renewables is now happening in the energy sector without government help, though some hurdles remain. But we cannot expect the same across the economy.

Governments will need to help drive uptake through policy. The most efficient way is usually to ensure producers of emissions pay for the environmental damage caused. In other words, putting a price on carbon.

The funding announced today is merely one piece of a national long-term strategy to deeply cut emissions – and not a particularly big piece.




Read more:
Carbon pricing works: the largest-ever study puts it beyond doubt




Frank Jotzo, Director, Centre for Climate and Energy Policy, Australian National University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Government targets emerging technologies with $1.9 billion, saying renewables can stand on own feet


Michelle Grattan, University of Canberra

The government has unveiled a $1.9 billion package of investments in new and emerging energy and emission-reducing technologies, and reinforced its message that it is time to move on from assisting now commercially-viable renewables.

The package will be controversial, given its planned broadening of the remit of the government’s clean energy investment vehicles, currently focused on renewables, and the attention given to carbon capture and storage, which has many critics.

The latest announcement follows the “gas-fired recovery” energy plan earlier this week, which included the threat the government would build its own gas-fired power station if the electricity sector failed to fill the gap left by the scheduled closure of the coal-fired Liddell power plant in 2023.




Read more:
Morrison government threatens to use Snowy Hydro to build gas generator, as it outlines ‘gas-fired recovery’ plan


Unveiling the latest policy, Scott Morrison said solar panels and wind farms were commercially viable “and have graduated from the need for government subsidies”.

The government was now looking to unlock new technologies “to help drive down costs, create jobs, improve reliability and reduce emissions. This will support our traditional industries – manufacturing, agriculture, transport – while positioning our economy for the future.”

An extra $1.62 billion will be provided for the Australian Renewable Energy Agency (ARENA) to invest.

The government will expand the focus of ARENA and the Clean Energy Finance Corporation (CEFC) to back new technologies that would reduce emissions in agriculture, manufacturing, industry and transport.

At present ARENA can only support renewable energy and the CEFC can only invest in clean energy technologies (although it can support some types of gas projects).

The changes to ARENA and the CEFC will need legislation.

The government says it will cut the time taken to develop new Emissions Reduction Fund (ERF) methods from two years or more to under a year, involving industry in a co-design process.

This follows a review of the fund, which is a centrepiece of the Coalition’s emissions reduction policy. The cost of the changes is put at $24.6 million. The fund has had trouble attracting proposals from some sectors because of its complex administrative requirements.

Other measures in the policy include a new $95.4 million Technology Co-Investment Fund to support businesses in the agriculture, manufacturing, industrial and transport sectors to take up technologies to boost productivity and reduce emissions.

A $50 million Carbon Capture Use and Storage Development Fund will pilot carbon capture projects. This technology buries carbon but has run into many problems over the years and its opponents point to it being expensive, risky and encouraging rather than discouraging the use of fossil fuels.

Businesses and regional communities will be encouraged to use hydrogen, electric, and bio-fuelled vehicles, supported by a new $74.5 million Future Fuels Fund.

A hydrogen export hub will be set up, with $70.2 million. Chief Scientist Alan Finkel has been a strong advocate for the potential of hydrogen, saying Australia has competitive advantages as a future hydrogen exporter.

Some $67 million will back new microgrids in regional and remote communities to deliver affordable and reliable power.

There will be $52.2 million to increase the energy productivity of homes and businesses. This will include grants for hotel upgrades.

The government says $1.8 billion of the package is new money.


Michelle Grattan, Professorial Fellow, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Andrew Forrest’s high-tech plan to extinguish bushfires within an hour is as challenging as it sounds



Warren Frey/AAP

James Jin Kang, Edith Cowan University

The philanthropic foundation of mining billionaire Andrew “Twiggy” Forrest has unveiled a plan to transform how Australia responds to bushfires.

The Fire Shield project aims to use emerging technologies to rapidly find and extinguish bushfires. The goal is to be able to put out any dangerous blaze within an hour by 2025.

Some of the proposed technology includes drones and aerial surveillance robots, autonomous fire-fighting vehicles and on-the-ground remote sensors. If successful, the plan could alleviate the devastating impact of bushfires Australians face each year.

But while bushfire behaviour is an extensively studied science, it’s not an exact one. Fires are subject to a wide range of variables including local weather conditions, atmospheric pressure and composition, and the geographical layout of an area.

There are also human factors, such as how quickly and effectively front-line workers can respond, as well as the issue of arson.




Read more:
Humans light 85% of bushfires, and we do virtually nothing to stop it


A plan for rapid bushfire detection

The appeal of the Fire Shield plan is in its proposal to use emerging fields of computer science to fight bushfires, especially artificial intelligence (AI) and the Internet of Things (IoT).

While we don’t currently have details on how the Fire Shield plan will be carried out, the use of an IoT bushfire monitoring network seems like the most viable option.

An IoT network is made up of many wirelessly connected devices. Deploying IoT devices with sensors in remote areas could allow the monitoring of changes in soil temperature, air temperature, weather conditions, moisture and humidity, wind speed, wind direction and forest density.

The sensors could also help pinpoint a fire’s location, predict where it will spread and also where it most likely started. This insight would greatly help with the early evacuation of vulnerable communities.

Data collected could be quickly processed and analysed using machine learning. This branch of AI provides intelligent analysis much more quickly than traditional computing or human reckoning.

Water bomber puts out a blaze from the sky.
Water bomber helicopters were used in NSW earlier this year as almost 150 bushfires burnt across the state at one point.
Bianca De Marchi/AAP

A more reliable network

A wireless low power wide area network (LPWAN) would be the best option for implementing the required infrastructure for the proposal. LPWAN uses sensor devices with batteries lasting up to 15 years.

And although an LPWAN only allows limited coverage (10-40km) in rural areas, a network with wider coverage would need batteries that have to be replaced more often — making the entire system less reliable.

If sensors are destroyed by fire, neighbouring sensors can report this back to the server, which can use it to build a sensor “availability and location map”. Tracking which sensors have been destroyed would also help track a bushfire’s movement.
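As a rough sketch of how such an “availability and location map” might be built on the server side (an illustration under assumed details, not the Fire Shield design), the code below flags sensors that have missed their regular check-ins and orders the newly silent ones by when they last reported:

```python
from datetime import datetime, timedelta

# Hypothetical last-heartbeat log from LPWAN gateways: sensor id -> (lat, lon, last_seen)
last_seen = {
    "s01": (-33.70, 150.30, datetime(2020, 12, 1, 14, 0)),
    "s02": (-33.71, 150.32, datetime(2020, 12, 1, 14, 5)),
    "s03": (-33.72, 150.34, datetime(2020, 12, 1, 12, 10)),  # silent for about two hours
    "s04": (-33.73, 150.36, datetime(2020, 12, 1, 12, 20)),  # silent for about two hours
}

def availability_map(readings, now, timeout=timedelta(minutes=30)):
    """Split sensors into 'alive' and 'silent' based on how recently they reported.

    A cluster of newly silent sensors is one clue that fire (or another fault)
    has taken them out, and the order in which they fell silent hints at the
    direction the front is moving.
    """
    alive, silent = {}, {}
    for sensor_id, (lat, lon, seen) in readings.items():
        bucket = alive if now - seen <= timeout else silent
        bucket[sensor_id] = (lat, lon, seen)
    return alive, silent

now = datetime(2020, 12, 1, 14, 10)
alive, silent = availability_map(last_seen, now)

# Sort silent sensors by when they last reported to approximate the fire's path.
for sensor_id, (lat, lon, seen) in sorted(silent.items(), key=lambda kv: kv[1][2]):
    print(f"{sensor_id} last reported at {seen:%H:%M} near ({lat}, {lon})")
```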

Dealing with logistics

While it’s possible, the practicalities of deploying sensors for a remote bushfire monitoring network make the plan hugely challenging. The areas to cover would be vast, with varying terrain and environmental conditions.

Sensor devices could potentially be deployed by aircraft across a region. On-ground distribution by people would be another option, but a more expensive one.

However, the latter option would have to be used to distribute larger gateway devices. These act as the bridge between the other sensors on the ground and the server in the cloud hosting the data.

Gateway devices have more hardware and need to be set up by a person when first installed. They play a key role in LPWAN networks and must be placed carefully. After being placed, IoT devices require regular monitoring and calibration to ensure the information being relayed to the server is accurate.

Weather and environmental factors (such as storms or floods) have the potential to destroy the sensors. There’s also the risk of human interference, as well as legal considerations around deploying sensors on privately owned land.




Read more:
There’s only one way to make bushfires less powerful: take out the stuff that burns


Unpredictable interruptions

While statisticians can provide insight into the likelihood of a bushfire starting at a particular location, bushfires remain inherently hard to predict.

Any sensor network will be hampered by unpredictable environmental conditions and technological issues such as interrupted network signals. And such disruptions could lead to delays in important information reaching authorities.

Potential solutions for this include using satellite services in conjunction with an LPWAN network, or balloon networks (such as Google’s project Loon) which can provide better internet connectivity in remote areas.

But even once the sensors can be used to identify and track bushfires, putting a blaze out is another challenge entirely. The Fire Shield plan’s vision “to detect, monitor and extinguish dangerous blazes within an hour anywhere in Australia” will face challenges on several fronts.

It may be relatively simple to predict hurdles in getting the technology set up. But once a bushfire is detected, it’s less clear what course of action could possibly extinguish it within the hour. In some very remote areas, aerial firefighting (such as with water bombers) may be the only option.

That raises the next question: how can we have enough aircraft and controllers ready to be dispatched to a remote place at a moment’s notice? Considering the logistics, it won’t be easy.

James Jin Kang, Lecturer, Computing and Security, Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

A pretty good start but room for improvement: 3 experts rate Australia’s emissions technology plan



James Gourley/AAP

Jake Whitehead, The University of Queensland; Chris Greig and Simon Smart, The University of Queensland

Energy Minister Angus Taylor yesterday released his government’s emissions reduction technology plan, setting out priorities for meeting Australia’s climate targets while growing the economy.

The long-awaited Technology Investment Roadmap examined more than 140 technologies for potential investment between now and 2050. They include electric vehicles, biofuels, batteries, hydrogen, nuclear and carbon capture and storage.




Read more:
Morrison government dangles new carrots for industry but fails to fix bigger climate policy problem


The discussion paper builds on the need for a post-pandemic recovery plan. It sets a positive tone, and highlights Australia’s enormous opportunities to support investment in low-emission technologies, while increasing prosperity.

But it’s not clear whether the government grasps the sheer scale of infrastructure and behaviour change required to meet our climate goals – nor the urgency of the task.

So let’s take a closer look at where the report hits the mark, and where there’s room for improvement.

The University of Queensland’s 78 megawatt solar farm at Warwick.
Author provided

Positive signs

The paper gives a reasonably comprehensive overview of new and emerging technologies, and builds on a significant body of prior work and investment. This includes the CSIRO’s Low Emissions Technology Roadmap and ARENA’s Commercial Readiness Index.

Crucially, the paper recognises the need for government funding to help share the financial risks of deploying technologies in their early stages. It also acknowledges the need for partnerships between government, industry and research institutions to drive innovation.

Encouragingly, the paper recognises Australia’s responsibility to support our neighbours across the Indo-Pacific, to help reduce international emissions.




Read more:
Coronavirus is a ‘sliding doors’ moment. What we do now could change Earth’s trajectory


The paper is a “living” document, designed to be updated in response to future developments in technology, domestic demand, international markets and so on. Progress will be reported through annual “low emissions technology statements”, and the roadmap can be adjusted as certain technologies flourish and others fail.

This process recognises the considerable uncertainties around the performance and costs of future technologies. It will allow ongoing assessment of where future technologies should be deployed to ultimately deliver the greatest emission reduction benefit.

The paper considers the role of both coal and natural gas in Australia’s transition to net-zero emissions. We don’t object to the inclusion of these energy sources, as long as they’re decarbonised, for example using carbon capture and storage or verifiable carbon offsets.

Coal and gas should be decarbonised if they are part of our energy future.
Julian Smith/AAP

Room for improvement

The paper’s emphasis on technology and investment is clear. But what’s less clear is an appreciation of the sheer scale of change needed to support a low- or net-zero emissions future.

The roadmap would benefit from an assessment of the scale of investment and infrastructure needed to meet the long-term emissions goals of the Paris Agreement. This will require nations including Australia to reduce economy-wide emissions to net-zero.

We believe the lack of clarity around mid-century (and intermediate) emissions targets is a significant gap in the roadmap. It obscures the scale and pace of technological change required across all sectors, and has already prompted criticism.

The energy transition must start as soon as possible. It will involve unprecedented levels of behaviour change, infrastructure investment and technology deployment, which must be maintained over several decades.

The deployment of new technologies affects communities and natural landscapes. The paper touches on these issues, such as the use of water resources to produce renewable hydrogen.

But it does not sufficiently emphasise the need to consult a broad range of stakeholders, such as community, environment and business groups. This should happen before investment begins, and throughout the transition.

The paper also omits notable low-emission technologies already deployed in Australia. This includes zero-emission electric heavy vehicles such as buses, trackless trams and trucks. Future consultation on the paper will help fill these gaps.

The Brisbane Metro project involves electric buses.

Planning for an uncertain future

The roadmap process should explore the various technology pathways that could plausibly emerge between now and 2050, depending on how technologies progress and costs evolve, levels of public acceptance, and the nature of policies adopted.

The process should also seek to identify and deal with industrial, regulatory and social bottlenecks or constraints that might slow down technological efforts to decarbonise our economy, and those of our trading partners.




Read more:
Wrong way, go back: a proposed new tax on electric vehicles is a bad idea


With Princeton University, we are co-leading such a project. Known as Rapid Switch, the international collaboration will determine the actions needed in various countries to reach net-zero emissions by 2050.

Our work highlights the need for most low-carbon technologies to be deployed at historically unprecedented rates. This wholesale transformation will have dramatic impacts on landscapes, natural resources, industries and current practices.

The road ahead

Overall, the Technology Investment Roadmap is a solid foundation for building a low-emissions future.

It should encourage the right technology investment, if supported by other policy mechanisms. These should include an expanded Renewable Energy Target and low-carbon fuel and material standards which, for example, would encourage the production of green hydrogen and steel.

But the divisive nature of Australia’s climate politics over the past decade shows that securing bipartisan support for this plan, and its implementation over the long term, is crucial.

The magnitude of the challenge of transitioning our economy must not be taken for granted. But with a few important changes, this roadmap could help get us there.

Jake Whitehead, Advance Queensland Industry Research Fellow & Tritum E-Mobility Fellow, The University of Queensland; Chris Greig, Professor, and Simon Smart, Associate professor, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

There are 10 catastrophic threats facing humans right now, and coronavirus is only one of them


Arnagretta Hunter, Australian National University and John Hewson, Crawford School of Public Policy, Australian National University

Four months in, this year has already been a remarkable showcase for existential and catastrophic risk. A severe drought, devastating bushfires, hazardous smoke, towns running dry – these events all demonstrate the consequences of human-induced climate change.

While the above may seem like isolated threats, they are parts of a larger puzzle of which the pieces are all interconnected. A report titled Surviving and Thriving in the 21st Century, published today by the Commission for the Human Future, has isolated ten potentially catastrophic threats to human survival.

These risks, listed in no order of priority, are:

  1. decline of natural resources, particularly water
  2. collapse of ecosystems and loss of biodiversity
  3. human population growth beyond Earth’s carrying capacity
  4. global warming and human-induced climate change
  5. chemical pollution of the Earth system, including the atmosphere and oceans
  6. rising food insecurity and failing nutritional quality
  7. nuclear weapons and other weapons of mass destruction
  8. pandemics of new and untreatable disease
  9. the advent of powerful, uncontrolled new technology
  10. national and global failure to understand and act preventatively on these risks.

The start of ongoing discussions

The Commission for the Human Future formed last year, following earlier discussions within emeritus faculty at the Australian National University about the major risks faced by humanity, how they should be approached and how they might be solved. We hosted our first round-table discussion last month, bringing together more than 40 academics, thinkers and policy leaders.

The commission’s report states our species’ ability to cause mass harm to itself has been accelerating since the mid-20th century. Global trends in demographics, information, politics, warfare, climate, environmental damage and technology have culminated in an entirely new level of risk.

The risks emerging now are varied, global and complex. Each one poses either a “catastrophic risk”, meaning a significant risk to human civilisation, or an “existential risk” that could actually extinguish the human species.

The risks are interconnected. They originate from the same basic causes and must be solved in ways that make no individual threat worse. This means many existing systems we take for granted, including our economic, food, energy, production and waste, community life and governance systems – along with our relationship with the Earth’s natural systems – must undergo searching examination and reform.

COVID-19: a lesson in interconnection

It’s tempting to examine these threats individually, and yet with the coronavirus crisis we see their interconnection.

The response to the coronavirus has had implications for climate change with carbon pollution reduction, increased discussion about artificial intelligence and use of data (including facial recognition), and changes to the landscape of global security particularly in the face of massive economic transition.

It’s not possible to “solve” COVID-19 without affecting other risks in some way.

Shared future, shared approach

The commission’s report does not aim to solve each risk, but rather to outline current thinking and identify unifying themes. Understanding science, evidence and analysis will be key to adequately addressing the threats and finding solutions. An evidence-based approach to policy has been needed for many years. Under-appreciating science and evidence leads to unmitigated risks, as we have seen with climate change.

The human future involves us all. Shaping it requires a collaborative, inclusive and diverse discussion. We should heed advice from political and social scientists on how to engage all people in this conversation.




Read more:
From the bushfires to coronavirus, our old ‘normal’ is gone forever. So what’s next?


Imagination, creativity and new narratives will be needed for challenges that test our civil society and humanity. The bushfire smoke over the summer was unprecedented, and COVID-19 is a new virus.

If our policymakers and government had spent more time using the available climate science to understand and then imagine the potential risks of the 2019-20 summer, we would have recognised the potential for a catastrophic season and would likely have been able to prepare better. Unprecedented events are not always unexpected.

Prepare for the long road

The short-termism of our political process needs to be circumvented. We must consider how our actions today will resonate for generations to come.

The commission’s report highlights the failure of governments to address these threats and particularly notes the short-term thinking that has increasingly dominated Australian and global politics. This has seriously undermined our potential to decrease risks such as climate change.




Read more:
Listen to your people Scott Morrison: the bushfires demand a climate policy reboot


The shift from short to longer-term thinking can begin at home and in our daily lives. We should make decisions today that acknowledge the future, and practise this not only in our own lives but also demand it of our policymakers.

We’re living in unprecedented times. The catastrophic and existential risks for humanity are serious and multifaceted. And this conversation is the most important one we have today.

Arnagretta Hunter, ANU Human Futures Fellow 2020; Cardiologist and Physician, Australian National University and John Hewson, Professor and Chair, Tax and Transfer Policy Institute, Crawford School of Public Policy, Australian National University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Australia wants to install military technology in Antarctica – here’s why that’s allowed



Technology, such as satellite systems, can be used for both military and scientific purposes.
Shutterstock

Tony Press, University of Tasmania

This week, the ABC revealed that the Australian Defence Force wants to roll out military technology in Antarctica.

The article raises the issue of what is, or is not, a legitimate use of technology under the Antarctic Treaty. The answer has a lot to do with how the technology is used, and with the provisions of the treaty.

The Antarctic Treaty was negotiated in the late 1950s, during the Cold War. Its purpose was to keep Antarctica separate from any Cold War conflict, and any arguments over sovereignty claims.




Read more:
As China flexes its muscles in Antarctica, science is the best diplomatic tool on the frozen continent


The words used in the treaty reflect the global politics and technologies back then, before there were satellites and GPS systems. But its provisions and prohibitions are still relevant today.

The opening provision of the Antarctic Treaty, which came into force in 1961, says:

Antarctica shall be used for peaceful purposes only. There shall be prohibited, [among other things], any measures of a military nature, such as the establishment of military bases and fortifications, the carrying out of military manoeuvres, as well as the testing of any type of weapons.

The treaty also prohibits “any nuclear explosions in Antarctica” and disposal of radioactive waste. What the treaty does not do, however, is prohibit countries from using military support in their peaceful Antarctic activities.

Many Antarctic treaty parties, including Australia, New Zealand, the United Kingdom, the US, Chile and Argentina, rely on military support for their research. This includes the use of ships, aircraft, personnel and specialised services like aircraft ground support.

In fact, the opening provision of the treaty is clarified by the words:

the present Treaty shall not prevent the use of military personnel or equipment for scientific research or for any other peaceful purpose.

It would be a breach of the treaty if “military exercises” were being conducted in Antarctica, or if military equipment was being used for belligerent purposes. But the treaty does not deal specifically with technology. It deals with acts or actions. The closest it gets to technology is the term “equipment” as used above.

Dual use technology

So-called “dual use” technology – which can be used for both peaceful and military purposes – is allowed in Antarctica in support of science.




Read more:
For the first time, we can measure the human footprint on Antarctica


The term is often used to describe technology such as the widely-used GPS, which relies on satellites and a worldwide system of ground-based receiving stations. Norway’s “Trollsat”, China’s “Beidou”, and Russia’s “GLONASS” systems are similar, relying on satellites and ground stations for their accuracy.

What’s more, modern science heavily relies on satellite technology and the use of Antarctic ground stations for data gathering and transmission.

And scientific equipment such as ice-penetrating radar, carried on aircraft, drones and autonomous airborne vehicles, is being used extensively to understand the Antarctic continent itself and how it’s changing.

Much, if not all, of this technology could have “dual use”. But its use is not contrary to the Antarctic Treaty.

In fact, the use of this equipment for “scientific research” or a “peaceful purpose” is not only legitimate, it’s also essential for Antarctic research, and global understanding of the health of our planet.




Read more:
The benefits – and pitfalls – of working in isolation


The technologies Australia deploys in Antarctica all relate to its legitimate Antarctic operations and to science.

There are also facilities in Antarctica used to monitor potential military-related activities elsewhere in the world, such as the monitoring stations used under the Comprehensive Nuclear Test Ban Treaty.

The circumstances under which modern technology would, or could, be used against the provisions of the Antarctic Treaty have not been tested. But the activity would have to go beyond “dual use” and not be for science or peaceful purposes.

Science in Antarctica is open to scrutiny

Science in Antarctica is very diverse, from space sciences to ecosystem science, and 29 countries have active research programs there.

And since Antarctica plays a significant role in the global climate system, much modern Antarctic research focuses on climate science and climate change.

But there has been speculation about whether Antarctica is crucial to the development of alternatives to GPS (for example, by Russia and China) that could also be used in warfare as well as for peaceful purposes. It’s unclear whether using ground stations in Antarctica is essential for such a purpose.

For instance, Claire Young, a security analyst writing for the Australian Strategic Policy Institute, said the accuracy of China’s Beidou satellite has already been improved by international testing, so testing in Antarctica will make very little difference.




Read more:
Remembering Antarctica’s nuclear past with ‘Nukey Poo’


This leads to another important provision of the Antarctic Treaty.

The treaty foreshadowed compliance problems in the remote and hostile continent by including an open-ended provision for any Antarctic Treaty Party to inspect any Antarctic facility.

In other words, any party has complete freedom to access all parts of Antarctica at any time to inspect ships, aircraft, equipment, or any other facility, and even use “aerial observations” for inspection. This means the activities of all parties, and all actions in Antarctica, are available for open scrutiny.

This inspection regime is important because inspections can be used to determine if modern technology on the continent is, in fact, being used for scientific or peaceful purposes, in line with the provisions of the treaty.

Tony Press, Adjunct Professor, Institute for Marine and Antarctic Studies, University of Tasmania

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Design and repair must work together to undo our legacy of waste



Apple’s industrial design has played a fundamental role in transforming computers from machines for tinkerers into desirable objects of self-actualisation.
Shutterstock

Tom Lee, University of Technology Sydney; Alexandra Crosby, University of Technology Sydney; Clare Cooper, University of Technology Sydney; Jesse Adams Stein, University of Technology Sydney, and Katherine Scardifield, University of Technology Sydney

This article is part of our occasional long read series Zoom Out, where authors explore key ideas in science and technology in the broader context of society and humanity.


“Design” has been one of the big words of the twentieth century. To say that an object has been designed implies a level of specialness. “Designer items” are invested with a particular kind of expertise that is likely to make them pleasing to use, stylish, or – less common in late-capitalist society – well made.

Due to this positive association, design has become an “elevator word”, to borrow a phrase used by philosopher of science Ian Hacking. Like the words “facts”, “truth”, “knowledge”, “reality”, “genuine” and “robust”, the word design is used to raise the level of discourse.

“Repair” hasn’t had such a glossy recent history. We don’t have universities or TAFEs offering degrees in repair, churning out increasingly large numbers of repairers. Repair exists in the shadow of design, in unfashionable, unofficial pockets. And, until recently, repair mostly passed unremarked.

British literary scholar Steven Connor points to the ambiguous status of repair in his analysis of “fixing”. Connor discusses fixing and fixers in the context of related figures, such as the tinker, bodger and mender, all of which share outsider status.




Read more:
Why can’t we fix our own electronic devices?


One might be forgiven for thinking “design” and “repair” were opposing forces. The former word has become so bound up with notions of newness, improvement, performance and innovation that it emphatically signals its difference from the seamful, restorative connotations of repair.

If repair is hessian and twine, design is sleek uniformity. Repair is about upkeep. Design is about updating. Repair is ongoing and cyclical. Design is about creative “genius” and finish. To design is, supposedly, to conceive and complete; to repair is to make do.

But perhaps design and repair are not, or ought not to be, as divergent as such a setting of the scene suggests. Thinking metaphorically of repair as design, and design as repair, can offer new and useful perspectives on both of these important spheres of cultural activity.

Repair and design have a lot in common

As a surface sheen that soothes us, design distracts us from any uncomfortable reminders of the disastrous excesses of global capitalist consumption and waste. The acquisition of new “designs” becomes addictive, a quick hit of a fresh design assures us that life is progressing.

As each new object is designed into existence and used over time, it is accompanied by an inevitable need for repair that evolves in parallel. Repair, where possible, cleans up the mess left by design.

Design and repair are different though related approaches to the common problem of entropy. Repair might seem only to be about returning an object to its previous state, whether for functional or decorative purposes. But maintaining that state is a hard-fought affair, no less invested with collective or personal value.

The act of repair is also a determinant of worth. Whether at an individual or collective scale, choosing to repair this, and discard or neglect that, shares much in common with the process of selection, which informs the design of objects, images, garments or spaces.

Apple is revered for its design

The influence of Apple’s outgoing chief design officer, Jonathan Ive, is among the most popularised examples of “successful design”, to which other designers and design students have long aspired. With Ive’s departure from Apple this year, we have an opportunity to take a long view of his legacy.

Since the distinctive bubble iMac in 1998, Ive shifted computing away from the beige, boxy uniformity of the IBM PC era, aligning computing with “high design” and investing it with deep popular appeal.

Even prior to Ive’s influence – take for example the 1977 Apple II – Apple’s industrial design has played a fundamental role in transforming computers from machines for tinkerers, into desirable objects of self-actualisation, blending leisure and labour with incomparable ease.

The iPhone is one among a suite of Apple products that have changed cultural expectations around consumer electronics, and other smart phone manufacturers have followed suit.




Read more:
Understanding the real innovation behind the iPhone


The ubiquity of iPhones makes it increasingly difficult to appreciate their strangeness. Not only do they appear sealed beyond consumer access, they almost induce a forgetting of seals altogether. The glistening surface expresses an idea of inviolability which is completely at odds with the high likelihood of wear and tear.

The Apple iPhone Xs.
Apple

The iPhone is perhaps the ultimate example of a “black box”, an object that exhibits a pronounced distinction between its interior mechanics, which determine its functionality, and its exterior appearance. It gives nothing away, merely reflecting back at us through its “black mirror”, to borrow the title of Charlie Brooker’s dystopian television series.

The design of the iPhone – among other similar devices – forecloses against repair, both through its physical form, and also through the obsolescence built into its software and systems design, which defensively pits individuals against the power of a giant multinational company.

‘Right to repair’ is gaining ground

Apple deliberately discourages its customers from using independent repair services. It has a track record of punishing people who have opted for independent repairs, rather than going through Apple (at much greater expense). This is an example of the company’s attempt to keep its customers in an ongoing cycle of constant consumption.

This has put Apple – along with the agricultural equipment company John Deere – in the crosshairs of the growing Right to Repair movement in the United States. Right to Repair is centred on a drive to reform legislation in 20 US states, targeting manufacturers’ “unfair and deceptive policies that make it difficult, expensive, or impossible for you to repair the things you own”.

The movement could perhaps be criticised for focusing too much on libertarian individualism. Other groups advocate more community-focused repair strategies, such as the global proliferation of Repair Cafes, and Sweden’s groundbreaking secondhand mall, ReTuna Recycling Galleria.

Either way, there is agreement that something must be done to reduce the staggering amounts of e-waste we produce. In Australia alone, 485,000 tonnes of e-waste was generated in 2016/2017, and the annual rates are increasing.

This legacy of digital technology’s “anti-repairability” has been accepted as inevitable for some time, but the tide is turning. For example, the Victorian government has banned e-waste from landfill from July 1.

Designing for the future

Considering the increasing importance of responsible production and consumption, it is easily imaginable that, in a not too distant future, designers and design historians might point to the iPhone as naive, regressive and destructive. An example of design with thoroughly dated priorities, like the buildings in the Gothic revival style that provoked the ire of modernist architects.

Obscuring the wastage of valuable resources through sleek design could be decried as an outrageous excess, rather than celebrated for its “simplicity”. With the benefit of hindsight, we might finally see that the iPhone was the opposite of minimalism.




Read more:
Mending hearts: how a ‘repair economy’ creates a kinder, more caring community


Perhaps the revered objects of this imagined future will be launched by an entrepreneur who spruiks features and services associated with repair, rather than pacing the stage, championing an object because of its slimness, sleekness and speed. Hackability, ease of access, modularity, spare parts and durability might be touted as a product’s best features.

Alternatively, if the use of an object is decoupled from individual ownership, the responsibility for repair and waste might fall back on the producer. Perhaps “repair bins” will become a taken for granted feature of the urban landscape like curbside recycling bins are today.

To compel the pragmatists among us, such wishful thinking needs to remain mindful of the power multinationals have demonstrated in thwarting dreams of open access. Repair-oriented practices still face vast challenges when it is seemingly so convenient to waste. But to use one of the words of the day, aspirations need to be articulated if we, collectively, want to have the chance of living the dream.

Tom Lee, Senior Lecturer, School of Design, University of Technology Sydney; Alexandra Crosby, Senior Lecturer, Design, University of Technology Sydney; Clare Cooper, Lecturer, University of Technology Sydney; Jesse Adams Stein, Chancellor’s Postdoctoral Research Fellow, School of Design, University of Technology Sydney, and Katherine Scardifield, Lecturer, University of Technology Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Computing faces an energy crunch unless new technologies are found


The tools on our smartphones are enabled by a huge network of mobile phone towers, Wi-Fi networks and server farms.
Shutterstock

Daisy Wang, UNSW and Jared Cole, RMIT University

There’s little doubt the information technology revolution has improved our lives. But unless we find a new form of electronic technology that uses less energy, computing will become limited by an “energy crunch” within decades.

Even the most common events in our daily life – making a phone call, sending a text message or checking an email – use computing power. Some tasks, such as watching videos, require a lot of processing, and so consume a lot of energy.

Because of the energy required to power the massive, factory-sized data centres and networks that connect the internet, computing already consumes 5% of global electricity. And that electricity load is doubling every decade.

Fortunately, there are new areas of physics that offer promise for massively reduced energy use.




Read more:
Bitcoin’s high energy consumption is a concern – but it may be a price worth paying


The end of Moore’s Law

Humans have an insatiable demand for computing power.

Smartphones, for example, have become one of the most important devices of our lives. We use them to access weather forecasts, plot the best route through traffic, and watch the latest season of our favourite series.

And we expect our smartphones to become even more powerful in the future. We want them to translate language in real time, transport us to new locations via virtual reality, and connect us to the “Internet of Things”.

The computing required to make these features a reality doesn’t actually happen in our phones. Rather it’s enabled by a huge network of mobile phone towers, Wi-Fi networks and massive, factory-sized data centres known as “server farms”.

For the past five decades, our increasing need for computing was largely satisfied by incremental improvements in conventional, silicon-based computing technology: ever-smaller, ever-faster, ever-more efficient chips. We refer to this constant shrinking of silicon components as “Moore’s Law”.

Moore’s law is named after Intel co-founder Gordon Moore, who observed that:

the number of transistors on a chip doubles every year while the costs are halved.

But as we hit the limits of basic physics and economics, Moore’s law is winding down. We could see the end of efficiency gains using current, silicon-based technology as soon as 2020.

Our growing demand for computing capacity must be met with gains in computing efficiency, otherwise the information revolution will be slowed by its hunger for power.

Achieving this sustainably means finding a new technology that uses less energy in computation. This is referred to as a “beyond CMOS” solution, in that it requires a radical shift from the silicon-based CMOS (complementary metal–oxide–semiconductor) technology that has been the backbone of computing for the last five decades.




Read more:
Moore’s Law is 50 years old but will it continue?


Why does computing consume energy at all?

Processing of information takes energy. When using an electronic device to watch TV, listen to music, model the weather or any other task that requires information to be processed, there are millions and millions of binary calculations going on in the background. There are zeros and ones being flipped, added, multiplied and divided at incredible speeds.

The fact that a microprocessor can perform these calculations billions of times a second is exactly why computers have revolutionised our lives.

But information processing doesn’t come for free. Physics tells us that every time we perform an operation – for example, adding two numbers together – we must pay an energy cost.

And the cost of doing calculations isn’t the only energy cost of running a computer. In fact, anyone who has ever used a laptop balanced on their legs will attest that most of the energy gets converted to heat. This heat comes from the resistance that electricity meets when it flows through a material.

It is this wasted energy due to electrical resistance that researchers are hoping to minimise.

Recent advances point to solutions

Running a computer will always consume some energy, but we are a long way (several orders of magnitude) away from computers that are as efficient as the laws of physics allow. Several recent advances give us hope for entirely new solutions to this problem via new materials and new concepts.

Very thin materials

One recent step forward in physics and materials science is being able to build and control materials that are only one or a few atoms thick. When a material forms such a thin layer, and the movement of electrons is confined to this sheet, it is possible for electricity to flow without resistance.

A range of different materials show this property, or are predicted to. Our research at the ARC Centre for Future Low-Energy Electronics Technologies (FLEET) is focused on studying these materials.

The study of shapes

There is also an exciting conceptual leap that helps us understand this property of electricity flow without resistance.

This idea comes from a branch of mathematics called “topology”. Topology tells us how to compare shapes: what makes them the same and what makes them different.

Imagine a coffee cup made from soft clay. You could slowly squish and squeeze this shape until it looks like a donut. The hole in the handle of the cup becomes the hole in the donut, and the rest of the cup gets squished to form part of the donut.

Topology tells us that donuts and coffee cups are equivalent because we can deform one into the other without cutting it, poking holes in it, or joining pieces together.
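For readers who want the formal version of this idea, topologists capture it with an invariant. For a closed surface, the Euler characteristic is

$$\chi = 2 - 2g,$$

where g counts the number of holes. A sphere has g = 0, while the donut and the clay coffee cup both have g = 1: the same invariant, so topology counts them as the same shape, and no amount of smooth squeezing will turn either into a sphere.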

It turns out that the strange rules that govern how electricity flows in thin layers can be understood in terms of topology. This insight was the focus of the 2016 Nobel Prize in Physics, and it’s driving an enormous amount of current research in physics and engineering.
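A concrete example (one of the discoveries recognised by that prize, though the article doesn’t spell it out) is the quantum Hall effect: the sideways, or Hall, conductance of a thin conducting layer in a strong magnetic field is locked to integer multiples of fundamental constants,

$$\sigma_{xy} = \nu \, \frac{e^2}{h}, \qquad \nu = 1, 2, 3, \dots$$

The integer ν turns out to be a topological invariant of the electrons’ quantum states, which is why the quantisation is extraordinarily robust to imperfections in the material.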




Read more:
Physicists explore exotic states of matter inspired by Nobel-winning research


We want to take advantage of these new materials and insights to develop the next generation of low-energy electronic devices, built on this topological science so that electricity can flow with minimal resistance.

This work creates the possibility of a sustainable continuation of the IT revolution – without the huge energy cost.

Daisy Wang, Postdoctoral Fellow, UNSW School of Physics, UNSW and Jared Cole, Professor of Physics, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Technology is making cities ‘smart’, but it’s also costing the environment



A smart city is usually one connected and managed through computing — sensors, data analytics and other information and communications technology.
from shutterstock.com

Mark Sawyer, University of Western Australia

The Australian government has allocated A$50 million for the Smarter Cities and Suburbs Program to encourage projects that “improve the livability, productivity and sustainability of cities and towns across Australia”.

One project funded under the program is the installation of temperature, lighting and motion sensors in buildings and bus interchanges in Woden, ACT. This will allow energy systems to be adjusted automatically in response to how people use these spaces, with the aim of reducing energy use and improving safety and security.

In similar ways, governments worldwide are partnering with technology firms to make cities “smarter” by retrofitting various city objects with technological features. While this might make our cities safer and potentially more user-friendly, we cannot place blind faith in technology, which, without proper design, can break down and leave a city full of environmental waste.




Read more:
Can a tech company build a city? Ask Google


How cities are getting smarter

A “smart city” is an often vague term that usually describes one of two things. The first is a city that takes a knowledge-based approach to its economy, transport, people and environment. The second is a city connected and managed through computing — sensors, data analytics and other information and communications technology.

It’s the second definition that aligns with the interests of multinational tech firms. IBM, Serco, Cisco, Microsoft, Philips and Google are among those active in this market. Each is working with local authorities worldwide to provide the hardware, software and technical know-how for complex, urban-scale projects.

In Rio de Janeiro, a partnership between the city government and IBM has created an urban-scale network of sensors, bringing data from thirty agencies into a single centralised hub. Here it is examined by algorithms and human analysts to help model and plan city development, and to respond to unexpected events.

Tech giants provide expertise for a city to become “smart” and then keep its systems running afterwards. In some cases, tech-led smart cities have risen from the ground up. Songdo, in South Korea, and Masdar, UAE, were born smart by integrating advanced technologies at the masterplanning and construction stages.




Read more:
How does a city get to be ‘smart’? This is how Tel Aviv did it


More often, though, existing cities are retrofitted with smart systems. Barcelona, for instance, has gained a reputation as one of the world’s top smart cities, after its existing buildings and infrastructure were fitted with sensors and processors to monitor and maintain infrastructure, as well as for planning future development.

The city is dotted with electric vehicle charging points and smart parking spaces. Sensors and a data-driven irrigation system monitor and manage water use. The public transport system has interactive touch screens at bus stops and USB chargers on buses.

Barcelona has a reputation of being one of the world’s smartest cities.

Suppliers of smart systems claim a number of benefits for smart cities, arguing these will result in more equitable, efficient and environmentally sustainable urban centres. Other advocates claim smart cities are more “happy and resilient”. But there are also hidden costs to smart cities.

The downsides of being smart

Cyber-security and technology ethics are important topics. Smart cities represent a complex new field for governments, citizens, designers and security experts to navigate.

The privatisation of civic space and public services is a hidden cost too. The complexity of smart city systems and their need for ongoing maintenance could lead to long-term reliance on a tech company to deliver public services.




Read more:
Sensors in public spaces can help create cities that are both smart and sociable


Many argue that, by improving data collection and monitoring and allowing for real-time responses, smart systems will lead to better environmental outcomes. For instance, waste bins that alert city managers when they need collecting, or that reward recycling through tax credits, and street lamps that track movement and adjust their lighting levels accordingly, all have the potential to reduce energy use.
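As a rough sketch of the kind of logic such a street lamp might run (the sensor inputs, thresholds and brightness levels here are hypothetical, not any particular vendor’s system):

```python
# Minimal sketch of an adaptive street-lamp controller. The sensor inputs,
# thresholds and brightness levels are hypothetical, for illustration only.
def lamp_brightness(motion_detected: bool, ambient_lux: float) -> float:
    """Return a brightness setting between 0.0 (off) and 1.0 (full)."""
    if ambient_lux > 50.0:     # daylight or a well-lit street: stay off
        return 0.0
    if motion_detected:        # someone nearby: light the way fully
        return 1.0
    return 0.2                 # otherwise dim right down to save energy

# A quiet street at night: the lamp idles at 20% brightness.
print(lamp_brightness(motion_detected=False, ambient_lux=3.0))  # 0.2
```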

But this runs contrary to studies that show more information and communication technology actually leads to higher energy use. At best, smart cities may end up a zero-sum game in terms of sustainability because their “positive and negative impacts tend to cancel each other out”.

And then there’s the less-talked-about issue of e-waste, which is a huge global challenge. Adding computers to objects could create what one writer has termed a new “internet of trash” — products designed to be thrown away as soon as their batteries run down.

Computer technology is often short-lived and needs frequent upgrading.
from shutterstock.com

As cities become smart, more and more of their objects — bollards, street lamps, public furniture, signboards — need to integrate sensors, screens, batteries and processors. Objects in our cities are usually built with durable materials, which means they can be used for decades.

Computer processors and software systems, on the other hand, are short-lived and may need upgrading every few years. Adding technology to products that previously had none effectively shortens their lifespan and makes servicing, warranties and support contracts more complex and unreliable. One outcome could be a landscape of smart junk — public infrastructure that has stopped working, or that needs ongoing patching, maintenance and upgrades.




Read more:
Does not compute: Australia is still miles behind in recycling electronic products


In Barcelona, many of the gadgets that made it one of the world’s smartest cities no longer work properly. The smart streetlights on the Passatge de Mas de Roda, which were put in place in 2011 to improve energy efficiency by detecting human movement, noise and climatic conditions, later fell into disrepair.

If smart objects aren’t designed so they can be disassembled at the end of their useful life, electronic components are likely to be left inside where they hamper recycling efforts. Some digital components contain toxic materials. Disposing of these through burning or in landfill can contaminate environments and threaten human health.

These are not insurmountable challenges. Information and communications technology, data and networks have an important place in our shared urban future. But this future will be determined by our attitudes toward these technologies. We need to make sure that instead of being short-term gimmicks thrown away when their novelty wears off, they are thoughtfully designed and put the needs of citizens and environments first.

Mark Sawyer, Lecturer in Architecture, University of Western Australia

This article was originally published on The Conversation. Read the original article.