Research reveals shocking detail on how Australia’s environmental scientists are being silenced




Don Driscoll, Deakin University; Bob Pressey, James Cook University; Euan Ritchie, Deakin University, and Noel D Preece, James Cook University

Ecologists and conservation experts in government, industry and universities are routinely constrained in communicating scientific evidence on threatened species, mining, logging and other threats to the environment, our new research has found.

Our study, just published, shows how important scientific information about environmental threats often does not reach the public or decision-makers, including government ministers.

In some cases, scientists self-censor information for fear of damaging their careers, losing funding or being misrepresented in the media. In others, senior managers or ministers’ officers prevent researchers from speaking truthfully on scientific matters.

This information blackout, termed “science suppression”, can hide environmentally damaging practices and policies from public scrutiny. The practice is detrimental to both nature and democracy.

A scientist kneels by a stream
When scientists are free to communicate their knowledge, the public is kept informed.
University of Queensland/AAP

Code of silence

Our online survey ran from October 25, 2018, to February 11, 2019. Through advertising and other means, we targeted Australian ecologists, conservation scientists, conservation policy makers and environmental consultants. This included academics, government employees and scientists working in industry, such as consultants and non-government organisations.

Some 220 people responded to the survey, comprising:

  • 88 working in universities
  • 79 working in local, state or federal government
  • 47 working in industry, such as environmental consulting and environmental NGOs
  • 6 who could not be classified.

In a series of multiple-choice and open-ended questions, we asked respondents about the prevalence and consequences of suppressing science communication.




Read more:
Let there be no doubt: blame for our failing environment laws lies squarely at the feet of government


About half (52%) of government respondents had been prohibited from communicating scientific information, as had 38% of industry respondents and 9% of university respondents.

Communications via traditional (40%) and social (25%) media were most commonly prohibited across all workplaces. There were also instances of internal communications (15%), conference presentations (11%) and journal papers (5%) being prohibited.


‘Ministers are not receiving full information’

Some 75% of respondents reported having refrained from making a contribution to public discussion when given the opportunity – most commonly in traditional media or social media. A small number of respondents self-censored conference presentations (9%) and peer-reviewed papers (7%).

Factors constraining commentary from government respondents included senior management (82%), workplace policy (72%), a minister’s office (63%) and middle management (62%).

Fear of barriers to advancement (49%) and concern about media misrepresentation (49%) also discouraged public communication by government respondents.

Almost 60% of government respondents and 36% of industry respondents reported that internal communications had been unduly modified.

One government respondent said:

Due to ‘risk management’ in the public sector […] ministers are not receiving full information and advice and/or this is being ‘massaged’ by advisors (sic).

University respondents, more than those from other workplaces, avoided public commentary out of fear of how they would be represented by the media (76%), fear of being drawn beyond their expertise (73%), stress (55%), fear that funding might be affected (53%) and uncertainty about their area of expertise (52%).

One university respondent said:

I proposed an article in The Conversation about the impacts of mining […] The uni I worked at didn’t like the idea as they received funding from (the mining company).

vehicle operating at a coal mine
A university researcher was dissuaded from writing an article for The Conversation on mining.
Dave Hunt/AAP

Critical conservation issues suppressed

Information suppression was most common on the issue of threatened species. Around half of industry and government respondents, and 28% of university respondents, said their commentary on the topic was constrained.

Government respondents also reported being constrained in commenting on logging and climate change.

One government respondent said:

We are often forbidden (from) talking about the true impacts of, say, a threatening process […] especially if the government is doing little to mitigate the threat […] In this way the public often remains ‘in the dark’ about the true state and trends of many species.

University respondents were most commonly constrained in talking about feral animals. A university respondent said:

By being blocked from reporting on the dodgy dealings of my university with regards to my research and its outcomes I feel like I’m not doing my job properly. The university actively avoids any mention of my study species or project due to vested financial interests in some key habitat.

Industry respondents, more than those from other sectors, were constrained in commenting on the impacts of mining, urban development and native vegetation clearing. One industry respondent said:

A project […] clearly had unacceptable impacts on a critically endangered species […] the approvals process ignored these impacts […] Not being able to speak out meant that no one in the process was willing or able to advocate for conservation or make the public aware of the problem.

a dead koala in front of trees
Information suppression on threatened species was common.

The system is broken

Of those respondents who had communicated information publicly, 42% had been harassed or criticised for doing so. Of those, 83% believed the harassers were motivated by political or economic interests.

Some 77 respondents answered a question on whether they had suffered personal consequences as a result of suppressing information. Of these, 18% said they had suffered mental health effects. And 21% reported increased job insecurity, damage to their career, job loss, or leaving the field.

One respondent said:

I declared the (action) unsafe to proceed. I was overruled and properties and assets were impacted. I was told to be silent or never have a job again.

Another said:

As a consultant working for companies that damage the environment, you have to believe you are having a positive impact, but after years of observing how broken the system is, not being legally able to speak out becomes harder to deal with.

a scientist tests water
Scientists want to have a positive impact on environmental outcomes.
Elaine Thompson/AP

Change is needed

We acknowledge that we receive grants involving contracts that restrict our academic freedom. And some of us self-censor to avoid risks to grants from government, resulting in personal moral conflict and a less informed public. When starting this research project, one of our colleagues declined to contribute for fear of losing funding and risking employment.

But Australia faces many complex and demanding environmental problems. It’s essential that scientists are free to communicate their knowledge on these issues.




Read more:
Conservation scientists are grieving after the bushfires — but we must not give up


Public servant codes of conduct should be revised to allow government scientists to speak freely about their research in both a public and private capacity. And government scientists and other staff should report to new, independent state and federal environment authorities, to minimise political and industry interference.

A free flow of information ensures government policy is backed by the best science. Conservation dollars would be more wisely invested, costly mistakes avoided and interventions more effectively targeted.

And importantly, it would help ensure the public is properly informed – a fundamental tenet of a flourishing democracy.

Don Driscoll, Professor in Terrestrial Ecology, Deakin University; Bob Pressey, Professor and Program Leader, Conservation Planning, ARC Centre of Excellence for Coral Reef Studies, James Cook University; Euan Ritchie, Associate Professor in Wildlife Ecology and Conservation, Centre for Integrative Ecology, School of Life & Environmental Sciences, Deakin University, and Noel D Preece, Adjunct Associate Professor, James Cook University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

New research reveals how Australia and other nations play politics with World Heritage sites




Tiffany Morrison, James Cook University; Katrina Brown, University of Exeter; Maria Lemos, University of Michigan, and Neil Adger, University of Exeter

Some places are considered so special they’re valuable to all humanity and must be preserved for future generations. These irreplaceable gems – such as Machu Picchu, Stonehenge, Yosemite National Park and the Great Barrier Reef – are known as World Heritage sites.

When these places are threatened, they can officially be placed on the “List of World Heritage in Danger”. This action brings global attention to the natural or human causes of the threats. It can encourage emergency conservation action and mobilise international assistance.

However, our research released today shows the process of In Danger listings is being manipulated for political gain. National governments and other groups try to keep sites off the list, with strategies such as lobbying, or partial efforts to protect a site. Australian government actions to keep the Great Barrier Reef off the list are a prime example.

These practices are a problem for many reasons – not least because they enable further damage to threatened ecosystems.

Yosemite National Park is on the World Heritage list.
AAP/Kathryn Bermingham

What is the In Danger list?

World Heritage sites represent outstanding socioeconomic, natural and cultural values. Nations vie to have their sites included on the World Heritage list, which can attract tourist dollars and international prestige. In return, the nations are responsible for protecting the sites.

World Heritage sites are protected by an international convention, overseen by the United Nations body UNESCO and its World Heritage Committee. The committee consists of representatives from 21 of the 193 nations signed up to the convention.




Read more:
We just spent two weeks surveying the Great Barrier Reef. What we saw was an utter tragedy


When a site comes under threat, the World Heritage Committee can list the site as in danger of losing its heritage status. In 2014 for example, the committee threatened to list the Great Barrier Reef as In Danger – in part due to a plan to dump dredged sediment from a port development near the reef, as well as poor water quality, climate change and other threats. This listing did not eventuate.

An In Danger listing can attract help to protect a site. For example, the Galápagos Islands were placed on the list in 2007. The World Heritage Fund provided the Ecuadorian government with technical and financial assistance to restore the site’s World Heritage status. The work is not yet complete, but the islands were removed from the In Danger list in 2010.

Ecuador’s Galápagos Islands were removed from the In Danger list in 2010.
EPA

Political games

Our study shows political manipulation appears to be compromising the process that determines if a site is listed as In Danger.

We examined interactions between UNESCO and 102 national governments, from 1972 until 2019. We interviewed experts from the World Heritage Committee, government agencies and elsewhere, and combined this with global site threat data, UNESCO and government records, and economic and governance data.

We found at least 41 World Heritage sites, including the Great Barrier Reef, were considered at least once by the World Heritage Committee for the In Danger list but were not put on it. This is despite UNESCO reporting these sites to be as threatened as, or more threatened than, those already on the In Danger list. And 27 of the 41 sites were considered for an In Danger listing more than once.

The number of sites on the In Danger list declined by 31.6% between 2001 and 2008, and has plateaued since. By 2019, only 16 of 238 ecosystems were listed as In Danger. In contrast, the number of ecosystems on the World Heritage list has increased steadily over the past 20 years.




Read more:
Explainer: what is the List of World Heritage in Danger?


So why is this happening? Our analysis showed the threat of an In Danger listing drives a range of government responses.

This includes governments complying only partially with World Heritage Committee recommendations or making only symbolic commitments. Such “rhetorical” adoption of recommendations has been seen in relation to the Three Parallel Rivers in China’s Yunnan province, the Western Caucasus in Russia and Australia’s Great Barrier Reef (explored in more detail below).

In other cases, threats to a site are high but attract limited attention and effort from either the national government or UNESCO. These sites include Halong Bay in Vietnam and the remote Tubbataha Reefs in the Philippines.

A 2004 amendment to the way the World Heritage Committee assesses In Danger listings means sites can be “considered” for inclusion rather than just listed, retained or removed. This has allowed governments to use delay tactics, such as in the case of Cameroon’s Dja Faunal Reserve. It has been considered for the In Danger list five times since 2011, but never listed.

Threats to Vietnam’s Halong Bay receive little attention.
Richard Vogel/AAP

Case in point: The Great Barrier Reef

In 2014 and 2015, the Australian government spent more than A$400,000 on overseas lobbying trips to keep the Great Barrier Reef off the In Danger list. The environment minister and senior bureaucrats travelled to most of the 21 countries on the committee, plus other nations, to argue against the listing. The mining industry also contributed to the lobbying effort.

The World Heritage Committee had asked Australia to develop a long-term plan to protect the reef. The Australian and Queensland governments appeared to comply, by releasing the Reef 2050 Plan in 2015.

But in 2018, a national audit and Senate inquiry found a substantial portion of finance for the plan was delivered – in a non-competitive and hidden process – to the private Great Barrier Reef Foundation, which had limited capacity and expertise. This casts doubt over whether the aims of the reef plan can be achieved.

Real world damage

Our study makes no recommendation on which World Heritage sites should be listed as In Danger. But it uncovered political manipulation that has real-world consequences. Had the Great Barrier Reef been listed as In Danger, for example, developments potentially harmful to the reef, such as the Adani coal mine, may have struggled to get approval.

Last year, an outlook report gave the reef a “very poor” prognosis and last summer the reef suffered its third mass bleaching in five years. There are grave concerns for the ecosystem’s ability to recover before yet another bleaching event.

Political manipulation of the World Heritage process undermines the usefulness of the In Danger list as a policy tool. Given the global investment in World Heritage over the past 50 years, it is essential to address the hidden threats to good governance and to safeguard all ecosystems.




Read more:
Australia reprieved – now it must prove it can care for the Reef




Tiffany Morrison, Professorial Research Fellow, ARC Centre of Excellence for Coral Reef Studies, James Cook University; Katrina Brown, Professor of Social Sciences, University of Exeter; Maria Lemos, Professor of Environmental Justice, Environmental Policy and Planning, Climate + Energy, University of Michigan, and Neil Adger, Professor of Human Geography, University of Exeter

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Study identifies nine research priorities to better understand NZ’s vast marine area



New Zealand’s coastline spans a distance greater than from the south pole to the north pole.
from http://www.shutterstock.com, CC BY-ND

Rebecca Jarvis, Auckland University of Technology and Tim Young, Auckland University of Technology

The islands of New Zealand are only the visible part of a much larger submerged continent, known as Te Riu a Māui or Zealandia. Most of New Zealand’s sovereign territory, around 96%, is under water – and this means that the health of the ocean is of paramount importance.

Most of the Zealandia continent is under water.
CC BY-SA

New Zealand’s marine and coastal environments have significant ecological, economic, cultural and social value, but they face many threats. Disjointed legislation and considerable knowledge gaps limit our ability to effectively manage marine resources.

With the UN decade of ocean science starting in 2021, it is essential that we meet the challenges ahead. To do so, we have asked the New Zealand marine science community to collectively identify the areas of research we should focus on.

Ten important science questions were identified within nine research areas. The full list of 90 questions can be found in the paper and policy brief, but these are the nine priority areas:




Read more:
No-take marine areas help fishers (and fish) far more than we thought




1. Food from the ocean

Fisheries and aquaculture are vital sources of food, income and livelihoods, and it is crucial that we ensure these industries are sustainable. Our study has identified the need for new methods to minimise bycatch, mitigate environmental impacts and better understand the influence of commercial interests on fishers’ ability to adequately conserve and manage marine environments.

2. Biosecurity

The number of marine pests has increased by 10% since 2009, and questions remain around how we can best protect our natural and cultural marine heritage. Future directions include the development of new techniques to improve the early detection of invasive species, and new tools to identify where they came from, and when they arrived in New Zealand waters.

3. Climate change

Climate change already has wide-ranging impacts on our coasts and oceans. We need research to better understand how climate change will affect different marine species, how food webs might respond to future change, and how ocean currents around New Zealand might be affected.

Climate change already affects marine species and food webs.
CC BY-ND

4. Marine reserves and protected areas

Marine protected areas are widely recognised as important tools for marine conservation and fisheries management. But less than 1% of New Zealand’s waters are protected to date. Future directions include research to identify where and how we should be implementing more protected areas, whether different models (including protection of customary fisheries and temporary fishing closures) could be as effective, and how we might integrate New Zealand’s marine protection into a wider Pacific network.




Read more:
Squid team finds high species diversity off Kermadec Islands, part of stalled marine reserve proposal


5. Ecosystems and biodiversity

While we know about 15,000 marine species, there may be as many as 65,000 in New Zealand. On average, seven new species are identified every two weeks, and there is much we do not know about our oceans. We need research to understand how we can best identify the current baseline of biodiversity across New Zealand’s different marine habitats, predict marine tipping points and restore degraded ocean floor habitats.
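
To get a rough sense of that scale, here is a back-of-envelope calculation in Python. It uses only the figures quoted above; the assumption of a steady discovery rate is ours, purely for illustration.

```python
# Back-of-envelope only: how long would it take to describe the remaining
# species at the quoted discovery rate? The steady-rate assumption is ours.
known_species = 15_000
estimated_total = 65_000
new_per_year = 7 * 52 / 2   # seven new species every two weeks

years_to_describe = (estimated_total - known_species) / new_per_year
print(f"~{years_to_describe:.0f} years to describe the rest at the current rate")
# Roughly 275 years, one measure of how much remains unknown.
```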

6. Policy and decision making

New Zealand’s policy landscape is complicated, at times contradictory, and we need an approach to marine management that better connects science, decision making and action. We also need to understand how to navigate power in decision making across diverse interests to advance an integrated ocean policy.

7. Marine guardianship

Marine guardianship, or kaitiakitanga, means individual and collective stewardship to protect the environment, while safeguarding marine resources for future generations. Our research found that citizen science can help maximise observations of change and connect New Zealanders with their marine heritage. It can also improve our understanding of how we can achieve a partnership between Western and indigenous science, mātauranga Māori.

8. Coastal and ocean processes

New Zealand’s coasts span a distance greater than from the south pole to the north pole. Erosion and the deposition of land-based sediments into our seas affect ocean productivity, habitat structure, nutrient cycling and the composition of the seabed.

Future research should focus on how increased sedimentation affects the behaviour and survival of species at offshore sites, and on better methods for measuring physical, chemical and biological processes more accurately, so we can understand how long-term changes in the ocean might influence New Zealand’s marine ecosystems.

9. Other anthropogenic factors

Our study identified a range of other human threats that need more focused investigation, including agriculture, forestry, mining and urban development.

We need more research into the relative effects of different land-use types on coastal water quality, and into the combined effects of multiple contaminants (pesticides, pharmaceuticals and so on) on marine organisms and ecosystems. Pollution with microplastics and other marine debris is another major issue.

We hope this horizon scan will drive the development of new research areas, complement ongoing science initiatives, encourage collaboration and guide interdisciplinary teams. The questions the New Zealand marine science community identified as most important will help us fill existing knowledge gaps and make greater contributions to marine science, conservation, sustainable use, policy and management.

Rebecca Jarvis, Research Fellow, Auckland University of Technology and Tim Young, Marine Scientist, Auckland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Dolphin researchers say NZ’s proposed protection plan is flawed and misleading



Hector’s dolphins (Cephalorhynchus hectori) are found only in New Zealand.
Flickr/Scott Thompson, CC BY-ND

Elisabeth Slooten, University of Otago and Steve Dawson, University of Otago

The New Zealand government recently proposed a plan to manage what it considers to be threats to Hector’s dolphins, an endemic species found only in New Zealand’s coastal waters. This includes the North Island subspecies, the Māui dolphin.

Māui dolphins are critically endangered and Hector’s dolphins are endangered. With only an estimated 57 Māui dolphins left, they are teetering on the edge of extinction. The population of Hector’s dolphins has declined from 30,000-50,000 to 10,000-15,000 over the past four decades.

The Ministry for Primary Industries (MPI) and the Department of Conservation (DOC) released a discussion document which includes a complex range of options aimed at improving protection.

But the proposals reveal two important issues – flawed science and management.




Read more:
Despite its green image, NZ has world’s highest proportion of species at risk


Flawed science

Several problems combine to overestimate the importance of disease and underestimate the importance of bycatch in fishing nets. For many years, MPI and the fishing industry have argued that diseases like toxoplasmosis and brucellosis are the main cause of decline in dolphin populations. This view is not shared by New Zealand and international experts, who have been highly sceptical of the evidence. Either way, it is not an argument to ignore dolphin deaths in fishing nets.

Three international experts from the US, UK and Canada examined MPI’s work. They concluded that it is not possible to estimate the number of dolphin deaths from disease, much less claim that this impact is more serious than bycatch. On the other hand, it is easy to obtain an accurate estimate of the number of dolphins dying in fishing nets, as long as enough observers are allocated. MPI has failed to do so. Coverage has been so low that MPI’s estimate of catch rates in trawl fisheries is based on one observed capture.

It would be possible to estimate dolphin deaths in trawl nets if enough observers were allocated.
Supplied, CC BY-ND

The MPI model used in the public discussion document (and described in more detail in supporting materials) is complex, and a one-off. It is based on a “habitat model” of dolphin distribution, but fits actual dolphin sightings poorly.

Another problematic aspect of the method is that there is no clear time frame for the “recovery” of dolphin populations to the specified 90% of the unimpacted population size for Hector’s dolphins and 95% for Māui dolphins. This is one of the first things any decision maker would want to know. Would Māui dolphins be held at the current critically endangered population level for another 50 years? If so, this dramatically increases their chance of extinction.

Flawed management options

The second set of problems concerns the management options themselves. These are a complex mix of regulations that differ from one area to another, for gillnets and trawling. They frankly don’t make sense. The International Whaling Commission (IWC) and International Union for Conservation of Nature (IUCN) have recommended banning gillnet and trawl fisheries throughout Māui and Hector’s habitats. MPI’s best option for Māui dolphins comes close to this in the middle of the dolphins’ range, but doesn’t go as far offshore in the southern part of their range.

The South Island options for Hector’s dolphin are much worse. MPI’s approach has been to try to reduce the total number of dolphins killed to just below the level they believe is sustainable. MPI has invented its own method for calculating a sustainable number of dolphin deaths, which is much higher than the well-tested method used in the United States. The next step has been to find areas where the greatest number of deaths can be avoided at the least cost to the fishing industry.

Several Hector’s dolphin populations in the South Island are small, but they act as a bridge between larger populations.
Supplied, CC BY-ND

This sounds reasonable, but fixing the problem only in the places where the largest number of dolphins is being killed will have several negative consequences. Experience shows that fishing effort shifts beyond protected areas, merely moving the problem.

For example, MPI’s proposals leave a large gap on the south and east side of Banks Peninsula, in prime dolphin habitat. If the nearby areas are protected, this gap will be fished, and dolphin bycatch will continue unabated. What’s needed is protection of the areas where dolphins live.

MPI’s focus on reducing the total number of dolphin deaths also ignores the fact that it really matters where those deaths occur. Several Hector’s dolphin populations in the South Island are as small as, or smaller than, the Māui dolphin population.

Entanglement deaths have much worse consequences in such small populations, which form a bridge between larger populations. Yet they get no attention in the current options. MPI’s proposals would lead to the depletion of small populations, with increased fragmentation and extinction of local populations.

Only one option

If we want to ensure the long-term survival of these dolphins, there is only one realistic solution: to remove fishing methods that kill dolphins from dolphin habitat. The simple solution is to use only dolphin-safe fishing methods in all waters less than 100 metres deep. This means no gillnets or trawling in harbours and other coastal waters up to the 100 metre depth contour.

There is no need to ban recreational or commercial fishing, but we must make the transition to selective, sustainable fishing methods. These include fish traps, longlines and other hook and line methods. Selective, sustainable fishing methods also use less fuel than trawling and avoid impacts of trawling and gillnets on the broader marine environment.

We also need more observers and more cameras on fishing boats. MPI’s estimate of how many dolphins are dying in fishing nets is almost certainly an under-estimate. It depends heavily on assumptions that are not supported by data.

With observers on only about 2-3% of the inshore fishing boats, the chances of missing bycatch altogether are very high. Low observer coverage also means boats can fish differently on the days when they have an observer aboard (for example, avoiding areas where they have caught dolphins).

Selective fishing methods would protect dolphins and the broader marine environment.
Supplied, CC BY-ND

We know what works

Despite getting a poor report card from the international expert panel, MPI presented a virtually unmodified analysis to the IWC’s scientific committee last month. The committee identified most of the same issues and concluded it needed more time to decide whether MPI’s approach is fit for purpose. Meanwhile the IWC reiterated its recommendation, which it has been making for eight years, to ban gillnets and trawl fisheries throughout Māui dolphin habitat.

In the meantime, dolphins continue to be killed in fishing nets. We need to make decisions on the basis of the scientific evidence available now. All of the population surveys, including those funded by MPI, show Hector’s and Māui dolphins live in waters less than 100 metres deep.

The best evidence of what works comes from Banks Peninsula, where the dolphins have had partial protection since 1988, and detailed follow-up research. This population was declining at 6% per year before gillnets were banned to four nautical miles offshore and trawling to two nautical miles. Even though there was no management of disease, the rate of population decline has dropped dramatically to less than 1% per year. If disease were a serious problem, the restrictions on gillnets would have made little difference.
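
To see why the fall from a 6% to a less-than-1% annual decline matters so much, here is a minimal sketch in Python. Only the two rates come from the research described above; the constant-rate assumption and the 30-year horizon are ours, purely for illustration.

```python
# Illustrative only: the compound effect of two constant annual decline rates.
def remaining_fraction(annual_decline: float, years: int) -> float:
    """Fraction of a population left after `years` of constant proportional decline."""
    return (1 - annual_decline) ** years

for rate in (0.06, 0.01):
    left = remaining_fraction(rate, years=30)
    print(f"{rate:.0%} annual decline over 30 years leaves ~{left:.0%} of the population")
# About 16% of the population remains at 6% per year, versus about 74% at 1% per year.
```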

A general principle in conservation is that the longer you wait, the more difficult and more expensive it will be to save a species, and the more likely we are to fail.

Elisabeth Slooten, Professor, University of Otago and Steve Dawson, Professor, University of Otago

This article is republished from The Conversation under a Creative Commons license. Read the original article.

New research could lead to a pregnancy test for endangered marsupials



Knew you were coming: a koala joey on its mother’s back.
Shutterstock/PARFENOV

Oliver Griffith, University of Melbourne

Many women realise they are pregnant before they’ve even done the test – perhaps feeling a touch of nausea, or tender, larger-than-usual breasts.

For a long time, biologists had thought most marsupials lacked a way to recognise a pregnancy.

But new research published today shows a marsupial mum knows – in a biological sense – when she’s carrying a young one before they make their journey to the pouch.




Read more:
All female mammals have a clitoris – we’re starting to work out what that means for their sex lives


This knowledge changes how we think pregnancy evolved in mammals. It may also help in breeding programs for threatened or endangered marsupials by contributing to new technologies such as a marsupial pregnancy test.

Marsupials do things differently

When people think of marsupials – animals that mostly rear their young in a pouch (although not all marsupials have a pouch) – kangaroos and koalas tend to spring to mind. But marsupials come in a range of shapes and sizes.

A red-necked wallaby with a joey.
Pixabay/sandid

Australia has about 250 species of marsupials, including wombats, possums, sugar gliders, the extinct Tasmanian tiger, and several endangered species such as the Tasmanian devil.

In addition to Australia’s marsupial diversity, there are also 120 marsupial species in South America – most of which are opossums – and just one species in North America, the Virginia opossum.

One thing all marsupials have in common is they give birth to very small, almost embryonic, young.

An opossum with two-day-old young.
Oliver Griffith, Author provided

Because marsupial pregnancy passes so quickly (12-40 days, depending on the species), and marsupial young are so small and underdeveloped at birth, biologists had thought the biological changes required to support the fetus through pregnancy happened as a follow-on from releasing an egg (ovulation), rather than as a response to the presence of a fetus.

Marsupial pregnancy is quick

One way to explore the question of whether it is an egg or a fetus that tells the marsupial female to be ready for pregnancy is to look at the uterus and the placenta.

In marsupials, just like in humans, embryos develop inside the uterus where they are nourished by a placenta.

Previously, biologists thought all of the physiological changes required for pregnancy in marsupials were regulated by hormones produced in the ovary after ovulation.

If this hypothesis is right, then the uterus of pregnant opossums should look the same as the uterus of opossums that ovulate but don’t have the opportunity to mate with a male.

To test this hypothesis, my colleagues at Yale’s Systems Biology Institute and I examined reproduction in the grey short-tailed opossum.

A grey short-tailed opossum with young.
Oliver Griffith

Signs of pregnancy

We looked at two groups of opossums: females that were exposed to male pheromones to induce ovulation, and females that were put with males so they could mate and become pregnant.

We then used microscopy and molecular techniques to compare females from the two groups. Contrary to the current dogma, we found that the uterus in pregnancy looked very different from the uterus of females that did not mate.

In particular, we found the blood vessels that bring blood from the mother to the placenta interface were only present in pregnancy. We also noticed that the machinery responsible for nutrient transport (nutrient transporting molecules) from the mother to the fetus was only produced in pregnancy.

While hormones may be regulating some aspects of maternal physiology, the mother is certainly detecting the presence of embryos and responding in a way that shows she is recognising pregnancy.

How this knowledge can help others

Given that recognition of pregnancy has now been found in both eutherian (formerly known as placental) mammals like ourselves and marsupials with the more ancestral reproductive characters, it appears likely that recognition of pregnancy is a common feature of all live bearing mammals.




Read more:
Sexual aggression key to spread of deadly tumours in Tasmanian devils


But this knowledge does more than satisfy our curiosity. It could lead to new technologies to better manage marsupial conservation. Several marsupials face threats in the wild, and captive breeding programs are an important way to secure the future of several species.

Two Tasmanian devils.
Pixabay/pen_ash

One such species is the Tasmanian devil, which faces extinction from a dangerous contagious cancer. Captive breeding programs may be one of the only mechanisms to ensure the species survives.

But management can be made more difficult when we don’t know which animals are pregnant. Our research shows that maternal signals are produced in response to the presence of developing embryos. With a bit more research, it may be possible to test for these signals directly.

New reproductive technologies are likely to be crucial for improving the outcomes of conservation programs, and this work shows that, to do this, we first need a better understanding of the biology of the animals we are trying to save.

Oliver Griffith, ARC DECRA Fellow, University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Ten years ago, climate adaptation research was gaining steam. Today, it’s gutted


Rod Keenan, University of Melbourne

Ten years ago, on February 7, 2009, I sat down in my apartment in central Melbourne to write a job application. All of the blinds were down, and the windows tightly closed. Outside it was 47℃. We had no air conditioning. The heat seeped through the walls.

When I stepped outside, the air ripped at my nose and throat, like a fan-forced sauna. It felt ominous. With my forestry training, and some previous experience of bad fire weather in Tasmania, I knew any fires that day would be catastrophic. They were. Black Saturday became Australia’s worst-ever bushfire disaster.

I was applying for the position of Director of the Victorian Centre for Climate Change Adaptation Research (VCCCAR). I was successful and started the job later that year.




Read more:
We can build homes to survive bushfires, so why don’t we?


The climate in Victoria over the previous 12 years had been harsh. Between 1997 and 2009 the state suffered its worst drought on record, and major bushfires in 2003 and 2006-07 burned more than 2 million hectares of forest. Then came Black Saturday, and the year after that saw the start of Australia’s wettest two-year period on record, bringing major floods to the state’s north, as well as to vast swathes of the rest of the country.

In Victoria alone, hundreds of millions of dollars a year were being spent on response and recovery from climate-related events. In government, the view was that things couldn’t go on that way. As climate change accelerated, these costs would only rise.

We had to get better at preparing for, and avoiding, the future impacts of rapid climate change. This is what we mean by the term “climate adaptation”.

Facing up to disasters

A decade after Black Saturday, with record floods in Queensland, severe bushfires in Tasmania and Victoria, widespread heatwaves and drought, and a crisis in the Murray-Darling Basin, it is timely to reflect on the state of adaptation policy and practice in Australia.

In 2009 the Rudd Labor government had taken up the challenge of reducing greenhouse gas emissions. With Malcolm Turnbull as opposition leader, we seemed headed for a bipartisan national solution ahead of the Copenhagen climate summit in December. Governments, meanwhile, agreed that adaptation was more a state and local responsibility. Different parts of Australia faced different climate risks. Communities and industries in those regions had different vulnerabilities and adaptive capacities and needed locally driven initiatives.

Led by the Brumby government in Victoria, state governments developed an adaptation policy framework and sought federal financial support to implement it. This included research on climate adaptation. The federal government put A$50 million into a new National Climate Change Adaptation Research Facility, based in Queensland, alongside the CSIRO Adaptation Flagship which was set up in 2007.

The Victorian Government invested A$5 million in VCCCAR. The state faced local risks: more heatwaves, floods, storms, bushfires and rising sea levels, and my colleagues and I found there was plenty of information on climate impacts. The question was: what can policy-makers, communities, businesses and individuals do in practical terms to plan and prepare?

Getting to work

From 2009 until June 2014, researchers from across disciplines in four universities collaborated with state and local governments, industry and the community to lay the groundwork for better decisions in a changing climate.

We held 20 regional and metropolitan consultation events and hosted visiting international experts on urban design, flood, drought, and community planning. Annual forums brought together researchers, practitioners, consultants and industry to share knowledge and engage in collective discussion on adaptation options. We worked with eight government departments, driving the message that adapting to climate change wasn’t just an “environmental” problem and needed responses across government.

All involved considered the VCCCAR a success. It improved knowledge about climate adaptation options and confidence in making climate decisions. The results fed into Victoria’s 2013 Climate Change Adaptation Plan, as well as policies for urban design and natural resource management, and practices in the local government and community sectors. I hoped the centre would continue to provide a foundation for future adaptation policy and practice.

Funding cuts

In the 2014 state budget the Napthine government chose not to continue funding the VCCCAR. Soon after, the Abbott federal government reduced the funding and scope of its national counterpart, and funding ended last year.

Meanwhile, CSIRO chief executive Larry Marshall argued that climate science was less important than the need for innovation and turning inventions into benefits for society. Along with other areas of climate science, the Adaptation Flagship was cut, its staff let go or redirected. From a strong presence in 2014, climate adaptation has become almost invisible in the national research landscape.

In the current chaos of climate policy, adaptation has been downgraded. There is a national strategy but little high-level policy attention. State governments have shifted their focus to energy, investing in renewables and energy security. Climate change was largely ignored in developing the Murray-Darling Basin Plan.

Despite this lack of policy leadership, many organisations are adapting. Local governments with the resources are addressing their particular challenges, and building resilience. Our public transport now functions better in heatwaves, and climate change is being considered in new transport infrastructure. The public is more aware of heatwave risks, and there is investment in emergency management research, but this is primarily focused on disaster response.

Large companies making long-term investments, such as Brisbane Airport, have improved their capacity to consider future climate risks. There are better planning tools and systems for business, and the finance and insurance sectors are seriously considering these risks in investment decisions. Smart rural producers are diversifying, using their resources differently, or shifting to different growing environments.

Struggling to cope

But much more is needed. Old buildings and cooling systems are not built to cope with our current temperatures. Small businesses are suffering, but few have capacity to analyse their vulnerabilities or assess responses. The power generation system is under increasing pressure. Warning systems have improved but there is still much to do to design warnings in a way that ensures an appropriate public reaction. Too many people still adopt a “she’ll be right” attitude and ignore warnings, or leave it until the last minute to evacuate.

In an internal submission to government in 2014 we proposed a Victorian Climate Resilience Program to provide information and tools for small businesses. Other parts of the program included frameworks for managing risks for local governments, urban greening, building community leadership for resilience, and new conservation approaches in landscapes undergoing rapid change.




Read more:
The 2017 budget has axed research to help Australia adapt to climate change


Investment in climate adaptation pays off. Small investments now can generate payoffs of 3-5:1 in reduced future impacts. A recent business round table report indicates that carefully targeted research and information provision could save state and federal governments A$12.2 billion and reduce the overall economic costs of natural disasters (which are projected to rise to A$23 billion a year by 2050) by more than 50%.
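
As a back-of-envelope sketch of what those figures imply, the short Python snippet below combines the quoted A$23 billion-a-year projection, the “more than 50%” reduction and an assumed 4:1 payoff ratio (the midpoint of the reported 3-5:1 range). It is illustrative arithmetic only, not a figure from the report.

```python
# Back-of-envelope only, using the figures quoted above; the 4:1 ratio is an
# assumed midpoint of the reported 3-5:1 payoff range.
projected_annual_cost_2050 = 23e9   # A$ per year, projected disaster costs by 2050
reduction = 0.5                     # the "more than 50%" reduction cited
payoff_ratio = 4                    # assumed midpoint of 3-5:1

avoided_costs = projected_annual_cost_2050 * reduction
implied_investment = avoided_costs / payoff_ratio

print(f"Avoided costs: ~A${avoided_costs / 1e9:.1f} billion a year")
print(f"Implied adaptation spend at 4:1: ~A${implied_investment / 1e9:.1f} billion a year")
# Around A$11.5 billion a year avoided for roughly A$2.9 billion a year invested.
```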

Ten years on from Black Saturday, climate change is accelerating. The 2030 climate forecasts made in 2009 have come true in half the time. Today we are living through more and hotter heatwaves, longer droughts, uncontrollable fires, intense downpours and significant shifts in seasonal rainfall patterns.

Yes, policy-makers need to focus on reducing greenhouse emissions, but we also need a similar focus on adaptation to maintain functioning and prosperous communities, economies and ecosystems under this rapid change. It is vital that we rebuild our research capacity and learn from our past experiences, to support the partnerships needed to make climate-smart decisions.

Rod Keenan, Professor, University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Can bees do maths? Yes – new research shows they can add and subtract



Can we have a count of all the honeycomb cells please?
from www.shutterstock.com

Scarlett Howard, RMIT University; Adrian Dyer, RMIT University, and Jair Garcia, RMIT University

The humble honeybee can use symbols to perform basic maths, including addition and subtraction, according to new research published today in the journal Science Advances.

Bees have miniature brains – but they can learn basic arithmetic.

Despite having a brain containing less than one million neurons, the honeybee has recently shown it can manage complex problems – like understanding the concept of zero.

Honeybees are a high-value model for exploring questions about neuroscience. In our latest study we decided to test whether they could learn to perform simple arithmetical operations such as addition and subtraction.




Read more:
Which square is bigger? Honeybees see visual illusions like humans do


Addition and subtraction operations

As children, we learn that a plus symbol (+) means we have to add two or more quantities, while a minus symbol (-) means we have to subtract quantities from each other.

To solve these problems, we need both long-term and short-term memory. We use working (short-term) memory to manage the numerical values while performing the operation, and we store the rules for adding or subtracting in long-term memory.

Although the ability to perform arithmetic such as adding and subtracting is not simple, it is vital in human societies. There is evidence the Egyptians and Babylonians were using arithmetic around 2000 BCE, which would have been useful – for example – to count livestock and calculate new numbers when cattle were sold off.

This scene depicts a cattle count (copied by the Egyptologist Lepsius). In the middle register we see 835 horned cattle on the left, right behind them are some 220 animals and on the right 2,235 goats. In the bottom register we see 760 donkeys on the left and 974 goats on the right.
Wikimedia commons, CC BY

But does the development of arithmetical thinking require a large primate brain, or do other animals face similar problems that enable them to process arithmetic operations? We explored this using the honeybee.

How to train a bee

Honeybees are central place foragers – which means that a forager bee will return to a place if the location provides a good source of food.

We provide bees with a high concentration of sugar water during experiments, so individual bees (all female) continue to return to the experiment to collect nutrition for the hive.

In our setup, when a bee chooses a correct number (see below) she receives a reward of sugar water. If she makes an incorrect choice, she will receive a bitter tasting quinine solution.

We use this method to teach individual bees to learn the task of addition or subtraction over four to seven hours. Each time the bee became full she returned to the hive, then came back to the experiment to continue learning.




Read more:
Are they watching you? The tiny brains of bees and wasps can recognise faces


Addition and subtraction in bees

Honeybees were individually trained to visit a Y-maze shaped apparatus.

The bee would fly into the entrance of the Y-maze and view an array of between one and five shapes (squares, for example, though many shape types were used across the experiments). The shapes would be one of two colours: blue meant the bee had to perform an addition operation (+1), while yellow meant she had to perform a subtraction operation (-1).

For the task of either plus or minus one, one side of the maze would contain the correct answer and the other an incorrect answer. The side on which the correct answer appeared was changed randomly throughout the experiment, so that the bee would not learn to visit only one side of the Y-maze.

After viewing the initial number, each bee would fly through a hole into a decision chamber, where she could choose to fly to either the left or the right side of the Y-maze, depending on the operation she had been trained to perform.

The Y-maze apparatus used for training honeybees.
Scarlett Howard

At the beginning of the experiment, bees made random choices until they could work out how to solve the problem. Eventually, over 100 learning trials, bees learnt that blue meant +1 while yellow meant -1. Bees could then apply the rules to new numbers.

During testing with a novel number, bees were correct in adding or subtracting one element 64-72% of the time. Their performance was significantly different from what we would expect if bees were choosing randomly, known as chance-level performance (50% correct/incorrect).

Thus, our “bee school” within the Y-maze allowed the bees to learn how to use arithmetic operators to add or subtract.
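
For readers curious how a score in that range stacks up against chance, here is a minimal sketch in Python of an exact one-sided binomial test. The 68% success rate is simply the midpoint of the reported 64-72% range, and the 40-trial test count is an assumption for illustration, not the number of trials used in the study.

```python
# Hedged sketch: testing a bee's score against chance (50% correct).
# The 68% rate is the midpoint of the reported 64-72%; the 40-trial test
# count is an assumption for illustration, not the study's figure.
from math import comb

def binomial_tail(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): an exact one-sided test against chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_trials = 40                        # assumed number of test choices
n_correct = round(0.68 * n_trials)   # ~68% correct (27 of 40)

p_value = binomial_tail(n_correct, n_trials)
print(f"{n_correct}/{n_trials} correct; one-sided p = {p_value:.4f} vs chance")
# A score in this range comes out well below the conventional 0.05 threshold.
```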

Why is this a complex question for bees?

Numerical operations such as addition and subtraction are complex questions because they require two levels of processing. The first level requires a bee to comprehend the value of numerical attributes. The second level requires the bee to mentally manipulate numerical attributes in working memory.

In addition to these two processes, bees also had to perform the arithmetic operations in working memory – the number “one” to be added or subtracted was not visually present. Rather, the idea of plus one or minus “one” was an abstract concept which bees had to resolve over the course of the training.

Showing that a bee can combine simple arithmetic and symbolic learning has identified numerous areas of research to expand into, such as whether other animals can add and subtract.




Read more:
Bees get stressed at work too (and it might be causing colony collapse)


Implications for AI and neurobiology

There is a lot of interest in AI, and in how well computers can teach themselves to solve novel problems.

Our new findings show that learning symbolic arithmetic operators to enable addition and subtraction is possible with a miniature brain. This suggests there may be new ways to incorporate interactions of both long-term rules and working memory into designs to improve rapid AI learning of new problems.

Also, our findings show that the understanding of maths symbols as a language with operators is something that many brains can probably achieve, and helps explain how many human cultures independently developed numeracy skills.


This article has been published simultaneously in Spanish on The Conversation España.

Scarlett Howard, PhD candidate, RMIT University; Adrian Dyer, Associate Professor, RMIT University, and Jair Garcia, Research fellow, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Our survey found ‘questionable research practices’ by ecologists and biologists – here’s what that means



Negative results are still useful, and should not be hidden.
from www.shutterstock.com

Fiona Fidler, University of Melbourne and Hannah Fraser, University of Melbourne

Cherry picking or hiding results, excluding data to meet statistical thresholds, and presenting unexpected findings as though they were predicted all along – these are just some of the “questionable research practices” implicated in the replication crisis that psychology and medicine have faced over the past half-decade or so.




Read more:
Science is in a reproducibility crisis – how do we resolve it?


We recently surveyed more than 800 ecologists and evolutionary biologists and found high rates of many of these practices. We believe this to be the first documentation of these behaviours in these fields of science.

Our pre-print results have a certain shock value, and their release attracted a lot of attention on social media.

  • 64% of surveyed researchers reported they had at least once failed to report results because they were not statistically significant (cherry picking)

  • 42% had collected more data after inspecting whether results were statistically significant (a form of “p hacking”)

  • 51% reported an unexpected finding as though it had been hypothesised from the start (known as “HARKing”, or Hypothesising After Results are Known).

Although these results are very similar to those that have been found in psychology, reactions suggest that they are surprising – at least to some ecology and evolution researchers.


There are many possible interpretations of our results. We expect there will also be many misconceptions about them and unjustified extrapolations. We talk through some of these below.




Read more:
How we edit science part 2: significance testing, p-hacking and peer review


It’s fraud!

It’s not fraud. Scientific fraud involves fabricating data and carries heavy criminal penalties. The questionable research practices we focus on are by definition questionable: they sit in a grey area between acceptable practices and scientific misconduct.

Not crazy. Not kooky. Scientists are just humans.
from www.shutterstock.com

We did ask one question about fabricating data and the answer to that offered further evidence that it is very rare, consistent with findings from other fields.




Read more:
Research fraud: the temptation to lie – and the challenges of regulation


Scientists lack integrity and we shouldn’t trust them

There are a few reasons why this should not be the take home message of our paper.

First, reactions to our results so far suggest an engaged, mature scientific community, ready to acknowledge and address these problems.


If anything, this sort of engagement should increase our trust in these scientists and their commitment to research integrity.

Second, the results tell us much more about structured incentives and institutions than they tell us about individuals and their personal integrity.




Read more:
Publish or perish culture encourages scientists to cut corners


For example, these results tell us about the institution of scientific publishing, where negative (not statistically significant) results are all but banished from most journals in most fields of science, and where replication studies are virtually never published because of a relentless focus on novel, “ground-breaking” results.

The survey results tell us about scientific funding, where again “novel” (meaning positive, significant) findings are valued more than careful, cautious procedures and replication. They also tell us about universities, and about hiring and promotion practices within academic science that focus on publication metrics and overvalue quantity at the expense of quality.

So what do they mean, these questionable research practices admitted by the scientists in our survey? We think they’re best understood as the inevitable outcome of publication bias, funding protocols and an ever increasing pressure to publish.




Read more:
Novelty in science – real necessity or distracting obsession?


We can’t base important decisions on current scientific evidence

There’s a risk our results will feed into a view that our science is not policy ready. In many areas, such as health and the environment, this could be very damaging, even disastrous.

One reason this view is unwarranted is that climate science is a model-based science, and there have been many independent replications of these models. The same is true of immunisation trials.

We know that any criticism of scientific practice runs a risk in the context of anti-science sentiment, but such criticism is fundamental to the success of science.

Remaining open to criticism is science’s most powerful self-correction mechanism, and ultimately what makes the scientific evidence base trustworthy.

Transparency can build trust in science and scientists.
from www.shutterstock.com

Scientists are human and we need safeguards

This is an interpretation we wholeheartedly endorse. Scientists are human and subject to the same suite of cognitive biases – like confirmation bias – as the rest of us.

As we learn more about cognitive biases and how best to mitigate them in different circumstances, we need to feed this back into the norms of scientific practice.




Read more:
Confirmation bias: A psychological phenomenon that helps explain why pundits got it wrong


The same is true of our knowledge about how people function under different incentive structures and conditions. This is the basis of many of the initiatives designed to make science more open and transparent.

The open science movement is about developing initiatives to protect against the influence of cognitive bias, and alter the incentive structures so that research using these questionable research practices stops being rewarded.

Some of these initiatives have been enthusiastically adopted by many scientists and journal editors. For example, many journals now publish analysis code and data along with their articles, and many have signed up to Transparency and Openness Promotion (TOP) guidelines.

Other initiatives offer great promise too. For example, registered report formats are now offered by some journals, mostly in psychology and medical fields. In a registered report, articles are reviewed on the strength of their underlying premise and approach, before data is collected. This removes the temptation to select only positive results or to apply different standards of rigour to negative results. In short, it thwarts publication bias.

We hope that by drawing attention to the prevalence of questionable research practices, our research will encourage support of these initiatives, and importantly, encourage institutions to support researchers in their own efforts to align their practice with their scientific values.

Fiona Fidler, Associate Professor, School of Historical and Philosophical Studies, University of Melbourne and Hannah Fraser, Postdoctoral Researcher, University of Melbourne

This article was originally published on The Conversation. Read the original article.

Now you see us: how casting an eerie glow on fish can help count and conserve them



Biofluorescence makes researching cryptic species such as this Lizardfish easier and less harmful.
Maarten De Brauwer, Author provided

Maarten De Brauwer, Curtin University

News stories about fish often focus either on large fish like sharks, or on tasty seafood. So it might come as a surprise that more than half of the fish on coral reefs are tiny and well camouflaged.

This naturally makes them hard to find, and as a result we know very little about these so-called “cryptic” species.

Now my colleagues and I have developed a new method to make it easier to study these fish. As we report in the journal Conservation Biology, many of these species are “biofluorescent”: shine blue light on them and they will re-emit it in a different colour. This makes them a whole lot easier to spot.

Cryptic fish, such as this moray, are easily detectable using this new method.
Maarten De Brauwer, Author provided



Read more:
Dazzling or deceptive? The markings of coral reef fish


Marine biologists try to collect essential information about species to help protect them. One of the most important pieces of information is an estimate of how many of these cryptic species are out there.

Now you ‘sea’ them.

These cryptic fishes are more important than most people realise. They are highly diverse and hugely important to coral reef health. They are also food for the fish we like to eat, and provide incomes for thousands of people through scuba diving tourism.

These small fishes live fast and die young, reproducing quickly and being eaten by bigger fish almost as quickly. We do know that some species are dwindling in number. The Knysna seahorse in South Africa is in danger of extinction, while many cryptic goby species in the Caribbean were being eaten by invasive lionfish before they had even been described, let alone counted.

Some cryptic species, such as this thorny seahorse (Hippocampus histrix) are more popular than other species in aquaria, for divers and as the subjects in movies.
Maarten De Brauwer, Author provided

Because cryptic fishes are so easy to miss, their total abundance is likely to be underestimated. When attempting to survey their populations, scientists have generally had to resort to using chemicals to stun or kill the fish, which are then collected and counted. This method is efficient, but killing members of species that might be endangered is far from ideal.

Developing an efficient, non-destructive way to survey fish would benefit researchers and conservationists, and this is where biofluorescence comes in.

Biofluorescence or bioluminescence?

Biofluorescence is very different to bioluminescence, the chemical process by which animals such as deep-sea fish or fireflies produce their own light. In contrast, biofluorescent animals absorb light and re-emit it as a different colour, so the process needs an external source of light.

Biofluorescence is most easily observed in corals, where it has been used to find small juveniles. In the ocean, biofluorescence can be observed by using a strong blue light source, combined with a diving mask fitted with a yellow filter.
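As a rough illustration of why an external light source and a barrier filter work, the sketch below compares the photon energies involved. The 450 nm (blue excitation) and 520 nm (green emission) wavelengths are assumptions chosen for illustration, not measurements from our surveys.

```python
# A worked sketch of the energetics of fluorescence: the fish absorbs
# short-wavelength blue light and re-emits it at a longer wavelength,
# i.e. at lower photon energy. The 450 nm / 520 nm values are assumptions.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c / wavelength, converted to electronvolts."""
    return h * c / (wavelength_nm * 1e-9) / 1.602e-19

blue_excitation = photon_energy_ev(450)   # torch used on the dive
green_emission  = photon_energy_ev(520)   # typical fluorescent response

print(f"blue excitation photon: {blue_excitation:.2f} eV")
print(f"green emission photon:  {green_emission:.2f} eV")
# The emitted photon carries less energy (longer wavelength), which is why
# a yellow barrier filter on the mask can block the blue excitation light
# while letting the fluorescence through.
```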

Before… a scorpionfish captured without showing its biofluorescence, camouflaged against the rocks.
Maarten De Brauwer, Author provided
And after… the same scorpionfish in an image that captures its biofluorescence.
Maarten De Brauwer, Author provided

Recent research showed that biofluorescence is more common among fish than we previously realised. This prompted us to investigate whether biofluorescence can be used to detect cryptic fishes.

On the glow

We tested 230 fish species throughout the Coral Triangle, to Australia’s north, and found that biofluorescence is indeed widespread in cryptic fish species.

It is so common, in fact, that the probability of a fish being biofluorescent is 70.9 times greater for cryptic species than for highly visible species.
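For readers wondering what a figure like “70.9 times greater” expresses, here is a minimal sketch using made-up counts. The numbers below are purely hypothetical, chosen so the arithmetic lands near the published value; the real figure comes from the analysis in our paper, not from a simple tally like this.

```python
# Hypothetical counts only -- NOT the study's data -- to show what a
# "probability X times greater" ratio means.
cryptic_fluorescent, cryptic_total = 71, 100   # assumed counts
visible_fluorescent, visible_total = 2, 200    # assumed counts

p_cryptic = cryptic_fluorescent / cryptic_total
p_visible = visible_fluorescent / visible_total

print(f"P(fluorescent | cryptic species): {p_cryptic:.2f}")
print(f"P(fluorescent | visible species): {p_visible:.2f}")
print(f"probability ratio:                {p_cryptic / p_visible:.1f}x")  # illustrative
```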

But can this actually be used to improve our detection of cryptic fish species? The answer is yes.

Biofluorescence makes these seahorses much easier to spot.
Maarten De Brauwer, Author provided

We compared normal visual surveys to surveys using biofluorescence on one rare (Bargibant’s pygmy seahorse) and two common cryptic species (Largemouth triplefin and Highfin triplefin). Using biofluorescence we found twice as many pygmy seahorses, and three times the number of triplefins than with normal methods.

This method, which we have dubbed the “underwater biofluorescence census”, makes detecting cryptic fishes easier and counting them more accurate. While it might not detect all the animals in the way that surveys with chemicals do, it has the big benefit of not killing the species you’re counting.

A closer look at three large cryptic fish families (Gobies, Scorpionfishes, and Seahorses and Pipefishes) will tell you that they contain more than 2,000 species globally. The extinction risk of more than half of these species has not yet been evaluated. Many species that have been assessed are nevertheless classed as “data deficient” – a euphemistic way of saying that we don’t know enough to decide if they are endangered or not.




Read more:
Why you should never put a goldfish in a park pond … or down the toilet


As the majority of these cryptic species are likely to be biofluorescent, our new technique could be used to help figure out the conservation status of hundreds or even thousands of species. Our method is relatively cheap and easy to learn, and could potentially be used by citizen scientists all over the world.

Ultimately, the goal of scientists and conservationists alike is protecting marine ecosystems so we can have our seafood, enjoy our dives, and people can make a sustainable living off the ocean. Small cryptic fishes are essential in making all of this possible, and biofluorescent fish surveys can play a role in studying these understudied critters.

Maarten De Brauwer, PhD-candidate in Marine Ecology, Curtin University

This article was originally published on The Conversation. Read the original article.

New research suggests common herbicides are linked to antibiotic resistance



New Zealand researchers have found that the active ingredients in commonly-used weed killers like Round-up and Kamba can cause bacteria to become less susceptible to antibiotics.
from http://www.shutterstock.com, CC BY-ND

Jack Heinemann

Antibiotics are losing their ability to kill bacteria.

One of the main reasons for the rise in antibiotic resistance is the improper use of antibiotics, but our latest research shows that the ingredients in commonly-used weed killers like Round-up and Kamba can also cause bacteria to become less susceptible to antibiotics.

Herbicides induce gene activity

Already, about 700,000 deaths are attributable each year to infections by drug-resistant bacteria. A recent report projected that by 2050, 10 million people a year will die from previously treatable bacterial infections, with a cumulative cost to the world economy of $US100 trillion.

The bacteria we study are potential human pathogens. Seventy years ago pathogens were uniformly susceptible to antibiotics used in medicine and agriculture. That has changed. Now some are resistant to all but one or two remaining antibiotics. Some strains are resistant to all.


Read more:
Drug resistance: how we keep track of whether antibiotics are being used responsibly


When bacteria were exposed to commercial herbicide formulations based on 2,4-D, dicamba or glyphosate, the lethal concentration of various antibiotics changed. Often it took more antibiotic to kill them, but sometimes it took less. We showed that one effect of the herbicides was to induce certain genes that they all carry, but don’t always use.

These genes are part of the so-called “adaptive response”. The main elements of this response are proteins that “pump” toxins out of the cell, keeping intracellular concentrations sublethal. We knew this because the addition of a chemical inhibitor of the pumps eliminated the protective effect of the herbicide.
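To illustrate the pump idea, here is a toy steady-state calculation, not a model from our study: antibiotic diffuses into the cell while pumps move it back out, and when pump activity is induced the internal concentration settles below an assumed lethal threshold. All rate constants and the threshold are illustrative assumptions.

```python
# Toy model of the "adaptive response": efflux pumps remove antibiotic from
# the cell, keeping the steady-state intracellular concentration sublethal.
# All rates, concentrations and thresholds are illustrative assumptions.
def steady_state_internal(external_conc, influx_rate, efflux_rate):
    """Balance diffusion in against pump-driven efflux.
    At steady state: influx_rate * (external - internal) = efflux_rate * internal
    =>  internal = influx_rate * external / (influx_rate + efflux_rate)
    """
    return influx_rate * external_conc / (influx_rate + efflux_rate)

external = 8.0          # antibiotic concentration outside the cell (arbitrary units)
lethal_threshold = 4.0  # assumed intracellular concentration that kills the cell

baseline = steady_state_internal(external, influx_rate=1.0, efflux_rate=0.5)
induced  = steady_state_internal(external, influx_rate=1.0, efflux_rate=4.0)

print(f"internal conc., pumps at baseline: {baseline:.1f}  (lethal above {lethal_threshold})")
print(f"internal conc., pumps induced:     {induced:.1f}  (now sublethal)")
```

In this toy picture, inducing the pumps is exactly what makes it take more antibiotic to kill the cell, and blocking the pumps with an inhibitor removes that protection.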

In our latest work, we tested this by using gene “knockout” bacteria, which had been engineered to lose just one pump gene. We found that most of the effect of the herbicide was explained by these pumps.

Reduced antibiotic use may not fix the problem

For decades we have put our faith in inventing new antibiotics above the wisdom of preserving the effectiveness of existing ones. We have applied the same invention incentives to the commercialisation of antibiotics as those used with mobile phones. Those incentives maximise the rate of product sales. They have saturated the market with phones, and they saturate the earth with antibiotic-resistant bacteria.

Improper use of antibiotics is a powerful driver of widespread resistance. Knowing this naturally leads to the hypothesis that proper and lower use will make the world right again. Unfortunately, the science is not fully on the side of that hypothesis.

Studies following rates of resistance do generally find a decrease in resistance to specific drugs when their use is banned or decreased. However, the effect is not a return to pre-antibiotic levels of susceptibility, in which an antibiotic would remain effective for years on end. Instead, resistance returns rapidly when the drug is used again.

This tells us that once resistance has stabilised in a population of bacteria, suspending use may change the ratio of resistant to susceptible bacteria, but it does not eliminate the resistant types. Very small numbers of resistant bacteria can undermine the antibiotic when it is used again.
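A toy simulation can make this point concrete. The sketch below is an illustration rather than any of the monitoring studies referred to above: it tracks a resistant strain with a small fitness cost, whose share of the population falls during a long “drug holiday” but never reaches zero, and then rebounds within a few dozen steps once the antibiotic is reintroduced. All growth rates and the carrying capacity are assumptions.

```python
# Toy dynamics of susceptible (s) vs resistant (r) bacteria. All multipliers
# and the carrying capacity are illustrative assumptions.
def step(s, r, drug_on):
    s_mult = 0.50 if drug_on else 1.10   # the drug kills susceptible cells
    r_mult = 1.05                        # resistance carries a small fitness cost
    s, r = s * s_mult, r * r_mult
    cap = 1e9                            # crude carrying capacity
    total = s + r
    if total > cap:
        s, r = s * cap / total, r * cap / total
    return s, r

s, r = 1e8, 9e8                          # resistance dominant after past drug use
fractions = []
schedule = [False] * 200 + [True] * 30   # long drug "holiday", then re-use
for drug_on in schedule:
    s, r = step(s, r, drug_on)
    fractions.append(r / (s + r))

print(f"resistant fraction after the drug holiday:   {fractions[199]:.4f}")  # small, not zero
print(f"resistant fraction after 30 steps of re-use: {fractions[-1]:.4f}")   # back to ~1
```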

Herbicides and other pollutants mimic antibiotics

What keeps these resistant minorities around? Recall that bacteria are very small, but there are lots of them; you carry 100 trillion of them. They are also found everywhere from deep underground to high in the atmosphere.

Because antibiotics are so powerful, they eliminate the susceptible bacteria and leave the few resistant ones to repopulate. Once that has happened, we have lots of resistant bacteria, and lots of resistance genes, to get rid of, and that takes a long time.

As our work suggests, the story is even more complicated. We are inclined to think of antibiotics as medicine, and of agrichemicals, hand soaps, bug sprays and preservatives as something different. Bacteria don’t make this distinction. To them, these are all just toxins.

Some are really toxic (antibiotics) and some not so much (herbicides). Bacteria are among the longest-lived organisms on Earth. Nearly four billion years of survival has taught them how to deal with toxins.

Pesticides as antibiotic vaccines

Our hypothesis is that herbicides effectively immunise bacteria against more toxic compounds such as antibiotics. Since all bacteria have these protections, widely used products to which they are exposed are particularly problematic. These products, among others, might keep bacteria primed against antibiotics whether or not we are using the drugs.

We found that both the purified active ingredients and potential inert ingredients in weed killers caused a change in antibiotic response. Those inert ingredients are also found in processed foods and common household products. The resistance response was induced at concentrations below those legally allowed in food.

What does this all mean? Well for starters we may have to think more carefully about how to regulate chemical commerce. With approximately eight million manufactured chemicals in commerce, 140,000 new since 1950, and limited knowledge of their combination effects and breakdown products, this won’t be easy.

But neither is it easy to watch someone die from an infection we lost the power to cure.

Jack Heinemann, Professor of Molecular Biology and Genetics

This article was originally published on The Conversation. Read the original article.