Now you see us: how casting an eerie glow on fish can help count and conserve them



Biofluorescence makes researching cryptic species such as this Lizardfish easier and less harmful.
Maarten De Brauwer, Author provided

Maarten De Brauwer, Curtin University

News stories about fish often focus either on large fish like sharks, or on tasty seafood. So it might come as a surprise that more than half of the fish on coral reefs are tiny and well camouflaged.

This naturally makes them hard to find, and as a result we know very little about these so-called “cryptic” species.

Now my colleagues and I have developed a new method to make it easier to study these fish. As we report in the journal Conservation Biology, many of these species are “biofluorescent” – if you shine blue light on them they will reflect it back in a different colour. This makes them a whole lot easier to spot.

Cryptic fish such as this moray eel are easily detectable using this new method.
Maarten De Brauwer, Author provided



Read more:
Dazzling or deceptive? The markings of coral reef fish


Marine biologists try to collect essential information about species to help protect them. One of the most important pieces of information is an estimate of how many of these cryptic species are out there.

Now you ‘sea’ them.

These cryptic fishes are more important for us than people realise. They are highly diverse and hugely important to coral reef health. They are also food for the fish we like to eat, and provide incomes for thousands of people through scuba diving tourism.

These small fishes live fast and die young, reproducing quickly and being eaten by bigger fish almost as quickly. We do know that some species are dwindling in number. The Knysna seahorse in South Africa is in danger of extinction, while many cryptic goby species in the Caribbean were being eaten by invasive lionfish before they had even been described, let alone counted.

Some cryptic species, such as this thorny seahorse (Hippocampus histrix), are more popular than other species in aquaria, for divers and as subjects in movies.
Maarten De Brauwer, Author provided

Because cryptic fishes are so easy to miss, their total abundance is likely to be underestimated. When surveying their populations, scientists have generally had to resort to using chemicals to stun or kill the fish, which are then collected and counted. This method is efficient, but killing members of species that might be endangered is far from ideal.

Developing an efficient, non-destructive way to survey fish would benefit researchers and conservationists, and this is where biofluorescence comes in.

Biofluorescence or bioluminescence?

Biofluorescence is very different to bioluminescence, the chemical process by which animals such as deep-sea fish or fireflies produce their own light. In contrast, biofluorescent animals absorb light and re-emit it as a different colour, so the process needs an external source of light.

Biofluorescence is most easily observed in corals, where it has been used to find small juveniles. In the ocean, biofluorescence can be observed by using a strong blue light source, combined with a diving mask fitted with a yellow filter.

Before… a scorpionfish captured without showing its biofluorescence, camouflaged against the rocks.
Maarten De Brauwer, Author provided
And after… the same scorpionfish in an image that captures its biofluorescence.
Maarten De Brauwer, Author provided

Recent research showed that biofluorescence is more common among fish than we previously realised. This prompted us to investigate whether biofluorescence can be used to detect cryptic fishes.

On the glow

We tested 230 fish species across the Coral Triangle, to Australia’s north, and found that biofluorescence is indeed widespread in cryptic fish species.

It is so common, in fact, that the probability of a fish being biofluorescent is 70.9 times greater for cryptic species than for highly visible species.
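
To give a sense of what a number like that means, the figure is simply the rate of biofluorescence in one group divided by the rate in the other. As a purely hypothetical illustration (the counts below are invented for the arithmetic; the published figure was estimated from the full survey data):

\[
\frac{P(\text{fluorescent}\mid\text{cryptic})}{P(\text{fluorescent}\mid\text{visible})}
\approx \frac{85/100}{3/250} = \frac{0.85}{0.012} \approx 71
\]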

But can this actually be used to improve our detection of cryptic fish species? The answer is yes.

Biofluorescence makes these seahorses much easier to spot.
Maarten De Brauwer, Author provided

We compared normal visual surveys with surveys using biofluorescence on one rare cryptic species (Bargibant’s pygmy seahorse) and two common ones (the Largemouth triplefin and Highfin triplefin). Using biofluorescence we found twice as many pygmy seahorses, and three times as many triplefins, as with normal methods.

This method, which we have dubbed the “underwater biofluorescence census”, makes detecting cryptic fishes easier, and counting them more accurate. While it might not detect all the animals in the way that surveys with chemicals do, it has the big benefit of not killing the fish you’re counting.

A closer look at three large cryptic fish families (Gobies, Scorpionfishes, and Seahorses and Pipefishes) will tell you that they contain more than 2,000 species globally. The extinction risk of more than half of these species has not yet been evaluated. Many species that have been assessed are nevertheless classed as “data deficient” – a euphemistic way of saying that we don’t know enough to decide if they are endangered or not.




Read more:
Why you should never put a goldfish in a park pond … or down the toilet


As the majority of these cryptic species are likely to be biofluorescent, our new technique could be used to help figure out the conservation status of hundreds or even thousands of species. Our method is relatively cheap and easy to learn, and could potentially be used by citizen scientists all over the world.

Ultimately, the goal of scientists and conservationists alike is to protect marine ecosystems so that we can have our seafood and enjoy our dives, and so people can make a sustainable living off the ocean. Small cryptic fishes are essential in making all of this possible, and biofluorescent fish surveys can play a role in studying these understudied critters.

Maarten De Brauwer, PhD-candidate in Marine Ecology, Curtin University

This article was originally published on The Conversation. Read the original article.


Deposit schemes reduce drink containers in the ocean by 40%



Uncountable numbers of drink containers end up in the ocean every year.
Shutterstock

Qamar Schuyler, CSIRO; Britta Denise Hardesty, CSIRO, and Chris Wilcox, CSIRO

Plastic waste in the ocean is a global problem; some eight million metric tonnes of plastic ends up in the ocean every year.




Read more:
Eight million tonnes of plastic are going into the ocean each year


One possible solution – paying a small amount for returned drink containers – has been consistently opposed by the beverage industry for many years. But for the first time our research, published in Marine Policy, has found that container deposits reduce the number of beverage containers on the coasts of both the United States and Australia by 40%.

What’s more, the reduction is even more pronounced in areas of lower socio-economic status, where plastic waste is most common.

Plastic not so fantastic

There have been many suggestions for how to reduce marine debris. Some promote reducing plastic packaging, re-purposing plastic debris, or cleaning beaches. There has been a push to get rid of plastic straws, and even Queen Elizabeth II has banned single-use plastics from Royal Estates! All of these contribute to the reduction of plastics, and are important options to consider.




Read more:
Pristine paradise to rubbish dump: the same Pacific island, 23 years apart


Legislation and policy are another way to address the problems of plastic pollution. Recent legislation includes plastic bag bans and microbead bans. Economic incentives, such as container deposits, have attracted substantial attention in countries around the world.

Several Australian jurisdictions, including South Australia, the Northern Territory and New South Wales, already have container deposit laws, with Western Australia and Queensland set to start in 2019. In the United States, 10 states have implemented container deposit schemes.

But how effective is a cash for containers program? While there is evidence to suggest that container deposits increase return rates and decrease litter, until now there has been no study asking whether they also reduce the sources of debris entering the oceans.

In Australia, we analysed data from litter surveys by Keep South Australia Beautiful, and Keep Australia Beautiful. In the US, we accessed data from the Ocean Conservancy’s International Coastal Cleanup.




Read more:
The future of plastics: reusing the bad and encouraging the good


We compared coastline surveys in states with a container deposit scheme to those without. In both Australia and the US, the proportion of beverage containers in states without a deposit scheme was about 1.6 times higher than in neighbouring states with one. Based on estimates of debris loading on US beaches that we conducted previously, if all coastal states in the United States implemented deposit schemes, there would be 6.6 million fewer containers on the shoreline each year.
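
As a rough back-of-the-envelope check on how that ratio lines up with the headline 40% figure (this arithmetic is illustrative, not a calculation reproduced from the paper): if states without a scheme have about 1.6 times the proportion of containers, then states with a scheme have roughly

\[
\frac{1}{1.6} \approx 0.63 \qquad\Rightarrow\qquad 1 - 0.63 \approx 0.37 \approx 40\%\ \text{fewer containers.}
\]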

Keep your lid on

But how do we know that this difference is caused by the deposit scheme? Maybe people in states with container deposit schemes simply drink fewer bottled beverages than people in states without them, and so there are fewer containers in the litter stream?

To answer that question, we measured the ratio of lids to containers in the same surveys. Lids are manufactured in equal proportion to containers, and reach the consumer attached to the containers, but do not attract a deposit in either country.

If a deposit scheme causes a decrease in littered containers, it should not cause a similar decrease in littered lids. So, if the cashback incentive is responsible for the significantly lower number of containers on shorelines, we would expect to see a higher ratio of lids to containers in states with these programs than in states without.

That’s exactly what we found.
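
A toy example makes the logic concrete. The counts below are invented purely for illustration, and the snippet is only a sketch of the comparison, not the analysis used in the study:

```python
# Hypothetical shoreline litter counts: one state without a deposit scheme, one with.
# Lids never attract a refund, so (the argument goes) a deposit pulls containers
# out of the litter stream but leaves lids behind.
surveys = {
    "no deposit scheme": {"containers": 1000, "lids": 400},
    "deposit scheme": {"containers": 600, "lids": 400},
}

for state, counts in surveys.items():
    ratio = counts["lids"] / counts["containers"]
    print(f"{state}: {ratio:.2f} lids per container")

# no deposit scheme: 0.40 lids per container
# deposit scheme: 0.67 lids per container
#
# The higher lid-to-container ratio in the deposit state is the signature the
# researchers looked for on real shorelines.
```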

We were also interested in whether other factors influenced the number of containers in the environment. We tested whether the socio-economic status of the area (as defined by data from the Australian census) was related to the number of containers found. Generally, we found fewer containers in the environment in wealthier communities. However, the presence of a container deposit reduced the container load more in poorer communities.

This is possibly because a relatively small reward of 10 cents per bottle may make a bigger difference to less affluent people than to more wealthy consumers. This pattern is very positive, as it means that cashback programs have a stronger impact in areas of lower economic advantage, which are also the places with the biggest litter problems.




Read more:
Sustainable shopping: take the ‘litter’ out of glitter


Ultimately, our best hope of addressing the plastic pollution problem will be through a range of approaches. These will include bottom-up grassroots governance, state and federal legislation, and both hard and soft law.

Along with these strategies, we must see a shift in the types of products we use and how they are designed. Both consumers and manufacturers are responsible for shifting from a make, use, dispose culture to a make, reuse, repurpose and recycle culture, also known as a circular economy.

Qamar Schuyler, Research Scientist, Oceans and Atmospheres, CSIRO; Britta Denise Hardesty, Principal Research Scientist, Oceans and Atmosphere Flagship, CSIRO, and Chris Wilcox, Senior Research Scientist, CSIRO

This article was originally published on The Conversation. Read the original article.

Size matters for black hole formation, but there’s something missing in the middle ground



An artist’s impression of a supermassive black hole at the centre of a galaxy.
NASA/JPL-Caltech

Holger Baumgardt, The University of Queensland and Michael Drinkwater, The University of Queensland

So far, all black holes discovered by astronomers fall into two broad categories: “stellar mass” black holes and “supermassive” black holes.

But what puzzles astronomers is why we see only these two extremes – what about intermediate-sized black holes?

Black holes were predicted by Albert Einstein’s general theory of relativity. Their gravity is so strong that no material object, not even light, can escape from their vicinity.




Read more:
Something big exploded in a galaxy far, far away: what was it?


Astronomers have only been able to obtain evidence for their existence in recent decades by studying black holes accreting (attracting) gas from nearby stars and finding fast-moving stars in the vicinity of black holes.

But since 2015 an exciting third way to detect black holes has become available: gravitational waves from merging black holes.

From one extreme…

Stellar mass black holes can have masses between a few to a few tens of solar masses – the mass of our Sun. They are thought to form at the end of the lives of massive stars. When these stars run out of gas from which to produce energy, they leave behind massive remnants that can only collapse into black holes.

So far, astronomers have discovered a dozen stellar mass black hole candidates in the Milky Way, most of which accrete matter from nearby companion stars.

They also detected gravitational waves from several merging stellar mass black hole pairs in distant galaxies.

It’s estimated that our Milky Way alone should contain about 100 million stellar mass black holes, most of which do not have close companions from which they can accrete matter, and which therefore stay invisible.

… to the other

At the other end of the mass scale are what astronomers call supermassive black holes. These are about a million to a few billion times more massive than our Sun.

Astronomers think that almost every large galaxy contains a supermassive black hole at its centre.

The Milky Way, for example, contains a black hole of about 4 million solar masses, called Sagittarius A* (Sgr A*), in its centre. Astronomers can study this black hole by looking at the motion of stars that are close to Sgr A* and are flung through space by the huge gravitational attraction of the black hole.

Is that a supermassive black hole?

Although astronomers have gained a good understanding of the distribution and masses of supermassive black holes in galaxies in the nearby universe, they still do not know where supermassive black holes come from.

Observations show that some supermassive black holes already existed and were actively accreting gas from their surroundings when the universe was just a few hundred million years old.

A composite x-ray and infrared image of the supermassive black hole Sagittarius A* at the centre of the Milky Way.
NASA/UMass/D.Wang et al., IR: NASA/STScI

In 2011 a team of astronomers said they had found evidence of a supermassive black hole that existed only 770 million years after the Big Bang. Then, last month, another team of astronomers revealed what they think could be evidence of a supermassive black hole from when the universe was only 690 million years old.

This creates a problem for theories that assume that supermassive black holes grew out of the stellar-mass black holes left behind by the first generation of stars in the early universe.

There is not enough time for these black holes to have grown to reach the huge masses that we can see in observations of the first galaxies.

The middle ground for black holes

An alternative theory is that supermassive black holes form from so-called intermediate-mass black holes. These hypothetical black holes could have masses from a few hundred to a few hundred thousand solar masses.

Starting from more massive seeds, supermassive black holes would need less time to grow to their present sizes. They could also accrete mass more efficiently, since the maximum rate at which a black hole can accrete matter is directly proportional to its mass.
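
The article doesn’t spell out why the maximum accretion rate scales with a black hole’s mass, but the standard textbook argument is the Eddington limit: the outward push of radiation from the infalling gas balances the black hole’s gravity, and both grow in proportion to the mass M:

\[
L_{\mathrm{Edd}} = \frac{4\pi G M m_p c}{\sigma_T}
\approx 1.3\times 10^{38}\left(\frac{M}{M_\odot}\right)\,\mathrm{erg\,s^{-1}},
\qquad
\dot{M}_{\mathrm{max}} \sim \frac{L_{\mathrm{Edd}}}{\epsilon c^{2}} \propto M,
\]

where m_p is the proton mass, σ_T the Thomson scattering cross-section and ε the radiative efficiency (often taken to be about 0.1). A seed black hole that starts ten times heavier can therefore, in principle, swallow gas roughly ten times faster.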

Intermediate mass black holes could form out of the collapse of very massive stars that might have existed in the very early universe.

Nowadays stars form with an upper mass limit of at most a few hundred solar masses. Conditions in the very early universe might have been more favourable for building more massive stars, and might have allowed the formation of stars of a few thousand or maybe even up to a million times the mass of our Sun.

This chart illustrates the relative masses of stellar-mass and supermassive black holes; the mystery of the intermediate-mass black holes, with masses up to more than 100,000 times that of our Sun, remains unsolved.
NASA/JPL-Caltech (edited)

The hunt is on

Astronomers are currently searching for intermediate mass black holes and there are a few potential candidates. Like their more massive cousins they could reveal their existence by accreting material from nearby stars or by the fast motion of nearby stars.

A prime place to look for intermediate mass black holes could be globular clusters: dense clusters of a few hundred thousand to a few million stars.




Read more:
Black holes are even stranger than you can imagine


Like supermassive black holes, globular clusters are old and are among the first objects that formed in the universe.

Astronomers – including at the University of Queensland – recently found evidence that such an intermediate mass black hole with about 2,200 times the mass of our Sun could exist at the centre of the globular cluster 47 Tucanae.

They did this by studying the acceleration of pulsars (compact remnants of dead stars that formed with about 20 times the mass of our Sun) in the globular cluster.
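
For readers wondering how a pulsar’s “acceleration” can be measured, the usual reasoning (not detailed in the article) is that the observed spin-down rate of a pulsar is contaminated by the Doppler effect of its acceleration along our line of sight:

\[
\left(\frac{\dot P}{P}\right)_{\mathrm{obs}}
= \left(\frac{\dot P}{P}\right)_{\mathrm{intrinsic}}
+ \frac{a_{\mathrm{los}}}{c}
\]

Millisecond pulsars spin down slowly and predictably, so unusually large (or even negative) observed values trace the cluster’s gravitational field, and hence any concentration of mass near its centre.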

If more of these can be found, they might provide the missing link between stellar mass and supermassive black holes and could shed light on how supermassive black holes have formed.

Holger Baumgardt, Associate Professor, The University of Queensland and Michael Drinkwater, Professor of Astrophysics, The University of Queensland

This article was originally published on The Conversation. Read the original article.

Common products, like perfume, paint and printer ink, are polluting the atmosphere



We need to measure the volatile compounds that waft off the products in our homes and offices.

Jenny Fisher, University of Wollongong and Kathryn Emmerson, CSIRO

Picture the causes of air pollution in a major city and you are likely to visualise pollutants spewing out of cars, trucks and buses.

For some types of air pollutants, however, transportation is only half as important as the chemicals in everyday consumer products like cleaning agents, printer ink, and fragrances, according to a study published today in Science.

Air pollution: a chemical soup

Air pollution is a serious health concern, responsible for millions of premature deaths each year, with even more anticipated due to climate change.




Read more:
Climate change set to increase air pollution deaths by hundreds of thousands by 2100


Although we typically picture pollution as coming directly from cars or power plants, a large fraction of air pollution actually comes from chemical reactions that happen in the atmosphere. One necessary starting point for that chemistry is a group of hundreds of molecules collectively known as “volatile organic compounds” (VOCs).

VOCs in the atmosphere can come from many different sources, both man-made and natural. In urban areas, VOCs have historically been blamed largely on vehicle fuels (both gasoline and diesel) and natural gas.

Fuel emissions are dropping

Thanks in part to more stringent environmental regulations and in part to technological advances, VOCs released into the air by vehicles have dropped dramatically.

In this new study, the researchers used detailed energy and chemical production records to figure out what fraction of the VOCs from oil and natural gas are released by vehicle fuels versus other sources. They found that the decline in vehicle emissions means that – in a relative sense – nearly twice as much comes from chemical products as comes from vehicle fuel, at least in the US. Those chemicals include cleaning products, paints, fragrances and printer ink – all things found in modern homes.

The VOCs from these products get into the air because they evaporate easily. In fact, in many cases, this is exactly what they are designed to do. Without evaporating VOCs, we wouldn’t be able to smell the scents wafting by from perfumes, scented candles, or air fresheners.

Overall, this is a good news story: VOCs from fuel use have decreased, so the air is cleaner. Since the contribution from fuels has dropped, it is not surprising that chemical products, which have not been as tightly regulated, are now responsible for a larger share of the VOCs.

Predicting air quality

An important finding from this work is that these chemical products have largely been ignored when constructing the models that we use to predict air pollution – which impacts how we respond to and regulate pollutants.

The researchers found that ignoring the VOCs from chemical products had significant impacts on predictions of air quality. In outdoor environments, they found that these products could be responsible for as much as 60% of the particles that formed chemically in the air above Los Angeles.

The effects were even larger indoors – a major concern as we spend most of our time indoors. Without accounting for chemical products, a model of indoor air pollutants under-predicted measurements by a whopping 87%. Including the consumer products really helped to fix this problem.




Read more:
We can’t afford to ignore indoor air quality – our lives depend on it


What does this mean for Australia?

In Australia we do a stocktake of our VOC emissions to the air every few years. Our vehicle-related VOC emissions have also been dropping and are now only about a quarter as large as they were in 1990.

Historical and projected trends in Australia’s road transport emissions of VOCs.
Author provided, adapted from Australia State of the Environment 2016: atmosphere

Nonetheless, the most recent check suggests most of our VOCs still come from cars and trucks, factories and fires. Still, consumer products can’t be ignored – especially as our urban population continues to grow. Because these sources are spread out across the city, their contributions can be difficult to estimate accurately.

We need to make sure our future VOC stocktakes include sources from consumer products such as cleaning fluids, indoor fragrances and home office items like printing ink. The stocktakes are used as the basis for our models, and comparing models to measurements helps us understand what affects our air quality and how best to improve it. It was a lack of model-to-measurement agreement that helped to uncover the VW vehicle emissions scandal, in which the manufacturer deliberately understated how much nitrogen oxide pollution was being released through the exhaust.

If we can’t get our predictions to agree with the indoor measurements, we’ll need to work harder to identify all the emission sources correctly. This means going into typical Australian homes, making air quality measurements, and noting what activities are happening at the same time (like cooking, cleaning or decorating).




Read more:
Heading back to the office? Bring these plants with you to fight formaldehyde (and other nasties)


What should we do now?

If we want to keep air pollution to a minimum, it will become increasingly important to take into account the VOCs from chemical products, both in our models of air pollution and in our regulatory actions.

In the meantime, as we spend so much of our time indoors, it makes sense to try to limit our personal exposure to these VOCs. There are several things we can do, such as choosing fragrance-free cleaning products and keeping our use of scented candles and air fresheners to a minimum. Research from NASA has also shown that growing house plants like weeping figs and spider plants can help to remove some of the VOCs from indoor air.

And of course, we can always open a window (as long as we keep the outdoor air clean, too).

Jenny Fisher, Senior Lecturer in Atmospheric Chemistry, University of Wollongong and Kathryn Emmerson, CSIRO

This article was originally published on The Conversation. Read the original article.

Utopia or nightmare? The answer lies in how we embrace self-driving, electric and shared vehicles



Four major disruptions of urban transport are set to transform city life, but exactly how remains uncertain.
Taras Makarenko/Pexels, CC BY

Jake Whitehead, The University of Queensland and Michael Kane, Curtin University

Emerging transport disruptions could lead to a series of nightmare scenarios and poorer transport systems unless sensible and informed public policy steers us away from them. Of course, some foresee a utopian scene: self-driving electric vehicles zipping around our cities serving all our transport needs without road accidents or exhaust fumes. But the shift to this transport utopia might not be as straightforward as some think.

In a newly published paper, we explore some potential problems linked to vehicle electrification, autonomous vehicles, the sharing economy and the increasing density of cities. We examined what could happen if these four trends are not all properly managed together.

Much has been written about the potential benefits of these disruptions:

  • electric vehicles powered by renewable energy could cut costs and fossil fuel emissions, and eliminate the significant impacts of pollution on public health and the environment

  • shared vehicles could reduce transport costs and traffic

  • autonomous vehicles could eliminate traffic accidents, reduce congestion and increase mobility for everyone

  • increasing urban density could bring significant economic benefits through growth and efficiency gains when people and businesses are closer together.

However, the interplay between these trends could also result in nightmare scenarios. We developed a Future Mobility Disruption Framework to investigate what could happen if even one of these trends is not actively managed.

The interactions of transport disruptions need to be anticipated and managed together.
Kane & Whitehead 2018, Australian Planner, Author provided

Four nightmare scenarios

Our research identified four potential nightmare scenarios.

Nightmare 1: vehicle electrification + autonomous vehicles + increasing urban density

If policy fails to support and manage a shift away from private vehicle ownership towards car-sharing, several negative impacts are likely. In this scenario, electric cars will be cheaper to run and still privately owned. This could encourage more people to drive and create more traffic.

The convenience of self-driving cars with low operating costs might also encourage a shift away from traditional public transport and could even cause its collapse.

Nightmare 2: autonomous vehicles + increasing urban density + shift towards sharing economy

If people shift from private car ownership towards shared, autonomous vehicles, significant transport cost savings could be possible. By replacing public transport systems, shared vehicle services could arguably provide cheap transport for all.

While these benefits are obvious, without vehicle electrification, the use of fossil fuels would significantly increase emissions. Though a reduction in emissions is plausible with a shift away from private vehicle ownership, the low cost and convenience of shared vehicles could lead to higher demand and more trips, thus increasing emissions. This pollution would increase rates of premature deaths and diseases in our cities, and worsen the impacts of climate change.

Nightmare 3: increasing urban density + shift towards sharing economy + vehicle electrification

We would again see a shift away from private vehicle ownership towards shared, electric vehicles. This would reduce transport costs and pollution-related health costs. However, in this scenario, the vehicles would not be autonomous.

The shared vehicle fleet would require human drivers. This would result in higher costs, less efficiency and more accidents. Ultimately, this would be a barrier to the long-term sustainability and widespread use of shared vehicles.

Nightmare 4: shift towards sharing economy + vehicle electrification + autonomous vehicles

So what would happen in the face of three of the transport disruptions occurring without increasing urban density? Electric and autonomous vehicles would significantly reduce transport costs. Combined with the availability of shared services, this would lead to a substantial shift away from private vehicle ownership towards shared, electric, autonomous vehicles (SEAVs).

These vehicles would be efficient, safe and convenient, with minimal environmental impacts. At first this would seem like the ideal scenario to aim for. However, it ignores the potential impacts on urban form and density.

Without policies supporting urban density and public transport, a shift towards SEAVs would probably encourage sprawling, car-dominated cities as people would have fewer reasons to live close to work. SEAVs would be cheap and convenient. They could pick people up from their front door and drop them directly at their destination. People would likely not be as concerned with road congestion as they could carry out other activities during the trip – even working during the drive.

If people feel less restricted in where they choose to live, they might opt for larger houses and lots, further away from cities. This would not only place additional demands on infrastructure but also have a significant impact on the natural environments surrounding our cities.

This form of lower-density living would discourage active transport options, like walking and cycling, which would have negative health impacts. Urban sprawl could also have negative economic impacts as people and businesses spread out and lose the benefits of being close together.

Managing disruptions as a whole

Each of these four trends could independently yield many benefits. However, examination of these nightmare scenarios reveals that, without holistic planning and policy support for all four disruptions, negative unintended consequences are likely. Planners and policymakers must consider how these disruptions will interact.

As detailed in our paper, a range of possible policy interventions is available for managing the risks associated with these trends. These include reform of road taxation, supportive regulation and integrated planning.

Only a holistic approach to managing these disruptions will enable us to arrive at a future transport utopia.


You can read more about these transport disruptions in a forthcoming book, Three Revolutions.

Jake Whitehead, Research Fellow, The University of Queensland and Michael Kane, Director, Innovation and Economic Strategies, Economic Development Queensland; Research Associate, Curtin University Sustainability Policy Institute, Curtin University

This article was originally published on The Conversation. Read the original article.

Latest twist in the Adani saga reveals shortcomings in environmental approvals



Adani faces court over allegations of concealing the amount of coal-laden water released into the Caley Valley Wetlands last year.
Ian Sutton/flickr, CC BY-NC-SA

Samantha Hepburn, Deakin University

It was reported this week that the federal Environment Department declined to prosecute Adani for failing to disclose that its Australian chief executive, Jeyakumar Janakaraj, was formerly the director of operations at a Zambian copper mine when it discharged toxic pollutants into a major river. Under the federal Environment Protection and Biodiversity Conservation Act, Adani is required to reveal the environmental history of its chief executive officers, and the federal report found Adani “may have been negligent”.

The revelations come as Adani faces down the Queensland government in the planning and environment court, over allegations the company concealed the full amount of coal-laden water discharged into the fragile Caley Valley Wetlands last year.

These concerns highlight fundamental problems with the existing regulatory framework, and raise questions about the long-term utility and effectiveness of environmental conditions in protecting land affected by mining projects.

How effective are environmental conditions?

In 2016, the federal government granted Adani a 60-year mining licence, as well as unlimited access to groundwater for that period.

These licences were contingent on Adani creating an environmental management plan, monitoring the ongoing impact of its mining activities on the environment, and actively minimising environmental degradation.

But are these safeguards working?

In 2015, advocacy group Environmental Justice Australia reported several non-compliance issues at the Abbot Point storm water dam, including failures in pest monitoring, weed eradication, establishing a register of flammable liquids, and implementation of the water monitoring plan.

More recently, in late 2017, significant amounts of black coal water were discovered in the fragile Caley Valley Wetlands next to the mine. Adani stands accused of withholding the full extent of the spill, redacting a laboratory report showing higher levels of contamination.

Adani seems to have released coal water into the wetland, despite its environmental approval being conditional on taking sufficient care to avoid contamination. Its A$12,000 penalty for non-compliance is relatively small compared with the company’s operating costs.

In this instance, the environmental conditions have provided no substantive protection or utility. They have simply functioned as a convenient fig leaf for both Adani and the government.

Who is responsible for monitoring Adani?

Adani’s proposed mine falls under both state and federal legislation. Queensland’s Environmental Protection Act requires the holder of a mining lease to plan and conduct activities on site to prevent any potential or actual release of a hazardous contaminant.

Furthermore, the relevant environmental authority must make sure that hazardous spills are cleaned up as quickly as possible.

But as a project of “national environmental significance” (given its potential impact on water resources, threatened species, ecological communities, migratory species, world heritage areas and national heritage places), the mine also comes under the federal Environment Protection and Biodiversity Conservation Act.

Federal legislation obliges Adani to create an environmental management plan outlining exactly how it plans to promote environmental protection, and to manage and rehabilitate all areas affected by the mine.

Consequently, assessment of the environmental impact of the mine was conducted under a bilateral agreement between the federal and state regulatory frameworks. This means that the project has approval under both state and federal frameworks.

The aim is to reinforce environmental protection. However, in many instances there are significant problems, with no clear delineation of responsibility for management, monitoring and enforcement.

Does the system work?

Theoretically, these interlocking frameworks should work together to provide reinforced protection for the environment. The legislation operates on the core assumption that imposing environmental conditions minimises the environmental degradation from mining. However, the bilateral arrangement often means that responsibility for monitoring matters of national environmental significance devolves to the state, where the environmental conditions imposed are ineffectively monitored and enforced, and there is no public accountability.

Arguably, some environmental conditions hide deeper monitoring and enforcement problems and in so doing, actually exacerbate environmental impacts.

For example, it has been alleged that Adani altered a laboratory report while appealing its fine for the contamination of the Caley Valley Wetlands, with the original document reportedly showing much higher levels of contamination. The allowable level of coal water in the wetlands was 100 milligrams. The original report indicated that Adani may have released up to 834 milligrams. This was subsequently modified in a follow-up report and the matter is currently under investigation.

If established, this amounts to a disturbing breach with potentially devastating impacts. It highlights not only the failure of the environmental condition to incentivise behavioural change, but also a fundamental failure in oversight and management.

If environmental conditions are not supported by sufficient monitoring processes and sanctions, they have little effect.

Environmental conditions are imposed with the aim of managing the risk of environmental degradation from mining projects. However, their enforcement is too often mired in inadequate and opaque oversight procedures, a lack of transparency and insufficient public accountability.

While the Queensland Labor government considers whether to increase regulatory pressure on Adani by subjecting it to further EPBC Act triggers, such as the water resource trigger or a new climate change trigger, perhaps the more fundamental question is whether these changes will ultimately improve environmental protection in the absence of stronger transparency and accountability, and more robust management and enforcement of the environmental conditions attached to mining projects.

Samantha Hepburn, Director of the Centre for Energy and Natural Resources Law, Deakin Law School, Deakin University

This article was originally published on The Conversation. Read the original article.