Australian Government proposes package of AI reforms

Simon Newcomb, Alex Horder, Nicole Steemson and Audrika Haque
06 Sep 2024
7.5 minutes

The Australian Government has released a proposal for mandatory guardrails to be adopted by Australian organisations developing or deploying high-risk AI systems. Alongside that, the Government has released an AI safety standard with very similar guardrails that can be voluntarily applied in any risk setting.

Details of the proposed regulation and the new standard are set out in a Proposals paper for introducing mandatory guardrails for AI in high-risk settings and a Voluntary AI Safety Standard released by the Commonwealth Department of Industry, Science and Resources yesterday.

The Proposals Paper is open for consultation for four weeks (closes 4 October) and the Standard commences immediately.

The proposed regulations are set to have a wide application, and Australian organisations can start to get ready by understanding the proposals and by evaluating and uplifting their approach to managing AI risks. They should also consider implementing the Standard, both to prepare for regulation of high-risk AI and as part of their broader management program for all levels of AI risk.

Below is our summary of the proposed regulations and the new Standard and what we think are the main implications for Australian organisations.

Why is AI regulation needed?

Public trust and confidence in AI in Australia remain low. An effective system of AI regulation that mandates the responsible use of AI should improve public trust and encourage AI adoption. However, based on its consultation process, the Government has concluded that Australia's current regulation of AI is not fit for purpose.

Australia has no AI-specific regulation. We do have many existing laws that regulate AI technology in a non-specific way, such as privacy, intellectual property, competition and consumer protection, corporations law, online safety, anti-discrimination, criminal and sector-specific laws. But there are gaps and uncertainties, such as unclear accountability for actions taken by AI, insufficient obligations for transparency about AI systems, and gaps in enforcement measures. Also, many of the existing laws focus on liability for causing harm rather than on preventing harm by imposing “ex ante” (before the event) risk management obligations on those in the AI supply chain developing or deploying the technology.

The Government also believes that mandatory regulation is needed because self-regulation (such as voluntary compliance with Australia’s AI Ethics Principles) is not enough and business still has a way to go. According to Minister Ed Husic:

“The Responsible AI Index 2024, commissioned by the National AI Centre, shows Australian businesses consistently overestimate their capability to employ responsible AI practices... 78 per cent of Australian businesses believe they were implementing AI safely and responsibly but in only 29 per cent of cases was this correct... The Index found that on average, Australian organisations are adopting only 12 out of 38 of responsible AI practices.”

AI regulation has been on the Government’s agenda since the release of its discussion paper “Safe and responsible AI in Australia” in June 2023 which was then followed by an interim response paper in January this year signalling that mandatory regulation would follow. So, the Proposals Paper marks the next step along the lengthy path towards regulation.

Mandatory guardrails for high-risk AI

The Proposals Paper sets out the expectations of the Australian Government on how Australian organisations should develop or use AI safely and responsibly. It outlines 10 mandatory guardrails that focus on:

  • testing – to ensure AI systems perform as intended;
  • transparency – with end users, others in the AI supply chain, and regulators, about product development; and
  • accountability – for governing and managing the risks.

At a high level, the guardrails require organisations developing or deploying AI systems to:

  1. Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.
  2. Establish and implement a risk management process to identify and mitigate risks.
  3. Protect AI systems and implement data governance measures to manage data quality and provenance.
  4. Test AI systems to evaluate model performance before deployment, and monitor the system once deployed, to ensure that it remains fit for purpose.
  5. Enable human control or intervention in an AI system to achieve meaningful human oversight, which requires organisations to ensure their personnel with oversight understand the AI system, monitor its operation and intervene as needed.
  6. Inform end users about AI-enabled decisions that affect them, when they are interacting with AI, and when they are being provided with AI-generated content.
  7. Establish processes for people impacted by AI systems to challenge use of AI or AI-driven outcomes, including by implementing internal complaint handling processes and providing readily available information to impacted parties.
  8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks, including by sharing data in relation to adverse incidents and significant model failures, and disclosing risks associated with certain AI systems.
  9. Keep and maintain records to allow third parties to assess compliance with the mandatory guardrails, including records relating to the design specifications of the relevant AI system, as well as its capabilities and limitations.
  10. Undertake conformity assessments to demonstrate and certify compliance with the mandatory guardrails. These assessments may be performed by the developers themselves, by third parties or by Government entities/regulators, both before an AI system is placed on the market and periodically afterwards to ensure ongoing compliance.

What is “high-risk” AI?

The mandatory guardrails will only apply to high-risk AI, leaving lower risk AI free from additional regulatory burden.

In terms of the technology covered, the regulation will cover both AI systems (systems that include or use AI models) as well as “general purpose AI” (GPAI) models (which can be used for any purpose).

To work out whether an AI technology is high-risk, the Government proposes two categories:

Known or foreseeable uses

When a use case is known or foreseeable, it’s then possible to determine whether the use case is high risk. As one approach to that, the Proposals Paper sets out principles that can be applied. It proposes that, in designating an AI system as high risk, regard must be had to:

  • risk of adverse impacts to an individual’s human rights;
  • risk of adverse impacts to an individual’s physical or mental health or safety;
  • risk of adverse legal effects, defamation or similarly significant effects on an individual;
  • risk of adverse impacts to groups of individuals or collective rights of cultural groups;
  • risk of adverse impacts to the Australian economy, society, environment and rule of law; and
  • severity and extent of those adverse impacts outlined above.

Another approach noted in the Proposals Paper would be to have a specific list of use cases. If we follow the EU, these might include use cases like biometric identification or monitoring/influencing emotions, critical infrastructure, educational admissions or assessment, employment matters, access to essential public or private services, public health and safety, law enforcement or use in the courts.

Highly capable, general purpose AI

GPAI models are what many of us have experienced since the rise of generative AI. They are AI models that can be applied to a very wide array of use cases, and with that, they also carry a wide array of foreseeable and unforeseeable risks.

The Proposals Paper defines a GPAI model as:

"An AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems."

For example, the model GPT-4 from OpenAI is a GPAI. In contrast, ChatGPT is an application or AI system that uses the GPAI.

The most powerful and highly capable GPAI models may be harder to control and could have emergent risks that have not been identified or managed. So, the perceived risk of a GPAI model increases with the level of capability of the model.

There are various options for designating a GPAI model as high-risk based on its capability. If we follow the EU or US, we would set a threshold based on the amount of compute needed to train a model and assume anything over that level is highly capable and therefore high-risk (eg thresholds of 10^25 FLOPs and 10^26 FLOPs have been used in the EU and US respectively).
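To put those thresholds in context (our illustration, not something in the Proposals Paper): a commonly used back-of-the-envelope estimate puts the compute used to train a model at roughly 6 × N × D floating point operations, where N is the number of model parameters and D is the number of training tokens. On that estimate, a model with 100 billion parameters trained on 2 trillion tokens would use around 6 × 10^11 × 2 × 10^12 ≈ 1.2 × 10^24 FLOPs, comfortably below the EU's 10^25 threshold, whereas the largest frontier models are generally understood to sit above it.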

GPAI models may also need somewhat different guardrails to AI systems with foreseeable use cases. So, the Proposals Paper also asks for feedback on which guardrails should apply to GPAI models.

What about prohibited AI?

We have seen the EU’s AI Act prohibit entirely certain AI use cases deemed to have an unacceptable level of risk, such as subliminal manipulation of decisions, social scoring, biometric identification in public for law enforcement (with limited exceptions) and predictive policing.

The Proposals Paper leaves the door open for this type of outright ban but does not put forward a case for doing so.

Potential forms of regulation

“We are at the pointy end of consultation now... we need to sort out the shape of the guardrails, which includes the possibility of an Australian AI Act.” – Minister Ed Husic (to ABC News)

A further legal challenge is how we would implement any new AI regulation, given that we already have many existing laws that apply to AI. The Proposals Paper sets out three options:

Adaptation of existing legislative and regulatory frameworks

This option would review relevant legislation to identify and address gaps and would incorporate the mandatory guardrails into existing frameworks. This may enable greater consistency across different laws and would limit disruption to Australia’s regulatory ecosystem, because the guardrails could be implemented with regard to each specific regulatory context. The Proposals Paper also notes the possibility of non-legislative options (such as directions or other instruments).

However, this approach risks exacerbating inconsistencies between different legislative regimes and regulatory environments, and would be likely to lead to regulatory silos.

Development of new legislative and regulatory frameworks

This option would involve creating framework legislation that implements the mandatory guardrails via amendments to existing legislative and regulatory frameworks, with responsibility for enforcing the new laws falling on relevant sector regulators. This would preserve existing frameworks while providing a consistent approach to AI regulation through one central piece of law.

However, the downsides of this option are that it is limited by the scope and powers of existing legislation and could result in inconsistent enforcement approaches across sectors.

A new cross-economy AI Act

This option would mirror the approach taken in the EU and would be the simplest way to apply standardised mandatory guardrails, via a single piece of targeted legislation. It would allow for a uniform monitoring and enforcement regime by a single AI regulator.

The downside of this approach is that it risks creating regulatory duplication, where more than one layer of regulation applies at the same time. To minimise this, the Proposals Paper contemplates that our regulations would have carve-outs for some sectors where existing regulation is strong enough.

The new AI regulator with powers to enforce the Act could either be a completely new regulator or an existing regulator with expanded powers.

Interoperability with other countries

Helpfully, the Proposals Paper acknowledges that the proposed regulation should be interoperable with other jurisdictions. Concerns have been raised about the burden of multiple different AI regulatory schemes applying in parallel when other jurisdictions give extraterritorial effect to their laws (as has happened with the GDPR).

Many of the concepts in the paper have resulted from a review of approaches in comparable jurisdictions – most notably the EU and Canada and, to a lesser extent, the US.

The guardrails are also intended to align with ISO/IEC 42001:2023 (AI management systems), so compliance with that standard will also help with compliance with the regulations.

The Voluntary AI Safety Standard

The Standard aims to provide practical guidance to Australian organisations developing or deploying AI. Unlike the proposed mandatory guardrails, the Standard can apply to any level of risk.

The first nine guardrails in the Standard closely align with the mandatory guardrails in the Proposals Paper. Guardrail 10 deals with stakeholder engagement, in place of the conformity assessments in the mandatory version.

The Standard also provides guidance for any current high-risk initiatives while the new regulations are pending. For those likely to be regulated, it may be worth complying with the Standard as a means to prepare for the regulations.

Given that organisations are more likely to procure AI systems from third parties than to develop them internally, the Standard also includes guidelines on how to implement the guardrails in procurement.

Implications for organisations (and what you should do now)

The proposed regulations are set to have a wide application given that they will apply not only to the developers of AI technology but also to organisations that deploy it. Australian organisations can start to get ready for this by understanding the proposed regulations and also by evaluating and uplifting their approach to managing AI risks to the required level. They should also consider making a submission during the consultation period.

Implementing the new Standard seems like a sensible step towards this. While there’s no guarantee that the final regulation will conform to the Standard, it appears that the Australian Government’s current intention is that they will be very closely aligned.

Even outside high-risk settings, organisations need an appropriate level of risk management when developing or deploying AI systems, and the Standard offers guidance on how this might be implemented.

Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.