Australia’s AI regulatory landscape: Are existing laws enough?

Kirsten Webb
13 Nov 2024
3 minutes

AI can impact a range of economic activities and boost productivity growth, but the challenge is to balance the benefits and flexibility of our current principles-based laws against any evidence of harms they cannot address, while avoiding a multiplicity of potentially overlapping regulation.

There is currently a raft of consultations on potential regulation of AI, both internationally and in Australia.

This short paper outlines some of the benefits and risks of AI identified by the Australian Government and regulators, surveys the range of consultations and studies underway on potential regulation of AI, and makes some observations about the principles that should underpin consideration of potential regulatory change.

Benefits and risks of AI

Consultations on potential regulation of AI acknowledge the benefits of AI but also identify potential risks. For example, benefits of AI that the Australian Government has recognised include:

  • quickly doing simple tasks, like writing emails or summarising documents;
  • driving automation in manufacturing, agriculture and the care sector, where acute skill shortages persist;
  • improving decision making; and
  • introducing new ways of tailoring services.

Potential risks the Australian Government has identified as AI becomes more pervasive include “scaling up errors and biases in ways that cause real harm to people”.

The most recent working paper published by the Digital Platforms Regulators Forum (DP-Reg) (working paper 3 “Examination of Technology – Multimodal Foundation Models” (MFMs)) considers the benefits, risks and harms of MFMs specifically, and how they intersect with the regulatory remit of each member of DP-Reg (the ACCC, the OAIC, the ACMA and the eSafety Commissioner).

DP-Reg says there are common themes arising from the potential harms of MFMs, for example:

  • people may struggle to tell genuine content from AI-generated material without clear disclosure and labelling;
  • MFMs can use personal information to produce highly personalised content at scale, making content more persuasive and increasing risks such as the spread and amplification of misinformation, terrorist propaganda, and scams; and
  • MFMs present challenges for enforcement.

Objectives

To guard against risks arising from AI, the Australian Government has stated that it is taking steps to make sure AI systems in Australia are safe and reliable.

Do Australia’s existing laws need to change to sufficiently guard against these risks?

There is an important question of whether Australia’s existing laws are capable of achieving these objectives, and whether there is evidence of harm or potential harm that warrants the introduction of new regimes.

For example, the most recent consultation is a review of AI and the Australian Consumer Law, commenced on 15 October 2024.

The Australian Consumer Law is a principles-based law that applies economy-wide across Australia and to date has proved flexible enough to apply to emerging technologies such as apps and social media.

In this current consultation, the Australian Treasury is seeking views on whether the Australian Consumer Law remains suitable to:

  • protect consumers who use AI; and
  • support the safe and responsible use of AI by businesses.

It focusses on AI-enabled goods and services, defined as “goods and services which, when made available to consumers, involve a consumer directly interacting with an AI system”.

The Consultation Paper defines an AI system as:

“A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate output such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment. The tools or techniques AI systems can employ vary widely and may include: machine learning, computer vision, natural language processing, expert systems and speech recognition”.

A security system which uses facial recognition is an example of an AI-enabled good, while an online chatbot that assists with consumer queries is an example of an AI-enabled service.

Treasury is seeking views on how the ACL applies to AI-enabled goods and services, asking questions about:

  • how the existing principles apply;
  • the remedies available to consumers when things go wrong; and
  • the mechanisms for allocating liability among manufacturers and suppliers.

AI-enabled goods or services are supplied across a wide range of industries, as demonstrated by the list of examples in the Consultation Paper, which include: smart home devices (smart speakers, smart phones, smart TVs and wearable devices); automotive (AI navigation systems); health care (online telehealth services and AI diagnostics); education and training (AI tutoring platforms, language learning apps); entertainment (streaming services and gaming); business solutions (customer relationship management, web analytics); and transportation and logistics (ride-sharing apps and fleet management).

It is clear from this list that AI can impact a range of economic activities and boost productivity growth, as the consultation paper acknowledges. The challenge for those considering whether any law change is required is to balance the benefits and flexibility of our current principles-based laws against any evidence of harms that existing laws are not capable of addressing, while avoiding a multiplicity of potentially overlapping regulation and regulators that could have the effect of limiting the undoubted benefits that AI is capable of delivering.

Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.