Government use of artificial intelligence – new horizons (and risks)

Angie Freeman, Monique Azzopardi and Mason Britton
26 May 2022
Time to read: 5.5 minutes

Although AI holds much promise for governments, they should undertake a comprehensive risk assessment of any AI solution and monitor if it is operating legally and as intended.

Key takeaways for government entities considering AI

The wrongful use of AI can result in potentially significant reputational damage and financial cost to governments.

Before implementing a new AI solution, government entities should have regard to the relevant risks and considerations identified in this article, including by undertaking a comprehensive risk assessment of the proposed solution. If the AI solution is adopted, government entities should regularly monitor whether it is operating legally and as intended.

If government entities are in any doubt, they should seek legal advice to ensure AI solutions are legally robust and comply with all applicable laws and any relevant government policies.

Increasingly, governments throughout Australia are turning to artificial intelligence (AI) to manage resources, improve efficiencies and enhance public services. Despite its benefits, AI comes with several unique risks and challenges which governments should consider before adopting it.

What is artificial intelligence?

In brief terms, AI is the development of computer systems that perform complex tasks in a manner which mirrors human intelligence, machine learning being a prominent example. To solve these tasks, an AI system evaluates input data using an algorithm to produce a desired output. The algorithm, which comprises the evaluative process and computational elements of the AI, determines how the system functions and the output it generates.

AI algorithms are generally either programmer-driven, meaning the evaluative criteria are specified by human programmers, or data-driven, meaning the evaluative criteria are not specified by a programmer but are instead generated by the AI from sample data sets.
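
To illustrate the distinction, the short sketch below contrasts the two approaches in a hypothetical claims-screening task. The function names, thresholds and sample data are invented for illustration only, and the data-driven variant uses the open-source scikit-learn library:

  # Hypothetical illustration only: thresholds and sample data are invented.
  from sklearn.linear_model import LogisticRegression

  # Programmer-driven: the evaluative criteria are written by a human.
  def flag_claim_rule_based(amount: float, prior_claims: int) -> bool:
      return amount > 10_000 or prior_claims > 5

  # Data-driven: the evaluative criteria are learned from a sample data set.
  sample_inputs = [[500, 0], [12_000, 1], [300, 7], [15_000, 6], [800, 2]]
  sample_labels = [0, 1, 1, 1, 0]  # 1 = flagged in past manual reviews
  model = LogisticRegression().fit(sample_inputs, sample_labels)

  def flag_claim_learned(amount: float, prior_claims: int) -> bool:
      return bool(model.predict([[amount, prior_claims]])[0])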

The use of AI by governments

Both the Commonwealth and State governments are investing heavily in, and increasingly embracing, AI technologies and solutions.

AI has recently been used (or has been flagged for use) by government entities throughout Australia in a range of areas, including:

  • public-facing services, including AI-based traffic monitoring systems and chatbots;
  • fraud prevention and law enforcement;
  • national security and defence, including detecting physical and cyber security breaches and supporting Commonwealth Defence training and military capabilities (for example, delivering AI-augmented scenarios using virtual reality and training simulation technology);
  • identifying and monitoring threatened species (including through the use of drones and AI analytics);
  • health services; for example, the use of AI by NSW Health to detect sepsis in hospitals;
  • public transport services; for example, using AI to develop predictive algorithms to assist in road safety and to reduce road accidents; and
  • COVID-19 management; for example, the use of AI to give customers insights into the COVID-19 risk of a train journey and how full the train is likely to be at a given time.

The benefits of AI to government and the public

AI can provide potentially significant benefits to governments and the users of their services. It can deliver time and cost savings for government entities by increasing efficiency and productivity and by reducing resourcing requirements. Further, if used correctly, AI can reduce the potential for human error.

AI can also increase the quality of the public services provided by government, including through more efficient operations, “smart cities” and smarter technology solutions. In certain contexts, AI can also improve health services and outcomes.

The risks of AI

While there are several benefits of AI, there are also potential risks and adverse consequences.

Bias and other ethical issues

The use of AI raises a number of ethical challenges, especially where AI is deployed to make decisions which can adversely impact the rights and interests of individuals. Although AI can reduce the influence of human cognitive bias, it also has the potential to introduce algorithmic bias and/or to operate unfairly where its underlying algorithm is flawed.

The Commonwealth’s “RoboDebt” scheme is a case in point: the automated process averaged annual income data across fortnights, incorrectly assuming that income was earned evenly throughout the year, and as a result issued some requests for the payment of money which was not in fact owed.
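
To make the flaw concrete, the sketch below uses entirely hypothetical payment rates, thresholds and income figures (not the actual rules of the scheme) to show how averaging an annual income figure across fortnights can manufacture a “debt” for a person who reported their income correctly:

  # Hypothetical illustration only: the payment rates, thresholds and income
  # figures below are invented and do not reflect the scheme's actual rules.

  FORTNIGHTS_IN_YEAR = 26
  MAX_PAYMENT = 500        # hypothetical full fortnightly benefit
  INCOME_FREE_AREA = 300   # hypothetical income allowed before payments reduce
  TAPER_RATE = 0.5         # hypothetical 50c reduction per $1 above the threshold

  def entitlement(fortnightly_income: float) -> float:
      """Benefit payable for a fortnight, given income earned that fortnight."""
      reduction = max(0.0, fortnightly_income - INCOME_FREE_AREA) * TAPER_RATE
      return max(0.0, MAX_PAYMENT - reduction)

  # The person claimed benefits for 13 fortnights while unemployed (income $0,
  # correctly reported), then worked the other 13 fortnights earning $1,000
  # each without claiming benefits. Annual income: $13,000.
  benefit_fortnights = 13
  annual_income = 13 * 1_000

  # Correct position: $0 income in every fortnight in which benefits were claimed.
  correct_entitlement = benefit_fortnights * entitlement(0)

  # Flawed recalculation: annual income averaged over all 26 fortnights
  # ($500 each), then applied to the fortnights in which benefits were claimed.
  averaged_income = annual_income / FORTNIGHTS_IN_YEAR
  recalculated = benefit_fortnights * entitlement(averaged_income)

  print(f"Spurious debt raised: ${correct_entitlement - recalculated:,.2f}")
  # -> Spurious debt raised: $1,300.00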

The NSW Government’s AI Ethics Policy identifies that “the best use of AI will depend on data quality and relevant data. It will also rely on careful data management to ensure potential data biases are identified and appropriately managed.”

It is strongly recommended that the outputs of AI are tested thoroughly in a test environment before a live version of the program is released, and that the data models, as well as the outputs of the AI, are audited regularly to ensure no incorrect assumptions are being made in the evaluative process. Further, as per the Commonwealth’s AI Ethics Principles, processes should be in place to ensure accountability for AI systems and their outputs.
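
One form such regular output auditing might take is sketched below. The model interface, test-case structure and thresholds are all hypothetical and illustrative, not prescribed by the AI Ethics Principles:

  # Illustrative sketch only: the model interface, test cases and thresholds
  # are hypothetical and would need to be agreed for the particular system.

  def audit_outputs(model, test_cases, max_error_rate=0.05, max_group_gap=0.02):
      """Raise an error if the overall error rate, or the gap in error rates
      between groups, breaches agreed thresholds.

      test_cases: list of (features, expected_output, group) tuples.
      """
      errors_by_group = {}
      for features, expected, group in test_cases:
          is_error = model.predict(features) != expected
          errors_by_group.setdefault(group, []).append(is_error)

      rates = {g: sum(errs) / len(errs) for g, errs in errors_by_group.items()}
      overall = sum(sum(errs) for errs in errors_by_group.values()) / len(test_cases)

      if overall > max_error_rate:
          raise AssertionError(f"overall error rate {overall:.1%} exceeds threshold")
      if max(rates.values()) - min(rates.values()) > max_group_gap:
          raise AssertionError(f"error rates diverge across groups: {rates}")

A check of this kind can be run in a test environment before release and then re-run whenever the model is retrained or its underlying data is refreshed.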

Privacy and data security

AI presents heightened privacy and data security risks because it requires input data to operate, and often requires large volumes of data for algorithmic models to be generated. Where data provided to an AI system has not been properly de-identified, it may contain personal information, triggering various privacy obligations.

As such, it is important for government entities to clearly identify whether the AI system is using any personal information or only aggregated and de-identified information. Where the AI system uses any personal information (or where any personal information has been inputted into the AI system), government entities should ensure that the AI system, and the associated project utilising it, is developed in a manner that complies with all applicable privacy laws. In addition, government entities should carefully assess whether any personal information and other sensitive or confidential data sets are securely stored and protected, and ensure that they are. Wherever possible, in creating AI systems, governments should adopt privacy-by-design principles and should undertake a privacy impact assessment if any personal information is involved.
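
As a simple illustration of one early step, the sketch below (which assumes records arrive as Python dictionaries, and which does not address indirect identifiers or re-identification risk) strips direct identifiers before data is supplied to an AI system:

  # Illustrative sketch only: field names are hypothetical, and removing
  # direct identifiers alone does not guarantee data is de-identified, since
  # combinations of remaining fields may still identify an individual.

  DIRECT_IDENTIFIERS = {"name", "email", "phone", "address", "medicare_number"}

  def strip_direct_identifiers(record: dict) -> dict:
      """Return a copy of the record with direct identifiers removed."""
      return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

  record = {
      "name": "Jane Citizen",
      "email": "jane@example.com",
      "postcode": "2000",
      "age_band": "30-39",
      "service_used": "chatbot",
  }
  print(strip_direct_identifiers(record))
  # -> {'postcode': '2000', 'age_band': '30-39', 'service_used': 'chatbot'}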

Intellectual property

As with all technology projects, where governments engage a contractor to develop AI solutions, they should ensure that they obtain either ownership of, or sufficient licence rights to, the intellectual property in the solution. This is especially important where the AI solution underpins critical government systems or technology solutions. The intellectual property rights should be clearly delineated in the relevant contract so that there is no ambiguity over who owns the intellectual property and what the licensing rights in respect of it are.

Failure to consider intellectual property rights at the outset may result in an agreement being reached where the government entity does not have the requisite rights to investigate or modify the AI solution. Such an agreement could also inhibit the government entity’s ability to correct any underlying defects in the AI solution.

Using AI in decision-making

There are legal limitations on the use of AI in government decision-making processes. The extent to which AI can be used in decision-making (including to issue decisions) depends on legislative powers, as well as a number of factors, including whether the AI itself is making the decision or is being used to assist a decision-maker in forming a conclusion. In Pintarich v Deputy Commissioner of Taxation [2018] FCAFC 79, the Full Court of the Federal Court found that an automated process that resulted in the issuing of a computer-generated letter was not a “decision” because it lacked the relevant “mental process of reaching a conclusion”. The Court therefore concluded that the Deputy Commissioner was not bound by the computer-issued letter.

Whenever making a decision or setting up processes to make a decision (whether by AI or otherwise), government agencies should have regard to general administrative law principles and procedures, as well as all applicable governing legislation (and associated legislative and legal powers) to ensure that all government decisions are legally valid and enforceable. Government agencies should also have regard to all relevant government policies and guides (for example, the Commonwealth Ombudsman’s Automated Decision-Making Better Practice Guide).

How should government entities approach the use of AI?

Guidance for the Commonwealth

Although the Commonwealth has not yet published any mandatory guidelines for the use of AI, voluntary guidelines and principles have been released, including the eight ethics principles set out in Australia’s Artificial Intelligence Ethics Framework and the associated AI Ethics Principles.

Further, where Commonwealth entities seek to use AI for administrative law decisions, careful consideration should be given to the Automated Decision-Making Better Practice Guide.

To the extent possible, Commonwealth entities should seek to comply with these frameworks and guidelines to ensure the AI utilised is appropriate and to assist the Commonwealth in becoming “a global leader in developing and adopting trusted, secure and responsible AI”, consistent with Australia’s AI Action Plan.

Guidance for States/Territories

While some States and Territories have released AI-related ethical policies and guidelines, most have not made such policies and guidelines mandatory. One notable exception is NSW.

NSW has introduced a comprehensive and mandatory AI Ethics Policy and AI Assurance Framework for the use and adoption of AI by NSW State government agencies.

Of note, the AI Assurance Framework, which came into effect in March 2022, sets out a self-assessment process which (subject to some exceptions) NSW entities must apply to projects that contain an AI component. In summary, the framework requires consideration of a range of matters throughout all stages of an AI project, including:

  1. general benefits;
  2. general risks; and
  3. the five mandatory principles specified in the NSW AI Ethics Policy:
     • community benefit;
     • fairness;
     • privacy and security;
     • transparency; and
     • accountability.

The AI Assurance Framework also requires an assessment of legal compliance with applicable legislation, such as the State Records Act 1998 (NSW) and privacy and anti-discrimination laws.

While other State and Territory government entities are not required to follow NSW guidelines and policies, it is recommended that, to the extent applicable, they adopt a similar approach to ensure the risks of using AI are properly considered.

Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.