Procuring AI: Risks organisations must consider

Simon Newcomb, Nicole Steemson
13 Nov 2024
3 minutes

Procuring AI solutions presents new and unique risks that organisations must consider to ensure the successful implementation of an AI product.

Artificial intelligence is transforming organisations. As the IT industry continues to incorporate AI-enabled features and develop new applications, organisations are increasingly focused on procuring these technologies to unlock the benefits. But procuring AI solutions presents new and unique risks and challenges that organisations must consider to ensure successful deployment. Fortunately, these risks and challenges can be mitigated or addressed throughout the procurement and contracting process with effective governance, careful consideration of data, IP and privacy issues, and attention to legal compliance.

Governance

Effective governance is essential for maximising the benefits of AI and reducing potential risks, helping to build trust both within your organisation and with customers.

The supplier's governance and risk management processes can significantly affect the technology you're implementing. For example, if a supplier lacks effective data governance, it could inadvertently expose your organisation to allegations of data misuse, or to problems with accuracy or bias in decisions, increasing your exposure to the legal and reputational damage that follows.

It's important to evaluate the supplier's governance frameworks during the procurement process. The supplier's practices should be aligned with your organisation's own processes to minimise risks and ensure seamless integration. You might also consider whether to require certification under a recognised AI standard (such as ISO/IEC 42001).

AI governance should be clearly outlined in your contract, either by adopting an existing framework or requiring the supplier to develop one.

Capability and performance

AI's capabilities can be broad, and its results, based on complex probability calculations, are often difficult for humans to predict. For this reason, when defining requirements for the solution in your contract, it can be effective to focus on the desired outcomes the technology will deliver. These will vary depending on the application but may include outcomes relating to relevancy, accuracy, bias, system performance and scalability.

An agile or iterative approach to implementation is also likely to work better for many AI solutions, where it is difficult to specify detailed requirements from the outset. The contract should also provide for continuous testing throughout the life of the application, as the technology and data change over time.
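By way of illustration, the sketch below (in Python) shows one way such continuous testing could be automated: the AI solution is periodically rerun over an agreed benchmark and checked against a contracted accuracy threshold. The names, dataset and threshold are hypothetical assumptions for the example, not terms any standard contract prescribes.

# Illustrative sketch only: a periodic acceptance test for an AI solution.
# All names (evaluate_model, GoldenCase, the threshold) are hypothetical;
# the agreed outcomes and thresholds would come from the contract.

from dataclasses import dataclass

@dataclass
class GoldenCase:
    prompt: str      # agreed test input
    expected: str    # agreed acceptable output

def evaluate_model(ask_model, golden_cases, accuracy_threshold=0.95):
    """Rerun the AI solution over an agreed benchmark and check whether
    output accuracy still meets the contracted threshold."""
    correct = sum(
        1 for case in golden_cases
        if ask_model(case.prompt).strip() == case.expected
    )
    accuracy = correct / len(golden_cases)
    return {"accuracy": accuracy, "passed": accuracy >= accuracy_threshold}

# Example run with a stand-in model; in practice ask_model would call the
# supplier's system, and each result would be logged as evidence of ongoing
# compliance with the contracted outcomes.
cases = [GoldenCase("2 + 2 =", "4"), GoldenCase("Capital of France?", "Paris")]
print(evaluate_model(lambda prompt: "4" if "2 + 2" in prompt else "Paris", cases))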

Data, IP and privacy

Large volumes of data are central to the success of any AI system, but this introduces new risks for organisations, including risks relating to IP, privacy, data residency and training on organisational data.

In terms of IP risks, AI output that reproduces source or training material could infringe copyright. Many copyright cases are working their way through the courts in which creators allege that AI companies have infringed copyright by training or using models based on their content. You might seek to shift this risk to the supplier in the contract.

Breaching privacy laws is another risk. Both input and output data may contain personal information, creating legal and reputational risks for organisations. For example, Australian privacy laws require organisations to take reasonable steps to ensure that personal information used or disclosed (including generated content) is accurate, up to date, complete and relevant. The solution should be designed to meet privacy requirements.

Data residency is a common consideration in the procurement process. While some AI application suppliers host data in Australia, many use offshore large language models. Contracts should address the rules around any data leaving Australia, and if data is processed offshore, you might seek to require it to be immediately deleted after processing.

Training on organisational data is another risk to be aware of. Suppliers may want to use client data to improve their AI systems. Organisations need to assess whether this is appropriate based on the sensitivity of their data. You might choose to introduce additional safeguards in the contract, such as de-identifying or aggregating the data.
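As a simple illustration of de-identification, the sketch below redacts obvious identifiers (email addresses and phone numbers) from text before it is shared with a supplier. It is a minimal example only: the patterns are assumptions, and robust de-identification generally requires dedicated tooling and human review, particularly for names and other contextual identifiers.

# Illustrative sketch only: redacting obvious personal identifiers before
# data is shared with a supplier for model improvement. The patterns below
# are assumptions and will not catch names or contextual identifiers, which
# typically require dedicated de-identification tools.

import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\+?\d[\d\s-]{7,}\d"), "[PHONE]"),        # phone-like numbers
]

def deidentify(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(deidentify("Contact Jane on +61 400 000 000 or jane@example.com"))
# -> "Contact Jane on [PHONE] or [EMAIL]"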

Legal compliance

AI in Australia is currently regulated by a broad array of existing laws, but the Federal Government is considering new rules for organisations using high-risk AI. International laws, such as the EU AI Act, might also apply if the AI is used by consumers outside Australia.

Contracts should require suppliers to comply with all relevant laws and provide legally compliant deliverables. It can also be beneficial to build into contracts specific compliance measures that may be required by future laws. These include transparency, so users know when they are interacting with AI or when outputs are AI-generated; record keeping, covering conversations, source content and metadata; and contestability, so users have a process to raise concerns about AI-generated outputs.
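To make the record-keeping and transparency measures concrete, the sketch below shows one simple way an AI interaction could be logged with its metadata. The field names and log format are hypothetical assumptions for illustration; any actual requirements would be set out in the contract or the applicable law.

# Illustrative sketch only: an append-only record of AI interactions
# (conversation, source content references, metadata) of the kind a
# record-keeping clause might require. Field names are assumptions.

import datetime
import json
import uuid

def record_interaction(log_path, prompt, output, model_version, sources):
    """Append one AI interaction, with metadata, to a JSON-lines log file."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,   # which model produced the output
        "prompt": prompt,                 # the conversation input
        "output": output,                 # the AI-generated output
        "sources": sources,               # source content relied on, if known
        "ai_generated": True,             # transparency flag for downstream use
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

record_interaction("ai_log.jsonl", "Summarise this contract...", "Summary...",
                   "model-v1", ["contract.pdf"])

A log of this kind supports contestability as well: if a user challenges an AI-generated output, the record identifies the model version, inputs and sources involved.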

Further practical guidance for the procurement of AI is contained in the Voluntary AI Safety Standard recently issued by the Australian Government. 

Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.