New OAIC privacy guidance on AI: what you need to know
The Office of the Australian Information Commissioner (OAIC) has issued Guides to help organisations understand how the Privacy Act 1988 (Cth) and the associated Australian Privacy Principles (APPs) apply to artificial intelligence (AI), and to outline the regulator’s expectations for the development and deployment of AI. The Guides also include practical tips to promote privacy compliance in the adoption of AI, and are divided into two sets:
- one for businesses deploying commercially available AI products; and
- another for developers using personal information to develop and train generative AI models.
Privacy in developing and deploying AI
AI systems are fundamentally data driven, and personal information may be contained in both input data and in generated output.
In the Guides, the OAIC underscores the importance of robust privacy governance and safeguards in the development, application and use of AI. Further, in a 21 October 2024 media release about the Guides, the OAIC reiterated that addressing privacy risks arising from AI, including those associated with generative AI, is a top regulatory priority. The OAIC expects organisations to take a cautious approach, assess risks, and prioritise privacy in their AI use.
The Guides align with the OAIC’s focus on promoting privacy in the development and deployment of AI and other emerging technologies, and on improving compliance. The OAIC has made clear that it is prepared to take action to ensure privacy compliance. The Guides also align with the Commonwealth Government’s broader focus on the responsible and safe use of AI technologies.
Who can benefit from these Guides?
While any organisation or government agency considering adopting AI can benefit from these Guides, they are targeted at organisations that must comply with the Privacy Act and:
- are planning, designing, and building generative AI, including compiling datasets for training or fine-tuning models; or
- are using commercially available AI products or systems.
What should organisations do?
The Guides serve as a timely reminder of the importance of selecting, deploying and using AI solutions in a privacy-conscious manner. They provide valuable guidance and supporting artefacts, such as checklists, to assist in this process.
In light of the Guides, and to support the adoption of AI in a privacy-safe and legally compliant manner that builds community trust, organisations should consider the following:
- Adopt a risk-based approach to the adoption and deployment of AI: Organisations should assess the privacy risks of AI solutions against their intended purposes. For example, AI systems that collect or use “sensitive information” (for example, health information), or that make decisions with a significant impact on people (with or without human intervention), are likely to be high-risk privacy activities and may require more robust safeguards.
- Be careful what data you input into AI systems: The Guides recommend, as a matter of best practice, that personal information not be input into publicly available generative AI tools. The Guides also highlight that privacy obligations apply to “personal information” input into AI systems, even if that personal information is publicly available.
- Consider the application of privacy laws to generated or inferred information: The Guides state that personal information includes generated or “artificially inferred” information. This could create compliance challenges for organisations, particularly where compliance frameworks assume that data is collected from a reliable source or that there is nothing the organisation can reasonably do to verify it. In particular, APP 10 requires organisations to take reasonable steps to ensure that personal information is accurate, up-to-date and complete. Inaccuracy is a major risk inherent in AI systems, and it may be challenging for an organisation to demonstrate compliance with APP 10 in respect of information inferred or generated by an AI system, particularly if that information is then used or disseminated by staff within the organisation or incorporated into other material. In its September 2023 response to the Privacy Act Review Report, the Australian Government agreed in principle to amend the definition of personal information under the Privacy Act (including so that the definition extends to information that “relates to” an individual). It also agreed in principle to clarify that “collection” covers information obtained from any source, including inferred or generated information. However, these amendments have not yet been incorporated into the Privacy Act. It remains to be seen whether, and to what extent, the next tranche of proposed reforms clarifies the scope and meaning of key concepts such as personal information and collection.
- Use of data held by entities: Entities covered by the Privacy Act should consider the original or primary purpose of collection when seeking to use data they hold containing personal information as an input into an AI system (e.g. for grounding). Under APP 6, entities may only use or disclose personal information for the primary purpose for which it was collected, unless the entity has the individual’s consent or another relevant exception under the Privacy Act applies. Practically, if an entity intends to train or ground an AI system using large quantities of existing data, it is prudent to consider either excluding personal information altogether or de-identifying the data prior to use, in a way that is not susceptible to re-identification (an illustrative de-identification sketch follows this list).
- Be transparent: Where entities collect, use or disclose personal information in the context of AI (for example, an AI chatbot), they should be transparent about those activities in their privacy policies and privacy collection notices.
- Develop comprehensive data management policies and procedures: Classify and oversee the collection, storage, use and transfer of personal and sensitive data throughout its lifecycle, including in the context of AI products and solutions (a simple classification sketch follows this list). This approach assists in tracking and reporting on data used across AI products and systems, identifying accessible data stores and assessing associated data-related risks. It also promotes transparency about data usage practices, supports privacy compliance and fosters stakeholder trust. Furthermore, implementing proper checks and balances to manage data helps to ensure the consistent quality and accuracy of data fed into AI systems and minimises the risk of adverse privacy impacts, biases and inaccurate AI responses.
- Maintain an AI inventory: Entities should maintain an AI inventory that details the AI products and systems used across the organisation, including any personal information each AI product or system accesses and the methods of access (a simple inventory sketch follows this list). Additionally, entities should regularly audit AI products and systems to ensure that the collection, use, disclosure and processing of personal information remains in accordance with the APPs and the organisation’s broader obligations at law.
- Build a privacy-compliant program: Developers of AI solutions, and entities procuring them, should ensure that AI systems are designed and built with privacy compliance at the forefront. Users of AI should also ensure that they have a privacy-compliant program governing the use of AI within their organisation. This includes establishing clear policies and procedures on data minimisation, obtaining informed consents where required under the Privacy Act (especially in the context of sensitive information or secondary uses of personal information), and providing individuals with “opt-out” options. Consistently performing critical privacy activities, such as updating privacy policies and conducting privacy impact assessments, also helps identify and manage privacy risks, demonstrates compliance and builds trust.
- Conduct security risk assessments: Under APP 11 of the Privacy Act, regulated entities must take “such steps as are reasonable in the circumstances” to protect personal information from misuse, interference and loss, and from unauthorised access, modification or disclosure. To support compliance with this obligation, we recommend that entities conduct thorough security risk assessments across all AI components, including source code, data storage, network infrastructure and user interfaces, and put adequate controls around them. When purchasing AI products or systems from vendors, entities should carry out comprehensive due diligence of the vendor and the vendor’s AI offering to uncover any potential security risks, examining aspects such as hosting location, access controls, data encryption and regulatory compliance (among others).
- Train AI models in a privacy-compliant manner: The OAIC Guidance on Privacy and Developing and Training Generative AI Models notes that developers using data, particularly large volumes of data, to train generative AI models should assess whether the information includes any personal information. If so, developers should take steps to ensure that any such information is accurate and is used consistently with the Privacy Act (a rough screening sketch follows this list).
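To illustrate the de-identification point above, the following is a minimal sketch, assuming a simple record schema and a salted-hash approach to pseudonymising direct identifiers. The field names and functions are hypothetical, and salted hashing alone may not achieve de-identification under the Privacy Act where quasi-identifiers (such as postcode or date of birth) remain in the data.

```python
# Illustrative sketch only: pseudonymise direct identifiers before a record
# enters an AI pipeline. Field names and the hashing approach are assumptions;
# residual quasi-identifiers may still permit re-identification.
import hashlib
import os

# Fields treated as direct identifiers in this hypothetical schema.
DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}

# A random salt kept outside the dataset so hashes cannot be trivially reversed.
SALT = os.urandom(16)

def pseudonymise(value: str) -> str:
    """Replace an identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:12]

def deidentify(record: dict) -> dict:
    """Pseudonymise direct identifiers; pass other fields through unchanged."""
    cleaned = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            cleaned[field] = pseudonymise(str(value))
        else:
            cleaned[field] = value
    return cleaned

if __name__ == "__main__":
    sample = {"name": "Jane Citizen", "email": "jane@example.com",
              "postcode": "2000", "query": "billing dispute"}
    print(deidentify(sample))
```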
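On the data-classification point, here is an illustrative sketch assuming a simple three-level model; the levels, names and handling rule below are assumptions only and should be drawn from the organisation’s own data management policy.

```python
# Illustrative data-classification scheme. The three levels and the handling
# rule are assumptions for illustration, not a statement of legal requirements.
from enum import Enum

class DataClass(Enum):
    PUBLIC = "public"
    PERSONAL = "personal"      # personal information under the Privacy Act
    SENSITIVE = "sensitive"    # e.g. health information ("sensitive information")

# Example handling rule: may data of this classification be input into an AI system?
AI_INPUT_ALLOWED = {
    DataClass.PUBLIC: True,
    DataClass.PERSONAL: False,   # only after de-identification, consent or another APP 6 basis
    DataClass.SENSITIVE: False,
}

def can_feed_to_ai(classification: DataClass) -> bool:
    """Gate AI pipelines on a record's classification tag."""
    return AI_INPUT_ALLOWED[classification]

if __name__ == "__main__":
    print(can_feed_to_ai(DataClass.PERSONAL))  # False
```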
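On the AI inventory point, the sketch below shows one minimal way to record AI systems, the personal information they access and the last privacy review, assuming a small in-memory registry; the fields shown are assumptions, and larger organisations may prefer a governance tool or database.

```python
# Illustrative AI inventory: a small registry of AI systems and the personal
# information they access, with a helper to flag overdue privacy reviews.
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    name: str                          # e.g. vendor chatbot, internal RAG service
    vendor: str
    personal_info_accessed: list[str]  # categories of personal information
    access_method: str                 # e.g. "API", "file upload", "database connector"
    last_privacy_review: date

inventory: list[AISystemRecord] = []

def register(record: AISystemRecord) -> None:
    inventory.append(record)

def overdue_reviews(today: date, max_age_days: int = 365) -> list[AISystemRecord]:
    """Flag systems whose last privacy review is older than the audit interval."""
    return [r for r in inventory if (today - r.last_privacy_review).days > max_age_days]

if __name__ == "__main__":
    register(AISystemRecord(
        name="CustomerSupportBot",
        vendor="ExampleVendor",
        personal_info_accessed=["name", "contact details", "support history"],
        access_method="API",
        last_privacy_review=date(2024, 3, 1),
    ))
    for r in overdue_reviews(date.today()):
        print(f"Review overdue: {r.name} ({r.vendor})")
```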
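Finally, on assessing training data for personal information, the following is a rough screening sketch using simple regular expressions for obvious identifiers. The patterns are illustrative assumptions only; pattern matching alone cannot reliably detect personal information (names, context and quasi-identifiers require richer techniques and human review).

```python
# Rough, illustrative screen for obvious personal information in training text.
# Patterns are assumptions; flagged records should go to human review, and an
# absence of matches does not mean the text is free of personal information.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "au_phone": re.compile(r"(?:\+61|0)[2-478](?:[ -]?\d){8}"),
}

def flag_personal_info(text: str) -> dict[str, list[str]]:
    """Return any pattern matches so a human can review the record before training."""
    hits = {label: pattern.findall(text) for label, pattern in PATTERNS.items()}
    return {label: found for label, found in hits.items() if found}

if __name__ == "__main__":
    doc = "Contact Jane on 0412 345 678 or jane@example.com about the invoice."
    print(flag_personal_info(doc))
```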
Stocktakes and assessments of AI deployment and usage should be conducted on an ongoing basis to identify any new privacy touchpoints and to support the responsible, legal and ethical use of AI products and systems. In addition, entities should ensure that their deployment and use of AI is consistent with any additional obligations they may have at law (including under any applicable international laws, such as the EU General Data Protection Regulation).
While the Guides are focused on privacy compliance, compliance with the Guides will also assist organisations to comply with the voluntary guardrails contained in the Voluntary AI Safety Standard recently issued by the Australian Government.