Artificial intelligence and legal practice: new guidelines for practitioners

Greg Williams, Stephanie Khan, Alex Corsaro, Lilly Langford
11 Jul 2024

With the rapid evolution of artificial intelligence (AI) tools and their increasing use across the legal industry, the Victorian Supreme Court has recently released guidelines on the responsible use of AI in litigation. While the Queensland Courts have published Guidance for AI use by non-lawyer litigants, this is the first time an Australian court has specifically addressed the use of AI by legal practitioners.

Although it is too early in Australia for uniform laws or even practice notes on the use of AI in court proceedings, these guidelines are a first step in that direction and are expected to set a standard that judges in other courts may look to and draw on.

Overview of the Guidelines

The Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation set out a series of principles designed to assist both legal practitioners and self-represented litigants when using AI in the conduct of proceedings.

While the Guidelines recognise that AI is a broad concept and that certain AI tools are already widely used in search engines and information management systems, they also mark a pivotal step in the Supreme Court of Victoria's acceptance that more advanced, generative AI tools will be used in litigation. The tools the Guidelines appear to contemplate are those which can generate bespoke, tailored work-product from particular inputs such as text prompts (ChatGPT, for example).

The Guidelines underscore the importance of transparency, responsibility and caution in integrating AI into legal processes, and are a reminder that legal professionals must carefully consider whether their use of AI is consistent with their professional obligations to their clients and to the court. When using AI tools, legal practitioners and self-represented litigants should:

  • understand the capabilities and limitations of AI tools, including their propensity to generate inaccurate work-product or results;
  • be aware of the risks AI tools pose to confidentiality and privacy;
  • disclose, where appropriate, the extent of the assistance provided by AI programs, so that neither the other parties to the litigation nor the court are misled about the nature of the work undertaken;
  • only use AI programs to assist in completing legal tasks in a manner consistent with the obligations imposed on legal practitioners, including the obligation of candour to the Court and the obligation under the Civil Procedure Act 2010 (Vic) to ensure that documents prepared and submissions made have a proper basis;
  • identify where appropriate the particular AI tool used to provide context for judicial officers (this is specific to self-represented litigants and witnesses); and
  • check that AI-generated text is not out of date, incomplete, inaccurate, incorrect, inapplicable to the jurisdiction or biased.

Key takeaways for legal practitioners

The Guidelines emphasise that particular caution should be applied when generative AI tools are used to assist in preparing affidavit material, witness statements or other documents created to represent the evidence or opinion of a witness. In all cases, witness statements should reflect the person's own knowledge and words. Similar considerations arise for data compiled for use in expert reports or opinions, which must also remain compliant with the Expert Witness Code of Conduct.

The Guidelines also recognise that, in appropriate instances, legal practitioners should proactively consider using AI to improve productivity and efficiency, noting the expectation that lawyers be adept with common technologies. The Guidelines give Technology Assisted Review (TAR) as one such example. TAR uses machine learning for large-scale document review: starting from a smaller set of documents initially reviewed by humans, it iteratively learns to classify which documents within a large pool are likely to be relevant to the case (and therefore discoverable). A simplified sketch of this iterative approach appears below.
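For illustration only, the following sketch shows the core loop that underlies TAR in highly simplified form, using the open-source scikit-learn library. The documents, labels and review threshold are entirely hypothetical, and commercial TAR platforms are considerably more sophisticated; this is not the implementation contemplated by the Guidelines, merely a picture of how iterative relevance classification works.

```python
# Minimal sketch of the TAR loop: a classifier is trained on a small
# human-reviewed seed set, scores the wider pool for likely relevance,
# and the most uncertain documents are routed back for human review in
# the next round. All documents and labels below are hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical discovery pool and a seed set already coded by human
# reviewers as relevant (1) or not relevant (0).
pool_docs = [
    "email re settlement deed and indemnity clause",
    "lunch menu for the office party",
    "draft witness statement concerning the contract breach",
    "IT ticket: printer on level 3 is jammed",
    "memo on limitation periods under the Civil Procedure Act",
]
seed_docs = [
    "correspondence about the indemnity and settlement terms",
    "catering invoice for staff event",
    "notes on breach of contract and damages",
    "facilities request for a new desk chair",
]
seed_labels = [1, 0, 1, 0]  # assigned by human reviewers

# Train a simple relevance classifier on the human-reviewed seed set.
vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
clf = LogisticRegression().fit(X_seed, seed_labels)

# Score every document in the pool for probability of relevance.
X_pool = vectorizer.transform(pool_docs)
relevance = clf.predict_proba(X_pool)[:, 1]

# High-scoring documents are flagged as likely discoverable; the most
# uncertain ones (scores near 0.5) go to human reviewers, after which
# the classifier is retrained - the "iterative learning" the
# Guidelines describe.
for doc, score in sorted(zip(pool_docs, relevance), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
uncertain = [pool_docs[i] for i in np.argsort(np.abs(relevance - 0.5))[:2]]
print("Next for human review:", uncertain)
```

In practice these review rounds repeat, typically until sampling and validation suggest the system is finding relevant documents at an acceptable rate; human reviewers remain in the loop throughout, which is consistent with the Guidelines' emphasis on practitioner oversight.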

Notably, legal practitioners remain accountable for ensuring the accuracy of documents created with AI assistance, and for exercising judgment and professional skill in reviewing any work-product prepared with the assistance of AI before it is provided to the Court. Errors or omissions in AI-generated content are the responsibility of the legal practitioner who signs, certifies or files the document; the fact that a document was prepared with the assistance of generative AI will not excuse errors or omissions, nor absolve legal practitioners of their obligations to the Court.

While generative AI can be beneficial in legal proceedings, it is essential that legal practitioners have a solid understanding of both the benefits and, importantly, the shortcomings of the AI tools they propose to use.

Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.