The legal minefield around AI in Australia
ChatGPT and other generative AI technologies have taken off in recent months, with individuals and businesses using them to create content. Simon Newcomb, Intellectual Property and Technology Partner, talks to Sean Aylmer from Fear & Greed about the wide range of legal issues businesses need to consider when it comes to ChatGPT and other AI.
Transcript
Sean Aylmer
Welcome to the Fear & Greed Daily Interview. I'm Sean Aylmer. AI (Artificial Intelligence) in the form of ChatGPT or Google's offering, Bard, has taken off in recent months. In fact, ChatGPT had one of the fastest tech adoptions ever, with tens of millions of people using the technology to create written content. Other platforms allow a similar experience with images, but the explosion in the use of AI has created a legal minefield, not just over the intellectual property created and used in the process, but over a range of other issues as well. And it's something businesses need to be aware of. Simon Newcomb is a Technology and Intellectual Property Partner at Clayton Utz. Simon, welcome to Fear & Greed.
Simon Newcomb
Thanks for having me. Great to be here.
Sean Aylmer
Let's start with governance, AI governance. What AI governance is needed? What's the framework we should be thinking about to address all these issues?
Simon Newcomb
So look, there are a lot of benefits of generative AI, and people are really focusing on that and exploring how they can use it in their business. But there are also many risks, and people are concerned about those as they can see how AI can affect people's lives. There's a broad range of concerns across fairness, bias and discrimination arising from the training data, particularly where AI is used to make decisions, as well as privacy, security, accuracy, transparency and accountability. There's a thing called the black box problem, where it's hard to tell how an AI is making a decision, so it's not very transparent. And of course safety, all the way from having reliable information up to existential risks from superintelligent AIs. So I guess one way to think about governance is that there's a need for businesses and large organisations, really all organisations, to have a social license to use AI that addresses those concerns. And there are some good frameworks to help with AI governance. The Australian government has published an AI ethics framework, and there are other frameworks from other governments and NGOs.
Sean Aylmer
Okay, so there are frameworks, and we talk about a social license, which is not necessarily regulated. Do we need AI specific regulation? Because certainly there are calls out there for specific regulation. If we do, what would it look like?
Simon Newcomb
Yeah, so I guess people are really getting concerned that the existing laws don't go far enough, and self-governance is really not enough. There's a race by the AI companies to develop and release powerful new models, and so there are calls for AI-specific regulation from governments. Governments around the world, including ours, are looking at this. And almost counterintuitively, I think the AI industry is also calling for it, because they see it as an important part of building trust in the technology. If we look around the world at what's going on, the Europeans are a fair way down the track with a regulation called the AI Act. The way that works is that it classifies AI systems by their level of risk. Systems with unacceptable risks are prohibited, for example social scoring systems that lead to discrimination, like the one that has been reported on in China. Then there are high-risk AI systems that are permitted, but regulated with requirements like registration of those systems, testing, and human oversight and accountability. Examples of those high-risk systems are things like critical infrastructure, educational assessment, biometrics, and use in recruitment, immigration, law enforcement and the courts. Interestingly, what that Act also does is require a system at any level of risk that's interacting with a human to disclose that it's an AI. So you have to know that you're dealing with an AI. And this regulation, like the GDPR, will probably become a global standard, because it's got extraterritorial application and very large fines for breach.
Sean Aylmer
Just on that, it's important. I mean, I think of accounting standards, where you have two main sets of standards, which always causes friction. In AI, is this an opportunity to get one overarching set of rules?
Simon Newcomb
Look, I imagine that's sort of the holy grail of regulation. It doesn't often tend to happen that way, because you have different people with different interests and views, and you're right that you do often end up having to manage a patchwork of regulation. That's certainly happening in privacy globally now, and I guess there is fair potential for that to happen with AI-specific regulation as well.
Sean Aylmer
Stay with me, Simon. We'll be back in a minute.
Sean Aylmer
My guest this morning is Simon Newcomb, Technology and Intellectual Property Partner at Clayton Utz. Okay, so we've sort of been talking big picture. Let's talk about some of the particular issues that affect people using AI. IP is a big one, intellectual property. So who owns AI-generated content?
Simon Newcomb
Well, in Australia at the moment, no one. The thing is that under our copyright legislation, and that's the type of IP right that protects text, images, music and so on, there's a requirement for a human author. And where the content's generated by an AI, there's no human author, and so there's not going to be any copyright. Now you might say, well, what if I put a really detailed prompt in there? Is that really me creating the work rather than the AI? And look, that's arguable for highly detailed prompts, but in most cases it's really just giving the AI an idea, and that's not enough from a copyright perspective. Copyright protects the expression rather than the idea.
Sean Aylmer
So can we breach copyright if we're using AI? If I'm using AI, can I be breaching copyright?
Simon Newcomb
Well, that is possible. And the IP infringement issues are really a bit of an existential risk, I suppose, for the whole industry here, because training an AI involves scraping a huge amount of data from existing sources. So terabytes of data from the internet, from books and journals and so on. And these large models then use that to create the technology, the models, the neural networks that create the content. The thing is that some content creators are not very happy with that, because they're not being compensated or acknowledged. And so there are some cases underway, and these cases at the moment are against the AI creators, the technology companies. There's one against Microsoft and OpenAI over GitHub Copilot, which generates software source code. There's one over artworks against Stability AI and some other image generators. And there's another case by Getty Images, the stock image company, over photographs. They're all alleging that their content has been taken without permission and used to train the models. Now, you can by extension say, well, what if I then generate content out of an AI that reproduces something that came from the training data? And that's where I think it is possible that you could be infringing copyright. It's complex, though, because the training data is not, at least directly, stored in the model. It's stored more like the way our brain stores things, with synapses and neurons with millions of interconnections. And these cases are really going to turn on some of the big exceptions in copyright legislation. So in the US they've got this concept of fair use, where you can create transformative works that don't overly harm the original work. And so that's going to be a big question: does training an AI, creating a new technology like this, fall within those copyright exceptions?
Sean Aylmer
Wow. Okay. So that's a massive area. What about privacy?
Simon Newcomb
Yeah, so look, similar concerns in a way, in that, again, training these models up involves large-scale collection and processing of personal information. And many people are concerned about privacy, and indeed so are regulators. You might have seen recently that Italy temporarily banned ChatGPT over privacy concerns. And in Australia we had a high-profile privacy incident with that company Clearview AI, which you might remember scraped up a whole lot of people's images and created biometrics from them. That was found to be in violation of our Privacy Act. And some of the issues for these types of technologies are that people aren't notified; they put their information up, say on the web, and they expect it to be used in that context, but maybe not to turn up in other contexts like ChatGPT. The models can also collect or infer sensitive information about people. And we have some changes coming through in our privacy laws that are going to have some bearing on AI technologies, like new rules on automated decision making and a new right to have your personal information erased.
Sean Aylmer
That's almost as complex as IP, privacy. How about we bring it back to the business world a bit? I mean, a transaction, an M&A deal, for example. Are there implications from AI for that sort of thing? If someone is undertaking some sort of business deal or M&A, should they even be worried about AI?
Simon Newcomb
Yeah, it's a good question. Look, we are changing some of our approach to the way we handle those transactions. So in M&A transactions to acquire AI businesses, we've been asking some additional due diligence questions about AI, and we've added some warranties to our share sale agreements to deal with AI-specific issues, because some of these issues could affect the valuation of the investment. And similarly in procurement transactions, there are some unique considerations when procuring AI systems, because they work differently to traditional rule-based systems.
Sean Aylmer
What about things like employment?
Simon Newcomb
Yeah, well look, lots of people are concerned, I suppose, about changes in their roles, or potentially their job not existing anymore. And so employers really need to manage the way they deal with their workforce and communicate with people, and in some cases they have obligations to consult people before making these types of changes to their business processes. Our workplace experts are advising that employers should be having conversations with employees as early as possible. And there are also issues of discrimination and bias in using AI in performance management systems, for example in using it for recruitment or in managing underperformance.
Sean Aylmer
It just sounds to me, Simon, that AI is going to eventually, or inevitably is probably a better way of putting it, touch all sorts of aspects of our business life and our home life too.
Simon Newcomb
I think that's right, Sean.
Sean Aylmer
And so if you had advice for a business now, how would it get ahead of the curve on AI?
Simon Newcomb
Yeah, so look, I would be giving early guidance to people, as we certainly are in our business, and encouraging them to explore, but to do that safely. So making sure they don't put confidential information into ChatGPT, and very importantly, ensuring that a human reviews everything it produces, because it is prone to getting things wrong. Then going on from there, I think businesses should understand the types of issues we've been talking about and develop appropriate frameworks to manage them, or modify their existing frameworks to build these types of issues in, adding AI to your cybersecurity program, for example. And then in the medium term, I think there's potential for much more tailored projects, where organisations start to use their own data to produce more relevant, targeted and accurate services by fine-tuning models. And in those sorts of projects, where there are big business process changes or much more targeted solutions, I think they really need to incorporate legal compliance by design from the outset.
Sean Aylmer
Simon, thank you for talking to Fear & Greed.
Simon Newcomb
It's been great to be here. Thanks, Sean.
Sean Aylmer
That was Simon Newcomb, Technology and Intellectual Property Partner at Clayton Utz. This is the Fear & Greed Daily Interview. Join us every morning for the full episode of Fear & Greed, Australia's most popular business podcast. I'm Sean Aylmer. Enjoy your day.