Generative AI Miniseries - Ep 3: Data Protection in the New Frontier of Generative AI: Insights from Legal Experts
In this third episode of our Generative AI Miniseries, host Will Howe (Director of Data Analytics) speaks with Steven Klimt (Partner, Banking & Financial Services) and Lyndal Sivell (Special Counsel, IP & Technology) to discuss data privacy concerns in the new frontier of generative AI. As ChatGPT and similar technology products are fuelled by both personal and organisational data, where do the risks lie and how do we protect data?
This series takes a deep dive into a number of topics related to generative AI and its applications, as well as the legal and ethical implications of this technology, and provides practical takeaways to help you navigate what to expect in this fast-evolving space.
Related Insights
- Ep1: Generative AI & intellectual property – whose line is it anyway?
- Ep2: The workplace and employment implications of Generative AI – Risky business?
- Ep4: Fighting fraudsters: How Generative AI is both enabling and preventing fraud in a game of cat and mouse
- Ep5: AI Powered Public Renaissance: The Dawn of Transformative Government Services
- Ep6: The Future is Now: Next-Gen Lawyers sound off on ChatGPT, ethics, and the future of Generative AI in law
Transcript
William Howe
Hi, and welcome to the Clayton Utz generative AI vodcast series. I'm your host, Will Howe. I lead Clayton Utz's data analytics team, where we're building with generative AI and these new technologies. We have a really exciting episode planned today, where we'll cover a whole bunch of interesting topics around privacy. I'm very pleased to be joined by two of my colleagues. First is Steven Klimt. Steven is a partner of many years' standing in our financial services regulatory practice, which includes privacy. He has many years of experience advising on privacy matters in relation to his clients' changing business practices, data flows, disclosure documents and policies, including in relation to complaints made to the OAIC. He's been a member of the Law Council of Australia's privacy committee since it was formed. Also joining me is Lyndal Sivell. Lyndal is a special counsel in our technology and intellectual property practice. Her practice is centred around complex technology transformations, and she has a particular interest in data. Lyndal studied and later practised for many years in London, where she helped a number of clients implement the GDPR, so she has firsthand experience of that. Lyndal works closely with clients on privacy, data commercialisation and data governance matters. So thank you and welcome, Steven and Lyndal, very excited to have you on for this. Maybe let's start with a bit of a conversation about data, and the bones of where this all comes from, because generative AI is fuelled by data. Lyndal, it's all about the data, isn't it?
Lyndal Sivell
Yeah, it's an interesting thing, you know, we're all getting swept up in it, right? It's super exciting, even this week with the launch, or the announcement, around GPT-4. It's super exciting to think about these products, but I think sometimes we forget just how they're constructed and what you need to build a product like this. And it's very much the case that these sorts of products exist because, as you said, they're fuelled by data. There are immense language models that sit behind these products. To create one and make it work, you first need to establish it through lots and lots of data, but then you also need the ability to improve. You need to keep ingesting more and more and more to build a solution and a product that works effectively. So it's pretty amazing to think about that. Of course, there are pitfalls in everything, right? If you think about how these things are constructed, the amount of data that was consumed, even initially, to build these sorts of products is absolutely immense, and it's highly likely that personal information comprises a subset of that. It would be really difficult to argue that there wasn't some personal information swept up in that process. And even going forward, you know, we're tech lawyers, we get super excited when we see new product launches and things like that, but you think about GPT-4: that's stretching it even further. It's looking now at photographs and images and all sorts of things. So it's an absolute array of data that's coming into play, and some of it is certainly personal information, which is something I'm still grappling with. Particularly when we put more of an ethics lens or a human rights lens on it, and certainly a privacy lens, which Steven and I are going to talk to today, it's very much about, you know, how is that done? It's not like OpenAI rang me up and said, excuse me, Lyndal, can I please have some of your data to build this product. So there's a question there about that sort of thing. And there are different ways of thinking about it: there are obviously lawful elements, potentially unlawful elements. But then again, is it the right way to do things, and how can people confirm what data of theirs is in these solutions?
William Howe
Well, to that point, I mean, surely it's already covered under some of the existing regimes. Steven, from your perspective, how does the Privacy Act apply to generative AI?
Steven Klimt
Well, generative AI in many senses works by effectively scraping data from various sources. And that data is, as Lyndal pointed out, likely to include personal information. Or, to put it slightly differently, it's going to be very difficult to exclude personal information from the data that it scrapes, whether that information is captured directly or by virtue of aggregating the different sources of the information scraped. And so generative AI, and potentially its users, will be subject to the Privacy Act if personal information is collected. For information to be collected, it has to be stored somewhere for a point in time, rather than merely being accessed, and there will be issues as to that. It may well be that the generative AI itself doesn't store anything, but the people who use it will. In many cases, it's highly likely that there will be some collection of personal information, and once personal information is collected, that's when the Privacy Act steps in. Then there are all sorts of obligations under the Privacy Act that need to be considered by those who are collecting information in the course of using artificial intelligence. So if, for example, the information collected is sensitive information (sensitive information includes information about a person's political beliefs, health information about a person, information about a person's religion, biometric information, and a number of other things too), generally that information under current Australian privacy law can only be collected with the individual's consent. Say you've got an AI application that's collecting information, and it may even be aggregating information to create information that's sensitive information. How would you ensure that the data subject, the individual concerned, has given consent? Similarly, there's an obligation that personal information must be collected directly from an individual unless it's unreasonable or impracticable to do so. With AI scraping information from various sources, individuals probably have no idea that their personal information is being collected through an AI application, let alone how it's being used by that application. And so users of generative AI would need to say, well, it's unreasonable or impracticable to collect the information directly from the individual. Then there's... sorry, Lyndal.
Lyndal Sivell
I was just going to say, I had this really interesting conversation the other day with someone who, stepping aside from the pure legal side of things, called it extraction fairness, and I hadn't really thought about it that way before. But it's interesting to think about, because of some of the things you've mentioned, Steven, some people think, oh, it's publicly available, so it's OK, like it's no longer personal information or personal data. And in some jurisdictions that can be the case, but they're few and far between. In most cases, just because something is publicly available and is being scraped, or whatever it may be, doesn't detract from the fact that it still is personal information.
Steven Klimt
No, absolutely. And I'll talk a little bit later on about the proposed reforms, including the definition of personal information, which may even be expanded to capture information collected in this way.
William Howe
Well, when we talk about this personal information being captured, obviously there's the scraping process used to train the model, but we as users are also actually typing information into the system as we go. And Lyndal, notwithstanding the fact that the law obviously does need to catch up in some areas around this, what information are we actually putting in there? And, to Steven's point, how does the Privacy Act apply?
Lyndal Sivell
It's interesting. Some people will have seen a stark improvement in OpenAI's privacy policy a couple of days ago, to deal with this in a more compliant manner. But certainly it's something on our minds, particularly as lawyers; we're always chasing down risk rabbit holes, and that's something we're constantly thinking about. AI is interesting, it's fantastic, it's got so many amazing applications, but we just have to be really conscious about what prompts, what information, we're putting in there and what our clients are putting in there. And that may change over time; it depends. We might have our own instance of these products going forward, where we can ensure there's appropriate data governance around it, and we can lock it down and make sure the products comply, or assist us on our compliance journey. But at the moment, I certainly tremble in my boots thinking about what people are perhaps inputting into this. There's been a lot of pushback around this transparency and what these sorts of players are doing with the information that's being input. Before, it wasn't particularly clear. I think OpenAI and others took the view that, look, it's not really our problem, we're being really clear about it, you need to be careful as a user. But I think now they've appreciated the backlash. Saying they're "starting to change their ways" might be too broad, but certainly in terms of their documentation, they're starting to indicate quite clearly the types of personal information that are being collected. And they've made it a lot clearer, particularly since, I think, Tuesday, around how that's going to be used. It's very apparent that anything you put in there is, as I sometimes say, a bit of a free-for-all: it's very hard to get back and to potentially make changes to, and it's certainly going to be used. The new OpenAI privacy policy, as an example, makes it clear that what you input is not just going to be used to improve the existing product; it can potentially be used to create new products and to conduct certain other activities. So it's something to be incredibly mindful of as we all go on this journey of using these new products. And you certainly don't want to be left behind. I think some people are too risk-averse around this, but it's about making sure we're sensible, appreciate the risk and put appropriate governance in place. I did notice a quirk, which I think I mentioned to you, Will, the other day: OpenAI, as an example, previously had something in there around how they may share some of your interactions with other users. I did notice that that's been removed. So I suspect, as part of the feedback that's been received, that was perhaps going a little too far, or wasn't clear enough, in terms of onward use or future use of information that's been input into the product.
Steven Klimt
But I mean, I think that touches on an important point. One of the obligations under the current legislation is to give a disclosure notice at or before the time personal information is collected or, if that's not reasonably practicable, as soon as practicable afterwards. And amongst the things that need to be contained in the notice are the organisations or entities to whom the personal information is usually disclosed. A number of points arise in connection with generative AI, and AI generally, in relation to that. If information is being scraped from various sources as part of generative AI, how do you give the individuals concerned those APP 5 notices? It doesn't sound like a terribly easy thing to do, and you maybe fall back on saying it's not practicable to do so. But even if notice is given, and even in relation to users, once information is collected there are potentially almost unlimited means of disclosure of that information, depending upon how users choose to use it. So how do you possibly encompass all the disclosures that could be made? APP 5 notices in the context of generative AI, and AI applications more generally, are very tricky things to deal with. And I'm not certain, but it may be accurate for ChatGPT, for example, to say, I'm not going to disclose to third parties the personal information that you give me in order to establish an account; that's one thing. But in terms of information obtained through use of the application, that may be another thing completely.
William Howe
Well, it's interesting to see some of that change. Lyndal, you talked about how OpenAI's privacy policy has changed recently, and technology companies have sometimes been accused of moving quickly before the law has caught up with them. Maybe this is that growing and maturing a little bit. And I guess here in Australia we actually have our own legislative change coming up. Steven, I'd be interested in your thoughts: we've got some foreshadowed amendments to the Privacy Act. How do you think they're going to have an impact generally?
Steven Klimt
The Attorney-General released a paper a month or so ago setting out the intention of what should be done, and that's open for comment at the moment. They've intentionally said, we're not going to specifically deal with AI and regulate it directly; that's for another day and will be dealt with separately. But there are all sorts of provisions and reforms in there that will impact upon AI and generative AI anyway. The first thing is, there's going to be a general fair and reasonable standard for the handling of personal information. You would think that has to have some bearing upon the way in which an AI application actually operates, because in terms of collecting personal information and what the application does with it, if there are no parameters built in to ensure it's handled fairly and reasonably, then those using it, and potentially even those providing the software, may be breaching the legislation. Similarly, there's a proposed obligation to conduct privacy impact assessments for high-risk activities. So again, depending upon the way in which AI is used, it may require a privacy impact assessment. There are obligations to record the purposes of collection at or before the time of collection; that's what's proposed. And when you're collecting information through AI, you don't necessarily know the purpose of collection at or before the time you collect it. The proposals also refer to automated decision-making systems, of which AI could be part, and say they should be described in privacy policies, and that individuals should have the right to request meaningful information about how automated decisions are made. The proposals also deal with targeting of individuals based upon characteristics, and say that individuals should be given an unqualified right to opt out of targeting. Now, how could they possibly be given such a right, or how would such a right work, in the context of an AI application that potentially doesn't even know at the start which individuals may be the subject of targeting? And then finally, there are specific obligations in relation to de-identified data that's held: obligations to report data breaches in relation to de-identified data, and proposed obligations to take reasonable steps to destroy or delete de-identified data. Again, that would present significant challenges in the AI context. So it will be really interesting to see how these developments pan out and what ultimately turns into law. But it probably creates a great many legislative obligations that people involved in developing, creating, selling or using AI will need to be conscious of, and it may even be a case of the law, even these current reform proposals, not really being up to speed with developments in the marketplace.
William Howe
So it sounds like there's a lot of change and uncertainty. Lyndal, I guess from your perspective, what do we do with all this change?
Lyndal Sivell
It's tricky, and I think that really is the word for it: it is tricky. Certainly in Australia, and we're not alone in this, it happens all the time, right? Technology advances and the law can't keep up; it lingers behind and really struggles to keep pace with technology. And I think, with that, there obviously needs to be something there, right, to give everyone confidence around these sorts of products. That's not just from a user perspective; it's also from the developer perspective and from a commercial perspective, because at the moment, without any specific regulation in Australia, it can be challenging for both sides to work out what to do and what the parameters are around how to use and develop these technologies. We're talking about privacy today, but there are obviously other regimes that touch on AI. Things like discrimination law, surveillance law and those sorts of regimes will also deal with AI at the moment, because a lot of those laws are technology-agnostic. They're drafted in such a way that if a new technology comes along, it can be slotted in, and AI is regulated through those mechanisms. But there is a real push, a real drive, to think about and contemplate actually having some AI-specific regulation in Australia. This is something that's been floating around for some time; it's not just because of ChatGPT. It's been flagged by various elements of society, from developers, but also from things like the Human Rights Commission, to say, look, we really need to have a think about this. Because a lot of AI will be completely unproblematic, right? For many applications it just doesn't matter; they're super low risk. But there are those elements, perhaps like in the generative AI space, where things are higher risk and can have a detrimental effect on society. Like anything in life, there are positives and negatives, but that's where you really need the regulation to kick in and provide some parameters around these sorts of things. This is really important, particularly for vulnerable people. Safety, online harms, all these sorts of things feed into the discussion about what we should do around AI regulation. You know, in Australia we like to observe what's happening in other jurisdictions, and we're certainly observing what's happening in Europe at the moment with the AI Act. It's interesting, because the AI Act is taking a very risk-based approach to these things. A new category swanned in a couple of weeks ago, so I think there are now perhaps five categories, but they certainly range from prohibited, essentially AI systems that are too dangerous and are simply prohibited, and then you move down the scale to high risk, then low risk, and so on. And there has been some discussion about where generative AI fits in this space. You know, is it one of those ones that arguably could be higher risk, but is it all the time?
So I think that's put a bit of a spanner in the works for our European colleagues at the moment, particularly the European Commission, in terms of how to deal with this, because they're really trying to bed down the AI Act right now and get that out. It was meant to be a Christmas present, but now it's slipping into the first quarter, the second quarter and so on. So I think it's an interesting space to be in, and it's not easy. Regulation is not easy, trying to balance it all out.
Steven Klimt
But also, I would say there's unfortunately a natural tendency for regulators to say, well, we've got to get something in. So the Europeans are rushing to get some AI legislation in place. And what regulators tend to do is say, OK, I'm just going to put that in, and not really consider where it fits in the overall context of regulation. Take privacy regulation: privacy regulation may impact, or is likely to impact, upon AI in a number of different ways, as we've discussed today. But if there's separate AI legislation, I think that separate AI legislation should be formulated having regard to what's in the privacy legislation and any other legislation that affects AI, and to the extent that you want the separate AI legislation to cover the field, you should repeal or amend the aspects of the other legislation so that they don't also apply. Simply slotting a new piece of legislation in on top of an existing regime, and not changing the existing regime, I think is very lazy regulation. And in our rush to do things, there's a real risk that could happen.
Lyndal Sivell
Well, I don't know if there's been a rush. I think it's been going on for quite some time in Europe, that's for sure. And that makes sense, right? Because you have so many member states that need to contribute to that debate and do certain things. But it will be interesting to see if there's a rush in other jurisdictions to catch up, in a way, or to manage the environment we're in now, because AI is here, and it's interesting and it's popular. And, you know, I can imagine everyone out there is thinking about AI.
Steven Klimt
The Europeans will, in relation to AI regulation, try and do what they've done with privacy, which is essentially to impose a European standard on the entire world. Because that's effectively the European approach in the GDPR, and it's an approach that we in Australia, in privacy, appear to be buying hook, line and sinker.
Lyndal Sivell
And it's something that we're certainly getting questions about from a client perspective, you know, because this is coming in Europe and it has long tentacles. An example of a long tentacle is that even if the AI system is here in Australia, everything's done in Australia and the entities are in Australia, if the output is used in the EU, then it's captured by the AI Act. So I think that's certainly on lawyers' minds out there: how do we prepare for this going forward? There's going to be a long implementation period, you know, 12 months, and obviously 36 months for other elements of the Act as well. It's certainly, certainly interesting. I'm excited to see what happens.
William Howe
There's a lot ahead here, and I think we've only had a chance to scratch the surface; no doubt there's more to come in the conversation around privacy and emerging legislation. But thank you to Lyndal and Steven for the conversation today around privacy, and thank you to our watchers and listeners for being on this journey with us. We've got a number of other great episodes planned within this generative AI vodcast series, so I look forward to seeing you at the next one.