
Emerging Cybersecurity Risks

Navigating the Legal Challenges of Artificial Intelligence with Scott Koller of Baker & Hostetler LLP

👉 Bridging the gap between the legal sector and AI 

👉 Why the legal world is struggling to keep up with rapid technology advancements 

👉 The question of AI ownership and copyright issues


On this episode of the Emerging Cyber Risk podcast, our guest is Scott Koller, a skilled privacy and data security attorney and Partner at Baker & Hostetler LLP. Join us as we navigate the legal challenges posed by artificial intelligence (AI), delving into the associated risks and possible future solutions. We further explore the ownership and copyright challenges currently facing the court systems and how these could trigger court reform. Tune in to discover how different global perspectives on AI regulation can help bridge the gap between the legal sector and the quickly developing technology sector.

 

The podcast is brought to you by Ignyte and Secure Robotics, where we share our expertise on cyber risk and AI to help you prepare for the risk management of emerging technologies. We are your hosts, Max Aulakh and Joel Yonts. 

 

The touch points of our discussion include:

  1. Bridging the gap between the legal sector and AI 
  2. Why the legal world is struggling to keep up with rapid technology advancements 
  3. The question of AI ownership and copyright issues
  4. Asking whether trust and security in AI models are due to biased data
  5. Unpacking the global perspective on AI regulations 
  6. The impact of GDPR on AI 
  7. The need for responsible development of AI technologies

 

Scott Koller Bio:

Scott Koller is a skilled privacy and data security attorney whose practice focuses on data breach response and security compliance issues. A Partner at Baker & Hostetler LLP, Scott has extensive experience with privacy and data protection issues, including data breach response, cybersecurity risk management, incident response planning and preparedness, vendor management, and regulatory investigations.

 

Scott Koller on LinkedIn

 

Get to Know Your Hosts:

Max Aulakh Bio:

Max is the CEO of Ignyte Assurance Platform and a data security and compliance leader delivering DoD-tested security strategies and compliance that safeguard mission-critical IT operations. He trained and excelled in the United States Air Force, where he maintained and tested the InfoSec and ComSec functions of network hardware, software, and IT infrastructure for global unclassified and classified networks.

Max Aulakh on LinkedIn

Ignyte Assurance Platform Website

 

Joel Yonts Bio:

Joel is CEO & Research Scientist at Secure Robotics and the Chief Research Officer & Strategist at Malicious Streams. Joel is a security strategist, innovator, advisor, and seasoned security executive with a passion for information security research. He has over twenty-five years of diverse information technology experience with an emphasis on cybersecurity. Joel is also an accomplished speaker, writer, and software developer with research interests in enterprise security, digital forensics, artificial intelligence, and robotic & IoT systems.

Joel Yonts on LinkedIn

Secure Robotics Website

Malicious Streams Website

 

Max – 00:00:03: Welcome to Emerging Cyber Risk, a podcast by Ignyte and Secure Robotics. We share our expertise on cyber risk and artificial intelligence to help you prepare for risk management of emerging technologies. We’re your hosts, Max Aulakh.

 

Joel – 00:00:17: And Joel Yonts. Join us as we dive into the development of AI, the evolution in cybersecurity, and other topics driving change in the cyber risk outlook.

 

Max – 00:00:29: Thank you for joining us today on this fantastic podcast, Emerging Cyber Risk. Today, we’re going to be discussing the legal aspects of artificial intelligence and the risks associated with artificial intelligence. As always, we have an awesome guest today, Scott. We’ll get to know him here in a minute. But there are a lot of things to consider when it comes to legal and artificial intelligence. So before we dive into that, Joel, welcome to the podcast. How are you doing today, Joel?

 

Joel – 00:00:57: Oh, doing fine, doing fine. A little frazzled. It’s a great time to be in AI and cybersecurity, but it’s kind of interesting trying to keep up with the news and everybody’s questions: how does it affect everything? It’s a pretty wild ride right now.

 

Max – 00:01:12: Yeah.

 

Joel – 00:01:12: I guess you’re experiencing it, too.

 

Max – 00:01:14: Yeah, I think a lot of Chief Security Officers are facing these tough questions, and I can’t imagine the world of a general counsel, what legal teams have to deal with. So, without further ado, Scott, I know you’ve been practicing cybersecurity and law for quite some time, but for our audience who don’t know you, tell us a little bit about your background. Tell us a little bit about your cyber background, and then also tell us a little bit about what you’re currently doing.

 

Scott – 00:01:40: Well, certainly. So, I’m a Partner in the law firm of Baker Hostetler. I’m based here in LA. I describe myself as a privacy and data security attorney, which, when I first started 15 years ago, covered kind of everything. Today, it’s diversified a little bit more. There are now different avenues of compliance. There’s litigation. There are all aspects of privacy and data security. But that’s really where I developed my practice. I’m still doing that work. It focuses primarily on information security compliance, governance, and incident response. So, a lot of breach response work.

 

Max – 00:02:16: Awesome, Scott. I want to put a plug in for you, because I think one of the most unique things we heard about you is that you were actually a prior practitioner. Incident management, right? You had some experience with actual information security, where we see a lot of attorneys and people that have the certification but don’t have the experience to back it up. So that’s one of the things I found out about you. And I was like, man, we’ve got to get this guy on the show.

 

Scott – 00:02:39: Well, absolutely. You know, I ran an IT consulting firm years ago, and it was really out of that experience that I developed kind of my information technology background. I’ve got 15 different computer certifications ranging from networking to computer forensics, all of which are very helpful in my day-to-day legal job of advising clients in those same areas.

 

Max – 00:03:01: That’s awesome, Scott. So yeah, I know you don’t like bragging, but that’s incredible that you have that many certifications and you’re a practicing attorney. So very cool.

 

Scott – 00:03:11: I think one of the biggest benefits is it’s kind of like speaking a foreign language; you can speak the same technical jargon as the IT folks and then bridge the gap between the business and legal world. So it definitely helps.

 

Joel – 00:03:21: Well, yeah, when you deal with incidents, it’s very interesting. I’ve done a lot of incident work myself and it’s a very interesting space in that it impacts companies and organizations at a business level and legal level, but it’s highly technical. So, being able to speak to both sides, I think that is pretty awesome to bridge that gap.

 

Max – 00:03:40: Yeah, you’re a rare bird, Scott, is what we’re trying to say. So, with that, Scott, I know you deal with a lot of different issues when it comes to legal matters at your firm. What have you seen out there when it comes to artificial intelligence? Is that an emerging practice within the legal field itself? Tell us a little bit about the current state of things from your perspective.

 

Scott – 00:04:00: Absolutely. I’ll tell you right now, it’s a little bit of the Wild West, because there are so many new technologies, and the legal world is still struggling to catch up, not to mention information technology: how to approach it in a secure fashion, but also, how does it fit within the greater organization? How does it fit within the existing legal framework? So many of the cases being discussed and the matters involving AI are really at the forefront. They don’t have a lot of precedent to look back on to say, well, this is how the statute is interpreted, or this is how the case law is going to play out.

 

Joel – 00:04:31: I think it’s interesting when I look at this space. Actually, I’m writing a book on cybersecurity protection of AI, and I’m close to the finish line. This morning, I was writing the section involving trust. And it’s interesting when you look at trust, in that there’s a legal definition of trust, which is, you know, the trustworthy AI we’ve heard a lot of talk about. But there’s also a cybersecurity risk: has it been tampered with? Has it been altered? And my impression is that a lot of the legal discussions have been around, you know, how was it constructed? Does it have a bias? What data does it have? Not so much, has it been compromised by an attacker or third party? Is that kind of the state that you’re seeing?

 

Scott – 00:05:08: More than that; I’ll build on that. Not just trusting that it hasn’t been modified, but how do you go about verifying whether it has been modified? I mean, some of these artificial intelligence neural networks are so complex, and they build upon each other, layer upon layer. How would you know if bias was introduced? How would you know if malicious code was introduced? It’s not like a normal software program where you can go back and say, okay, here’s the source code; I’m going to see exactly the steps that take place. We’re just not seeing that capability on the artificial intelligence side. So, how do you evaluate those risks? It becomes very difficult. And that’s one of the key challenges that so many of my clients are facing.

 

Joel – 00:05:48: Yeah, absolutely. Actually, that’s an active area of research of mine, digital forensics of some of these AI models. Because you’re right: if you had a compromised database or a memory system, you’d do digital forensics; you’d dump and extract that data. But neural networks are so complex. There is no way to unwind that data and get to the core. It’s an interesting problem.

 

Scott – 00:06:08: And right now, I haven’t seen any practical applications or practical exploits. And here’s the reason why: it’s so complex that even the threat actors at this point don’t know how to insert something in a way that wouldn’t be glaringly obvious. But it’s only a matter of time. And you know what the answer is going to be: they’re going to use another artificial intelligence to figure out how to insert it into the model in a way that ensures it’s not detected, or at least makes it more difficult to detect.

 

Max – 00:06:35: Scott, you know, a lot of people have very similar concerns, where, yes, you’re going to need another AI model to counter the one that you’re trying to protect against. But when it comes to the legal matters of this, right, you mentioned there’s not an existing legal framework. There aren’t any prior cases. Can you give us an example of when this type of phenomenon may have happened in other fields, other scientific fields? And how does the legal profession handle something like that when there’s nothing to lean on, right, to gather prior information from?

 

Scott – 00:07:06: So the standard, at least in the legal field, is that we look back at other scenarios and try to impose the same framework and the same interpretation, as best we can, on a go-forward basis. We see that particularly in the area of intellectual property and musical composition, where you go back and ask, okay, on one hand, you have somebody who was able to compose this entirely themselves. Did they rely on some other melody or composition that was previously done? And then you have one interesting company that said, okay, well, we’re going to try to copyright every possible musical combination and melody to subvert that framework. And you see the court cases kind of struggle to handle those types of examples. I would say it goes back to copyright, this idea of, okay, who is really in control? Who’s really creating this piece of intellectual property, this piece of art? The most famous example I think of is the selfie of a monkey, where a cameraman managed to set his camera up and put it into the situation: okay, well, I’m going to give this to this monkey. And the monkey just kind of looked at it and took a selfie. Well, who really owns that copyright? Well, it wasn’t actually the photographer who pressed the button. It was the monkey, and the monkey can’t own the copyright. So it’s kind of in a limbo situation. That’s also what we’re facing here, especially when it comes to AI and the AI creation of images. The court system right now, or at least the Copyright Office, is saying, look, you can’t copyright an image just created by AI, because you didn’t do the creation of it. It was an artificial intelligence model that actually did the art. Never mind the fact that you provided the prompts that guided it toward the creation of that. You can’t copyright it. And now they’re coming back to it. They’re asking, okay, well, how much can I modify? How much input does it take from a human’s perspective before I can say, okay, this is my art? Like from a Photoshop perspective: how much of the Photoshop work do I need to be doing manually? Did I actually create that digital piece of art, or did Photoshop create it? And they’re really struggling to handle that.

 

Joel – 00:09:09: I think one of the problems we have is that our definitions of artificial intelligence are going to have to evolve, because right now, we’re treating everything as if humans are the only intelligent thing on the planet. And that’s obviously the way it’s always been. AI is not a computer programmer’s work. It is an algorithm that allows the computer to truly learn. So, as it gets stronger and stronger, it’s going to truly be a different form of intelligence. So, could it be possible that an AI model owns a copyright? Because it truly did create something. I know right now, generative AI is kind of an additive approach, but we’re moving into a space where AI can rival human intelligence and actually be truly creative for the first time. So, what do you think we’re going to have to do from a legal standpoint, from that perspective?

 

Scott – 00:09:56: I think when it gets to a certain level, the way the court system has to evolve, at least in my opinion, is you almost have to treat it in either one of two ways. Either A, you treat it as a tool, and the human is providing the input enough so that the ownership or the trademark flows back to the human that’s providing the input. Or you almost treat it like an employee, a contract-for-hire situation where I’m hiring this AI to do some work for me. But as part of that contract, I retain ownership rights of it, of not the AI itself, but the work that was generated from the AI. 

 

Max – 00:10:31: The work and maybe even the memory bank that you’re creating, right, I would say, because you’re adding value to that automated thing, whatever it is. Whereas a human, yeah, they get to keep their own tacit knowledge. But with this, how far would the courts go?

 

Scott – 00:10:45: And that’s where it almost breaks down. Because if you do apply ownership to the learning of it, the problem is that now you have the same challenges that so many organizations are coming back with: How did you train these models? Did you train it on a publicly available database? Like, from an image perspective, did you go to Getty Images and just download their entire database? Is that what you learned from? And if you base it off of that, then you’re going to have really tricky ownership of that memory bank. I almost feel like you establish the model, it reaches a certain point, and then you almost have a snapshot. It’s like, okay, now everything else that is developed or learned on a go-forward basis is the ownership of this person or this entity.

 

Joel – 00:11:30: I find that interesting. Say I, as a human (and by the way, I have zero artistic talent, so there’s no danger of this), was able to go look at these public forums and all these public images, memorize them, mimic the techniques enough, and start generating my own art from what I learned. Do you think that’s a different model then? Why would it be seen differently?

 

Scott – 00:11:49: So that’s the argument being played out now. They’re saying, look, this AI learned off of the masters, learned off of publicly available information on the internet. Therefore, I, as the AI creator, should not have to provide any ownership to all those that formed the basis of that intelligence. That’s the argument they’re making right now. And that’s being challenged. The other side says, well, no, you created this intelligence. You stood on the shoulders of all these other artists. Therefore, I, like those other artists, deserve a piece or ownership of that AI intelligence. And that’s a struggle: which one really applies? If you base it off of the tool model, then it seems like, okay, there are parts of the tool, because you can have very complex tools, some of them with partial ownership and partial intellectual property and patents on different aspects of the tool, and you can trace that back. If you use it almost on an employee basis, where this is the knowledge that they’ve developed, then it’s a different model. You say, okay, well, as long as you didn’t base it off of the intellectual property of others’ ideas, then that’s just like having an employee who studied and learned how to do this. That’s an employee for hire or an employee under contract. I should have ownership of that, solely independent from any of the people that he learned from. Otherwise, it would be very much like hiring an employee but having to pay residuals back to their university, their elementary school, and their grammar school, because that was what they used to develop that skill set in the first place. It just doesn’t make sense in the employee model. It fits in the tool model.

 

Max – 00:13:23: Yeah, in the employee model, I can see it, because the artificial intelligence is kind of bidirectional. It’s learning off the internet, with the base of data that we’re arguing about. And then it learns from the person that’s interacting with it. And then you actually are increasing the efficacy and the efficiency of that tool, because it’s literally learning off of the interactions. So, a very interesting challenge. You mentioned something of interest, I think, to everybody in the United States: the court system has to evolve. Right. So, in the context of this kind of problem, which is an emerging problem we don’t quite understand how to look at, how should a court system evolve to manage these kinds of technology challenges? Because, man, I can only imagine this is just the tip of the iceberg. Right. We’re just getting started on this.

 

Scott – 00:14:07: That’s a good question. I don’t know if I can necessarily say this is how they should do it. The way you have to develop it, at least in my opinion, to make it work, is that there need to be some guidelines, and there needs to be the ability to create a separate piece of intellectual property, a separate creation, that you have some ownership of. I don’t think the existing model, where they say, okay, this image was created entirely by AI, therefore it can’t be copyrighted, is practical in the long run. The way you fuel the continued development of AI is that there need to be some sort of future ownership rights for that information. So what I would say is, again going back a little bit to that tool and employee model: if a company can say, look, we trained this model on publicly available information, there is no proprietary or intellectual property of others in it, and you’ve trained that AI model to a certain point, then you can almost freeze that knowledge and say, okay, this is that AI model. Now you take that to a certain company. It could be a law firm, it could be an accounting firm, whatever. Let’s say it’s a law firm. I now train that AI, from that point, with all of my legal correspondence and the documents that I’ve created. Okay, so now I have an AI model that was originally purchased, but I’ve modified it and further trained it with my own intellectual property. I should be able to have ownership of the outcome of that AI model on a go-forward basis, and I should be able to, I would say, either have ownership of, trademark, or copyright the outcome of that AI model, as if it were either an employee that I hired and trained or a tool that I created. If you can have that, I feel like that’s the only way both the court system and the business world can reconcile these competing issues.

 

Joel – 00:16:03: I think that makes so much sense. I really do. Max, you raised the question of how the court system is going to view this; well, you’re coming at Scott with it. There’s a debate among AI scientists and engineers about how we should really look at AI, and there are competing theories there. You know, for a long time, we talked about the Turing test and whether it’s human-like. Long ago, that stopped making sense. The latest theory is the rational agent theory, and that is really focused on whether the computer can operate autonomously, make decisions in less-than-ideal conditions, make judgments, and produce outputs from what it learned from its various inputs. It creates a more general definition. And as it gets more and more powerful, I believe we’re going to move into a situation where we’re going to have to start treating them like entities. So it’s almost like, if you don’t want the AI to learn something, you’ve got to have a non-disclosure agreement or the equivalent of one. Because if I’m having a conversation and you tell me your intellectual property without any agreement, and you’re the intellectual property attorney, I would assume that means I could use it, if it’s reasonable in that sense. So it sounds like treating them more like an individual is where this is going. And there’s actually a lot of talk about dropping the “artificial” altogether and coming up with a new term, because it’s really no longer artificial intelligence. It is an intelligence.

 

Max – 00:17:23: There’s a term I learned: a term of art, right? There definitely needs to be a new vocabulary set for how we even describe this thing. But also, Joel, based on what Scott just shared, what it reminds me of is that, you know, we need to break apart the model and the data set, separating the data set as an input. So if I, as an entity, am providing this thing the empowerment to learn, and that’s what I own, it’s almost like maybe the technical community hasn’t really even thought about that. How do you separate a data set? Because the goal is to learn, and learn faster, and then provide much more value to a broader set. But now, if we’re treating it almost like an individual or some sort of entity, and we need to separate its data set, we could easily get into an area where, and I know this happened in the DoD, the benefits of cloud are not there, because you can’t really scale because of all the risk associated with classified information, as an example, right? High assurance: we can’t scale; we can’t take our data and send it to China. It makes sense. When it comes to AI, you know, I’ve never seen law get ahead of things, but this could be one of those areas where we need to get ahead of it. But if we do, we could just lose the benefit of making something of intelligence.

 

Joel – 00:18:41: Max, I think we’re already getting to the point where you have to really account for the data that goes in, because of all of the AI trustworthiness issues, you know, having to prove how a decision was made and that it’s not a biased model, especially in certain areas. I mean, do you think that’s the beginning of what could be tracking the data that goes into these models more closely?

 

Max – 00:19:00: I have no idea. I mean, it could be, right? Like transparency: if we know the lineage of the information, how it mutated, how it changed, maybe we can track the separation of this, right? But yeah, in my opinion, the courts definitely have to evolve. Not get ahead of this thing, but we have to have some sort of guidance out there, right? In terms of what to do, how do you even start? Because, at least in the United States, and this is just my view, being originally from India, we have to have ownership. There’s a strong desire to have property. That’s what makes the United States so great: that you can own things, really own them, right? And then when you have a tool like this artificial intelligence, you don’t necessarily want to work with it, because who does it belong to, right? That’s still a big question out there.

 

Scott – 00:19:49: Absolutely. Who does it belong to? How best can I retain the value of whatever it is I’m creating? That’s very much being struggled with right now.

 

Joel – 00:19:56: Well, a lot of people are struggling to understand how it really works, and it makes people very confused. I’m curious about your thoughts. Like, I won’t name names, but there was a lot of bad press about a social media company that was pushing really bad content to individuals, racist content or whatever, making associations. And they were just being dogged in the media because of all the algorithms they’d built. And they didn’t build any algorithms; they just imported the wrong kind of data, right?

 

Scott – 00:20:23: Yeah, and that reminds me, I want to say there was a chatbot. I don’t know which; it may have been Microsoft, may have been another company. But there was a chatbot that was designed to mimic and communicate with the people it chatted with. And what happened was, so many people would use racist or very discriminatory language, even hostile language, and they would communicate that way with the bot. The bot learned from it, and it almost took on the same biases as the text it was trained on. And the company was thinking, okay, well, I’m just creating this algorithm. What’s the phrase from a programmer’s perspective? Garbage in, garbage out. That’s the same context here.

 

Max – 00:21:01: Bias in, bias out, right?

 

Scott – 00:21:03: Exactly. You get that from a training model. You’re gonna get that from all other algorithms. And there was a politician, I want to say, that was complaining about TikTok. They were saying, when I open up TikTok, I see all this inappropriate content, okay? And then somebody said, well, it’s based on the videos that you liked and are looking at. So, what does that say about you?

 

Max – 00:21:23: Yeah, and it’s funny you mention that, Scott. I think this will be a topic for another show, but the Department of Defense just removed TikTok from their network. It’s not allowed, right? And I think it has a lot to do with what Joel mentioned: people just genuinely don’t understand, right? We’re pulling for information based on what we like. It’s feeding our desires. And so if our desires are not good, it’s just gonna feed that back to us, right? I think with AI, right, we can see that happening if you just let it loose on the internet.

 

Scott – 00:21:54: Exactly. Although I think with that particular issue of TikTok being removed, it wasn’t so much the algorithm itself; it was the fact that they have control of it, and this idea that the data was going out to China. There are a lot of US companies that are collecting and monetizing that same sort of information here in the US, and there have been no real objections to that, at least not at the defense level.

 

Max – 00:22:15: Yeah, definitely a totally different topic, but yes, the heavy hand of the Chinese Communist Party, right? It just kind of scares everybody. But on that note, right, I believe that if we don’t figure this out in the US, at least from the legal perspective, other countries are already making moves, right? I’ve seen the Chinese AI policy. We mentioned this on one of the podcasts: do not subvert the state, right? That’s a policy statement by the Chinese government. Scott, have you seen any other country put any guidance out there? Are you seeing anything from a legal side, or is any other country moving ahead to address some of these challenges?

 

Scott – 00:22:53: So, yes and no. I think the clearest example of something very similar is what happened with ChatGPT in Italy. Originally, ChatGPT was banned for all IP addresses originating from Italy. And the idea was, look, we don’t have enough information about what it’s doing. It’s a violation of the GDPR in the EU. We don’t know what it is that you’re doing with this information. There was quite a bit of attention brought to that, and I think Italy was one of the first that really did it. They disabled it. And then, I want to say a couple of weeks later, they did reinstate it. And they reinstated it after OpenAI announced a set of privacy controls and disclosed a little bit better information. So, they made it fit within their own existing privacy framework under the GDPR. That happened in Italy. I haven’t seen any other countries take that same approach, but I think it’s just a matter of time as it becomes more widespread, as its value becomes clear, and as more and more users start to use tools like ChatGPT.

 

Joel – 00:23:56: Yeah, you’re talking about GDPR. That’s a very interesting point. With privacy, there’s a lot of control over where information can go, obviously. However, some of these models have been built on what is GDPR-regulated material. So you’ve built a model, and it’s supposedly obfuscated, but then along come attacks like model inversion attacks or membership inference attacks, where I can pull that information back out. So, do you see these standards or laws changing soon to try to prevent that from happening?

 

Scott – 00:24:28: I think absolutely. I don’t think GDPR was really developed for the AI world. It was developed prior to AI’s widespread availability and usage, and so it doesn’t really understand and appreciate how AI works as a product. First and foremost, there’s this idea of your right to have access to your data, your right to modify that data, and your right to delete that data. Those are three rights incorporated in the GDPR which, in an AI world where the model takes that information, builds upon it, and learns from it, are simply impossible to satisfy. You can’t practically remove that data. You can’t practically delete that data without undermining the entire AI model. So, I don’t think we’ve seen the last of those sorts of enforcement actions. I think OpenAI should expect something in the future as the EU struggles to make it fit within that existing framework. Once you’ve created those rights, how are you going to have consumers exercise them in the context of an AI model?

 

Max – 00:25:27: Yeah, I think inference is going to be challenging, right? We can do a whole bunch of data masking and obfuscation, but at some point, the thing needs to learn from the real data set, and that’s where it becomes challenging. I can imagine, right when GDPR was taking off, everybody was talking about sovereign clouds. We heard that terminology quite a bit, right? Keep my data in my country. Don’t let anybody touch it. I can see the same kind of buzz going around with sovereign AI: your AI within your country. But it’s not very smart, because your country alone isn’t that smart, right? It’s not learning from the world. It’s got all the biases of your country, and that’s it. You know, I was listening to another podcast with Sam Altman and Lex Fridman; very interesting, like a two-hour session. And I think the creators of AI technology, a lot of these high-end shops, are looking to be involved with government affairs. They have to be, because they’re the ones with the expertise. The government can impose rules, but the government itself doesn’t have this expertise. So it’s almost like they’re trying to stand up some sort of entity that would arbitrate between nations on how to train data properly.

 

Scott – 00:26:43: And it’s a smart approach, because I think they recognize the issues and challenges that some of their predecessors have had. The example I’d use is social media. If you remember, social media companies moved very quickly and got into behavioral advertising. They said, we just want to grow, we want to scale fast, and we’ll worry about the privacy concerns later on down the road. And now, later on down the road, they’re worrying about it. They’re getting large fines, large assessments, and class action litigation. I think the OpenAI approach is to work hand in hand with some of those governments in advance so they can operate within those frameworks and not be subject to the same sort of after-the-fact regulatory scrutiny that their predecessors in social media encountered.

 

Joel – 00:27:28: I think one of the problems we’ve got is how I opened this discussion, talking about how frazzled I am. AI is like a shotgun blast: it has suddenly exploded, and it’s touching everything. It’s not confined to one specific area, like generative images; every single technology that can be automated is heading that way. So focusing on an implementation layer is a bit of whack-a-mole unless you try to move upstream. And that’s really challenging, because you’re moving into the abstract world.

 

Scott – 00:27:57: Yeah. I think the extra challenge is that you have a lot of companies saying they’re AI when really they’re just some sort of algorithm or automated process. They’re glomming on to the label.

 

Max – 00:28:05: Yeah. They want to patent math, right? You can’t patent math.

 

Joel – 00:28:10: But I mean, it’s crazy, because even in this podcast, we’ve talked about changing government, changing regulations, changing laws, changing corporations, and even nomenclature; we’re arguing about what AI even is. It’s like we’re moving and changing every dimension all at once, which is the worst possible scenario from a scientific perspective.

 

Max – 00:28:31: Yeah, nobody’s really had time to deliberate on it, but if we don’t talk about these things, in my opinion, we’re never going to be able to solve them, right? And I think everybody is deliberating from different perspectives at the same time. But on your side, Scott: ten or fifteen years ago, nobody was practicing cybersecurity law, and then suddenly there were a whole bunch of standards to deal with and flow-down requirements from the government on contracts. Are you seeing, in your practice or in other law firms you interact with on different cases, a new practice emerging around artificial intelligence, its vernacular, its legalese, its contracts? Is that an uptick you’re seeing in your world?

 

Scott – 00:29:18: Absolutely, we’re getting calls almost every day from potential new clients who want to know more about it. They’re asking, okay, how can I use ChatGPT in my business? That’s very common. How can I use AI in general in my business? How can I defend my business against other AI that might be coming to eat my lunch? And then some of them, after the fact, will say, okay, we’ve already been using AI; how can we best protect ourselves? How can we best secure our intellectual property on a go-forward basis? Those are the three main categories of questions my clients are approaching me with. Some of them are saying, look, we want to build our own AI; how can we do it in a way that maintains ownership within the company? And we’re working on contracts, formulating a strategy to address those, in light of the fact that, and I say this with every client, there is uncertainty. There’s going to be risk. We don’t know how the courts are going to interpret the sort of contractual provisions we’re creating, and we’re doing our best to protect the company with what we have today.

 

Max – 00:30:21: Yeah, Joel, I know you’re getting some of these questions. I think in the last few weeks you were telling me, hey, I’m writing an AI policy, right? From the CISO down to the implementation level, it’s coming from the C-suite. I know in our business, I’ve trained everybody on the basics of using ChatGPT as a research assistant, not necessarily feeding it information. But I can only imagine that in the legal world, and even in security offices, this is going to be the new hot thing. How do we handle this? We can have a policy, and we can have a contractual instrument, but how do we actually get down to enforcing whatever we intend to state on paper?

 

Joel – 00:31:01: You’re exactly right. And again, I spend a fair amount of time talking with executives right now, similar to Scott, because the reality is that AI is such a competitive edge. If you delay too long to get on the AI bandwagon, you’re more than likely going to be at a distinct competitive disadvantage, because it’s going to accelerate the rate at which you can deliver a product, or its quality, or its uniqueness. So it’s one of those situations where you have to get on, but people don’t know how to get on because the road isn’t paved. It’s a really tricky spot, and it takes a lot of thought and careful steering. Is that how you would characterize it as well?

 

Scott – 00:31:37: Absolutely. And again, if they don’t get on the AI bandwagon, they’re either going to be left behind or they’re not going to have a business left to develop. One thing I do want to say is that my clients fall into two camps. There are the ones who say, look, the easy solution is we’re just not going to use it; we’re going to ban it, we’re going to block it. That’s the head-in-the-sand approach. The more sophisticated clients are the ones asking, how can we use this? How can we integrate it into our business? Because they recognize the value and the long-term benefit of these AI tools.

 

Max – 00:32:09: Scott, we can go on forever with you on this. But if you had one takeaway for our listeners who are either working with general counsels or who are general counsels or chief security officers and other leadership that’s dealing with this, what would be a key takeaway? What advice would you give them when it comes to dealing with artificial intelligence?

 

Scott – 00:32:30: That’s a very good question. As you know, lawyers tend to be risk-averse; we try to minimize risk. I think AI is a situation where I would counsel the opposite. I would say, look, the risk of not exploring it as an opportunity is too great. It is going to be transformative, not just for the business world but for every aspect of our economy and technology. So this is a situation where you don’t want to be risk-averse. You want to embrace it, learn from it, and learn how you can integrate and utilize it within your business, or at a minimum protect against the risks associated with these sorts of AI models. You don’t want to be in the situation some companies are in, where they never even considered the risks, and now their employees are using it, disclosing intellectual property externally, and losing control of that information because they didn’t have an AI policy in place in the first place.

 

Joel – 00:33:24: Yeah, that’s what I call covert adoption because they’re going to do it for you.

 

Scott – 00:33:28: Yeah, they’re going to do it for you, but they’re going to do it in a way that even creates more risks to the organization. That’s what we want to avoid.

 

Joel – 00:33:35: Absolutely. I have one question I definitely wanted to ask before we wrap up. Going back to your roots, Scott: you have a heavy incident response background, and now you’re talking legal and AI, so let’s bring those two together. If I think about the group that loves chaos, that loves this current unsettled, unsure environment, it would be threat actors. So where do you think the first major incursion is going to occur, where a threat actor really does create a massive cybersecurity incident using AI?

 

Scott – 00:34:09: Here’s where I think that’s going to be. First and foremost, compared to the sophisticated actors we’re seeing nowadays, AI is great, but it’s not at the same skill level; it doesn’t have the same insight as the more sophisticated threat actors. So I don’t see it identifying new zero-day vulnerabilities or anything like that, at least not yet. Where I see threat actors utilizing it is in more of a broad-scale approach. They’re going to use AI tools to have a wider impact, to attack more systems, I don’t want to say in a less sophisticated manner, but to automate their attacks across a wider population of potential victims. That’s the initial stage: more attacks with less sophistication, but more of them. Now the real key, and this is very similar to, I want to say, the Samsung situation: you have a coder who says, okay, I need help understanding this software code. They feed that code into the AI model and ask, where are the risks or vulnerabilities in this code? The AI model says, here are some places you can fix it. What they’ve done is feed that source code into the AI model and use it to help with their programming or to fix other code. Then that same source code gets incorporated into those systems. And later on, you’re going to find an enterprising threat actor taking the same approach, saying, I’m looking to hack XYZ company; I’m looking for a vulnerability. That’s where the AI tool really comes into play: it can build upon the source code it was provided and find those vulnerabilities, saying, hey, have you tried these? This might be an exploit that could work.
That’s going to be a really interesting development that’s years down the road, but it’s a big risk.

 

Joel – 00:36:03: I think model evasion is a big thing, especially if the model is being used to detect bad things. All you need to do is understand enough about the model to get past the algorithm and have it classify something the wrong way. I can see that being a huge avenue, as well as attacks through public datasets. That’s a whole other topic I’d love to talk to you about.
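[Editor’s note] Joel’s model evasion point can be made concrete with a toy sketch. Everything below is hypothetical: it assumes an attacker has estimated the weights of a simple linear detector (real detectors are far more complex), and nudges a flagged sample just past the decision boundary so it is misclassified as benign.

```python
def score(weights, features):
    """Linear detector: a positive score means the sample is flagged as malicious."""
    return sum(w * x for w, x in zip(weights, features))

def evade(weights, features, margin=0.1):
    """Minimally shift the features against the weight vector until the score
    drops just below zero, i.e., until the detector classifies it as benign."""
    s = score(weights, features)
    if s <= 0:
        return features  # already passes as benign
    norm_sq = sum(w * w for w in weights)
    step = (s + margin) / norm_sq
    return [x - step * w for w, x in zip(weights, features)]

weights = [0.8, -0.2, 0.5]   # hypothetical detector weights the attacker estimated
sample  = [1.0, 0.3, 1.0]    # a sample the detector currently flags

assert score(weights, sample) > 0            # flagged as malicious
evaded = evade(weights, sample)
assert score(weights, evaded) <= 0           # now slips past the detector
```

The design point is Joel’s: the attacker never needs to break the detector itself, only to learn enough about its decision boundary to step across it.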

 

Max – 00:36:24: Yeah. Stage one is script kiddie heaven. Then comes the inflection point; I think that’s where evasion comes in, and then even advisement: here’s how you do it.

 

Scott – 00:36:35: In the past, if you remember, we had signature-based antivirus. Everybody had an antivirus program on their computer, and that program was signature-based: if a file contained these pieces of code, we’d flag it as malware. Then the bad actors started running their malware across 15 different antivirus programs just to make sure it couldn’t be detected. That was the signature-based era. Now some of the bigger cybersecurity companies are offering behavioral-based, or what they describe as AI-based, whether it actually is or not, tools they call behavioral-based threat analysis, where they flag something as malicious based not on the signature but on the behavior of the threat actor. And then, the same way we saw that arms race play out before, you’re going to see threat actors take those same AI tools and ask, okay, what behavior can I exhibit that’s not going to set off these red flags? Getting back to your model evasion point: how it played out in the past is probably how it’s going to play out in the future.
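[Editor’s note] The signature-based detection Scott describes, and the evasion that defeated it, fits in a few lines. The signatures and samples below are invented for illustration, not taken from any real product.

```python
# Hypothetical signature database: flag a file if it contains any known byte pattern.
SIGNATURES = [b"\xde\xad\xbe\xef", b"evil_payload"]

def is_flagged(file_bytes):
    """Signature-based scan: substring match against every known signature."""
    return any(sig in file_bytes for sig in SIGNATURES)

original = b"header evil_payload trailer"
assert is_flagged(original)          # the known pattern is caught

# The evasion Scott mentions: mutate the bytes until no signature matches,
# e.g., by splitting the pattern with a junk byte. Behavior is unchanged
# (in real malware, via packing or self-decryption), but the scan passes.
mutated = b"header evil_pay\x00load trailer"
assert not is_flagged(mutated)       # same payload, no longer detected
```

This is exactly why the field moved toward the behavioral analysis Scott mentions: behavior is harder to mutate away than a byte pattern, though, as he predicts, it invites its own evasion arms race.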

 

Joel – 00:37:40: Absolutely. It’s going to be an interesting ride, for sure.

 

Max – 00:37:43: Very interesting. Scott, I feel like we could do another one with you, but we want to be respectful of your time. I know our audience definitely enjoys these conversations. We just want to thank you for coming on the show and giving us your perspective from a legal standpoint.

 

Scott – 00:37:58: Yeah, happy to be here.

 

Max – 00:38:02: Emerging Cyber Risk is brought to you by Ignyte and Secure Robotics. To find out more about Ignyte and Secure Robotics, visit ignyteplatform.com or securerobotics.ai.

 

Joel – 00:38:13: Make sure to search for cyber in Apple Podcasts, Spotify, Google Podcasts, or anywhere else podcasts are found, and click subscribe so you don’t miss any future episodes. On behalf of the team here at Ignyte and Secure Robotics, thanks for listening.

 
