On this episode of the Emerging Cyber Risk podcast, we take a deep dive into AI policy and what it takes to manage the intentional use of AI across an organization. The podcast is brought to you by Ignyte and Secure Robotics, where we share our expertise on cyber risk and AI to help you prepare for the risk management of emerging technologies. We are your hosts, Max Aulakh and Joel Yonts.
Join us as we discuss what an AI policy is, how to know whether your company needs one, and what belongs inside it. AI is here to stay, and whether you are using it or not, you need to prepare your team and develop policies around its use.
The touchpoints of our discussion include:
- What an AI policy is and who should write it
- Acceptable use of AI across the workforce
- Roles and responsibilities for legal, IT, and information security
- The four categories of AI security risk: evasion, alteration, exfiltration, and disruption
- Key mitigations, including data security, access control, and human oversight
Get to Know Your Hosts:
Max Aulakh Bio:
Max is the CEO of Ignyte Assurance Platform and a data security and compliance leader delivering DoD-tested security strategies and compliance programs that safeguard mission-critical IT operations. He trained and excelled while working for the United States Air Force, where he maintained and tested the InfoSec and ComSec functions of network hardware, software, and IT infrastructure for global unclassified and classified networks.
Max Aulakh on LinkedIn
Ignyte Assurance Platform Website
Joel Yonts Bio:
Joel is CEO & Research Scientist at Secure Robotics and the Chief Research Officer & Strategist at Malicious Streams. Joel is a security strategist, innovator, advisor, and seasoned security executive with a passion for information security research. He has over twenty-five years of diverse information technology experience with an emphasis on cyber security. Joel is also an accomplished speaker, writer, and software developer with research interests in enterprise security, digital forensics, artificial intelligence, and robotic & IoT systems.
Joel Yonts on LinkedIn
Secure Robotics Website
Malicious Streams Website
Max Aulakh 00:04 – 00:17 Welcome to Emerging Cyber Risk, a podcast by Ignyte and Secure Robotics. We share our expertise on cyber risk and artificial intelligence to help you prepare for risk management of emerging technologies. We’re your hosts, Max Aulakh.
Joel Yonts – 00:17 – 00:46 And Joel Yonts. Join us as we dive into the development of AI, evolution in cybersecurity, and other topics driving change in the cyber risk outlook. Welcome to another episode of Emerging Cyber Risk Podcast. I’m your host, Joel Yonts, and with me as always, Max Aulakh. Hey, Max, I know today’s topic is AI policy. It seems to be something that everybody’s asking me about. And I know you have a lot of experience in this space. You want to kind of run through and talk a little bit about what we’re talking about today?
Max Aulakh – 00:47 – 02:14 Yeah, kind of bottom line up front, you know, there's a lot of AI tools out there, there's a lot of buzz about AI, but when it comes to how do we manage it, what's the intentional use of AI, what's the role of general counsel, and what are some of the elements that security practitioners and leaders should be considering, those are some of the things we want to get to. So, you know, what we can talk about is just figuring out what's the intent and what kinds of policies we've seen out there. So from my perspective, Joel, maybe this is where you can add in a little bit. A lot of AI policy is being dictated by the tools that we're already using, right? So today, like we're on this call using Zoom. We know if we look at Zoom's terms and conditions, they're feeding all of this information into their artificial intelligence tooling. And I can't imagine being a general counsel where you're discussing personal business, intellectual property, your business strategy, and now all of a sudden you've got this thing that's listening to you, right? So this is kind of the impetus of why, when you're acquiring or using different tools, we're starting to see policy. And then the other area is around HR. Joel, I know you had some experience in your book, right? It talks about displacement and workforce management, those kinds of things. But that's kind of what I'm seeing as the hot topic right now around AI policy.
Joel Yonts – 02:14 – 03:39 Yeah, I mean, I think you're spot on there, Max. AI is seeping in through the cracks. All of the policies and agreements that we have with our existing vendors, they seem to now be AI enabled. And certainly it's changing the nature of how those tools work at times. So I agree with you, that's a big driver. On the HR side, I think we're starting to see, and we've been concerned for some time, that all this AI enablement, all this automation is going to start displacing human workers. It's a deep topic in itself, and I think on an upcoming podcast we're going to dive into the nuances of this. But certainly, there should be some HR policies that are focused on how to manage the ethical treatment of the human workforce and how that's balanced with the AI automation pieces. It's probably a combination of human resources and organizational development. But there are certainly some things that need to be done there now. And as we move more into this automation and it gets more sophisticated, it's going to really need to ramp up, and maybe ramp up fast. So that's from an HR perspective. But we've talked about getting AI from services and existing applications, and there's also business enablement. That's another area where we're bringing AI in house. And I know, Max, you've had some experience or worked through some of this before. Tell us about policies in that space.
Max Aulakh – 03:39 – 04:33 I think IT professionals, along with cybersecurity practitioners, are going to be the first ones to actually try to do something about it, right? So we're seeing a lot of retroactive stuff happening, like with Zoom, and a little bit later on HR and the impacts of displacement. But when it comes to policy writing and policy development, I have seen IT professionals that are building new capabilities, trying to embed functions, as well as information security professionals. I think we had Laura on the call, and she was mentioning, oh yeah, if you're in cyber, your developers are coming to you and asking you, and you can't just be the person who says no all the time, right? So I think enabling some of these business functions and capabilities is going to be the key aspect of how we develop the policy and the approach to writing, which we're going to get to.
Joel Yonts – 04:33 – 04:53 So if I hear you correctly, what you were talking about earlier was almost like procurement or consumer consumption of AI. But then there's this whole other category of AI enablement where either tech companies or non-tech companies are building their own capabilities and embedding them. Is that what you're saying is the other category?
Max Aulakh – 04:53 – 05:37 Yeah, because, you know, with OpenAI, some people can simply call that an API or a third-party problem. But if you're a tech company and you're developing software, you're building your own language models, that's a very different type of policy. Traditionally, this would be baked within the software development lifecycle, but you might actually be using components off the shelf and bringing those in just like platform developers do, right? So, for any of those capabilities, I think we're going to see those folks as some of the first people asking for a policy: hey, what do we do? How can we stay within the confines of our standards and rules? I see that as a big piece of building the reason for a policy.
Joel Yonts – 05:38 – 06:03 Now, that makes a lot of sense. And I know you’ve been tracking the Microsoft Copilot stories and some of the other things out there. And it’s almost like that’s a blend of the two. It’s a consumer consumption, but it gives the ability to put together some really elaborate workflows and automations that kind of straddles that. So I think a lot more organizations are going to be faced with solving some of these at a policy level than just the tech companies, wouldn’t you think?
Max Aulakh – 06:04 – 06:45 Yeah, that leads up to this next area, Joel, which I think is going to be critical too. So you have sophisticated users of AI, technicians and developers and things like that. And then you're going to have general consumers of AI, just normal business users, your normal workforce that's enabling some kind of business function. And they don't even know that they're using AI. It's so seamless to them, right? I know that's not what we think about today, and we might call that shadow AI, but if we don't enable those users, I think it's a missed opportunity. Joel, what are your thoughts? I think that's going to be a separate type of policy altogether.
Joel Yonts – 06:45 – 07:24 Absolutely. I think the rise of the digital assistant, we covered a story a few episodes back about Walmart rolling out 50,000 digital assistants for their non-retail team members. That kind of AI that's almost like a shadow employee that follows you around and helps you, it really takes the embedding of AI to a new level. And it gives a lot of autonomy to the workforce to use as they see fit, which can be quite dangerous. So certainly having some policies to draw the boundaries of where and what should be used and how it should be incorporated into day-to-day operations is going to be pretty important.
Max Aulakh – 07:24 – 08:01 Yeah, that begs the question, like, who are we writing these policies for and who is writing the policy? Who’s the main author? So if we look at an enterprise, if I’m working in Walmart and I’m just using a digital enabling tool, I have no interest in developing a policy, writing a policy. So trying to figure out who is going to be actually developing this and who’s going to be a key collaborator and stakeholder is going to be one of the first steps. So if you’re looking at developing, you know, this kind of documentation, it’s not going to come from essentially the people who benefit from it the most, which are going to be the end consumers.
Joel Yonts – 08:01 – 09:28 Yes, absolutely. And I've had to help companies solve this a number of times already. The very first place that I see a lot of people starting is a general policy that goes out. It's like an amendment to the acceptable use policy, or a standalone acceptable use policy for AI, that's intended to go to the entirety of the workforce, or at least the workforce that has digital access. And that covers, you know, what solutions you can use, because people can pick up solutions easily in a web browser that's not even installed on their PC and start using them. So I think that is a big category of risk and something that needs to be covered. And then the other goes back to who is building these solutions. You know, we talked about IT as a strong development organization that has a need for a number of policies around how to business-enable AI into enterprise processing systems. But there's also a middle ground where you may have departmental people, super users, that are automating pockets of capabilities, maybe at the Microsoft Copilot level, and there's other technology besides that. So there's certainly some opportunity for departmental-level policies as well, but they all should be governed from a central place, I would think, which comes back to who are the people that should be governing this? Max, what are your thoughts on that?
Max Aulakh – 09:28 – 10:37 Yeah, I think today we're going to have IT practitioners already using AI. We hear that from our friends all the time: hey, I use this copilot, I use this tool. And you wonder, you know they are using this for their work benefit. I mean, we've already seen that with a couple of incidents. But the legal teams, right, a simple use case would be procurement. Okay, we got a corporate account for Zoom or any other AI-enabled tool, an ERP, right? I think we're going to start to see maybe even a contract flow-down clause on how you manage this. But beyond that, I think legal is going to have a big job with displacement, as we talked about. We also looked at the procurement side. And then legal has always worked closely with information security teams for all sorts of different kinds of frameworks and risk management. So whether or not technology people go first in terms of developing at a department level because they're trying to be cautious, legal is going to have to get involved just to provide the overarching guidance and framework on what is permissible and what is not permissible.
Joel Yonts – 10:38 – 11:18 Yeah, that makes a lot of sense. And this is not necessarily that much different than what we've seen with other policies. Legal has an opinion, procurement has an opinion, and IT and security have expertise more than opinion. But I think that when we start looking at AI, a lot of the guidelines are different because the subject's different. I know we're going to get into some of the elements that differentiate that later as we go through the podcast. But there's also one thing that I wanted to throw out: different approaches to putting these policies in front of folks. And we've already talked about separate versus embedded. Do you have some thoughts on whether we should create a whole new set of policies or how we should embed these?
Max Aulakh – 11:19 – 12:29 Yeah, I think it's going to be different for each and every company. We've seen it both ways, right? As practitioners, you've seen it where it's this long, monolithic document, at least in the government, that nobody reads, but it meets the requirement, not the intent. And then you have these more operational documents that are at department level, maybe even technology-specific. In my opinion, we're going to see a hybrid approach, and until a regulator comes in and says, I want to see these policies according to this framework, you're going to see a pattern all over the place. But I think, Joel, as we go through this, one of the things we can provide the audience is that structure, right? Because some people may not have that external force, and I don't think we'll see that from a regulatory perspective for a while. They have an internal desire, so they don't end up losing their intellectual property. I think one of the items we'll talk about is how do you structure this, right? What are some of the key elements to start with? But it can sit either way. We can have standalone documents, or it can be baked into the existing documents that already exist within an organization.
Joel Yonts – 12:30 – 14:01 Yeah, and that's what I'm seeing as well. And I know you have a heavy defense industrial and government background where policies have to be named a certain thing and there's a lot more rigidness around policy. What I've seen in the general private sector is as long as you have something written somewhere and it's accessible and it's authorized as a policy, it could be written on a napkin and scanned and it serves as a policy to some degree. And I think the big deciding factor is around synergies. Certainly, if you centralize policies around the topic, you have people's focused attention. So maybe there is a need for an AI-specific security awareness or acceptable use document to really get the focus. Other times, you want to have more synergy with the corresponding enterprise program. For example, if you're talking about AI data security, you might want to embed the things that we're going to talk about from a data security perspective in your existing data security policy, just because you have synergies in processes and tools and so forth. So I agree with you, it depends. And it really comes down to what kind of synergy you want to get when you're putting these out. But either way, you need a way to look at it and consolidate the information. If you decide to distribute it, you need to know where all your AI policies are across those distributed documents so that you can speak to them holistically if you need to.
Max Aulakh – 14:01 – 14:55 Now, that makes perfect sense. And so I think regardless of the distribution, there are going to be two key areas, and we want to go through the elements of these two key areas. The first is what we traditionally call in information security land acceptable use: what is allowed and what is not allowed, because we know everybody's using this. That's not a secret. Putting some guidelines out there, putting some stern statements out there, if that's what it needs to be. We've seen things that have happened with Samsung, right, and other organizations. So I think walking through the acceptable use, in terms of what needs to be within that acceptable use and what its elements are, is going to be the key. So, Joel, why don't you help us walk through the key areas, and then we can break those down for those that are listening. I think this is going to be a policy that gets applied regardless of the business; they're going to need this.
Joel Yonts – 14:55 – 15:35 Very good, very good. And I think, and it's true in all policies, but especially in AI policies, you've got to start with definitions and define the space, because what I have found is that AI is a very catchy buzzword that gets applied to all kinds of things. It's largely a black box, and also not all policy statements apply to all types of AI. For example, large language models may be much different than computer vision applications. And so establishing a good set of definitions that defines what it is we want to enact policies around is probably a pretty good step. And I think that has to be the foundation.
Max Aulakh – 15:35 – 16:15 I think that's going to be the most difficult part too, Joel, because right now even the practitioners, even to some degree us on this podcast, we're learning new things every day, every time there's new terminology. And I think whoever writes this is going to have to dissect and not conflate the intent if the intent is for a broad audience. You're going to have to simplify the language, potentially, depending on the business, right? Imagine if you work for a manufacturing shop compared to Google, which is a tech company. You're going to have two totally different languages, and I think definitions are going to be the key part of this.
Joel Yonts – 16:16 – 17:03 Yeah, and I think that definitions on the types of AI are going to be important, but there are also some lower-level AI processes that are going to be important. For example, if you have an AI implementation that uses machine learning, even the general audience will probably need to know the concept of training, because I don't know how you can talk about the risk of LLMs training on your input data unless they know what training is and can apply that lens: is this solution that I'm looking at going to use my inputs to train the model for future, general use? And I think that it has a couple of different dimensions. So, like you said, it is very tricky. What about the roles and responsibilities? That's always an important part of a policy. What are your thoughts on that?
Max Aulakh – 17:04 – 18:14 Yeah, I think there are a lot of key areas, the scoping and a lot of other things, but roles and responsibilities in the early days of policy development are going to be really important, because imagine you have this new category of content and you have questions. Where are you going to go with those questions? So I think traditionally you've had the security risk management team tracking all of these exceptions, exceptions to the policy, exceptions to the rule. So I think legal and information security are going to be the primary roles to start with. And then, of course, this is information technology of some sort, so I can imagine team members from IT also being part of describing how they contribute to this policy, right? What are some of their roles and responsibilities? Maybe they're enabling some of this. So I think it's going to be very important to include those three key stakeholders. I'm sure there are others, Joel, that are escaping me right now, but I think those are going to be the primary ones that everybody gravitates towards when they need something answered.
Joel Yonts – 18:16 – 19:29 Absolutely. And I think that's going to be the key when you think about pushing this policy out. There are operational aspects of it, just day to day, enforcing it and looking for exceptions and people potentially not following the policies and so forth. But then there's when things don't align completely and questions arise. And I think about how an average user handles that, because this is going to go to a broad organization, and some areas and some roles in companies are by definition not very technical, yet they have a digital presence. So how do they have a chance of applying this without some real practical guidance? I think that's going to be the other key that goes along with the roles and responsibilities. When we start to talk about the other things that are part of this, we need to have who to go to if there are questions, but there also needs to be a section that distills pretty quickly into: here are the categories of products that are allowed for AI use in the company, and maybe the very specific ways they can be used. I think that's going to be important too, because leaving it up to a non-technical person to apply definitions and roles is going to be pretty challenging, I think.
Max Aulakh – 19:29 – 20:50 Yeah. And again, I go back to the concepts we're used to, because we're kind of creatures of habit here, right? Whether those are good or bad habits, there is a concept of approved products lists, at least within the defense community, and approved software lists. So I can imagine there being an approved AI capabilities list. How are we procuring them? Where are we procuring them from? Which ones are available because they have been vetted, they've been communicated to the broad audience, and people have been trained? So I think we'll start to see that, where for these categories of products you need to capture what is allowed and what is not allowed in terms of categorization, as well as what might be prohibited, right? So if you're a software development firm, you might be okay with using AI to generate code, or you may not be. You may say, hey, this is our core intellectual property, our stance is that none of this is open source, and we are not going to push it out into the public. So to de-risk yourself, you might say for software development activities, we do not use AI. But for image generation, we do. Whereas Adobe, I don't think they would want to use AI for image generation, especially if they don't own that capability.
Joel Yonts – 20:50 – 22:04 Right. I was going to use that example of Adobe. Actually, they have Firefly, so now they have a generative AI capability. And I think you hit on it: it's what product, and sometimes the answer is not technical. For example, and I'm not trying to say which one's better, but if you're talking about image generation, the company may have a policy that if you're going to produce products or images that are going to the consumer space, you may want to assign intellectual property labels internally. A company may have a policy that says you can use Adobe because of Firefly and not Stable Diffusion or Midjourney, for example. Now, a technologist may say, well, Stable Diffusion has better image quality. But Adobe now trains their models on images that they have licenses for, so they will indemnify you on the use of those images, which means legal enters the picture and overrides even a technical edge in another area. And that's the thing that an average user wouldn't be able to understand. And that's the reason some of these things need to be built out and really codified. But the problem is they change so fast, Max.
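To make the approved-products idea above a little more concrete, here is a minimal sketch of how an approved AI capabilities list might be kept in machine-readable form so procurement, legal, and security can maintain it together. The tool names, categories, and conditions below are illustrative assumptions, not recommendations or actual vendor terms.

```python
# Illustrative sketch of an "approved AI capabilities list" as discussed above.
# Tool names, categories, and conditions are examples only, not endorsements.
from dataclasses import dataclass, field

@dataclass
class ApprovedAITool:
    name: str
    category: str                              # e.g. "image generation", "code assistant"
    approved_uses: list = field(default_factory=list)
    prohibited_uses: list = field(default_factory=list)
    notes: str = ""

APPROVED_AI_TOOLS = [
    ApprovedAITool(
        name="Adobe Firefly",
        category="image generation",
        approved_uses=["marketing imagery for external publication"],
        prohibited_uses=["generating images from customer-provided PII"],
        notes="Vendor indemnification discussed above; verify current terms.",
    ),
    ApprovedAITool(
        name="Internal LLM sandbox",  # hypothetical internal capability
        category="text generation",
        approved_uses=["summarizing public documents"],
        prohibited_uses=["processing source code or trade secrets"],
    ),
]

def is_use_approved(tool_name: str, use_case: str) -> bool:
    """Return True only if the tool is on the list and the use case is explicitly approved."""
    for tool in APPROVED_AI_TOOLS:
        if tool.name.lower() == tool_name.lower():
            return use_case in tool.approved_uses
    return False  # default deny: unlisted tools are not approved

if __name__ == "__main__":
    print(is_use_approved("Adobe Firefly", "marketing imagery for external publication"))  # True
    print(is_use_approved("Stable Diffusion", "marketing imagery"))                        # False
```

Keeping a list like this outside the policy document itself, as the hosts note later, lets it change on a faster cycle than the policy statements that reference it.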
Max Aulakh – 22:04 – 22:39 I think we're going to come to a point where we're going to see a faster cycle of updates, because traditionally, when we look at policy, the old adage kind of goes, you write it once a year and you do your annual reviews and that's it, at least on the statements, right? And I think the applicability and the scope are going to shift so fast, which impacts the next key area of actual statements, actual prohibited activities, right? But I think the policy review cycle, which has traditionally been a year to two years, is going to change. That has to change.
Joel Yonts – 22:40 – 23:15 Absolutely, I think it does. The list of approved software would certainly be external to the policy, but even the policy statements around it and the definitions are going to radically change over this period of time. So I agree, it's going to constantly change. So we've talked a fair amount about what types of policy and some of the content that would go in it, but let's go down to the next level. Let's talk about some of the elements that would go into either a standalone policy or be embedded in another policy. What are some of the first things that come to mind, Max, when you're thinking about building an AI security policy?
Max Aulakh – 23:15 – 24:39 So that was for our acceptable use, right? And I think another one is going to be for cybersecurity teams, AI in the context of strictly security, right? And here, what I see is we're still going to have definitions, because we've got to understand the technical material, but it might be at a different depth. It might get specific to algorithms, it might get specific to certain types of functions within AWS. Hey, Lambda is allowed for serverless computing to process this imagery, to process… And these are the kinds of definitions that you do not need to give your general population for acceptable use. So the pattern of policy writing is going to be very similar, starting with just understanding the terms, right? But here, the level of depth is going to increase. But also, you know, you mentioned it's all tied to your business, right, where we're talking about scope and things like that in the acceptable use. Here, security teams really care about external factors; compliance comes to mind. We've got thousands of different regulations out there, right? So I think security-related external compliance obligations are going to be a key element of this, Joel.
Joel Yonts – 24:39 – 25:54 Absolutely. And I think when you think about compliance, it really comes in two forms as I'm thinking about it. One, as you say, our listeners have a wide variety of regulatory and legal compliance standards that must be enforced within the company. There's probably language that defines what roles computers can play in technology and automation and how human oversight needs to be involved in certain things. Well, AI may significantly disrupt those things. Because as we start moving some of these oversight and autonomy elements to AI, unless we go back and ensure that the policies address it, we're going to be immediately out of compliance. So I think that's going to be a piece of it. And then there are also a number of emerging compliance standards that are AI specific. Right now, a lot of them are around explainability and transparency. But there are a number of other ones being entertained on legislative floors across the world, AI safety being one of them, and there are several others. And I think that's going to be a fast-evolving space. So in the AI security policy, I would imagine that both tackling the emerging compliance as well as how we comply with our existing obligations is going to be a big part of it. Anything else you're thinking about in that space?
Max Aulakh – 25:54 – 27:12 Yeah. And so here, I think we might see separate documentation as explainability and transparency get a little more fleshed out, right? What does this mean? I would imagine the data governance officer, other risk officers, and management team members might even split this out, because right now we understand what HIPAA means because it's 20, 30 years old. But we don't really understand what AI explainability means when it comes to taking action. That is still being developed, so I could imagine the definitions and the external pressures being defined at varying degrees. But when it comes to information security, I can see things like the existing secure SDLC policy needing to be updated because of HIPAA, FedRAMP, PCI, ISO, because now our tool set is using some sort of an AI component. And we're already seeing that; at least within the FedRAMP space, the government has allowed and disallowed certain types of AI tools, right? And so if you are planning on using it or are going to use it, that's going to impact your technical security policy immediately.
Joel Yonts – 27:12 – 27:50 Yeah, I think that makes a lot of sense. And when you mentioned the SDLC, that really resonated with me. That's something, again going back to the book I wrote, that changes significantly. I won't go into that here, but building an AI-enabled technology has some significant differences, and you're going to be out of policy if you don't account for them. And I think that's going to be a big thing to tackle as we go forward. But those are the compliance side of things. There's also cybersecurity. And, okay, a lot of people say if you're compliant, you should be secure, but I don't know that the two equate exactly. I think that's going to be another big area, don't you think, Max?
Max Aulakh – 27:50 – 28:49 That is. And I think this is where, since we don't have an AI cybersecurity framework out there, if you're listening in, you're wondering, how do I assemble statements? How do I assemble the right kind of information in the right order so that it makes sense until we get a little more fidelity and standards? This is where I think we're going to spend a lot of time developing the actual meaningful statements when it comes to information security policy. So Joel, I know we've talked about this, and some of this information might be in your book. When did it come out? Didn't it just come out recently? (Joel Yonts: October 16th.) Okay, awesome. So tell us a little bit about some of these areas, the risks that are associated. How can we frame this up for those that are listening when it comes to writing information security content for artificial intelligence, you know, securing those types of things?
Joel Yonts – 28:49 – 30:37 Certainly. If we're trying to divide the cybersecurity risk landscape of AI, there are a lot of different things that enter the vocabulary and definitions. Data poisoning gets reused a lot, and that's certainly one of the risks associated with it. But what I've found is, if you boil it down, there are four general categories of risk, and it might make sense to list those in the AI security policy to establish the definitions so that you can reuse them over and over, because this is going to be a living document. And no matter whether you're talking about an LLM, computer vision, an automated decisioning system, or a robotic system that has AI capabilities within it, it really comes down to evasion, alteration, exfiltration, and disruption. Now, working backwards on that list: disruption, we know what that is. It's attacks designed to disrupt, to take systems offline. It may be anarchic or it may be something around cyber extortion and ransom. Certainly that will be alive and well, with greater impacts. Exfiltration covers a lot of what we've talked about so far around loss of data, you know, putting data in as inputs that later become training data and are made publicly available; that's part of the larger category of exfiltration. And then evasion and alteration: evasion is trying to trick your AI into taking an action or a classification that circumvents the model and how it operates to achieve a nefarious output, and alteration is actually compromising the system to change it. So I think those should be the basis. And there's a lot we can say about it, but I'll pause right there. What are your thoughts on that classification, Max?
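To show how these four categories might be reused across a living policy document, here is a minimal sketch (in Python, purely for illustration) of the taxonomy as a reusable definition that individual policy statements can be tagged against. The statement text and mappings are examples, not language from the hosts' policies.

```python
# Minimal sketch of the four risk categories discussed above, expressed as a
# reusable taxonomy that policy statements elsewhere in a document could reference.
# Category names come from the episode; descriptions are paraphrased.
from enum import Enum

class AIRiskCategory(Enum):
    EVASION = "Tricking a model into an action or classification that circumvents its intended behavior"
    ALTERATION = "Compromising the system or its training data (e.g., data poisoning) to change how it behaves"
    EXFILTRATION = "Loss of data through the model, e.g., inputs retained for training or extracted from it"
    DISRUPTION = "Attacks designed to take the AI capability offline, including extortion and ransom scenarios"

# Example: tag each policy statement with the categories it addresses,
# so the policy can be audited for coverage of all four.
policy_statements = {
    "Training data feeds must be integrity-checked before ingestion": [AIRiskCategory.ALTERATION],
    "Inputs containing PII must not be sent to external models without approval": [AIRiskCategory.EXFILTRATION],
}

for statement, categories in policy_statements.items():
    print(f"{statement} -> {[c.name for c in categories]}")
```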
Max Aulakh – 30:37 – 31:40 Yeah, so evasion, alteration, exfiltration, and disruption line up with the CIA model to some degree, confidentiality, integrity, availability, just said another way. And the other thing is that these line up with the traditional risks that we're trying to cover when it comes to cyber insurance, right? Because a lot of the cyber insurance industry might create a new AI policy, but we know that if the experts don't understand it, the cyber insurance industry surely isn't going to understand it, right? And so it becomes a paperwork game to some degree. To avoid that, we want to line up the framework with the existing risks that the business understands, right? And that's what this framework reminds me of. It just lines up really nicely. And it's not very heavy to understand. We don't have 20 domains like in other cybersecurity frameworks. It's good for where we are and what we understand today.
Joel Yonts – 31:40 – 33:11 Absolutely. And I think that makes sense. There are so many different things that you can articulate in this, so I was going to throw out a couple that might make it make more sense. Under alteration, that's where a data poisoning attack would be. And there might be statements around, if you're going to use a specific feed of information as a training data set, that it's protected both from a confidentiality perspective and an integrity perspective, and that the integrity is checked at every point along the way leading into the training environment. That's a very simple one around guarding against a data poisoning attack, and there are so many more. One more: we've talked about exfiltration, but there are also a number of different attacks, like a membership inference attack, where you can potentially extract the inclusion of a definable individual within the data set a model was trained on, original PII information, which can be a significant risk. And there are techniques like differential privacy where you add noise to prevent those things from happening. And a policy statement on exfiltration may say something along the lines of: if you're going to use PII and expose it externally, you have to have these certain people involved for approval, and it also has to have safeguards like differential privacy or other things to guard against membership inference. Those would make good statements to help against those key risks. Just to add a little clarity there.
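As a rough illustration of the two mitigations Joel mentions, here is a short Python sketch: an integrity check on a training data feed (against poisoning/alteration) and a very simplified Laplace-mechanism noise function in the spirit of differential privacy (against membership inference). The file path, hash, sensitivity, and epsilon values are placeholder assumptions, not parameters from any real program.

```python
# Illustrative sketches of two mitigations mentioned above. Paths, thresholds,
# and the epsilon value are assumptions for demonstration only.
import hashlib
import random

def verify_training_feed(path: str, expected_sha256: str) -> bool:
    """Guard against data poisoning (alteration): confirm the training data file
    still matches the hash recorded when the feed was approved."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

def laplace_noise(value: float, sensitivity: float, epsilon: float) -> float:
    """Very simplified differential-privacy style noise (Laplace mechanism):
    add calibrated noise to an aggregate before it leaves the trusted boundary,
    reducing the risk of membership inference against individuals in the data."""
    scale = sensitivity / epsilon
    # Difference of two exponentials yields a Laplace(0, scale) sample.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return value + noise

if __name__ == "__main__":
    # e.g., a noisy count of records with a sensitive attribute (sensitivity 1)
    print(round(laplace_noise(1042, sensitivity=1.0, epsilon=0.5)))
```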
Max Aulakh – 33:11 – 34:10 Yeah. No, I think it's great, because this offers kind of a flexible way to think about it. As the world starts to understand the different attack vectors that are available, they need to fit within some sort of context, right? And this kind of provides that. I can imagine, maybe a couple of years from now, just like we have the MITRE ATT&CK framework, something like this being developed just for AI, which covers some of the technical security policy pieces. That's kind of what this reminds me of to some degree, where there's an actual attack vector and then there's also a mitigation strategy, depending on the actual tool set and things like that. Which gets to the next important part of the policy: the mitigations of these key risks, right? So, okay, you've got evasion and alteration and those kinds of things. What are some of the key mitigations that should be implemented, but also should be baked into the security policy itself?
Joel Yonts – 34:11 – 34:35 Absolutely. You know, we talked a fair amount about data security already. That's going to be a big key: encryption, integrity validation, data governance, getting the right people involved. And again, we don't want to get into the weeds because it's a policy, but at a high level it should describe those structures. So I think data security would be one. Max, what else do you think we should put into mitigation?
Max Aulakh – 34:35 – 35:18 Yeah, and remember, this is a policy, not a procedure, not a standard, not a specification, right? So I think access control always comes to mind, in terms of who can access this information. When we look at traditional security thinking, we're always thinking about, well, we've got to identify and then guard the access to some degree. So I can't imagine not having those types of statements in this for your network, the training environment, the test and production environments, and then, of course, exploration environments, environments where you're just kind of learning from the data itself, right? And I think access control is going to be a key part of the mitigation, but also of the policy itself.
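For a concrete picture of access control across the environments Max lists, here is a small, hypothetical sketch of an access matrix for training, test, production, and exploration environments. The roles and permission levels are assumptions for illustration only; a real policy would define them with the business.

```python
# Illustrative only: a simple access-control matrix for the AI environments
# mentioned above. Roles and permission levels are assumptions.
ACCESS_MATRIX = {
    "training":    {"data-scientist": "read-write", "ml-engineer": "read-write", "business-user": "none"},
    "test":        {"data-scientist": "read-write", "ml-engineer": "read-write", "business-user": "read"},
    "production":  {"data-scientist": "read",       "ml-engineer": "read-write", "business-user": "read"},
    "exploration": {"data-scientist": "read-write", "ml-engineer": "read",       "business-user": "none"},
}

def access_allowed(role: str, environment: str, action: str) -> bool:
    """action is 'read' or 'write'; default deny for unknown roles or environments."""
    level = ACCESS_MATRIX.get(environment, {}).get(role, "none")
    if level == "read-write":
        return True
    return level == "read" and action == "read"

print(access_allowed("business-user", "production", "write"))  # False
print(access_allowed("ml-engineer", "production", "write"))    # True
```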
Joel Yonts – 35:18 – 36:28 Now that makes a lot of sense. I think there are some AI-specific things too, but access control and data security are the two biggest areas. And first off, before I talk about some of the other AI items, I think you're spot on. These are not new, but they have a new lens. They have new definitions, new clarity about how they should be applied, and they should be put in the policy about how we're going to mitigate the risk associated with AI. But there are some AI-specific ones. Without going into a lot of detail, a lot of these AI components have a specific way to monitor performance. This is used by data scientists to ensure the system is achieving its objectives with precision and accuracy. But those can also be good indicators of a potential security compromise, certainly if things start misbehaving. Connecting a way to ensure that if there are anomalies detected there, they get escalated and connected back to the security team would be one. And I think the other one, alongside AI anomaly detection, is making sure there's human oversight in the right places. I mean, what are some of the things that come to mind for when we should put a human in the loop?
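Here is a minimal, illustrative sketch of the idea Joel describes: connecting model performance monitoring to security escalation. The metric, threshold, and escalation hook are assumptions; a real implementation would use whatever monitoring and ticketing stack the organization already has.

```python
# Sketch of wiring model-performance monitoring to security escalation, as
# described above. Metric names, thresholds, and the escalation hook are
# illustrative assumptions, not a vendor API.
from statistics import mean

def escalate_to_security(message: str) -> None:
    # Placeholder: in practice this might open a ticket or page the SOC.
    print(f"[SECURITY ALERT] {message}")

def check_model_health(recent_accuracy: list[float],
                       baseline_accuracy: float,
                       max_drop: float = 0.05) -> None:
    """If accuracy drifts well below baseline, treat it as a potential security
    event (e.g., evasion or alteration) and notify both the data-science owner
    and the security team for human review."""
    current = mean(recent_accuracy)
    if baseline_accuracy - current > max_drop:
        escalate_to_security(
            f"Model accuracy dropped from {baseline_accuracy:.2f} to {current:.2f}; "
            "possible evasion/alteration, human review required."
        )

check_model_health([0.81, 0.79, 0.80], baseline_accuracy=0.92)
```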
Max Aulakh – 36:28 – 38:06 Yeah. I think this is going to be the most challenging piece, because we're all tempted, as automation lovers, right? We're all tempted to just throw AI on top of everything. But I think we're going to need to think intelligently about where human oversight belongs. One of the areas I can think of is just data validation. So if you're developing a tool set, how do you know if what the AI is providing back to you is actually valid? Some sort of validation of output is going to need to be reviewed by a human. And right now, that's what we're seeing with even OpenAI to some degree, right? They've got the thumbs up, thumbs down: do you like it? Is this accurate? Some sort of interface for them to gather that feedback. I can't see us escaping from that, right? And it all depends on the criticality of the system. For internal risk management, maybe you need very little, but for other types of decisions, financial reporting, you're going to need a lot of human input. I don't see how they're going to be able to get around those kinds of things. But I'm glad you mentioned that, because all of the other things we mentioned, access control, data security, we as cybersecurity professionals have dealt with for many, many years now, with a different lens, but this anomaly detection through human oversight, I think we're going to find that to be not only challenging but also very critical. That's going to be a key component of an AI policy, I would imagine.
Joel Yonts – 38:06 – 38:27 Yeah. And I'll go ahead and say that I believe AI will be in a position to review itself here shortly. So I'll throw that out just to say there might be a separation of duties, which we are used to in certain things. I may have a separate AI that's geared toward just watching the AI to make sure it stays in line. So, you know, we'll see how that shakes out.
Max Aulakh – 38:27 – 38:29 Yeah. I like to say, jokingly, who’s checking the checker?
Joel Yonts – 38:30 – 39:05 Yeah, there you go. There you go. And I think a lot of this is going to come down to, and we haven't talked a lot about it, but I'll just mention, the concept of understanding the impact of these risks. We talked about the risks. We talked about mitigations. Assigning the right mitigation and the level of mitigation is going to come down to something we all know: potential impact. And so if something has a substantial potential impact, up to the loss of human life, certainly you're going to want a human involved in the review on that. But all right, so now we've got a policy, we've got a structure. What are some of the other things we should talk about to get this out?
Max Aulakh – 39:05 – 39:55 So I think some of the minor areas are always like purpose, scope, and who you’re talking to, who you’re writing this to, your audience. Those are going to be important topics to discuss. But I think as the main content gets fleshed out, sometimes you’re writing with one audience in mind, but then you realize, based on the intent that you’re trying to write to, it actually is a totally different audience. So I think towards the tail end of this, when the actual policy is fleshed out, you’re going to have to write, who is this for? Who is going to read this? And then restate the intent through a purpose statement, as well as a scope. Confine it to an area so that it’s actually actionable, so it doesn’t come across as a fluffy document that nobody looks at. I think those are some of the things that round off any policy document.
Joel Yonts – 39:55 – 40:31 No, that makes a lot of sense. I was just going to say, when you're developing policies, you want them to be as readable and consumable by the audience as possible, to have the maximum effect. But in the end, it needs to be written in a way that satisfies legal and regulatory obligations, and it can be used as a foundation to build capability. So it doesn't necessarily always equate directly to security awareness. That's a pretty big gap that has to be crossed as well. Thoughts on that?
Max Aulakh – 40:31 – 41:26 Yeah, so let's say we've written these out, right? Even if we have ChatGPT spit these out, and I'm sure you could take our transcript and plug it into ChatGPT with all the information we've added and it'll pop out a policy. Now, nobody actually knows about it, right? This is where I think it's similar to garbage in, garbage out, at an organizational level. You could write all this out, but mobilizing this throughout your organization is going to be the key. Through education programs, baking it in, I think that's going to be a big challenge after everybody agrees on paper what we're trying to accomplish. Getting it out to the entire org is going to be a job for everybody, right? Not just security people, because this thing cuts across like no other. Everybody's going to have to somehow participate. Yeah.
Joel Yonts – 41:26 – 41:57 And the value associated with this AI technology is going to be such a draw that it's going to be coming from all angles. I agree with you, we're going to have to push it out. And it's probably going to be one of those things we have to stay diligent about, continually communicating and reminding people, because while there are great advantages to AI-enabled technologies, there's also risk. And that's going to be something we'll have to constantly balance, just because the value proposition is so high in a lot of these situations.
Max Aulakh – 41:57 – 42:20 So that sounds like another episode, right? I think maybe we do an episode around AI education for the organization, right? Because if you are a C-suite leader, you have this set of documents. Now, how do you get them mobilized and how do you actually educate the masses? That's going to be a huge challenge. And man, that will be an exciting topic for maybe a next show.
Joel Yonts – 42:23 – 42:39 Well, this has been good for me to run through these things with you, Max. And I know this is a living thing, so I've really enjoyed this conversation and diving into these components. But I think this was just the tip of the iceberg compared to what's really going to need to be done to solidify this in policy.
Max Aulakh – 42:44 – 42:55 Emerging Cyber Risk is brought to you by Ignyte and Secure Robotics. To find out more about Ignyte and Secure Robotics, visit ignyteplatform.com or securerobotics.ai.
Joel Yonts – 42:55 – 43:08 Make sure to search for Emerging Cyber Risk in Apple Podcasts, Spotify, and Google Podcasts, or anywhere else podcasts are found. And make sure to click subscribe so you don't miss any future episodes. On behalf of the team here at Ignyte and Secure Robotics, thanks for listening.