On this episode of the Emerging Cyber Risk podcast, we discuss the AI planning that is going into 2024 and how this may affect our business. The podcast is brought to you by Ignyte and Secure Robotics, where we share our expertise on cyber risk and AI to help you prepare for the risk management of emerging technologies. We are your hosts, Max Aulakh and Joel Yonts.
Join us as we discuss the upcoming year’s initiatives and what you, as a business leader, should be planning for concerning AI development. AI is here to stay, and whether you are using it or not, you need to help prepare your team and develop safeguards around the use of AI.
The touchpoints of our discussion include:
Get to Know Your Hosts:
Max Aulakh Bio:
Max is the CEO of Ignyte Assurance Platform and a data security and compliance leader delivering DoD-tested security strategies and compliance that safeguard mission-critical IT operations. He has trained and excelled while working for the United States Air Force. He maintained and tested the InfoSec and ComSec functions of network hardware, software, and IT infrastructure for global unclassified and classified networks.
Max Aulakh on LinkedIn
Ignyte Assurance Platform Website
Joel Yonts Bio:
Joel is CEO & Research Scientist at Secure Robotics and the Chief Research Officer & Strategist at Malicious Streams. Joel is a security strategist, innovator, advisor, and seasoned security executive with a passion for information security research. He has over twenty-five years of diverse information technology experience with an emphasis on cyber security. Joel is also an accomplished speaker, writer, and software developer with research interests in enterprise security, digital forensics, artificial intelligence, and robotic & IoT systems.
Joel Yonts on LinkedIn
Secure Robotics Website
Malicious Streams Website
Resources:
Walmart Rolling out AI
OWASP
Scott Kollar
Adobe Photoshop AI
Zoom’s Latest Terms of Service
Secure Intelligent Machines
Max – 00:00:03: Welcome to Emerging Cyber Risk, a podcast by Ignyte and Secure Robotics. We share our expertise on cyber risk and artificial intelligence to help you prepare for risk management of emerging technologies. We’re your hosts, Max Aulakh
Joel – 00:00:18: and Joel Yonts. Join us as we dive into the development of AI, evolution in cybersecurity, and other topics driving change in the cyber risk outlook. Welcome to another episode of Emerging Cyber Risk Podcast. I’m your host, Joel Yonts, and as always, Max Aulakh co-host is on it as well. Hey, Max, how are you doing?
Max – 00:00:37: Hey, good, Joel. How are you doing today? So, what’s our exciting topic for today?
Joel – 00:00:41: Well, today, inspired by the time of year, it’s the annual planning cycle. And so when we think about it, executive leaders planning for the upcoming year, what should we be talking about from an AI perspective? And so today, I think we’re going to get into what are the people and leaders going after right now. What’s the common trends? What should you be preparing for in the next year? What are some of the ways from a resource and technology ways to prepare for those inevitable changes? I think that’s what we’ve got on the docket today.
Max – 00:01:12: I think that’s fantastic. It is Q4, right? And a lot of people are looking at annual planning and those kinds of things. And, you know, one of the things I think we should quickly talk about why is this even important? I know some of the organizations are rolling out generative AI capabilities, but I think this is going to become a very important topic. So it’s very important for all of us to kind of plan for 2024 in this way.
Joel – 00:01:36: Absolutely. I think one of the big news headlines that grabbed me was when Walmart rolled out or planning to roll out AI digital assistant for all 50,000 non retail employees. I think we have a clip on that specifically.
Max – 00:01:51: Yeah. Yeah. I, Walmart, right? Behemoth. So, this is going to change the way consumers are interacting. And then the other thing that’s important, I think, is if security practitioners and the C suite leaders. are not really preparing for this, we’re going to end up in this thing called shadow AI where people are using it, but yet they don’t know, management doesn’t know, so let’s get right into it, right, let’s talk about some key initiatives, Joel, you know, I think these are some of the key initiatives we want to lay out that if you’re a business leader or C suite leader, you should be thinking about as it comes into the next year.
Joel – 00:02:29: Absolutely. Before we get into that opening question around these initiatives, your experience as a consumer and business owner, I mean, what has been your experience with AI coming out of you and all the products and services?
Max – 00:02:43: Oh man, it’s all, what we’re seeing is just click-through. Whether we want to use it or not, it’s coming at us. Almost every single business product that we’re leveraging today from finance, legal, marketing, it’s all getting integrated. And, you know, whether I tell my team to leverage it or not, whether they know that they’re using it or not, it’s already happening.
Joel – 00:03:06: And I think that’s the key. You will have an AI strategy, even if it’s, I’m going to do nothing and everybody adopts what they want. And so the better approach that we’ve seen in IT is to get ahead of that curve. Building an AI strategy is one of the first initiatives that we’re going to have to go after.
Max – 00:03:23: Yeah, I think, Joel, to your point, right, not making a decision is making a decision. And I think stepping back when it comes to AI strategy is the key. So, Joel, how would one start on that, right? What are some key elements and things to consider when we’re talking AI strategy for the company?
Joel – 00:03:41: I think one of the most foundational one is How are you going to put these technologies at work inside your company? And there’s really three strategies. There’s the take the services as they come at you, which may not be a bad, a bad way of going about it if you don’t have a lot of bandwidth to devote toward this, the public cloud type SaaS solutions, and what does that mean? So that’s, you know, how and where will you use those? But for those that wants to be a little bit further out over the hood, there’s a lot of really interesting private cloud and on-premise solutions that can bring it back in house and mitigate some of the risks that we’ve seen so far that we’ve experienced this past year with data leakage and so forth. So I think that’s going to be one of the first big decisions.
Max – 00:04:25: Yeah, I can imagine, right, just selecting if you want to. Get it kind of on-prem or just consume AI as it comes to you. And what I’ve seen a lot of it, at least from our side, is just coming at us, right? But some of the larger organizations, I think, because of data security concerns, they’ll have to make an intentional choice: where do we want to host this? Yeah. And you also mentioned there’s a few other elements to this AI strategy, but I think this is one of the key areas is where do we want this thing to operate out of?
Joel – 00:04:47: And I think that, you know, tabbing, diving in just a little bit further. One of the things that’s really interesting is. And I think it’s a really smart play. There’s some of the big platform as a service providers are lining up to offer really AI platforms as a service. You’ve got the Microsofts and Amazons, for example, and I think it’s probably many of the other large hosting solutions are following suit. Microsoft is unique in it though. And that they invested heavily in open AI. And as we know, it’s one of the most prevalent out there today and one of the most feature rich and it gives them the exclusive ability to host basically chat GPT or open AI technology inside an Azure cloud for a private instance. And I think that is a pretty smart play for the, especially people that are already Microsoft customers. And then. They’re using that to launch Copilot and it’s in beta right now. And it’s a pretty impressive looking anyway. I haven’t had personal experience, solution that connects a lot of the office apps with a lot of the generative AI technologies in a way that’s secure and keeps it under your control. So, and a plan to get out and to, to invest and to set up that. Really sets the stage to mitigate a lot of the risk. We’ve seen data loss being a big issue and so forth. I think the other thing I want to throw out is if there, for more advanced companies that are planning to build their own AI models, there are again, ways to build either in private cloud or on prem really AI farms that are hosting places where you can host your models collectively manage them and secure them. So I think those are some great strategic moves to put the people and processes around if you want to get ahead of it. Otherwise you really need to invest in that vetting out your software as a solution categories for the other time.
Max – 00:06:50: Yeah, I, I think it’s going to be quite difficult to compete with a private cloud from Microsoft, but we are going to have some diversity and uniqueness of not just the algorithm, but the AI model itself. And I know Microsoft has made leaps and bounds, not just with investing, but also within the public sector. I think they’re the, one of the first ones, I could be mistaken, but they’re one of the first ones, I think that’s going to be applying this in a large, you know, sense, almost like OpenAI did, but within the U.S. government, within a public private cloud kind of setting, because they’ve already gotten their ATOs, which is an approval to operate. I can see a lot of companies following suit, but in terms of getting an AI strategy, what’s gonna happen if somebody says, you know, we don’t want that, we want to build it on our own? We want to do everything on our own, right? Cause there are going to be some of those out there as well.
Joel – 00:07:42: Certainly. I mean, I think that’s, that is a great endeavor and I think it’s a very rewarding one, but it’s a steep hill to climb. And one of the things is just getting the people resources, the knowledge in house, cause there’s such a premium right now on this, so I’ve heard there’s such a skill shortage. And so if that is the plan. Starting now to go after and recruit those individuals for hire or working with a partner to supply those contract resources is going to be probably even more important than the technology. The technology, it’s about resources. As we know, there’s a lot of resources that go into building these models and efficiently models. So I think those are a couple of the big things and along with that, building out the to build an internal function. You’ve got to plan the cooling and the power section of it as well, because as you start building these at large AI, as far as well, these GPUs, they’re going to require a tremendous amount of power and cooling as well. So that stuff doesn’t, doesn’t happen overnight and planning ahead on how to put that in place is going to be important.
Max – 00:08:46: And Joel, I think as part of this, right, as people are looking at building a strategy, there’s always the, what’s it going to cost me component. And I think. Right now, I’ve seen all sorts of numbers out there in ranges of 100, 000 a months spend ot billions of dollars over the years to train and tune. So I don’t think a lot of people quite understand the burden and the lift to get there. But there are massive companies out there that are looking to gain an edge. And I think we’ll see a mixture of that when it comes to developing an AI strategy.
Joel – 00:09:18: Yeah. And I think that going back to kind of the business side of things as technologists, we jumped to the technology piece for a second, but going after and exploring what the business enablement value is going to be and put a business value to it’s going to be very important to really sell these big numbers. And I think there’s a lot of things from a productivity perspective, as well as new capabilities that can balance out and really come up with a positive balance sheet, even after these large investments, but it will take some time to develop those, always the, the temptation to overestimate those. And so I wouldn’t get too out over the hood in your estimates of value in the first year, because it takes a lot of tuning, but by year two, there could be some significant savings coming back.
Max – 00:10:04: Yeah, you’re right. It all depends on the, the business model itself and how to recoup the cost. But I think the case will prove itself out. It certainly is for some of these larger firms that have already taken the step to do it. So that’s one piece, right? That’s kind of one of the first initiatives is going to be having an AI strategy baked into your, your model.
Joel – 00:10:24: And then on that, you know, I wanted to kind of pick up and say, that’s the first piece is the AI strategy, let’s put something in place. And a lot of that we just discussed was around procuring our own, you know, the solutions. And you alluded to a little bit about building internal capabilities. That’s the other way to go after this, you know, and that involves, you know, updating your internal development processes and your software development lifecycle. And Max, I know you have a lot of experience in this space. What are your thoughts on that being the second initiative? How do you develop and enhance your internal development capabilities around AI?
Max – 00:10:58: You know, I think for those that are listening, you know, when it comes to software development and security around software development We’ve all seen threat modeling, abuse case analysis But also we’re familiar with the organization OWASP. OWASP has put out a language model, kind of analysis tool and some of the top 10 things that you might want to look for when it comes for vulnerabilities and things like that. But largely, I think we still need to, we can’t abandon the traditional security practices of looking at abuse, looking at the threat that’s facing the internal machine learning operations, as well as the development environment itself. So, if somebody is going to be taking on doing this in house, I think a investment into doing AI level security and protecting the resources that are developing those is going to be the key. That’s usually not considered upfront, just like every other security problem. Hey, let’s go build something, let’s go figure out what we can do. And then all of a sudden, there’s so many different requirements that were not considered. I don’t think that should be the case here, hopefully after, you know, many decades of trying to embed security into software development, this should be one of the first things is how do we do a better job at threat modeling and just figuring out the potential abuse case scenarios.
Joel – 00:12:18: So I know you’ve been, again, in this space for quite a while. And you, you already alluded to that there was, we’ve had a, not a great track record of adopting some of these practices. So any inkling on how we’re going to do with AI, do you see early adoption happening for some of these?
Max – 00:12:35: You know, I, that’s why I mentioned the OWASP model because usually we don’t start to see those things until after we’ve got the top 10 types of issues, sql injection being one of them. But right now I’m seeing the community kind of come together and build some of these recommendations very early. So my hope is that we don’t follow kind of the past behavior. Right, it’s more of, we can be a little bit more proactive, so that’s the sense that I, I get. And I know that we had one of our speakers on the call here, that’s exactly what she was mentioning. Hey, we as practitioners need to go out there and learn and also start working early with the software development team very early to help them because at the end of the day they’re going to be writing the actual software, right? So, I think we’re going to start to see a lot of that and that should go into immediate planning if we’re not already doing those kinds of things.
Joel – 00:13:29: Gotcha. No, I think that’s fantastic. And, and I agree, there’s been a lot of, focus that’s been really good. Now we’ve talked about software development practices, but AI is different in that data is as important as the actual development side. Are you seeing data security extending and data collection, you know, security practices extended as well?
Max – 00:13:53: Yeah, I think this is where, Joel, I know you’ve done some research too, right? AI is a data hungry problem itself, right? So, I know there are differences between how you do software development, like we are a software development shop ourselves. And, you know, we use the Agile methodology, microservices, all of that, but have a focus on requirements first, right? So, I think there are key differences, Joel, between the two. Some of the research you’ve done, I think also suggests that as well.
Joel – 00:14:25: Absolutely. You know, I’m a long term developer or when I can anyway, I enjoy the practice. And going through the design phase and developing the program logic, a lot of times before you ever put code down on the paper, that’s, that’s the way it’s been done. And then you do some testing, and then eventually you get enough full data collection, do your full testing, but it’s kind of at the end of the development process. Well, AI turns that up on its head, really, because in AI, we don’t develop the logic that goes into it. We allow the learning model to mine the data, to figure out the connection points between the data. And that is a completely different paradigm and it causes some different problems. I mean, have you experienced that?
Max – 00:15:10: No, I mean, when you say it causes different kinds of problems, Joel. I mean, help me understand that you’re saying that, you know, what kind of problems are you anticipating? Because this, this will be very different from a traditional software development exercise, but then also a traditional security exercise that goes along with it.
Joel – 00:16:28: Right. Well, especially for highly regulated and software development or practices, projects that have a lot of rigor, we’re used to a development process. Even before you begin coding, You have a lot of requirements, a lot of design. You have a lot of details. The code is like late in the process, but that model breaks down. And any model that says you have to have those things, you’re not going to follow because you won’t be able to know those patterns. You’ll know the general flows, but you won’t be able to describe the logic because the process is you take the data and then you explore it, and then it builds the associations. And what that means is two things. One, you need the data really at about the same time you gather requirements. You need the full collection of data, not a sampling like before. You need a full representative data because it needs to train on it to find these patterns. So that’s one of the problems. The other is the development process in AI a lot of times starts with an exploration process. You don’t know what algorithm to use or what model or what parameters or even what format the data needs to be in. So there’s a lot of copies of data. You massage it, normalize it. There’s a lot of encodings you do to make it more efficient in the learning process. That upfront work means that there’s a much more unstructured approach to data in the beginning before coding ever begins.
Max – 00:16:57: I see what you’re saying. Yeah, I mean, in software, right, we’ve been trying for years to get structured, get a five step waterfall process. And even with Agile, there’s these pretty little circles, right? And I think… What this sounds like is almost like a continuous experiment. Until you get to value, and then you try to scale that value. I think software engineers are used to that, but I can totally see security practitioners and business leaders where we would struggle with that, because we’re used to either, as business leaders, getting to value quick. How much do I wanna experiment as a form of investment? And then almost all security certifications and security thinking is in line with some sort of a five step process. I think this totally breaks that sort of thinking to some.
Joel – 00:17:49: And I think one of the things that we all need to be aware of and cognizant in front of is that exploration has to happen and if you don’t plan ahead, here’s what could likely happen. Is the developer would use their workstation for this exploratory work. There’s technology like Jupyter notebooks is really popular. There’s different tools out there to go examine the data and do this upfront exploration. Well, as we just said, it needs to be the entire dataset. So what you could have in your organization. Is entire data sets that may be sensitive getting copied back to developer workstations, you know, outside of the safe data center so it can be, you know, massaged and reformatted and reprocessed. And the other thing is a lot of times there’s multiple copies of the data set because maybe trial one, then you do trial two, trial three, and you try these different paths. So not only do you have another copy of the data set local on a workstation, you may have 15 copies of it, which is all, you know, should be making every data security person fringe, right? So part of strategy is how to build that capability to where it doesn’t have to come back into the developer world that it can stay within the data center or in protected environments. So it, you at least have some data center level protections around that process. And that takes some upfront exploration and policy work, to be honest. I know we’re going to talk about that later, but it’s an update that certainly we haven’t tackled yet.
Max – 00:19:18: Yeah, I think we’re going to see this a lot, Joel, right? People are going to want to do development in house, whether they’re consuming from a third party source, or they’re, like you said, Jupyter Notebook, I’m going to start from scratch on my own. And so I think this is a key initiative for 2024, that almost every leader should be planning. Because it’s going to come at them, whether they want it or not. If they’ve got software engineers, software engineers are just like any other technology professionals. We all like learning new stuff. So if they’re not touching this, their, their skills are getting stale and they’re doing it on their own with your data, right?
Joel – 00:19:58: Right, right. And I think that the other touch point, as we were talking about, that I wanted to mention earlier is data governance. We’re breaking all kinds of data governance for this to happen because there’s data owners that are supposed to be notified and control the movement of this data, and that’s another thing that we got to look at as part of this is how do we enhance our data governance? To allow these processes to happen.
Max – 00:20:20: What do you think about this? I think there’s a lot of lessons we can learn from, you know, when big data was exploding as a term at least, right? And now we’re over that and now it’s all about, , about this new, how do we use that to create value with artificial intelligence? I think there’s a lot of lessons that we could apply from those types of environments in terms of how to collect information, track the lineage. I don’t know, that’s what it kind of reminds me of when you mentioned, data governance.
Joel – 00:20:51: Yeah, but certainly I think in a large extension of that, and we start to move in out of, we finally figured out how to secure data lakes and the process around it. Now we’ve got an AI farm where we’re shoving data in, and you got test and training environments and you got all the, all that goes into retraining. It’s suddenly we’ve changed the game again.
Max – 00:21:12: Some lessons will apply, others will probably be considered old bad habits, right?
Joel – 00:21:18: But the building of the in house is certainly something I think that we need to prepare for. Because as you said, even if there’s not active plans to put something in production, if you have any size of development team, the developers are doing this because it’s easy to do. It’s easy to attempt to do it, I should say, and it doesn’t take much to spin it up. So it’s happening. You should plan for it. But I think a lot of people are going to go after third party solutions. And you’ve talked about that a little already. So, Max, I know one of the, the other points that we wanted to talk about was how do we handle suppliers and procurement and all that goes into procuring AI? I know you’ve got a lot of experience in this space.
Max – 00:22:00: Yeah, you know, I think if I was sitting in the general counsel seat or working with the general counsel, I’d be scared. Because usually. Typically, when you procure a solution, it goes through a process of getting an MSA, a contract approval, some sort of an end user license agreement approval, and right now, what we’re seeing is, it’s more of a click through of all your third party applications. Hey, we’re enabling AI, we just wanted to let you know, click here, acknowledge that you know, not even I accept. All it’s that, you know, right? So that’s what’s happening right now.
Joel – 00:22:39: Yeah. And if you click accept, what, what are some of the worst things that’s happened? Let’s go ahead and do a worst case scenario. What are they doing in those situations?
Max – 00:22:45: I mean, we will find out. We’ll find out.
Joel – 00:22:50: I think some companies that had uploaded intellectual properties already found out some of that.
Max – 00:22:55: Yeah, they’re messing around. They’ll find out soon.
Joel – 00:23:00: So how do we avoid this?
Max – 00:23:01: I think we’re going to need to strengthen the supplier, risk process and legal review contracts process. Over the last decade when security, you know, was just becoming kind of a supplier problem, risk management problem, those kinds of things. When we started to think that way, we started to see a lot of attorneys who specialized in cyber security. We have a ton of attorneys that can help us, and I think Scott was, you know, one of the few that we met was credentialed both as an attorney and as a cyber practitioner. I think over the next decade we’re going to see something similar with artificial intelligence. Because we don’t even have the clauses ready. We don’t know how to write these things. But one of the things that we should prepare for is strengthening just technology, oversight, review, and contracting, and writing something proactive, like, you will not enable this without an actual written permission. Click throughs are not acceptable. Now, does that impede business? Absolutely. But it just depends on the type of information and what you’re dealing with as a business owner, and also the environment, you know, the kind of environment that you’re working in. Because, right now I don’t think most companies have a choice. They either use the product with AI, or they don’t use the product at all. Right? Nobody’s going to go up against Amazon and tell them, I’d redline your contract.
Joel – 00:24:26: Okay. Let me give you a hard one then. Well, I think it’s hard. Adobe. You’re a heavy Adobe Photoshop, Illustrator shop, and you do use it to build creative products. Or design, and now the new version of Adobe has built in, it’s one of the menu options to do generative AI, how do you handle that?
Max – 00:24:27: Yeah, I think, you know, you have to have intent and use behind and limitations, so if the creative team is using it, maybe it’s okay. But if you’re an architecture firm, engineering firm, and that’s your intellectual property, you kind of have to think twice about your copyrighted material, things like that. Real tough problem to solve. It just all depends on what that product is being used for within your company and what kind of business are you? Because yeah, I mean, if you hand over everything to Adobe and you know, you yourself are building Imagery and everything else like that is your intellectual property, you just have to be very cautious about over time, you could lose that. You know, you’re essentially training other architecture firms off of your data, you’re training your competitor, you’re helping your competitor. So, in my opinion, you can’t just look at it from a product, you have to look at it from an intent and use perspective. And that’s the kind of knowledge that a legal officer already has, but I don’t know if cyber security professionals think that way. I think a lot of scrutiny is going to come down, but what’s going to be in my opinion, like you Joel, right? We’re using Zoom right now. It’s a wonderful product to record, but we all know their latest policy. We didn’t get a choice.
Joel – 00:26:06: Yeah. I know that there’s some people that says they’ll never get back on Zoom. Actually, I have one call with a client where we had to hang up and go to a different client because of it. It is a strategy and, and the thing about this is who, who’s right or wrong, I guess time will tell. But as I used to say to my kids. Not making a decision is a decision as well. So you’ve got to just step up to the plate and make a decision about some of these things, I think.
Max – 00:26:33: And I’m actually starting to see that. I’m starting to see that within context of the U. S. government, but I really haven’t seen that with most of the clients we work with, right? And some of them are not making these choices deliberately. It’s for different reasons, but a lot of it, I think, is just lack of awareness. What did we just agree to? And so in 2024, a planning session around your supplier risk in context of artificial intelligence is going to be so critical and somehow the general counsel and the legal team has to be brought up to speed about this new animal. This new technology type, if we’re not doing that, right, as leaders, we’re not preparing, we’re almost like agreeing to things and then a couple years later, we’re going to have to either change out technology or somehow go back and say, well, we didn’t mean to do that.
Joel – 00:27:27: I think that. Well, my book, Secure Intelligent Machines, is going to be out in a few weeks and one of the points, and it’s a book on how to build cybersecurity protection of artificial intelligence. But in the book, I talk about executive oversight of AI. And I think this pivots well into this conversation. One of the very first thing that’s important for the executive team and really all decision makers and the general, I guess everyone in the company is education right now. AI is such a buzzword. Most people really don’t even know what it is and being able to spot it, let alone understand the risk associated with it. I think one of the things that’s going to be in very important in this next year is education. It’s not one of our major points or initiatives, but a general education on what comprises AI and. What are some of the ways to spot it in contracts or in products and some of the do’s and don’ts? I think that’s going to be an important one as well. What are your thoughts?
Max – 00:28:23: I think so as well, Joel. And that reminds me of another kind of popular by demand episode is most likely going to be around AI policy, which touches what is acceptable, what types of documents and policies we need to have. And I think as a key takeaway for audiences that are listening, I have seen some clauses within supplier risk community when it comes to AI, and those are not necessarily strong clauses, it’s just experimentation of what others have done, other general counsels have done, we’ll provide those in the transcripts and the show notes, but I think we’re going to see a lot of evolution of supplier risk, we’re going to see the evolution of education. And then even the policies that we’re going to write today, which I know we’ll talk through, we’re going to start to see those as evolving set of instructions and things as we learn.
Joel – 00:29:16: Yeah, I think you’re exactly right. And we’re so early in this, but yet it’s moving so fast. And so it’s as we’re developing these things. That’s going to be hitting us, the risk associated with it. For example, you talk about vendor security management right now, if we do get the right people involved to look at an AI contract, pretty much the only thing we’re looking at is around data security and data exfiltration. Are you going to be using my data to train your model publicly available model where I could lose the information? Or is there some other way that I could lose, you know, to model inference or inversion attacks where data can be extracted from a model. But as we move further into this, there’s a lot of other issues that can arise, especially as our dependency gets on this, things like data poisoning risk or model evasion risk. What does that mean for your implementation? And it’s going to mean different things, but we haven’t developed that yet. And what that means is that our vendor security management program right now, is just going to be a start to this. If you have an AI contract, we’ll know how to ask the data protection questions, but building out, how do you trust the model from these other cybersecurity aspects? That’s going to be a work that we’re going to develop over the next year. And hopefully we get them developed before we start having issues with it.
Max – 00:30:38: Yeah. Yeah. I, there’s so many dimensions to this. Security, data value, because, you know, we could be like, yep, we understand it’s perfect. We agree to all your clauses, but now the equation is upside down because somebody is really using your data to build something else, right? And so there are so many ways to look at this and slice and dice this. I think that whole fundamental of risk management, what are we willing to give up over the next few years to gain an advantage in another area is going to be key in terms of thinking about this. So, Joel, we’re coming up on our tail end here, but these are kind of the three things I’ve got out of this, Joel. I’ve got three key things that if you’re a leader, you want to hone in on AI strategy, secure development internally, how do you build and construct securely, and then of course supplier, legal reviews, supplier risk contracts. Those are some of my takeaways, Joel.
Joel – 00:31:34: The only thing that I would add to that list that’s kind of, it’s interwoven throughout those is an education piece. Making sure all along the way, everybody’s involved, understands, and is aware. I think that’s probably going to be an intentional education piece around AI.
Max – 00:31:48: That’s fantastic, Joel, and we want to invite you guys, or you guys that are listening, if you guys like to hear about any other important topics, feel free to share and give us your feedback. We’d love to consider any other items that you think are critical. These are our top.
Joel – 00:32:04: Absolutely. And if there are other important initiatives that you feel that people should be doing, please share it with us and we will share it with our audience.
Max – 00:32:15: Emerging cyber risk is brought to you by Ignyte and Secure Robotics. To find out more about Ignyte and Secure Robotics, visit ignyteplatform.com or securerobotics.ai.
Joel – 00:32:26: Make sure to search for Cyber in Apple Podcasts, Spotify, and Google Podcasts, or anywhere else podcasts are found. And make sure to click subscribe so you don’t miss any future episodes. On behalf of the team here at Ignyte and Secure Robotics, thanks for listening.