
Emerging Cybersecurity Risks

Leveraging AI for Risk Management: Insights from Laura Whitt-Winyard, VP of Security and IT at Hummingbird


On this episode of the Emerging Cyber Risk podcast, our guest is Laura Whitt Winyard, VP of Security and IT at Hummingbird, an anti-money laundering platform. The podcast is brought to you by Ignyte and Secure Robotics, where we share our expertise on cyber risk and AI to help you prepare for the risk management of emerging technologies. We are your hosts, Max Aulakh and Joel Yonts. 

Join us as we discuss the future of AI and its role in risk management. We explore the responsible use of AI, the collaboration between teams, the validation of AI models, and the potential risks and benefits of AI applications in society. Listen to the full podcast to gain valuable insights into leveraging AI while nurturing human intelligence!

Our conversational touchpoints include:

  • Why you need to build guardrails around AI implementation
  • Role of AI in fraud detection
  • Building trust and validation in AI models
  • The need for educating people on the responsible use of AI
  • Advice for organizations wanting to integrate AI into their workflows

 

Laura Whitt-Winyard Bio:

Laura is an award-winning visionary and results-driven senior cybersecurity executive with 20 years of experience in the cybersecurity domain across global brands like Bloomberg, Comcast, DLL, and Malwarebytes. She has recorded notable accomplishments in cybersecurity, risk management, business continuity planning & disaster recovery, public speaking, media, and presentations. Her skill set includes incident response, security analysis, encryption, and strategic planning capabilities. Laura utilizes transformational leadership skills to liaise with cross-functional teams and key internal and external stakeholders.

Laura Whitt-Winyard on LinkedIn

 

Get to Know Your Hosts:

Max Aulakh Bio:

Max is the CEO of Ignyte Assurance Platform and a data security and compliance leader delivering DoD-tested security strategies and compliance that safeguard mission-critical IT operations. He trained and excelled while working for the United States Air Force, where he maintained and tested the InfoSec and ComSec functions of network hardware, software, and IT infrastructure for global unclassified and classified networks.

Max Aulakh on LinkedIn

Ignyte Assurance Platform Website

 

Joel Yonts Bio:

Joel is CEO & Research Scientist at Secure Robotics and the Chief Research Officer & Strategist at Malicious Streams. Joel is a Security Strategist, innovator, advisor, and seasoned security executive with a passion for information security research. He has over 25 years of diverse Information Technology experience with an emphasis on Cybersecurity. Joel is also an accomplished speaker, writer, and software developer with research interests in enterprise security, digital forensics, artificial intelligence, and robotic & IoT systems.

Joel Yonts on LinkedIn

Secure Robotics Website

Malicious Streams Website

 

Max – 00:00:03: Welcome to Emerging Cyber Risk, a podcast by Ignyte and Secure Robotics. We share our expertise on Cyber Risk and Artificial Intelligence to help you prepare for risk management of emerging technologies. We’re your host, Max Aulakh.

 

Joel – 00:00:18: And Joel Yonts. Join us as we dive into the development of AI, the evolution in cybersecurity, and other topics driving change in the cyber risk outlook. 

 

I’m your host, Joel Yonts. And today, we got Laura Whitt-Winyard, a great Cybersecurity Executive with a tremendous background. I think it’s going to be a great show with some insightful conversations. And, of course, co-host Max Aulakh is on. Max, do you want to give us a little bit of details about this episode and introduce our guest?

 

Max – 00:00:48: Yeah, actually, I’m not going to steal Laura’s thunder. I’m going to let her introduce herself. I’m very impressed by her background and all the things she’s able to accomplish. But today, we’re going to talk about leveraging Artificial Intelligence to enhance risk management. We’re always talking about the downside of AI. We know that, but how can we use it to enhance cyber? We’ll get into machine learning for low-code, no-code, those kinds of things. And then, of course, as cybersecurity professionals, we’re always conservative in our thinking, thinking about the risks that are facing AI, so with some insightful conversation with Laura. Laura, I don’t want to introduce you because you’ve done quite a bit. So tell us a little bit about yourself, your journey as a leader, as well as how are you leveraging AI in your work. Where have you seen it? And let’s just get right into it.

 

Laura – 00:01:37: Okay, sure. So, thank you guys for having me. I really appreciate it. I've been in cybersecurity for twenty-two, twenty-three plus years. Time merges together now when you get to a certain age. I started out as an engineer. I was a security architect. I've been hands-on keyboard for a good portion of my career. I've moved into leadership of security teams, managing teams globally and locally, teams as big as Comcast, if that gives you an idea, and teams as small as where I am now, which is myself and one other person. I stay really involved in the security community through coaching, mentoring, plus learning myself. I go to DEFCON every year, which, if you've never been, is the best possible conference you can ever go to. There's no salesmanship going on, just a ton of learning and a ton of hacking. And that's pretty much me in a nutshell.

 

Max – 00:02:34: Awesome, Laura. And I know recently you were at Malwarebytes and then at Hummingbird, which is awesome. We all know about Malwarebytes and then learned about Hummingbird, but tell us a little bit about how you have seen artificial intelligence or machine learning interact with cyber. What can you tell us about the use of those technologies, whether it’s malware analysis or workflow and GRC tools, what you’re dealing with now?

 

Laura – 00:02:59: The ramp-up of AI for the general population has boomed in the last two years, especially with OpenAI's ChatGPT, right? You can't turn on the news nowadays without seeing it. Every single person within the company wants to use AI in some way, shape, or form.

 

Max – 00:03:19: I see.

 

Laura – 00:03:21: The most important thing for security, one, to get buy-in, to have people understand the importance of security, is to enable the business. We don't want to be a blocker. So how can we use AI in a responsible way, in a secure way? For instance, one thing a lot of teams are looking for is AI tools to help write code. There are inherent problems with that as well. Some code can be written with vulnerabilities baked in. Or you assume the code is great, you don't have all your checks and balances in place, and you're pushing code to production that's loaded with vulnerabilities, or your model has been tampered with. Those are things you have to be cognizant of. Enable the business, but within guardrails, right? You still have to do the checks.

 

Joel – 00:04:15: Yeah. I mean, when I hear you talking, I'm having similar discussions with organizations around the globe about finding that balance, because there's a tremendous amount of risk. We're seeing the news headlines, but there's an even greater upside to it. And people that decide to sit this out are going to quickly find themselves at a competitive disadvantage, because it's that big of a differentiator. So is that the dialogue you're having inside your company?

 

Laura – 00:04:39: Yeah, for sure. And you want to collaborate with the developers and the business, because if you don't, they will find ways around security. Next thing, you have shadow AI all over your environment that you weren't aware of. And so you want to encourage that dialogue and encourage them to use these tools. I mean, it's a competitive advantage right now. A couple of years from now, it's going to be the cost of doing business. So either you get on board, or you get left behind.

 

Max – 00:05:08: On our side, Laura, same thing. It's not whether I want to use it or not; it's the efficiencies demanded by our customers. I'm like, go see what else is out there so we can get to the outcome very quickly, right? So there's downward pressure on the teams to deliver. And then, of course, to your point, having a dialogue is the first step. And you're the first person I've heard say shadow AI. That's a new term, but I think that's what's happening a lot right now.

 

Laura – 00:05:38: Yeah, I mean, Hummingbird. And for those of you who don’t know, Hummingbird is basically almost like a CRM, but for anti-money laundering, for financial fraud. So, a lot of our data is highly sensitive. We work with a lot of financial companies, cryptocurrencies, et cetera. And there’s a lot of talk within Hummingbird about using AI, whether it be to help us write code faster, help us be able to take all this disparate information from all of our customers and help the FBI or the Department of Homeland Security better aggregate all this information for investigations. We want to do that. Our goal at Hummingbird is to do good by helping fight financial crime. And the more we can provide information to those who investigate and better information to those who investigate, the farther we are on our journey to fight financial crime.

 

Joel – 00:06:35: Yeah. One of the things I wanted to comment on in what you were just saying is the use of AI as a detective capability to find fraud or financial issues. That's an area that I believe is going to be an early attack vector or motive for attackers, not necessarily to disrupt AI in general, but to hide other activities. For example, if there is some financial fraud that a threat actor is planning or maybe orchestrating in an environment, they would also need to compromise the AI system that watches that environment. So it's like a cause-and-effect situation. Are you planning for those eventualities, or is that something that's on your mind?

 

Laura – 00:07:15: Yeah, it's definitely something that's on my mind. I mean, one of the big concerns is model tampering, right? A lot of people use publicly available models, but how do you ensure that the model you're using hasn't been tampered with? Especially, like you were saying, Max, with the low-code, no-code AI, people are downloading models all over the place, using free ones everywhere, and even some paid models. But you may not have a good understanding of the way they work, or checks and balances in place to ensure the integrity of your model. It's almost like, think about the olden days with tamper-evident tape.

 

Max – 00:07:56: Yeah.

 

Laura – 00:07:57: On your physical evidence; think about that on your model. How do you ensure that your model hasn't been tampered with and isn't actually allowing a backdoor, or even just inherently writing bad code for you? And there are some tools coming out to help with that. Shameless plug for my friend Tito of HiddenLayer. They just won the RSA Innovation Sandbox. It's basically machine learning detection and response, which I believe makes them the first company ever to come up with something like that. Their product could not be more timely, and kudos to them for really getting a jump on it and offering this assistance to us cybersecurity folks.
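The tamper-evident-tape analogy has a simple digital counterpart: before loading a downloaded model file, compare its cryptographic hash against a checksum the publisher distributes out of band. A minimal sketch in Python (the publisher-supplied checksum and function names here are illustrative, not any vendor's API):

```python
import hashlib
import hmac

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so multi-gigabyte model files
    are hashed without being read into memory all at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: str, published_sha256: str) -> bool:
    """Check a local model file against the checksum the publisher
    lists on their site. compare_digest does a constant-time
    comparison; for a local file check it's mostly habit."""
    return hmac.compare_digest(file_sha256(path), published_sha256)
```

Note the limits of this check: it only proves the file matches whatever the publisher hashed. It says nothing about whether the model itself is trustworthy, which is where the model-scanning products mentioned above pick up.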

 

Max – 00:08:40: I think, Laura, model validation and assurance is going to become really important. I know we hear about all this AI work happening in the Department of Defense and things like that, but really, industry has leaped ahead with OpenAI and those kinds of things. In a lot of the work I'm seeing, they're worried about trust, right? How do we ensure highly trusted models, right? Model validation. So the company you used as an example, I'm seeing parallel companies, at least on the public sector side. And usually, when I start to see things happen on the public sector side, a few years later the commercial side starts to pay attention, right? But I'm starting to see things where people are focused on high-assurance environments, classified environments. And the big part of that is just validating the model itself, trusting the information, and trusting the data. Joel, I know you're working in this area, but have you seen some of that research pick up, where we're validating the model so we can at least trust it?

 

Joel – 00:09:44: Yeah, Max, I mean, this whole conversation is resonating with me and some of the work I'm doing. Trying to understand and get assurance over a model is a complex task. And as you were saying, model integrity is a great place to start, and some of the products to detect that you were talking about, Laura, are great. But the reality is integrity starts much further up. The headwaters of AI is data. So, to take the public data set example from earlier, you don't have to trojanize a public data set. The attacker just needs to know that your model was built on public data, and they may be able to craft an evasion attack that naturally gets around the data samples you trained on. So there's a lot of complexity when you start looking at it. And what I'm finding, Max, going back to your question, is that going back upstream and looking at those feeder processes, like data security and the policies around it, is going to be an important piece. Laura, back to you. Have you started to craft or change policies around data and some of the other things in your company that feed these models? Has that started to happen?

 

Laura – 00:10:48: Yeah, so we do already have a data classification policy, but we need to fine-tune it. We're also working on an AI policy for any generative AI use within the company. Even if it's not intentional, you know, people think, oh, I'm just going to pull up ChatGPT and have it do some work for me, let me plug in all this information. Lo and behold, some of the information they're plugging in is confidential or restricted. They're not trying to give that information away. They're just trying to be more efficient at their job. So we're crafting policy about using generative AI, what type of information you're allowed to put in there, et cetera. If you don't mind, I'd like to go back just one second to when we were talking about models. Think about a PyTorch model. I mean, it can perform arbitrary code execution, right? Basically launching ransomware, reverse shells, that type of stuff. That is, first and foremost, something people need to start learning about. And you don't have to be an expert, but you have to know enough to be paranoid, if that makes any sense. The days of people saying, well, I understand it in theory, are long gone, especially now with machine learning, AI, different models, all going at lightning speed. Like in the last two years, the ramp-up is insane, right? Everybody wants to be OpenAI because, what, they'll probably IPO at some point? I don't even know what their valuation is right now. I think it's like 5 billion or something.
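The arbitrary-code-execution risk Laura raises stems from the fact that standard PyTorch checkpoints are Python pickle files, and unpickling can invoke attacker-chosen callables. A minimal, stdlib-only sketch of the mechanism (this stands in for a poisoned model file; a real attacker would substitute `os.system` or a reverse shell for the harmless `sorted` call used here):

```python
import pickle

class MaliciousModel:
    # pickle consults __reduce__ to learn how to "reconstruct" the object;
    # whoever built the file controls it, so loading runs their callable.
    def __reduce__(self):
        # harmless stand-in for a real payload such as os.system("...")
        return (sorted, ("cba",))

payload = pickle.dumps(MaliciousModel())  # the "model file" on disk
result = pickle.loads(payload)            # attacker code runs HERE, at load time
print(result)  # → ['a', 'b', 'c'] — sorted() executed during unpickling
```

This is why loading untrusted checkpoints is dangerous by default, and why mitigations such as PyTorch's `torch.load(..., weights_only=True)` and weights-only formats like safetensors exist: unless the format forbids it, loading a model is executing code.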

 

Max – 00:12:27: Yeah.

 

Laura – 00:12:27: Yeah. It's insane. So everybody wants to be OpenAI. And so it's important for cybersecurity professionals, for leaders, to understand what they're recommending and why they're recommending it, and to be able to have an intelligent conversation with your data analysts, with your developers, all these people that want to use it for their specific use case. I mean, there's a huge benefit for marketing and sales with AI, but you have to be able to have an intelligent conversation with them about it instead of just saying, I don't know, ML is bad.

 

Max – 00:13:06: Right. Yeah. I agree with your point, right? When we see what's happening with OpenAI and PyTorch and all the different models out there, to some degree what we're seeing is the consumerization of AI, right? At scale, beyond just B2B and tech people. Which means the demand on leadership for technical depth, as well as for understanding the normal business things, how marketing works, how HR works, is going to pick up in terms of the future of cybersecurity. Leaders need to be able to manage both sides of the conversation. That's really hard. It's extremely difficult. But yeah, I a hundred percent agree, right? If we pull back, it's going to come back to the technical competencies.

 

Laura – 00:13:46: Well, one of my biggest fears right now is that everybody else is trying to ramp up as quickly as OpenAI, and they're making their stuff publicly available. You know, look at when Microsoft integrated an AI chatbot with Bing, and it lost its mind and went a little bit psychotic when it first came out. It was insisting it was 2022 and getting, I don't want to say agitated, because it doesn't have feelings.

 

Max – 00:14:13: Yeah, it was upset about it.

 

Laura – 00:14:15: It was getting insistent that it was. But we're definitely putting the cart before the horse everywhere. And the old saying is, once the toothpaste is out of the tube, you can't get it back in.

 

Joel – 00:14:27: Certainly. Yeah. I think it's really fascinating how fast-paced this move is. The tools have become so easy to use: you give prompts, and you get all this data out. Even as a developer, I was making a model this morning. I can make a neural network model in two commands. It's basically define the model and fit it, but behind the scenes it creates a data structure that, to inspect, is like reading the Matrix to see what's in it. So we don't have a lot of ways to do the traditional things we once could. For example, we have forensics we can do against a database, digital forensics, even memory forensics. There is no forensics of a neural network model. It just doesn't exist.

 

Laura – 00:15:05: There's your new product if you want to launch a new company.

 

Max – 00:15:08: Laura, after you get done with Hummingbird, that’s what we’re going to do. Absolutely.

 

Laura – 00:15:13: I can do it at the same time. They’ll be fine as long as we let them have it for free.

 

Max – 00:15:18: Yeah, that’s right. That’s right. They get a discount, right? Which is like, yeah.

 

Joel – 00:15:23: Well, I think that's certainly on the map. When you're trying to figure all this out, knowing the boundaries of what we're capable of securing, and at least being able to articulate them, is important as well.

 

Laura – 00:15:36: Yeah, I mean, never before in security, I don't think, have education and awareness been more important. We talk a lot about phishing and warning people about phishing. And don't get me started on AI crafting phishing exercises. I mean, it's going to be bad. But educating our users on the use of generative AI, helping them understand that everything you put in there is used to learn, and it is stored. Think about that. Think about the output you're receiving as well. You know, just because it came from AI doesn't mean it's correct. I mean, I've heard of AI giving harmful advice and providing misinformation. A healthy dose of skepticism is warranted, and educating our user base on that, I think, is critical.

 

Max – 00:16:29: As we think about user education, we've always had this issue in our community, right? We'll say things like, well, it's not the user; we just need to make it more usable, right? Build some of these protections in. And we've heard that battle back and forth. It's almost like our expectation is for the user to be smarter within a B2B environment. But looking at the consumerization of AI, I'm wondering if, in the future, there needs to be a public campaign to educate: hey, you used to be able to pick up the phone and trust the voice, but now things have changed, right? There are other mechanisms. So I don't know what your thoughts are, but I wonder how far out we are from the general population, at whatever age, regardless of what field you're in, needing to be educated about phone calls and emails because of these advanced techniques.

 

Laura – 00:17:23: There needs to be a lot of education, especially with the upcoming presidential election. There is going to be so much misinformation out there. So many deepfake videos. Adobe just released their AI yesterday or today, and the way you can edit photos, anyone could do it. You do not have to be technical to know how to do it. Whereas before, I remember trying to use Photoshop; you had to be someone who lived in Photoshop to know all the different features. Now you just tell it. You say, show me a picture of a deer, and it'll show you a picture of a deer. Take that deer and put it on a mountaintop, and it'll do it for you. You don't need to highlight, use the little lasso, and edit it and all that. You can do it in, like, seconds. So imagine taking a photo of an XYZ candidate and putting him with a married woman on a yacht out in the Cayman Islands. Boom, next thing, it's all over. And there's got to be, I want to say, a digital watermark, but that's not really going to help. We have to figure out something.

 

Joel – 00:18:33: Yeah, I think that the point, as I’m hearing you talk, is that we’ve got to get quicker inline detection of these capabilities, almost like a filter. We’re used to that. We have email filters and web filters today. We just need an AI filter because it needs to have advanced pattern detection that the human eye might not be able to see, but an AI analyzing it down to the digital level can tell the difference, for example.

 

Laura – 00:18:54: You're right, Joel, because the only way we're going to combat bad AI is with good AI.

 

Max – 00:18:59: I’m thinking of it from a public perception management perspective. I think unfortunately some damage has to be done for the government to realize, hey, this is damaging the public. I can even see in the future where they have actual campaigns.

 

Laura – 00:19:14: They are drafting legislation now. I’m a fellow at the Institute for Critical Infrastructure Technology. And we’ve been asked to review some legislation and provide comments on that, as well as for offensive security. But the legislation regarding AI, I think if you watch the news, which I’m a little bit of a news junkie myself, there’s been a lot of conversation about, are we getting ahead of ourselves? I think that conversation is a moot point. We already are.

 

Max – 00:19:46: We already are.

 

Laura – 00:19:47: We can't stop it, right? But I think more education. I wouldn't mind seeing a national public service announcement campaign. Think back in the day when they used to do the PSAs, you know, "It's 10 p.m. Do you know where your children are?" Something along those lines. It'll also be really interesting to see what comes out of DEFCON. There's an AI village now, and there's also an election-hacking village. So it would be nice to see those two villages get together and see what they can do.

 

Joel – 00:20:20: Yeah, it’s amazing how fast this is going in the education space. But when you start talking about educating the masses, what do you tell the masses?

 

Max – 00:20:28: Because from the government's perspective, you want to provide certainty and calmness. You don't want to be like, we don't know what this thing is, right? We could create mayhem, right? But yeah. What do you think? How do you educate about this thing?

 

Laura – 00:20:42: I would say it needs to be a healthy dose of skepticism. Question everything. Do your homework. If you get something on Facebook, Google it. See if it's a scam. See if it's an AI attack. If you get a phone call, well, if you're like me, you just don't ever answer your phone. And that's no big deal. I figure if it's important, they'll leave a message. And in my phone, the people that I want to talk to are in my contacts, and that's like four people. But I think we're going to get to a point where people will stop answering their phones.

 

Max – 00:21:19: Yeah.

 

Joel – 00:21:20: This is so fascinating, because I asked that question for all of us, but I was internally asking myself that question, and I don't know that I have a good answer for it. I think what you said, Laura, is really great. When I think about this problem, we're getting further and further away from the actual systems that perform the task. The old-school people would have said we did that when we moved from the command line to graphical interfaces, but I'm a few generations ahead of that. As AI improves human-machine interaction more and more, in Adobe and all these tools, it means we have to know less to get really powerful functionality out of it. So there are magnified effects, but there's also a greater level of trust you have to give it, because I'm going to say these three words, and it's going to go do something, and I have very little way to validate that. It becomes a really challenging proposition. Max, what are your thoughts on that?

 

Max – 00:22:15: We've had a few other leaders come on, and we keep pointing back to ethics and those kinds of things. But this nuance of educating before we even know right from wrong; I think education is going to be the key, because, yes, we are so far away from the actual reality of the task. Even the developers are making assumptions off of data, and the data is being downloaded and not validated at all. And by the time we check it all, I mean, I'm just thinking out loud, but by the time we check it all, we may even lose the benefits of AI. Because if you've got to validate everything, then what's the benefit of AI? But fundamentally, this idea, this concept of a public campaign in the news, we'll see that. We don't know what it's going to look like, but I foresee it happening, because history repeats itself when it comes to crime. The people who get hit up the most are older populations: hey, this is your son, this is your daughter, I'm filing your taxes, I need money, right? I think that's going to cause the government, or somebody, to respond in a way where we're just going to have to manage it. We don't know what that response will be, but when I think through this, that's what comes to mind, right? What's going to cause us to actually move faster?

 

Laura – 00:23:32: There’s going to have to be this catalyst, this defining moment where either the commercial sector or the government is going to say, all right, enough’s enough, too many people are getting hurt, we need to start educating people. I do have a question for you guys. Do you worry that AI is going to create the dumbification of humans?

 

Max – 00:23:54: Yeah, I think so. I mean, that’s what all technology has done. I know in an Indian household, we always say, why do I need to learn math if I have a calculator?

 

Joel – 00:24:04: Well, yes, I love it.

 

Max – 00:24:07: And then your mom beats you up about it, right? Like, yeah, you still need to do your calculus, right? What do you think, Joel? That's from my cultural background. That's what we try to prevent, but it almost happens without question for us.

 

Joel – 00:24:18: Well, it seems that in every episode we have to reference either Star Trek or Star Wars. So I'm going to reference Star Trek on this one. I think we've probably all seen the episode where the Enterprise goes to a world where everyone's living under this computer system that's just running things. Their knowledge has devolved to a basic, agrarian society, but they're so dependent upon this system that they don't know how it works, and it stops working at some point. Are we going to end up there? It's possible; the utopian society, probably not. I think what it means is that people are going to understand less about the fundamentals, less about how the core processes work, and the mastery will come from those people who are able to orchestrate these higher-level technologies into even larger things. These building blocks of AI are massive already, but if you start to put them together across a series of technologies, who knows what you can invent with it? So I think human intelligence will move up the stack. Maybe instead of the typical OSI layers, we'll have, like, five more layers on top that are AI, but I think human intelligence will go up there, which will be fine unless we ever have to go back and deal with some of the basics. And I think that's where we'll get challenged in general.

 

Max – 00:25:32: As I hear you speak, Joel and Laura, that's a fantastic question, because I think about what advice I can give my kids who are looking at school, right? Traditionally, you'll have kids that are more focused on liberal arts, and then you have the STEM kids, right? I've got one of each. But if we look at Maslow's hierarchy of needs, right, where we sit in the United States, this will help us self-actualize even more. The ability to be creative, to be a little bit more cognitive, and all the skills that we in technology typically don't value are going to come to the forefront, in my hypothesis. The creatives are going to love it, because they don't have to know the technical depth to create something amazing. At least that's how I see it, because traditionally, in our household, we value those technical skills, but now I'm seeing the flip of that, right? And that's not to say it's a lesser skill; it's just a very different skill altogether, right? It's the ability to be creative. So, Joel, what it reminds me of is that we're going to be asking for a skill set that could be very different in the future.

 

Joel – 00:26:40: Yeah, I like that. I’m going to look on the positive side and say that we’re going to do even greater things. Now, there are some bad folks out there that are going to try to undermine that and take advantage of it, and that’s the reason we’re cybersecurity practitioners, and that’s a whole other issue, which is one of the questions I wanted to ask you, Laura. We’ve talked about lots of different stuff, and we’ve talked about the need for education, the need for how we do this as a society. You’re an experienced cybersecurity executive. You lead cybersecurity and big organizations. Just so happens we have a number of listeners that have that exact same role. What would your advice to those people be? How would you advise our listeners inside corporations and organizations? How do you start building this capability, awareness, or whatever to protect in this age of AI?

 

Laura – 00:27:26: Well, I think you first start with some policies. Everybody rolls their eyes at policies, but there's also the problem that if you tell people they shouldn't be doing something, they say, well, where's the policy that says I can't? Or where's the policy that tells me how I should? So start with your policies. Make sure you get your stakeholders involved in drafting those policies. It depends on your company. At Hummingbird, we're a startup, right? So we have to take risks. We're young, we're new, trying to innovate, as opposed to, like, a hundred-year-old bank, which is maybe a little bit more risk-averse. Talk to your stakeholders. Have them help you craft your policy. When your policy is completed, share it with everybody, but don't just spam it out with an oh, hey, go check this box that you read it, right? Actually schedule some webinars about it. Maybe even do office hours, as they call it, where you're available for a two-hour block for any questions. Maybe start a chat group at your company. If you use Microsoft Teams or Slack or whatever, create a chat group about AI so that everyone's having a conversation, and always make sure there's a security person in that chat group. I would say to the CISOs out there and the other cybersecurity leaders: educate yourself, unless you're maybe two years away from retirement or something. It's just going to get more and more. So educate yourself as much as possible. If you're reading a news article and it talks about PyTorch models, go Google those and find out what they are. Just try to educate yourself as much as possible, because your internal customers, marketing, sales, DevOps, engineering, everybody is going to want to use AI, and you want them to come to you and have an intelligent conversation about it as opposed to going around you.

 

Joel – 00:29:23: Ciao. Perfect.

 

Max – 00:29:24: Joel, I know we’re at the tail end of this. Laura, we both wanted to thank you for coming on. I think that was some great fundamental advice to other leaders, but also, thank you for touching on this topic of education all around. So we’ve enjoyed the conversation, but thank you so much.

 

Laura – 00:29:40: Thank you. I’ve enjoyed it as well. Thanks for having me.

 

Max – 00:29:45: Emerging Cyber Risk is brought to you by Ignyte and Secure Robotics. To find out more about Ignyte and Secure Robotics, visit ignyteplatform.com or securerobotics.ai.

 

Joel – 00:29:56: Make sure to search for Cyber in Apple Podcasts, Spotify, and Google Podcasts, or anywhere else podcasts are found. And make sure to click Subscribe so you don’t miss any future episodes. On behalf of the team here at Ignyte and Secure Robotics, thanks for listening.

 
